When TOS Fails: Auditing AI Safety When the User Is a Military Contractor
Anthropic’s acceptable use policy explicitly prohibits Claude from being used in autonomous weapons systems. The U.S. military used it for target selection anyway. That gap between policy and deployment is not a loophole. It is the entire problem.