US Secretary of War Pete Hegseth branded Anthropic a national security threat for refusing to drop contractual prohibitions on the use of its technology for autonomous killing and mass surveillance. OpenAI signed the contract hours later. This sets a dangerous precedent for European security and the rules-based order the EU has fought to protect.

The US Department of War (DoW) requires AI companies to permit “all lawful use” of their models without contractual exceptions. Anthropic wanted two such exceptions: no mass domestic surveillance, and no fully autonomous weapons. When Anthropic refused to abandon these safeguards, Hegseth designated the company a “supply chain risk” – a label never before aimed at an American company. Hours later, OpenAI signed a contract with the Pentagon to fill the gap.

The precedent set around the use of AI in the military domain is critical for the long-term security of European citizens. AI is already used in the wars in Iran and Ukraine, and it will feature in any future conflict EU countries may be involved in.

The stakes are high: a King’s College London study found that frontier AI models deployed tactical nuclear weapons in 20 out of 21 war games and never once chose de-escalation. More generally, AI systems are prone to hallucination, brittleness and escalation bias. In lethal applications, errors are irreversible.

European countries seek to prevent this via international commitments around the military use of AI, including requirements for ‘human control’ over lethal autonomous systems. Such commitments only carry weight if countries are willing to constrain themselves, trusting that their adversaries will do the same. Given recent developments, nobody will trust that the US is still committed to these principles.

Europe needs to act now: favor, in procurement, AI developers that verifiably respect international law; invest defense budgets in military AI reliability and safety testing – especially for the US equipment we depend upon – and demand that the DoW and OpenAI clarify how they ensure human oversight.

The political ground is also fertile. Polling shows 79% of Americans want humans making final decisions on lethal force. Even most Trump voters agree with Anthropic’s position. A “QuitGPT” consumer boycott is costing OpenAI subscribers, and hundreds of OpenAI employees signed a letter in solidarity with Anthropic. OpenAI and the DoW already added language to their contract around domestic surveillance; autonomous killing should be next.

US assurances alone won’t keep Europeans safe. Multilateral commitments and verification mechanisms must bind allies and adversaries alike. That architecture cannot be built if the US walks away from the table, and the EU stays silent.


Jitse Goutbeek is an AI Fellow in the Europe’s Political Economy team at the EPC.

The support the European Policy Centre receives for its ongoing operations, or specifically for its publications, does not constitute an endorsement of their contents, which reflect the views of the authors only. Supporters and partners cannot be held responsible for any use that may be made of the information contained therein.