When Anthropic CEO Dario Amodei walked into the Pentagon on February 24, his company’s AI model, Claude, was already deep inside the national-security establishment. Anthropic had pushed to get it there, believing that keeping the best AI out of the military’s hands would only cede the advantage to America’s rivals. The first frontier AI system cleared for classified networks, Claude had reportedly been deployed in the capture of Venezuelan President Nicolás Maduro in January and would be used in the U.S. strikes on Iran days later.

But Defense Secretary Pete Hegseth wanted more: he pressed Amodei to drop “red lines” barring Claude’s use for mass domestic surveillance and fully autonomous weapons systems, insisting the military should be able to use the model for “all lawful purposes.” When Amodei refused, the Trump Administration designated Anthropic a supply-chain risk—an unprecedented move against an American company—and ordered all federal agencies to stop using Claude. Anthropic challenged the designation in court, and a federal judge granted a preliminary injunction blocking it.

When the dust settled, Anthropic appeared to have emerged stronger, with a surge of new customers drawn to the company’s willingness to hold its ground. That Anthropic had voluntarily put Claude inside the Pentagon in the first place seemed beside the point.