When Defense Secretary Pete Hegseth announced the Pentagon was adding Grok to its list of generative AI providers, he railed against AI models that “won’t allow you to fight wars.”

Hegseth wasn’t just riffing, according to a person familiar with his thinking: He was specifically referring to Anthropic, the AI startup that spun out of OpenAI with the aim of building safer AI technology.

In recent weeks, tension has built up between Anthropic and the military, according to two people with knowledge of the matter, as the Trump administration attempts to more quickly adopt new warfighting technology, including the most advanced AI models.

From Anthropic’s perspective, the company has a responsibility to ensure its models are not pushed beyond their capabilities, particularly in military actions that could have lethal consequences.

From the military’s point of view, though, Anthropic shouldn’t attempt to make the final call on how, exactly, its models are used in warfare. Those decisions, it says, should be left to the military, as with any other technology or weapon the Pentagon purchases.

A Defense Department official, speaking on background, said the department would deploy only AI models that are “free from ideological constraints that limit lawful military applications. Our warfighters need to have access to the models that provide decision superiority in the battlefield.” Anthropic declined to comment.