Defense Secretary Pete Hegseth is moving closer to placing artificial-intelligence firm Anthropic on a U.S. “supply chain risk” list, a step that could sharply restrict its defence business and limit how contractors use its technology.
The Pentagon confirmed on Tuesday that it is reviewing its relationship with the company, which develops the Claude large language model. Officials also signalled that future contracts may depend on broader cooperation over military use.
Chief Pentagon spokesman Sean Parnell said the department expects partners to support lawful missions that protect U.S. troops and the public. He framed the review as a matter of national security and operational readiness.
A senior defence official said the Pentagon increasingly views Anthropic as a potential risk within critical technology supply chains. Consequently, the department may require vendors to certify that they do not rely on Anthropic’s models.
Such a rule would extend beyond a single contract. It could also affect companies that embed Claude into software or analytics tools used by the military.
The contract under review carries a ceiling value of up to US$200 million. Anthropic, however, says it generates revenue at an annual rate of about US$14 billion, which would soften the direct financial blow.
Anthropic said it continues good-faith discussions with defence officials to clarify acceptable use cases. The company also noted that it was the first AI developer to deploy models inside classified government networks.
The dispute centres on how the government can use Claude. Pentagon officials want AI tools available for all lawful military purposes.
The Defence Department is doubling down on AI
Anthropic has drawn limits around fully autonomous weapons and broad domestic surveillance. The government, meanwhile, has expressed concern about having to negotiate individual use cases and navigate perceived grey areas.
Reports indicate the Pentagon used Claude through a Palantir Technologies (NASDAQ: PLTR) platform in an operation that led to the apprehension of Venezuelan President Nicolás Maduro. Anthropic has said it did not object to that mission.
Other AI developers also work with the Defence Department. Models from Alphabet Inc’s (NASDAQ: GOOGL) Google and Elon Musk’s xAI currently operate in unclassified settings.
Officials say those systems could soon expand into classified environments, while Anthropic’s stance on certain applications has complicated its path forward with the Pentagon.
Beyond the dispute with Anthropic, the U.S. government is rapidly expanding how it plans to use artificial intelligence across defence and civilian agencies.
The Defence Department continues to build on Project Maven, which uses machine learning to help analysts review drone and surveillance imagery. The military is also advancing Joint All-Domain Command and Control, a system designed to connect sensors, weapons and commanders through AI-enabled data integration.
The Pentagon has also launched dedicated platforms such as GenAI.mil to speed internal adoption, allowing service branches to test and deploy AI tools in secure environments.
At the federal level, the White House has promoted a national AI strategy aimed at accelerating infrastructure development and strengthening global leadership. Executive actions have also encouraged agencies to adopt AI while limiting conflicting state-level regulations.
Meanwhile, Washington is increasing recruitment of technical talent to modernize government systems. This broader push to embed AI across national security and civilian functions provides critical context for the Anthropic review and the Pentagon’s evolving expectations of its technology partners.