By Katrina Manson, Bloomberg
The Pentagon has struck agreements with four more technology companies for expanded use of advanced artificial intelligence tools on classified military networks, according to a Defense Department statement and two defense officials briefed on the matter.
Nvidia Corp., Microsoft Corp., Reflection AI Inc. and Amazon.com Inc. have all newly struck agreements with the US Defense Department “for lawful operational use,” according to the statement. The officials asked not to be identified discussing internal deliberations.
The deals provide the Pentagon with wide leeway to potentially use powerful advanced AI technologies for secret combat operations, including to assist with targeting. The new terms of usage, including “lawful operational use,” substantially water down some of the limits sought by Anthropic PBC that torpedoed its agreement with the Pentagon earlier this year.
Many of the technology companies already provide AI tools to the US military, but defense officials have been seeking to expand the terms of use since the fall of 2025. Other technology companies that have recently agreed to similar deals include SpaceX, OpenAI and Google.
“These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force,” said the statement, which refers to all seven companies and which also marks the first official Pentagon confirmation of a new deal with Google reported earlier this week.
The effort to deliver new deals with technology companies for maximalist military use of advanced AI comes as the Pentagon is racing to develop viable alternatives to Anthropic’s Claude tool. An acrimonious fracture between Anthropic and senior defense officials exposed a recurring fault line between the Pentagon and Silicon Valley over the looming risks of AI at war.
The Pentagon negotiated its deal with Amazon Web Services late into Thursday, according to two Pentagon officials briefed on the talks.
AWS has been committed to supporting the US military for more than a decade, said Tim Barrett, an AWS spokesperson, when asked to comment on the new deal. “We look forward to continuing to support the Department of War’s modernization efforts, building AI solutions that help them accomplish their critical missions.”
Nvidia didn’t immediately provide comment on the new deal, and a Microsoft spokesperson declined to comment. A representative from Reflection wasn’t immediately available for comment.
During recent renegotiations, the Pentagon refused to heed Anthropic’s stated red lines limiting how the US military can use AI in classified operations, and sought to eject the company from all defense supply lines. The company didn’t want its technology used for mass domestic surveillance of US citizens or for fully autonomous weapons systems.
Since the fallout with Anthropic, the Pentagon has accelerated its efforts to sign other AI companies up to expanded usage terms for their models and infrastructure on secret and top-secret networks. In addition, defense officials are seeking to ensure the US military avoids depending on any single company or set of limitations, according to one of the Pentagon officials briefed on the talks.
Nvidia’s new agreement, for instance, gives far greater license to the Pentagon than the terms of use in previous AI deals. The company has agreed not to impose any usage policies or model licenses that would restrict the Defense Department’s use of the company’s models beyond what is required by US law and constitutional authority, according to a person familiar with the agreement, who asked not to be named to discuss sensitive matters.
Nvidia agreed to provide “full and effective use of their capabilities in support of Department missions” including for autonomous weapons systems development, according to the person.
The Department’s use of any Nvidia models, weights, or other capabilities will be consistent with the civil liberties and constitutional rights of Americans under law, the person said, a commitment that stops short of any clearly stipulated monitoring and evaluation mechanisms.
The agency gave itself six months to replace Claude, which is being used for US military operations against Iran. The disagreement is now mired in a court battle.
On Thursday, Secretary of Defense Pete Hegseth described Anthropic’s leader as an “ideological lunatic” and defended his department’s use of AI.
“We follow the law and humans make decisions,” Hegseth told Congress. “AI is not making lethal decisions.”
The Pentagon’s effort to equip the US military with cutting-edge AI at the classified level will help “human-machine teams” that can handle immense volumes of data, said Cameron Stanley, the Pentagon’s chief digital and AI officer, in a statement referring to the new deals.
Although OpenAI signed a new agreement for expanded use of its models on classified networks with the Pentagon earlier this year, its tools are still not deployed on classified defense networks, according to an OpenAI spokesperson, who added that implementation is nevertheless underway.
Several campaign groups have highlighted the risks of relying on unpredictable AI-assisted systems in support of life-and-death decisions. AI systems can be prone to error and can lead to automation bias, or a tendency to trust machine outputs over human reasoning, the critics have argued.
Stanley didn’t specify the precise ways in which the Pentagon intends to use AI models in classified operations. He described them as digital tools that would make it easier for the Pentagon to crunch through data, increase understanding in complex environments and make “better decisions, faster.”
Claude is among the AI tools used on Maven Smart System, a digital platform used in support of targeting and battlefield operations during Iran operations. US Central Command has said it is using a variety of AI tools to speed processes.
©2026 Bloomberg L.P.