More than 600 staff sign open letter expressing concern

Google has signed a contract with the US Department of Defence to supply its AI for classified military use, despite protests from staff. The deal has reignited debate over the ethical implications of tech industry involvement in defence operations.
Google has gone where Anthropic refused to tread, allowing its AI models to be used across classified networks for “any lawful purpose”.
The deal includes access to Google’s cloud systems and APIs, giving the Pentagon wide scope to apply Google’s AI in logistics, cybersecurity, and critical infrastructure defence.
Anthropic refused to sign a very similar contract, with the question of who gets to define the ‘lawful’ in ‘any lawful purpose’ proving to be the key stumbling block.
Anthropic insisted on guardrails to prevent misuse, but the Pentagon, and so-called Secretary of War Pete Hegseth, had a very public meltdown and subsequently labelled Anthropic a “supply‑chain risk”. That designation triggered a lawsuit from Anthropic, with a judge granting the company temporary relief while the case continues.
According to The Information, which first broke this story, the agreement does not give Google the right to control or veto lawful government operational decision-making.
The Pentagon has not yet commented.
Ethics and the national interest
Anthropic stands alone in pushing back against Pentagon terms. xAI signed a deal last year and OpenAI stepped up to replace Anthropic earlier this year.
Not everyone at Google agrees with the decision.
On Monday, more than 600 Google workers signed an open letter to CEO Sundar Pichai expressing concerns about the negotiations.
“We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses,” they wrote. “Therefore, we ask you to refuse to make our AI systems available for classified workloads.”
Last year, Google amended its terms of use to remove a ban on its AI being used in weaponry and surveillance tools. Despite employee concerns, Google has defended its position, saying it is proud to support national security while maintaining its commitment to responsible AI. It says it does not intend for its AI to be used in domestic mass surveillance, or in autonomous weapons without human oversight, but legal experts question whether those intentions are enforceable.
Google’s deal is part of a much wider pattern. The Pentagon is building partnerships with major AI labs, and any company questioning its terms faces exclusion. Anthropic managed to turn its exclusion into revenue, as Claude shot to the top of the app charts courtesy of disaffected former ChatGPT subscribers driven away by OpenAI’s agreement to Pentagon terms.
Anthropic’s revenue surpassed that of OpenAI earlier this month.
Google, Palantir and Maven
It isn’t the first time Google has experienced tension between military application of its technology and the views and ethics of its employees. Maven is a targeting system used by the Pentagon. It was built in part by Palantir and identifies targets using a mix of satellite imagery, intelligence and other data sources.
Palantir took over the Maven contract in 2018, after Google abandoned it when thousands of employees signed letters of protest, some staged walkouts, and others resigned.
It was the Palantir-built Maven targeting system that incorrectly identified a primary school in Minab, southern Iran, as a military facility, leading to the killing of between 175 and 180 civilians, most of them girls younger than 12.