Bloomberg spoke with three employees involved in organizing the letter, all of whom requested anonymity for fear of retaliation. The organizers shared some of those names for verification purposes, but Bloomberg wasn’t able to vet the entire list.
A spokesperson for Google didn’t immediately respond to a request for comment.
The protest letter closely follows a legal dispute between the Pentagon and Anthropic PBC over the use of AI for military applications. The Pentagon is seeking to eject Anthropic and its Claude AI tool from US defense supply chains and is casting around for new AI partners among the tech giants.
The workforce protest marks a new effort by Silicon Valley workers to curtail the use of AI, and the risks such tools pose, in classified national security settings. The Pentagon plans to pour billions of dollars into expanding military use of AI and developing autonomous weapons.
Google employees were among the first to sound the alarm about the risks of AI warfare in 2018 and force the company to limit its defense work. But the company’s ties to the US defense industry have been re-established in recent years, and it has watered down its own AI red lines.
“We want to see AI benefit humanity, not to see it being used in inhumane or extremely harmful ways. This includes lethal autonomous weapons and mass surveillance but extends beyond,” the letter states.
“Currently, the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads. Otherwise, such uses may occur without our knowledge or the power to stop them.”
Sofia Liguori, an AI research engineer at Google DeepMind in the UK, said she signed the letter because she believes Google has failed to discuss with workers any concrete red lines about usage of its AI on classified or other networks. In addition, she believes it would be impossible for the company to monitor and limit how its AI tools are actually used on “air-gapped” classified systems — isolated from the public internet or other unsecured networks.
Liguori, a trained theoretical physicist from near Milan, said the main response to worker concerns about the US military’s use of Google AI has been to encourage the workforce to trust company leadership to sign good contracts.
“But it’s all left very broad,” she said. “Agentic AI is particularly concerning because of the level of independence it can get to. It’s like giving away a very powerful tool at the same time as giving up on any kind of control on its usage.”
Organizers said signatories of the letter include more than 20 directors, senior directors, and vice presidents, in addition to a number of senior employees at Google DeepMind, the company’s AI research laboratory that seeks to keep Google at the cutting edge of the AI race while unlocking applications that could benefit humanity.
A protest in 2018 by Google workers over the company’s work with the US military marked a previous high point of tension between the Pentagon and Silicon Valley. The employees said they were appalled to learn that Google had signed on to work on what they termed “the business of war” under Project Maven, a Pentagon effort to use AI to detect and analyze objects on drone video feeds.
In the face of protests and resignations, the company ultimately introduced new AI principles and decided against renewing its contract for Project Maven. But last year, Google removed a passage from its artificial intelligence principles that pledged to avoid using the technology in potentially harmful applications, such as weapons.
The organizers of Monday’s letter said in a statement that “Maven is not over.”
“Workers are going to continue organizing against the weaponization of Google’s AI technology until the company draws clear, enforceable lines,” they said.
In recent years, Google has strengthened its ties with the Pentagon.
In March, for instance, the company made its Gemini AI agents available to the Pentagon’s three million-strong workforce at the unclassified level, after rolling out its Gemini chatbot there in December.
Emil Michael, the under secretary of defense for research and engineering, told Bloomberg in March that the Pentagon would start with unclassified usage of Google’s Gemini agents and “then we’ll get to classified and top secret.” He added that talks with Google over using the company’s AI agents on the classified cloud were already underway.
In April, the Information, a publication focused on the technology industry, reported that negotiations are underway between Google and the Pentagon for “all lawful uses” of the company’s AI tools. That description falls short of the red lines cited by leadership at rival Anthropic, who warned that all lawful uses could feasibly include deploying AI in fully autonomous weapons systems and for domestic mass surveillance.
The Pentagon strongly contested such characterizations and argued commercial companies shouldn’t be able to dictate usage policies during wartime or preparations for war.