{"id":21430,"date":"2026-04-29T11:36:23","date_gmt":"2026-04-29T11:36:23","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/21430\/"},"modified":"2026-04-29T11:36:23","modified_gmt":"2026-04-29T11:36:23","slug":"google-signs-controversial-ai-deal-with-pentagon-despite-employee-objections","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/21430\/","title":{"rendered":"Google signs controversial AI deal with Pentagon despite employee objections"},"content":{"rendered":"<p>        More than 600 staff sign open letter expressing concern<\/p>\n<p>            <img decoding=\"async\" loading=\"lazy\" alt=\"\" src=\".\/media_1b41653926f89819faf3af32d1dee3c9239439ffd.jpg?width=750&amp;format=jpg&amp;optimize=medium\" width=\"650\" height=\"455\"\/><\/p>\n<p>Google has signed a contract with the US Department of Defense to supply its AI for classified military use, despite protests from staff. The deal has reignited debate over the ethical implications of tech industry involvement in defence operations.<\/p>\n<p>Google has gone where Anthropic refused to tread and has signed a contract with the US Department of Defense, allowing its AI models to be used across classified networks for \u201cany lawful purpose\u201d.<\/p>\n<p>The deal includes access to Google\u2019s cloud systems and APIs, giving the Pentagon wide scope to apply Google\u2019s AI in logistics, cybersecurity, and critical infrastructure defence.<\/p>\n<p>Anthropic refused to sign a very similar contract, with the question of who gets to define the \u2018lawful\u2019 in \u2018any lawful purpose\u2019 proving to be the key stumbling block.<\/p>\n<p><a href=\"https:\/\/www.computing.co.uk\/news\/2026\/ai\/anthropic-stands-up-to-pentagon\" rel=\"nofollow noopener\" target=\"_blank\">Anthropic insisted on guardrails<\/a> to prevent misuse, but the Pentagon and so-called Secretary of War Pete Hegseth had a very public meltdown and subsequently labelled Anthropic a \u201csupply\u2011chain 
risk\u201d. That designation triggered a lawsuit from Anthropic, with a judge granting Anthropic temporary relief while the case continues.<\/p>\n<p>According to The Information, which <a href=\"https:\/\/www.theinformation.com\/articles\/google-signs-classified-ai-deal-pentagon-amid-employee-opposition\" rel=\"nofollow noopener\" target=\"_blank\">first broke this story<\/a>, the agreement does not give Google the right to control or veto lawful government operational decision-making.<\/p>\n<p>The Pentagon has not yet commented.<\/p>\n<p>        Ethics and the national interest<\/p>\n<p>Anthropic stands alone in pushing back against Pentagon terms. xAI signed a deal last year, and OpenAI stepped up to replace Anthropic earlier this year.<\/p>\n<p>Not everyone at Google agrees with the decision.<\/p>\n<p>On Monday, more than 600 Google workers signed an open letter to CEO Sundar Pichai, expressing concerns about the negotiations.<\/p>\n<p>\u201cWe feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses,\u201d they wrote. \u201cTherefore, we ask you to refuse to make our AI systems available for classified workloads.\u201d<\/p>\n<p>Last year, Google amended its terms of use to <a href=\"https:\/\/www.computing.co.uk\/news\/2025\/ai\/google-removes-bar-on-ai-use-in-weapons\" rel=\"nofollow noopener\" target=\"_blank\">remove a ban on its AI being used in weaponry and surveillance tools<\/a>. Despite employee concerns, Google has defended its position, saying it is proud to support national security while maintaining its commitment to responsible AI. It says it does not intend for its AI to be used in domestic mass surveillance or autonomous weapons without human oversight, but whether this is legally enforceable is a moot point, according to legal experts.<\/p>\n<p>Google\u2019s deal is part of a much wider pattern. 
The Pentagon is building partnerships with major AI labs, and any company questioning its terms faces exclusion. Anthropic managed to <a href=\"https:\/\/www.computing.co.uk\/news\/2026\/ai\/claude-tops-app-store-as-pentagon-deal-reshapes-ai-rivalry\" rel=\"nofollow noopener\" target=\"_blank\">turn its exclusion into revenue<\/a>, as Claude shot to the top of the app charts courtesy of disaffected former ChatGPT subscribers driven away by OpenAI agreeing to Pentagon terms.<\/p>\n<p>Anthropic\u2019s revenue surpassed that of OpenAI earlier this month.<\/p>\n<p>        Google, Palantir and Maven<\/p>\n<p>It isn\u2019t the first time Google has experienced tension between the military application of its technology and the ethical views of its employees. Maven, a Pentagon targeting system built in part by Palantir, identifies targets using a mix of satellite imagery, intelligence and other data sources.<\/p>\n<p>Palantir took the Maven contract over in <a href=\"https:\/\/www.bbc.co.uk\/news\/business-44341490\" rel=\"nofollow noopener\" target=\"_blank\">2018, when Google abandoned it<\/a> after thousands of employees signed letters of protest, some staged a strike, and others resigned.<\/p>\n<p>It was the <a href=\"https:\/\/www.ndtv.com\/world-news\/us-iran-war-donald-trump-software-that-rained-death-on-iran-how-palantirs-maven-helped-us-military-11340635\" rel=\"nofollow noopener\" target=\"_blank\">Palantir-built Maven targeting system<\/a> that incorrectly identified a primary school in Minab, southern Iran, as a military facility and led to the killing of between 175 and 180 civilians, most of them girls younger than 12 years old.<\/p>\n","protected":false},"excerpt":{"rendered":"More than 600 staff sign open letter expressing concern Google has signed a contract with the US 
Department&hellip;\n","protected":false},"author":2,"featured_media":21431,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[24,1428,25,132,1429,208,14993],"class_list":{"0":"post-21430","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-google","8":"tag-ai","9":"tag-ai-ethics","10":"tag-artificial-intelligence","11":"tag-google","12":"tag-google-ai","13":"tag-military","14":"tag-us-government"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/21430","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=21430"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/21430\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/21431"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=21430"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=21430"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=21430"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}