{"id":20519,"date":"2026-04-28T19:24:13","date_gmt":"2026-04-28T19:24:13","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/20519\/"},"modified":"2026-04-28T19:24:13","modified_gmt":"2026-04-28T19:24:13","slug":"google-joins-openai-and-xai-in-handing-ai-to-the-pentagon","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/20519\/","title":{"rendered":"Google Joins OpenAI and xAI in Handing AI to the Pentagon"},"content":{"rendered":"<p>Over 560 Google workers signed a letter pleading with their CEO to say no. <\/p>\n<p>He said yes anyway.<\/p>\n<p>The Deal Nobody Was Supposed to See<\/p>\n<p>Google has signed a classified agreement with the U.S. Department of Defense that lets the Pentagon use its AI models for \u201cany lawful government purpose.\u201d <\/p>\n<p><a href=\"https:\/\/www.theinformation.com\/articles\/google-signs-classified-ai-deal-pentagon-amid-employee-opposition\" rel=\"nofollow noopener\" target=\"_blank\">The Information<\/a> broke the story on Tuesday, April 28, 2026, citing a single source familiar with the deal.<\/p>\n<p>The timing couldn\u2019t be more awkward. <\/p>\n<p>Just one day earlier, <a href=\"https:\/\/www.cbsnews.com\/news\/google-ai-pentagon-classified-use-employee-letter\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">more than 560 Google employees sent an open letter to CEO Sundar Pichai<\/a> urging him to block the Pentagon from accessing Google\u2019s AI for classified work. 
They warned the technology could be used in \u201cinhumane or extremely harmful ways.\u201d<\/p>\n<p>Pichai clearly had other plans.<\/p>\n<p>What the Contract Actually Says<\/p>\n<p>According to <a href=\"https:\/\/www.theinformation.com\/articles\/google-signs-classified-ai-deal-pentagon-amid-employee-opposition\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">The Information\u2019s report<\/a>, the deal includes language stating that Google\u2019s AI shouldn\u2019t be used for domestic mass surveillance or autonomous weapons without \u201cappropriate human oversight and control.\u201d <\/p>\n<p>That sounds reassuring on paper.<\/p>\n<p>But here\u2019s the catch. The contract also states that Google has no \u201cright to control or veto lawful government operational decision-making.\u201d <\/p>\n<p>In other words, those restrictions aren\u2019t enforceable. Google can\u2019t actually stop the Pentagon from doing anything it considers legal.<\/p>\n<p>The deal also requires Google to help adjust its AI safety settings and filters whenever the government asks. A Google spokesperson <a href=\"https:\/\/finance.yahoo.com\/sectors\/technology\/articles\/google-signs-classified-ai-deal-050742064.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">told Reuters<\/a> the agreement is an amendment to an existing government contract. The company called it a \u201cresponsible approach to supporting national security.\u201d<\/p>\n<p>Google Joins a Growing Club<\/p>\n<p>This deal puts Google alongside <a href=\"https:\/\/autogpt.net\/inside-the-anthropic-openai-and-pentagon-ai-public-fight\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">OpenAI<\/a> and <a href=\"https:\/\/www.theverge.com\/news\/706855\/grok-mechahitler-xai-defense-department-contract\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Elon Musk\u2019s xAI<\/a>, both of which have their own classified AI agreements with the Pentagon. 
<\/p>\n<p>According to <a href=\"https:\/\/finance.yahoo.com\/sectors\/technology\/articles\/google-signs-classified-ai-deal-050742064.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Reuters<\/a>, the Pentagon signed deals worth up to $200 million each with major AI labs in 2025.<\/p>\n<p>Classified networks handle some of the military\u2019s most sensitive work.<\/p>\n<p>That includes mission planning and weapons targeting. The Pentagon has said it doesn\u2019t intend to use AI for mass surveillance of Americans or for weapons without human involvement. <\/p>\n<p>But it wants full flexibility to use AI for any lawful purpose, and it\u2019s not willing to let tech companies draw those lines.<\/p>\n<p>What Happened to Anthropic<\/p>\n<p>One major AI company tried to push back. <\/p>\n<p>Anthropic, the maker of Claude, <a href=\"https:\/\/www.cnbc.com\/2026\/03\/09\/anthropic-was-the-pentagons-choice-for-ai-now-its-banned-and-experts-are-worried.html\" rel=\"nofollow noopener\" target=\"_blank\">insisted on keeping guardrails<\/a> that prevented its models from being used for fully autonomous weapons or domestic mass surveillance.<\/p>\n<p>The Pentagon didn\u2019t like that. <\/p>\n<p>In late February 2026, Defense Secretary Pete Hegseth <a href=\"https:\/\/autogpt.net\/pentagon-calls-anthropic-a-security-risk\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">declared Anthropic a \u201csupply chain risk.\u201d<\/a> President Trump followed with a directive ordering federal agencies to stop using Anthropic\u2019s technology. It was the first time an American company received that designation \u2014 a label historically reserved for foreign adversaries.<\/p>\n<p>Anthropic sued. A federal judge in San Francisco granted a preliminary injunction blocking the broader government ban. But a D.C. 
appeals court let the Pentagon\u2019s supply chain designation stand, meaning the company remains locked out of new defense contracts.<\/p>\n<p>The Message Is Clear<\/p>\n<p>The Anthropic situation sends a loud signal to every AI company: play ball with the Pentagon or face consequences. <\/p>\n<p>Google, OpenAI, and xAI all signed deals that give the military broad access. Anthropic drew a line, and got blacklisted for it.<\/p>\n<p>As one Council on Foreign Relations expert told CNBC, the dispute \u201cfeels like it is about politics and personalities\u201d masquerading as a policy disagreement.<\/p>\n<p>Google Employees Aren\u2019t Staying Quiet<\/p>\n<p>The internal letter from Google workers didn\u2019t mince words. <\/p>\n<p>Employees wrote that their \u201cproximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.\u201d <\/p>\n<p>They specifically cited lethal autonomous weapons and mass surveillance as their biggest fears.<\/p>\n<p>The letter also warned that \u201cmaking the wrong call right now would cause irreparable damage to Google\u2019s reputation, business and role in the world.\u201d<\/p>\n<p>This isn\u2019t the first time Google employees have pushed back on military work. In 2018, thousands protested Project Maven, a Pentagon contract for AI-powered drone surveillance. Google eventually pulled out of that deal. This time, the company chose differently.<\/p>\n<p>The Bigger Picture<\/p>\n<p>There\u2019s a real tension at the heart of this story. <\/p>\n<p>The U.S. government wants unrestricted access to the most powerful AI systems on the planet. Tech companies want those lucrative contracts. <\/p>\n<p>And the guardrails that are supposed to prevent misuse? 
They\u2019re written into contracts that the government doesn\u2019t have to follow.<\/p>\n<p>Meanwhile, <a href=\"https:\/\/www.axios.com\/2026\/04\/16\/white-house-anthropic-ai-mythos-government-national-security\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">the White House is quietly negotiating with Anthropic<\/a> about accessing its newest model, Mythos, for civilian agencies, even as the Pentagon keeps the company blacklisted. <\/p>\n<p>The contradictions are hard to miss.<\/p>\n<p>Whether you see this as a necessary step for national security or a troubling erosion of AI safety principles probably depends on how much trust you place in the phrase \u201cany lawful government purpose.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Over 560 Google workers signed a letter pleading with their CEO to say no. He said yes anyway.&hellip;\n","protected":false},"author":2,"featured_media":20520,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[636,2150,1458,2899],"class_list":{"0":"post-20519","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-xai","8":"tag-ai-development","9":"tag-ai-revolution","10":"tag-ai-technology","11":"tag-xai"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/20519","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=20519"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/20519\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/20520"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=20519"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=20519"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=20519"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}