{"id":382967,"date":"2025-08-29T17:13:09","date_gmt":"2025-08-29T17:13:09","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/382967\/"},"modified":"2025-08-29T17:13:09","modified_gmt":"2025-08-29T17:13:09","slug":"60-u-k-lawmakers-accuse-google-of-breaking-ai-safety-pledge","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/382967\/","title":{"rendered":"60 U.K. Lawmakers Accuse Google of Breaking AI Safety Pledge"},"content":{"rendered":"<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color min-h-[6.375rem] lg:min-h-[4.75rem] dropcap text-left\" data-testid=\"paragraph-content\">A cross-party group of 60 U.K. parliamentarians has accused Google DeepMind of violating international pledges to safely develop artificial intelligence, in <a href=\"https:\/\/pauseai.info\/dear-sir-demis-2025\" target=\"_blank\" rel=\"noopener\">an open letter<\/a> shared exclusively with TIME ahead of publication. 
The letter, released on August 29 by activist group PauseAI U.K., says that Google\u2019s March release of Gemini 2.5 Pro without accompanying details on safety testing \u201csets a dangerous precedent.\u201d The letter, whose signatories include digital rights campaigner Baroness Beeban Kidron and former Defence Secretary Des Browne, calls on Google to clarify its commitment.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">For years, <a href=\"https:\/\/time.com\/7290554\/yoshua-bengio-launches-lawzero-for-safer-ai\/\" target=\"_blank\" rel=\"noopener\">experts<\/a> in AI, including Google DeepMind\u2019s CEO <a href=\"https:\/\/time.com\/7277608\/demis-hassabis-interview-time100-2025\/\" target=\"_blank\" rel=\"noopener\">Demis Hassabis<\/a>, have warned that AI could pose catastrophic risks to public safety and <a href=\"https:\/\/time.com\/7312305\/agi-race-us-china-trump\/\" target=\"_blank\" rel=\"noopener\">national security<\/a>\u2014for example, by helping would-be bio-terrorists design a new pathogen or hackers take down critical infrastructure. In an effort to manage those risks, at an international AI summit co-hosted by the U.K. and South Korean governments in May 2024, Google, OpenAI, and others signed the Frontier AI Safety Commitments. Signatories pledged to \u201cpublicly report\u201d system capabilities and risk assessments and explain if and how external actors, such as government <a href=\"https:\/\/time.com\/collections\/time100-ai-2025\/7305881\/oliver-ilott\/\" target=\"_blank\" rel=\"noopener\">AI safety institutes<\/a>, were involved in testing. 
Without binding regulation, the public and lawmakers have relied largely on information stemming from voluntary pledges to understand AI\u2019s emerging risks.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Yet, when Google released Gemini 2.5 Pro on March 25\u2014which it said beat rival AI systems on industry <a href=\"https:\/\/time.com\/7203729\/ai-evaluations-safety\/\" target=\"_blank\" rel=\"noopener\">benchmarks<\/a> by \u201cmeaningful margins\u201d\u2014the company neglected to publish detailed information on safety tests for over a month. The letter says this not only reflects a \u201cfailure to honour\u201d its international safety commitments, but also threatens the fragile norms promoting safer AI development. \u201cIf leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards,\u201d Browne wrote in a statement accompanying the letter.\u00a0<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">&#8220;We\u2019re fulfilling our public commitments, including the Seoul Frontier AI Safety Commitments,\u201d a Google DeepMind spokesperson told TIME via an emailed statement. 
\u201cAs part of our development process, our models undergo rigorous safety checks, including by UK AISI and other third-party testers &#8211; and Gemini 2.5 is no exception.&#8221;<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">The open letter calls on Google to establish a specific timeline for when safety evaluation reports will be shared for future releases. Google first published the Gemini 2.5 Pro model card\u2014a document where it typically shares information on safety tests\u201422 days after the model\u2019s release. However, the eight-page document included only a brief section on safety tests. It was not until April 28\u2014over a month after the model was made publicly available\u2014that the model card was updated with a 17-page document containing details on specific evaluations, concluding that Gemini 2.5 Pro showed \u201csignificant\u201d though not yet dangerous improvements in domains including hacking. The update also noted the use of \u201cthird-party external testers,\u201d but did not disclose which ones or whether the U.K. 
AI Security Institute had been among them\u2014which the letter also cites as a violation of Google\u2019s pledge.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Having previously failed to address a <a href=\"https:\/\/fortune.com\/2025\/04\/09\/google-gemini-2-5-pro-missing-model-card-in-apparent-violation-of-ai-safety-promises-to-us-government-international-bodies\/\" target=\"_blank\" rel=\"noopener\">media request<\/a> for comment on whether it had shared Gemini 2.5 Pro with governments for safety testing, a Google DeepMind spokesperson told TIME that the company did share Gemini 2.5 Pro with the U.K. AI Security Institute, as well as a \u201cdiverse group of external experts,\u201d including Apollo Research, Dreadnode, and Vaultis. However, Google says it shared the model with the U.K. AI Security Institute only after Gemini 2.5 Pro was released on March 25. <\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">On April 3, shortly following Gemini 2.5 Pro\u2019s release, Google&#8217;s senior director and head of product for Gemini, Tulsee Doshi, <a href=\"https:\/\/techcrunch.com\/2025\/04\/03\/google-is-shipping-gemini-models-faster-than-its-ai-safety-reports\/\" target=\"_blank\" rel=\"noopener\">told<\/a> TechCrunch the reason it lacked a safety report was that the model was an \u201cexperimental\u201d release, adding that it had already run safety tests. 
She said that the aim of these experimental rollouts is to release the model in a limited way, collect user feedback, and improve it prior to production launch, at which point the company would publish a model card detailing safety tests it had already conducted. Yet, days earlier, Google had rolled the model out to all of its <a href=\"https:\/\/techcrunch.com\/2025\/05\/20\/googles-gemini-ai-app-has-400m-monthly-active-users\/\" target=\"_blank\" rel=\"noopener\">hundreds of millions<\/a> of free users, saying \u201cwe want to get our most intelligent model into more people\u2019s hands asap,\u201d in a <a href=\"https:\/\/x.com\/GeminiApp\/status\/1906131622736679332\">post<\/a> on X.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">The open letter says that \u201clabelling a publicly accessible model as \u2018experimental\u2019 does not absolve Google of its safety obligations,\u201d and calls on Google to establish a more common-sense definition of deployment. &#8220;Companies have a great public responsibility to test new technology and not involve the public in experimentation,\u201d says Steven Croft, the Bishop of Oxford, who signed the letter. 
\u201cJust imagine a car manufacturer releasing a vehicle saying, \u2018we want the public to experiment and [give] feedback when they crash or when they bump into pedestrians and when the brakes don&#8217;t work,\u2019\u201d he adds.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Croft questions the constraints on providing safety reports at the time of release, boiling the issue down to a matter of priorities: \u201cHow much of [Google\u2019s] huge investment in AI is being channeled into public safety and reassurance and how much is going into huge computing power?\u201d<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">To be sure, Google isn\u2019t the only industry titan to seemingly flout safety commitments. <a href=\"https:\/\/time.com\/collections\/time100-ai-2025\/7305842\/elon-musk-ai\/\" target=\"_blank\" rel=\"noopener\">Elon Musk\u2019s<\/a> xAI has yet to release any safety report for Grok 4, an AI model released in July. Unlike GPT-5 and other recent launches, OpenAI\u2019s February release of its Deep Research tool lacked a same-day safety report. The company says it had done \u201crigorous safety testing,\u201d but didn\u2019t publish the report until 22 days later. <\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Joseph Miller, director of PauseAI U.K., 
says the organization is concerned about other instances of apparent violations, and that the focus on Google was due to its proximity. DeepMind, the AI lab Google acquired in 2014, remains headquartered in London. The U.K.\u2019s current Secretary of State for Science, Innovation and Technology, <a href=\"https:\/\/time.com\/collections\/time100-ai-2025\/7305877\/peter-kyle\/\" target=\"_blank\" rel=\"noopener\">Peter Kyle<\/a>, said on the campaign trail in 2024 that he would \u201c<a href=\"https:\/\/www.thenationalnews.com\/news\/uk\/2024\/06\/12\/labour-promises-to-get-tougher-on-ai-safety\/\" target=\"_blank\" rel=\"noopener\">require<\/a>\u201d leading AI companies to share safety tests, but in February it was <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/feb\/24\/uk-delays-plans-to-regulate-ai-as-ministers-seek-to-align-with-trump-administration\" target=\"_blank\" rel=\"noopener\">reported<\/a> that the U.K.\u2019s plans to regulate AI were delayed as it sought to better align with the Trump administration\u2019s hands-off approach. Miller says it\u2019s time to swap company pledges for \u201creal regulation,&#8221; adding that \u201cvoluntary commitments are just not working.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"A cross-party group of 60 U.K. 
parliamentarians has accused Google DeepMind of violating international pledges to safely develop&hellip;\n","protected":false},"author":2,"featured_media":382968,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,1942,53,16,15],"class_list":{"0":"post-382967","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-uk","12":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/115113178436675969","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/382967","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=382967"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/382967\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/382968"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=382967"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=382967"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=382967"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}