{"id":28572,"date":"2026-05-05T21:13:32","date_gmt":"2026-05-05T21:13:32","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/28572\/"},"modified":"2026-05-05T21:13:32","modified_gmt":"2026-05-05T21:13:32","slug":"microsoft-google-and-xai-to-give-us-government-early-access-to-ai-models-for-security-checks-3","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/28572\/","title":{"rendered":"Microsoft, Google and xAI to give US government early access to AI models for security checks"},"content":{"rendered":"<p class=\"mb-4 text-lg md:leading-8 break-words\">By Courtney Rozen and Aditya Soni<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">May 5 (Reuters) &#8211; <a href=\"https:\/\/www.yahoo.com\/organizations\/microsoft\/\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft<\/a>, Google and Elon Musk\u2019s xAI agreed to give the U.S. government early access to new artificial intelligence models for national security testing, as U.S. officials grow alarmed by the hacking capabilities of Anthropic\u2019s newly unveiled Mythos.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The Center for AI Standards and Innovation at the Department of Commerce said on Tuesday that the agreement would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks. The agreement fulfills a pledge the Trump administration made in July 2025 to partner with technology companies to vet their AI models for \u201cnational security risks.\u201d<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">Microsoft will work with U.S. 
government scientists to test <a href=\"https:\/\/tech.yahoo.com\/ai\/\" rel=\"nofollow noopener\" target=\"_blank\">AI systems<\/a> \u201cin ways that probe unexpected behaviors,\u201d the company said in a statement. Together they will develop shared datasets and workflows for testing the company\u2019s models, it said. Microsoft signed a similar agreement with the UK\u2019s AI Security Institute, according to the statement.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">Concern is growing in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, U.S. officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The development of advanced AI systems including Anthropic&#8217;s Mythos has in recent weeks created a stir globally, including among U.S. officials and corporate America, over their ability to supercharge hackers.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">&#8220;Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,&#8221; CAISI Director Chris Fall said in a statement.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The move builds on previous agreements with <a href=\"https:\/\/www.yahoo.com\/organizations\/openai\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> and Anthropic, established in 2024 under the Biden administration when CAISI was known as the U.S. Artificial Intelligence Safety Institute. 
Under former President Joe Biden, the institute focused on developing AI tests, definitions and voluntary safety standards. It was led by Biden tech adviser Elizabeth Kelly, who has since joined Anthropic, according to her LinkedIn profile.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">CAISI, which serves as the government&#8217;s main hub for AI model testing, said it had already completed more than 40 evaluations, including on cutting-edge models not yet available to the public.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">Developers frequently hand over versions of their models with safety guardrails stripped back so the center can probe for national security risks, the agency said.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">xAI did not immediately respond to a request for comment. Google declined to comment.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">Last week, the Pentagon said it had reached agreements with seven AI companies to deploy their advanced capabilities on the Defense Department&#8217;s classified networks as it seeks to broaden the range of AI providers working across the military.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">The Pentagon announcement did not include Anthropic, which has been embroiled in a dispute with the Pentagon over guardrails on the military&#8217;s use of its AI tools.<\/p>\n<p class=\"mb-4 text-lg md:leading-8 break-words\">(Reporting by Courtney Rozen in Washington and Aditya Soni in Bengaluru, additional reporting by Jaspreet Singh in Bengaluru; Editing by Shinjini Ganguli, Mrigank Dhaniwala and Nick Zieminski)<\/p>\n","protected":false},"excerpt":{"rendered":"By Courtney Rozen and Aditya Soni May 5 (Reuters) &#8211; Microsoft, Google and Elon Musk\u2019s xAI agreed 
to&hellip;\n","protected":false},"author":2,"featured_media":28573,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[1934,18744,132,16981,320,1896,18797,2899],"class_list":{"0":"post-28572","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-xai","8":"tag-artificial-intelligence-models","9":"tag-center-for-ai-standards","10":"tag-google","11":"tag-joe-biden","12":"tag-microsoft","13":"tag-national-security","14":"tag-security-risks","15":"tag-xai"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/28572","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=28572"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/28572\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/28573"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=28572"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=28572"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=28572"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}