{"id":28638,"date":"2026-05-05T22:03:07","date_gmt":"2026-05-05T22:03:07","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/28638\/"},"modified":"2026-05-05T22:03:07","modified_gmt":"2026-05-05T22:03:07","slug":"google-microsoft-and-xai-agree-to-share-unreleased-ai-models-with-us-government-to-strengthen-cybersecurity-ukraine-news","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/28638\/","title":{"rendered":"Google Microsoft and xAI agree to share unreleased AI models with US government to strengthen cybersecurity | Ukraine news"},"content":{"rendered":"<p style=\"font-style:italic;font-weight:500;font-size:18px;line-height:1.5\">US agencies will test the latest powerful AI systems behind closed doors, raising questions about oversight, access, and the balance between innovation and national security.<\/p>\n<p>Google, Microsoft, and xAI will agree to hand over unreleased versions of their artificial intelligence models to the government to bolster cybersecurity, the National Institute of Standards and Technology announced on Tuesday.<\/p>\n<p>The partnership followed a month earlier, when Anthropic\u2019s Mythos AI, a powerful model in cybersecurity, raised concerns about AI\u2019s impact on cybersecurity to a critical level, prompting the White House to consider a formal AI review process.<\/p>\n<p>The new agreements allow the Center for AI Standards and Innovation (CAISI) at the U.S. Department of Commerce to assess new models and their potential impact on national security and public safety before deployment. The center will also conduct research and testing after the deployment of the models and has already completed more than 40 assessments.<\/p>\n<p>Independent, rigorous measurement science is essential for understanding advanced AI and its implications for national security.<\/p>\n<p>\u2013 Chris Fall<\/p>\n<p>The Agreement and Its Context<\/p>\n<p>According to reports, Mythos, which Anthropic calls \u201cfar ahead\u201d of other models in cybersecurity matters, has sparked a wave of concern from governments, banks, and energy companies over the past month. The company says it is not yet ready to publicly disclose the model and limits access to a selected group of organizations, and has briefed senior U.S. officials on its capabilities.<\/p>\n<p>OpenAI also said last week that it is providing access to its most advanced AI models to all verified levels of government in order to stay ahead of threats arising from the use of AI.<\/p>\n<p>According to Jessica Ji, a senior research analyst at the Georgetown University Center for Security and Emerging Technologies, the partnership could ease AI testing by providing additional resources.<\/p>\n<p>They simply don\u2019t have the same level of resources \u2013 neither manpower, nor technical staff, nor access to computing resources \u2013 to select these models and conduct rigorous testing.<\/p>\n<p>\u2013 Jessica Ji<\/p>\n<p>The White House is now considering consulting with a panel of experts to advise on a potential government review process for new AI models; CNN confirmed. Such a move would mark a shift away from the Trump administration\u2019s softer approach to AI regulation that had been in place previously.<\/p>\n<p>The New York Times first reported on the working group on Monday.<\/p>\n<p>Any policy announcements will come directly from the President. 
Discussion of potential executive orders is speculation.<\/p>\n<p>\u2013 White House spokesperson<\/p>\n<p>While Microsoft regularly tests its models, CAISI provides additional \u201ctechnical, scientific, and national-security expertise,\u201d said Natasha Crampton, Microsoft\u2019s head of Responsible AI.<\/p>\n<p>Google declined to comment further on the agreement, and xAI did not respond to requests for comment.<\/p>\n<p>CNN journalist Lisa Eadicicco also contributed to the coverage.<\/p>\n<p>This development underscores the growing role of AI regulatory oversight in the United States and the importance of coordination between government and the private sector.<\/p>\n","protected":false},"excerpt":{"rendered":"US agencies will test the latest powerful AI systems behind closed doors, raising questions about oversight, access, and&hellip;\n","protected":false},"author":2,"featured_media":28639,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[18831,18830,10716,18501,313,66,3805,2899],"class_list":{"0":"post-28638","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-xai","8":"tag-ai-model-sharing","9":"tag-ai-model-sharing-caisi-cybersecurity-nist-anthropic-mythos","10":"tag-anthropic-mythos","11":"tag-caisi","12":"tag-cybersecurity","13":"tag-news","14":"tag-nist","15":"tag-xai"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/28638","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=28638"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/28638\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/28639"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=28638"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=28638"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=28638"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}