{"id":28409,"date":"2026-05-05T19:08:09","date_gmt":"2026-05-05T19:08:09","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/28409\/"},"modified":"2026-05-05T19:08:09","modified_gmt":"2026-05-05T19:08:09","slug":"microsoft-google-and-xai-grant-us-early-access-to-ai-models-for-security-testing-firstpost","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/28409\/","title":{"rendered":"Microsoft, Google, and xAI Grant US Early Access to AI Models for Security Testing \u2013 Firstpost"},"content":{"rendered":"<p>Google, Microsoft, and xAI will give the US government early access to their AI models to assess national security risks<\/p>\n<p>Google, Microsoft, and xAI have agreed to provide the US federal government with early access to their latest AI models to evaluate their capabilities and security implications before public release. The Center for AI Standards and Innovation (CAISI), under the Department of Commerce, announced the agreement on Tuesday amid rising concerns about the capabilities of Anthropic\u2019s newly unveiled Mythos model and its potential misuse by hackers.<\/p>\n<p>Under the new arrangement, the US government will be able to evaluate these models prior to deployment and conduct research to assess their capabilities and security risks. The agreement fulfills a pledge made by Donald Trump in July to partner with technology companies to vet AI systems for \u201cnational security risks.\u201d<\/p>\n<p>Microsoft confirmed that it will work with US government scientists to test AI systems. Both sides have also pledged to develop shared datasets and workflows to further evaluate these models. Additionally, Microsoft has signed a similar agreement with the United Kingdom\u2019s AI Security Institute, the statement noted.<\/p>\n<p>The risks posed by increasingly powerful AI systems have raised significant concerns. 
By securing early access to frontier models, US officials aim to identify potential threats\u2014ranging from cyberattacks to military misuse\u2014before these tools are widely deployed.<\/p>\n<p>Recent advancements in AI, including Anthropic\u2019s Mythos model, have further heightened concerns among US officials and corporate leaders about their potential to supercharge cyber threats.<\/p>\n<p>This move builds on a 2024 agreement with OpenAI and Anthropic under the administration of Joe Biden, when CAISI was known as the US Artificial Intelligence Safety Institute. Under Biden, the institute focused on developing AI testing frameworks, definitions, and voluntary safety standards, and was led by tech adviser Elizabeth Kelly.<\/p>\n<p>CAISI stated that it has completed more than 40 evaluations, including on cutting-edge models that are not yet publicly available.<\/p>\n<p>These developments follow a recent agreement between the Pentagon and seven major tech companies\u2014Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection AI, and SpaceX\u2014to use their AI systems across classified computer networks.<\/p>\n<p>Anthropic was notably absent from this list following its dispute with the Trump administration over the ethics and safety of AI use in warfare.<\/p>\n<p class=\"first-published\">First Published:<br \/>\nMay 05, 2026, 23:35 IST<\/p>\n","protected":false},"excerpt":{"rendered":"Google, Microsoft, and xAI will give the US government early access to their AI models to assess 
national&hellip;\n","protected":false},"author":2,"featured_media":28410,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[132,1122,2899],"class_list":{"0":"post-28409","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-xai","8":"tag-google","9":"tag-meta","10":"tag-xai"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/28409","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=28409"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/28409\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/28410"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=28409"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=28409"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=28409"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}