US agencies will test the latest powerful AI systems behind closed doors, raising questions about oversight, access, and the balance between innovation and national security.
Google, Microsoft, and xAI have agreed to hand over unreleased versions of their artificial intelligence models to the U.S. government to bolster cybersecurity, the National Institute of Standards and Technology announced on Tuesday.
The partnership follows developments a month earlier, when Anthropic’s Mythos, a powerful cybersecurity-focused AI model, escalated concerns about AI’s impact on cybersecurity to a critical level and prompted the White House to consider a formal AI review process.
The new agreements allow the Center for AI Standards and Innovation (CAISI) at the U.S. Department of Commerce to assess new models and their potential impact on national security and public safety before deployment. The center will also conduct research and testing after the deployment of the models and has already completed more than 40 assessments.
Independent, rigorous measurement science is essential for understanding advanced AI and its implications for national security.
– Chris Fall
The Agreement and Its Context
According to reports, Mythos, which Anthropic calls “far ahead” of other models in cybersecurity matters, has sparked a wave of concern from governments, banks, and energy companies over the past month. The company says it is not yet ready to release the model publicly, limits access to a select group of organizations, and has briefed senior U.S. officials on its capabilities.
OpenAI also said last week that it is providing access to its most advanced AI models to verified agencies at all levels of government in order to stay ahead of threats arising from the use of AI.
According to Jessica Ji, a senior research analyst at the Georgetown University Center for Security and Emerging Technologies, the partnership could ease the government’s AI testing efforts by giving it access to additional resources.
They simply don’t have the same level of resources – neither manpower, nor technical staff, nor access to computing resources – to examine these models and conduct rigorous testing.
– Jessica Ji
The White House is now considering convening a panel of experts to advise on a potential government review process for new AI models, CNN confirmed. Such a move would mark a shift away from the Trump administration’s previously softer approach to AI regulation.
The New York Times first reported on the working group on Monday.
Any policy announcements will come directly from the President. Discussion of potential executive orders is speculation.
– White House spokesperson
While Microsoft regularly tests its models, CAISI provides additional “technical, scientific, and national-security expertise,” said Natasha Crampton, Microsoft’s head of Responsible AI.
Google declined to comment further on the agreement, and xAI did not respond to requests for comment.
CNN’s Lisa Eadicicco contributed to this report.
The development underscores the growing role of regulatory oversight of AI in the United States and the importance of coordination between government and the private sector.