Google, Microsoft, and xAI have agreed to share unreleased versions of their AI models with the U.S. government so that these systems can be tested before they become publicly available. 

The evaluations will be conducted by the Center for AI Standards and Innovation (CAISI), part of the U.S. Department of Commerce.

According to the department, CAISI will now serve as a central hub for assessing commercial frontier AI systems. The tests focus on national security risks, including cybersecurity, biosecurity, and the potential use of AI in chemical weapons. This gives government agencies access to models before they are commercially rolled out.

The collaboration follows earlier agreements that OpenAI and Anthropic made with the Biden administration about two years ago. Since then, CAISI has conducted dozens of evaluations of advanced AI models, including systems that were not yet publicly available.

CAISI Director Chris Fall said that independent, technically rigorous evaluation methods are necessary to fully understand the impact of frontier AI on national security. He added that the expanded collaboration with the AI industry allows the institute to conduct security reviews more quickly and at greater scale as AI technology evolves at breakneck speed.

Policy Shift on AI Regulation

The announcement comes at a time when the Trump administration has taken a hands-off stance toward regulating artificial intelligence. The administration has sought to prevent strict oversight from slowing innovation, partly in an effort to maintain the United States' technological lead over China.

Nevertheless, concerns about AI risks appear to be growing in Washington. According to SiliconANGLE, the partial release of Anthropic's Claude Mythos model has also contributed to those concerns: its launch reignited debate over the pace at which increasingly powerful AI systems are being developed and made available.

The New York Times also reported earlier this week that the Trump administration is working on a possible executive order on AI governance. Under that order, technology companies and government agencies would jointly establish a formal review process for new AI models. This would mark a clear change of course for the White House, which has so far largely left AI development to the market.

At an AI event last year, Trump said that, in his view, artificial intelligence cannot be slowed down by political measures or excessive regulation. Even so, the U.S. government now appears to be cautiously moving toward a more active role in overseeing the development of advanced AI systems, partly in response to growing public concerns about cybersecurity, job losses, misinformation, and mental health.