The Trump administration is reportedly mulling an executive order that would increase government oversight of generative AI, potentially including a formal government review and approval process for new AI models. If true, this would be a stunning reversal by the White House, which has championed AI development and criticized the sometimes heavy-handed regulations enacted by the European Union and some states. It would also be a mistake. A mandatory government AI vetting regime would likely do little to enhance security while significantly harming innovation and competition in a sector that is currently carrying more than its share of stability and growth in the American economy.

The administration’s about-face appears to have been triggered in part by concerns about Anthropic’s new Mythos model. Last month, Anthropic announced it would not release the model to the public for now after internal testing showed its ability to identify and exploit cybersecurity vulnerabilities throughout the digital economy, capabilities that could be catastrophic in the wrong hands. In response, some in the administration have pressed for a regime akin to the US Food and Drug Administration’s (FDA’s) drug approval process, whereby government-monitored independent testing verifies the safety of a product before it reaches consumers.

But the FDA model is a poor fit, because AI models are not drugs. As my AEI colleague Bronwyn Howell has explained, conducting meaningful risk assessments for AI models is difficult because of the difference between risk and uncertainty. A drug’s chemical properties are fixed; its risks, however complex, are bounded and measurable. By contrast, a large language model’s behavior emerges from billions of parameters interacting across a nearly infinite range of possible inputs and use cases. Generative AI operates in a domain characterized not by risk (“known unknowns” whose probabilities can be estimated) but by uncertainty (the realm of “unknown unknowns”).

Anyone who has experienced an AI hallucination understands this point. Developers work hard to minimize hallucinations, and early models improved with each iteration. But hallucinations persist, and some studies suggest newer models hallucinate more, not less. If developers with full access to their own models cannot predict or eliminate these failures, it’s unclear how a government program could certify a model as “safe.” While vetting may reduce some risks, such a certification provides at most a false sense of security.

There is another, more subtle problem with ex ante government approval schemes: Pre-release compliance costs are steep. Established incumbents such as Anthropic and OpenAI can absorb those costs far more easily than startups, for whom licensing can become a barrier to entry. Economists were less surprised than most when OpenAI CEO Sam Altman appeared before the Senate in 2023 calling for a federal AI licensing scheme. I do not doubt Altman’s sincerity about AI safety risks. But as Harold Demsetz once quipped, “in utility industries, regulation has often been sought because of the inconvenience of competition.” A licensing scheme shrinks the number of market players. It also dampens innovation by requiring those players to standardize their products to meet licensing requirements, narrowing the dimensions along which firms compete.

That’s not to say we should do nothing about the very clear risks posed by powerful models like Mythos. But the solution lies elsewhere. Notably, no government mandate was needed to address Mythos. Anthropic identified the problem through in-house testing and took action to address it. Through Project Glasswing, it has granted a small coalition of companies, including Amazon and Google, access to Mythos with the goal of improving cybersecurity defenses systemwide. That is how responsible AI development should work: Companies with superior knowledge of their own models have reputational and commercial incentives to self-police when the stakes are high. As I’ve discussed elsewhere, litigation will reinforce those incentives by imposing accountability for harms that do materialize, generating case-by-case information about which capabilities, deployment contexts, and design choices cause real injury. From those outcomes, industry standards can emerge, shaped by practitioners rather than regulators and refined by experience rather than conjecture. This bottom-up process is slower and messier than a government licensing regime. But it is also an iterative process that adapts to an ever-changing technological environment and is far less likely to calcify into a competitive moat that protects incumbents while shutting out the next generation of innovators.

The administration has been right to criticize the European Union’s heavy-handed approach to tech regulation, which has delayed product launches, chilled investment, and left European consumers behind. It would be a mistake for America to pivot to this model for AI. The risks posed by powerful AI are real. But so is the risk of slowing the experimentation and competition that made the United States the global leader in AI in the first place.