America’s digital dominance wasn’t an accident. It was the result of a thoughtful, pivotal decision: treating the internet as interstate commerce governed at the national level. By choosing federal oversight over a patchwork of state laws, the United States skipped potential fragmentation and chose a unified landscape where innovation could scale nationwide. Artificial intelligence (AI) now presents a similar inflection point.

Like the internet, AI is not bound by national borders. It is trained on vast, distributed datasets, deployed through national and global infrastructure, and accessed instantaneously from anywhere. To regulate such a technology as though it were local would be to misunderstand its nature. A patchwork of state regulations would not merely complicate compliance; it would risk undermining both innovation and our nation’s ability to compete globally.

The White House’s proposed AI framework recognizes that a national approach is the template for success. It calls for a national standard, or “one rulebook,” that also preserves a role for states in areas where their authority is both legitimate and necessary. The federal government, in this view, sets the terms for a technology that operates on a national scale. States would still retain their traditional police powers: protecting consumers, enforcing fraud statutes, and safeguarding children within their jurisdictions.

This is not centralization for its own sake but an effort to ensure coherence and consistency across the nation. And yet, if this framework is to succeed, it must do more than replicate the structure of internet governance. It must learn from previous policy shortcomings. As Congress prepares its own proposal for national AI policy, the White House’s recommendations serve as a helpful starting point for what should be bipartisan consideration.

The early internet era demonstrated the power of a light-touch, federally anchored approach, yet it also revealed its limits. As platforms expanded, safeguards often lagged behind. Nowhere was this more evident than in the experience of children, who encountered a digital world that evolved more quickly than the safeguards meant to protect them.

It would be imprudent for policymakers to ignore this lesson, and I’m encouraged that the president and his team recognize this fact and make the creation of safeguards for the most vulnerable a centerpiece of the White House plan. This proposal holds together two imperatives often presented in tension: establishing basic safety standards and enabling innovation.

To that end, Jon Schweppe, a conservative child safety advocate and advisor to the Alliance For A Better Future, argues this proposal offers an opportunity to “do more this year to protect kids online and empower parents than we have done in more than a decade of policy work.” That could have major ramifications for our internet experience, creating needed safeguards that empower parents and protect children both online and when they use AI.

As policymakers debate efforts to establish a framework for AI, they must also weigh its impact on our economy and its potential as a tool for renewal. To this end, Christian entrepreneur and investor Nate Fischer suggests AI should be understood as an instrument of national renewal, capable of increasing productivity, strengthening domestic industry, and expanding opportunity beyond familiar centers of economic power.

For Christians, the dual awareness of risk and opportunity should feel familiar. Technological change has always reshaped how truth is communicated and received. The printing press, radio, and the internet each altered the conditions of cultural and moral formation. AI will do the same.

That raises a more immediate question: not whether to engage, but whether Christians will help shape the systems through which people increasingly encounter information, meaning, and even questions of faith. These tools will influence how truth is presented and understood. That reality should not lead to retreat but to thoughtful participation.

A coherent national framework cannot answer those deeper questions, but it can shape the conditions under which they are asked. It can provide clarity, accountability, and baseline protections (especially for children) while ensuring the United States remains the place where this technology is built and guided.

The United States Congress once recognized that a digital technology required a national response, and we all benefited from that recognition. AI presents a similar challenge, though perhaps with higher stakes and no excuse for delay.