Recent Presidential action prompts corporate boards to accelerate their AI risk oversight efforts.


Boards may need to accelerate their oversight of corporate artificial intelligence initiatives following a new Executive Order issued by President Trump on January 20, 2025.

The Trump Executive Order revoked President Biden’s previous Executive Order of October 30, 2023 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (“Biden EO”). The Biden EO established basic parameters for federal regulation and oversight of AI.

As more fully described in an accompanying Fact Sheet, the Biden EO was designed to meet two goals: First, to establish new standards for AI safety and security intended to protect the public from potential harm; and second, to enhance the promise of AI and catalyze AI research to advance American competitiveness. At the time, the Biden EO represented the latest and most significant governmental effort (federal or state) to establish a regulatory strategy for responsible AI development, deployment, and use.

Yet the Biden EO was one of multiple Biden administration initiatives revoked by the sweeping January 20 Executive Order, which is intended to “…make our Nation united, fair, safe, and prosperous again, restore common sense to the Federal Government and unleash the potential of the American citizen.”

Media reports reflected competing perspectives on President Trump’s action. One view is that American-led innovation would be enhanced without the restraints of federal regulation. In other words, reduced federal regulatory obligations will have broad commercial benefits. A competing view is that the complete lack of standards and guidelines will decelerate AI innovation, as both businesses and consumers become concerned about the risks and costs of unregulated AI technology. In other words, a reduced federal regulatory presence could exacerbate risk and stunt development, as some believe has occurred in areas like cryptocurrency.

Private industry is left to deal with the confusion. Does the new Executive Order signal a significant federal pullback from plans for formal regulation of AI, or simply a “reset” of the regulatory focus? This may take some time to shake out; in the meantime, the board of directors is caught in the middle of the debate. That’s because the board is ultimately responsible for the oversight of AI implementation and risk in any organization. While management and technology executives also have a significant role in this effort, the “risk buck” stops with the board.

The dilemma is underscored by last October’s release of the new report from the National Association of Corporate Directors (NACD), “Technology Leadership in the Boardroom.” The Report’s overarching conclusion is that in the current environment, effective corporate governance “has a significant impact on whether and how new technologies will drive value creation and will be – or won’t be – accepted by organizations, economies, and societies.” The Report contains three imperatives for boards: (i) strengthen oversight; (ii) deepen insight; and (iii) develop foresight. Each of those imperatives is implicated by an uncertain federal appetite for AI regulation.

In this environment of uncertainty, boards may want to implement several steps:

First is to initiate, or accelerate existing, oversight practices to account for the unique risks posed by AI (e.g., algorithmic bias, cybersecurity vulnerabilities, compliance with rapidly changing laws, and the overall reputational risk of misuse).

Second is to evaluate the need for greater oversight of third-party vendors’ commitment to robust safety, security, and ethical standards, even in the absence of clear regulatory mandates.

Third is to solidify AI-related management-to-board reporting relationships, and other authority matrices, to support information flow and increase accountability.

Fourth is to speed up efforts to build the technology proficiency of the board and its key committees.

Any new presidential administration has the right to implement its own policy priorities with respect to regulation of AI and similar technology. That notwithstanding, policies that promote less regulation and rely on a private, market-based solution to AI safety and security standards heighten expectations for boards to proactively monitor the associated risks, both internally and across their vendor ecosystem. This is particularly true in industries subject to higher regulatory scrutiny or where AI has a direct impact on critical functions or consumer outcomes.

Many commentators expect the Trump administration to ultimately set forth some regulatory plan for AI. However, the rapid development of AI and related technology, and its equally rapid implementation in organizations, don’t allow boards the luxury of waiting for the Trump administration to provide more details on its plans.

The board is thus “on the clock.”

Michael thanks his partner, Shawn Helms, for his contributions to this post.