Agentic AI presents new opportunities for users and businesses to extend and expand the reach and effect of existing AI tools. Unlike generative AI or traditional machine learning systems, AI agents can take actions, adapt to new information, and interact with other agents and systems to complete tasks on behalf of humans. At the same time, these systems also present unique risks that require new approaches to AI risk governance.

In recognition of these new risks, on January 22, 2026, Singapore’s Infocomm Media Development Authority (IMDA) released a draft Model AI Governance Framework for Agentic AI at the World Economic Forum (WEF) in Davos, Switzerland. Singapore’s framework follows the release of a similar framework by the WEF in November 2025. Both proposals recognize that current AI governance frameworks and best practices may not address the unique risks presented by agentic AI and therefore offer new guidance for governing such risks.

Although both the Singapore and the WEF proposals are nonbinding, voluntary frameworks, both authorities have previously published other AI governance guidance that has been leveraged by AI developers and deployers. Singapore’s framework is open for public comment and the IMDA specifically seeks feedback on its description of agentic AI systems and the governance controls outlined in the framework.

Executive Summary & Takeaways

Agentic AI systems share common features. There is no consensus definition of agentic AI. However, these systems do have common features, including the ability to act with some degree of independence in both planning and executing defined tasks, using reasoning to execute multistep workflows, and often leveraging access to external systems, to achieve one or more user-defined goals.

Unique risks presented by agentic AI may not be covered under existing governance protocols. Because AI agents have significant autonomy, the potential risks associated with deploying these systems are broader than risks identified for generative AI or other AI systems. Risks unique to agentic AI include the potential for erroneous, unauthorized, or potentially illegal actions (and the resulting potential attribution of liability to developers/deployers/users/model providers); biased or unfair actions; data breaches; and disruptions to connected systems. These risks may be amplified by the fact that agentic AI systems may not provide fully transparent audit trails for ongoing oversight and review.

New governance practices can address unique agentic AI risks. In order to manage these new risks, Singapore recommends several discrete measures, including: 1) assessing and bounding new risks upfront (before deployment); 2) increasing accountability for humans responsible for overseeing agentic AI systems; 3) implementing appropriate technical controls and processes; and 4) enabling end-users to assume responsibility for managing risks.

Key Governance Recommendations for Agentic AI

Singapore’s framework calls out two important concepts critical to managing agentic AI risks: 1) the agent’s “action-space” (the tools and systems the agent may, or may not, access); and 2) the agent’s autonomy (defined by instructions governing the agent, and human oversight). Singapore’s framework articulates the following four actions to manage agent autonomy and action-space in order to mitigate unique agentic AI risks.

Assess and Bound Risks Upfront Prior to Deployment
Before deploying an AI agent, organizations should assess the agent’s potential risk based on such factors as the scope of actions the agent can take, the reversibility of those actions, and the level of autonomy the agent will be granted. Early management of these risks can include narrowing the scope of the agent’s “action-space” by limiting access to tools and external systems. Organizations can also limit agent autonomy by ensuring traceability and controllability of the agent’s actions through robust identity management and access control systems. The guidance also recommends engaging in threat modelling to identify potential methods an attacker might use to compromise the agent, something laid out in more detail in an addendumon securing agentic AI.

Make Humans Meaningfully Accountable
Another essential area of risk management is appropriate use of human oversight to manage the agent’s actions. While this is complicated by the autonomy inherent in agent activity, the guidance recommends requiring human approval at significant checkpoints in the agentic workflow, which could include any high-stakes or irreversible actions the agent might take (e.g., deleting data, sending communications, or making payments). Clearly defining these checkpoints, and the responsible human(s) within the organization, will help to define the permissible levels of autonomy for specific agents. This kind of limitation should be regularly audited to ensure it remains effective to mitigate risks without limiting the potential capability of the agent.

Implement Technical Controls and Processes
Organizations should also design and implement technical controls across the agent lifecycle. This means testing for baseline safety and reliability, execution accuracy, policy adherence, and tool use prior to deployment. Similarly, adopting a measured and gradual rollout and deployment timeline for agents, while continuously monitoring them both during and after deployment, should help mitigate risks. This approach, while more costly, ensures the broadest oversight in recognition of the fact that not all possible outcomes can be foreseen and tested beforehand.

Enable End-User Responsibility
Because end-users, not just developers, bear responsibility for trustworthy deployment of agents, the guidance recommends that users should be informed at a minimum about the agent’s range of actions, access to data, and the user’s own responsibilities. Organizations should provide training to employees to further equip them to responsibly utilize AI agents, especially if those employees will be integrating agents into their work processes.

Singapore’s Model Framework Builds off Recent World Economic Forum Guidance

As noted above, last November, WEF published its own guidance regarding the use of AI agents, including a section on governance considerations. While its discussion of risk assessment and governance was generally at a higher level than Singapore’s guidance, the two frameworks align on the big picture.

Notably, both frameworks recommend pre-deployment testing to validate whether the agent performs as expected. Both also emphasize the importance of oversight, accountability, and technical safeguards, and the need to ensure that these measures scale in accordance with the agent’s complexity, potential impact, and level of autonomy. As with Singapore’s recommendations, WEF urges actors to proactively monitor and assess risks, and recommends clear task boundaries to prevent unnecessary system or data access by the agent. WEF also recommends the use of input and output filters as a further technical control to screen out any potentially harmful or noncompliant actions.

Failure to Implement Proper AI Governance Procedures May Have Real Consequences

All AI governance systems are not alike. As Singapore and the WEF recommendations demonstrate, agentic AI systems present unique risks that require new governance processes. The four actions highlighted in the Singapore framework provide concrete steps that can be implemented within an organization in a relatively short period of time. Implementation of these, and other governance processes, will aid in limiting potential risks and better position companies to provide an effective response to potential allegations of insufficient risk management processes within an organization. Indeed, compliance with other governance best practices are recognized as safe harbor protections under certain AI laws that can shield companies from potential liability.

As AI regulation and oversight increase in the U.S. and abroad, the need for proactive and robust AI governance processes is critical. Further, potential enforcement by regulators, state attorneys general, or even litigation by class action plaintiffs provides a clear imperative for implementing effective governance processes to help eliminate or reduce the potential for enforcement or litigation liability.

Recommendations for Organizations Deploying Agentic AI Systems

Early adopters of agentic AI systems should develop robust governance processes to manage the unique risks presented by these systems. Singapore and the WEF recommendations provide clear actionable steps to implement appropriate governance frameworks within organizations large and small.

These steps should include developing pre-deployment actions to frame the scope of AI agents’ autonomy and action-space, revising policies to make humans accountable for operational deployment and risk management, enabling appropriate technical and controls and processes, and ensuring end-users can take actions in support of the broader risk management objectives.

+++

K.C. Halm, Stacey Sprenkel, and Nathan Keller are legal thought leaders who work on AI governance, regulatory, compliance and other matters, offering forward-thinking strategies for today’s legal challenges. For more insights, contact K.C., Stacey or Nathan, or another member of Davis Wright Tremaine’s AI Team.