In 2025, AI agents officially became “a thing.”  A mid-year CISO survey found that two-thirds of enterprises already run AI agents in production, and a further 23% planned to do so this year.  This warp-speed adoption is already outpacing established security controls, with Gartner reporting in their Market Guide for Guardian Agents that “enterprises are rapidly adopting AI agents, but governance and control mechanisms are not keeping pace.”

Embodying an unprecedented combination of the boundless improvisation of humans and the endless stamina of software, the benefits have immediately become clear.  (What enterprise wouldn’t want its most creative employees working 24/7/365, without complaint, illness or personal time off?)  More recently, however, so too have the risks: unplanned, if unintentional, business interruption; an exploitable target for threat actors seeking access; and a source of identity dark matter, the unseen layer of unmanaged identity.

To add insult to injury, AI agents not only expand identity dark matter, they also exploit it.  Despite the endless stamina highlighted earlier, AI agents are, by design, lazy, always looking for the most efficient path to return a satisfactory outcome to each prompt.  In doing so, however, they find that identity dark matter (orphaned or dormant accounts, loose tokens, often with local clear-text credentials and excessive privileges) provides the shortcuts necessary to reach the “end of job,” regardless of whether they should have been allowed to do so.

So, how does an enterprise unlock the value of AI agents while managing the risk of this new element of the workforce that is neither human nor machine?


1. Use Agents to Supervise Agents

Expand your identity governance program to cover this new class of workers with a new extension to identity and access management (IAM): the Guardian Agent.  Gartner defines Guardian Agents as a blend of AI governance and runtime controls. To be considered a true “Guardian,” a solution must provide native controls in three mandatory areas:

AI Visibility and Traceability: The ability to inventory every agent, sanctioned or shadow, and score its risk over time.
Continuous Assurance and Evaluation: Real-time monitoring of security posture and operational health to detect drift or “rogue” behavior.
Runtime Inspection and Enforcement: The “universal enforcement mechanism” that can proactively block high-risk actions before they impact the business.
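In practice, the runtime inspection and enforcement capability behaves like a policy gate that sits between an agent and the systems it touches, denying out-of-scope or high-risk actions before they execute. The sketch below is a minimal, hypothetical illustration only; the `GuardianGate` class, action names and risk scores are assumptions for this example, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class GuardianGate:
    """Hypothetical runtime policy gate: deny-by-default enforcement
    of an agent's declared action scope, plus a risk-score ceiling."""
    allowed_actions: set
    risk_ceiling: float = 0.7
    audit_log: list = field(default_factory=list)

    def check(self, agent_id: str, action: str, risk_score: float) -> bool:
        # An action passes only if it is both in scope AND below the risk ceiling.
        permitted = action in self.allowed_actions and risk_score <= self.risk_ceiling
        # Record every decision, allowed or blocked, so drift stays traceable.
        self.audit_log.append((agent_id, action, risk_score, permitted))
        return permitted

gate = GuardianGate(allowed_actions={"read:crm", "draft:email"})
print(gate.check("agent-42", "read:crm", 0.2))        # in scope, low risk -> allowed
print(gate.check("agent-42", "export:database", 0.9)) # out of scope, high risk -> blocked
```

Note that the gate logs blocked attempts as well as allowed ones; the audit trail of what an agent *tried* to do is often the earliest signal of rogue behavior.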

2. Extend (and Blend) Human and Non-Human IAM

AI agents may think like human users and execute like machines, but they are truly neither in the eyes of IAM.  Access models need to flex to accommodate them rather than break.

Pair AI Agents with Human Sponsors: For both accountability and proper access management, ensure that each agent (or at least the related actions of an agent toward an outcome) is attributed to the human prompter, with limited-scope, limited-duration, context-aware privileges to match (no more and no less).
Ensure Comprehensive Visibility and Auditability: Beyond simple logging of individual actions, tie actions to intent: what the agent accessed, what it changed, what it exported, and whether that action touched regulated or sensitive datasets.
Commit to Good IAM Hygiene: As with all identities, authentication flows, authorization permissions and implemented controls, strong hygiene (on the application server as well as the MCP server) is critical to keep every user, especially AI agents, within the proper bounds.
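The sponsor-pairing and least-privilege points above can be sketched as a short-lived grant that binds an agent’s access to a human sponsor, a narrow scope, and an expiry. Everything here, including the `AgentGrant` structure and its field names, is a hypothetical illustration rather than a reference to any specific IAM product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentGrant:
    """Hypothetical short-lived access grant: an agent's privileges
    are tied to a human sponsor, a narrow scope, and an expiry."""
    agent_id: str
    sponsor: str           # the human prompter accountable for the outcome
    scopes: frozenset      # e.g. {"read:tickets"} -- no more and no less
    expires_at: datetime

    def permits(self, scope: str, now: datetime) -> bool:
        # Access requires both an in-scope request and an unexpired window.
        return scope in self.scopes and now < self.expires_at

now = datetime.now(timezone.utc)
grant = AgentGrant("agent-7", "alice@example.com",
                   frozenset({"read:tickets"}),
                   expires_at=now + timedelta(minutes=30))
print(grant.permits("read:tickets", now))   # within scope and window -> allowed
print(grant.permits("write:tickets", now))  # out of scope -> denied
```

Because the grant is immutable and expires on its own, there is no dormant credential left behind for the next agent to find, which is exactly the kind of identity dark matter the article warns about.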

3. The Time is Now

For all of the excitement (and adoption), according to Gartner, “Today, guardian agent deployments are mainly prototypes or pilots, although advanced organizations are already using early versions of them to supervise AI agents.”  This means there is still time to get our arms around these challenges and manage the risk.  But not a ton of time.

Before the agentic explosion, identity dark matter already comprised just under half (46%) of enterprise identity.  But the accelerating adoption of AI agents promises to tip the scales heavily: organizations that don’t act will find themselves overwhelmed, and overexposed to threat actors, by these unseen elements of identity.

Summary: The Bottom Line

AI agents are shortcut seekers by design. Left unchecked, they will become the fastest-growing source of identity risk in history. The challenge for 2026 is not whether to use AI, but how to govern it at scale without sacrificing the very speed that makes it valuable.

“Guardian agent packaged capabilities are what actually matters in AI agent systems—not the platform that hosts them.”

[Request the Limited Availability Gartner Market Guide for Guardian Agents] Learn how to define your requirements, evaluate the vendor landscape, and secure your agentic future.

About Orchid Security

Orchid Security is the industry’s first and only Identity Infrastructure Platform. We help enterprises eliminate “identity dark matter” by continuously discovering the unmanaged accounts and risky flows that AI agents exploit. By treating identity as infrastructure, Orchid ensures that human and AI-agent access is visible, compliant, and governed by design—without requiring any application code changes.