
The Rise Of AI Agents Is Breaking Access Governance

Traditional IAM systems can’t track the non-deterministic behavior of autonomous agents, making intent-based oversight a security necessity.


By Itamar Apelblat | May 07, 2026

Access governance was built for humans. Over the past decade, it has been stretched, awkwardly, to cover machines. Now, AI agents are arriving at scale, and will soon surpass both human and machine identities in number, speed and potential impact.

According to Deloitte’s 2026 State of AI report, 74% of companies plan to deploy agentic AI across multiple areas within two years. Most will do so without governance frameworks designed for this new class of identity. That lack of oversight is already producing incidents that traditional access reviews and monitoring cannot detect.

AI agents represent a new, riskier identity category because they combine machine-level access with something machines have never had before: the ability to autonomously perform tasks, select tools and chain operations. Because their behavior is non-deterministic, an agent with identical permissions can act very differently depending on context.

This is not a prompt engineering or guardrails issue. Content filtering has its place, but it cannot answer the fundamental questions: Who is the agent? What is it authorized to do? How are deviations detected and stopped?

When Authorized Access Becomes a Liability

The emerging risk with agentic AI is not that agents will perform unauthorized actions. It’s that they will perform authorized actions, just not in the way or at the scale anyone anticipated. Governance doesn’t fail at the permission level; it fails at the intent level. Without the ability to evaluate intent, a governance model cannot distinguish an agent acting within its mission from one that has quietly drifted far outside it.

Consider a healthcare organization that deploys an AI agent to help clinicians retrieve and summarize patient information within an electronic health record (EHR) system. The agent is granted access to the EHR, permission to query laboratory results and the ability to generate summaries through a clinical portal. This is the kind of access a clinician might hold under role-based controls.

In practice, the agent retrieves patient histories, queries lab databases, pulls imaging metadata and accesses billing records to correlate treatment timelines. It synthesizes all of it into a single clinical summary.

From the IAM system’s perspective, every individual action is authorized. But when the security team reviews access logs, they find that the agent has been routinely pulling data across clinical, diagnostic and administrative systems that are normally governed under separate compliance boundaries. No individual action is flagged. The violation lies in the access pattern.
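The gap can be made concrete with a minimal sketch. The permission sets, domain labels and policy below are hypothetical illustrations, not any real IAM product's API: each individual action passes a traditional per-action authorization check, yet a session-level rule that examines the pattern of access flags the cross-boundary drift.

```python
# Hypothetical per-action permissions granted to the agent, mirroring the
# EHR scenario above.
AGENT_PERMISSIONS = {"read:ehr", "query:labs", "read:imaging", "read:billing"}

# Illustrative compliance rule: these domains are governed separately and
# should not be combined within a single agent session.
SEPARATE_BOUNDARIES = [{"clinical", "administrative"}]

# Actions the agent took in one session, tagged with their compliance domain.
SESSION_ACTIONS = [
    ("read:ehr", "clinical"),
    ("query:labs", "clinical"),
    ("read:imaging", "clinical"),
    ("read:billing", "administrative"),
]

def action_authorized(action: str) -> bool:
    """Traditional IAM check: is this single action permitted?"""
    return action in AGENT_PERMISSIONS

def session_violations(actions):
    """Pattern-level check: did one session span separated domains?"""
    domains = {domain for _, domain in actions}
    return [boundary for boundary in SEPARATE_BOUNDARIES if boundary <= domains]

# Every individual action is authorized...
assert all(action_authorized(a) for a, _ in SESSION_ACTIONS)
# ...yet the session as a whole crosses a compliance boundary.
print(session_violations(SESSION_ACTIONS))  # flags clinical + administrative
```

The point of the sketch is that the violation is only visible at the level of the session's composition; no per-action check, however strict, can see it.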

This is what makes agentic AI categorically different. The risk is not a single unauthorized action, but the machine-speed composition of legitimate permissions across systems that were never intended to be used together. The consequences range from regulatory exposure and breach liability to audit failure and loss of trust.