Corporate use of AI agents in 2026 looks like the Wild West, with bots running amok and no one quite knowing what to do about it – especially when it comes to managing and securing their identities.
Organizations have been using identity security controls for decades to ensure only authorized human users access the appropriate resources to do their jobs, enforcing least-privilege principles and adopting zero-trust-style policies to limit data leaks and credential theft.
“These new agentic identities are absolutely ungoverned,” Shahar Tal, the CEO of agentic AI cybersecurity outfit Cyata, told The Register. Agentic identities are the accounts, tokens, and credentials assigned to AI agents so they can access corporate apps and data.
“We’re letting things happen right now that we would have never let happen with our human employees,” he said. “We’re letting thousands of interns run around in our production environment, and then we give them the keys to the kingdom. One of the key pain points that I hear from every company is that they don’t know what’s happening” with their AI agents.
In part, this is by design. “In the agentic AI world, the value proposition is: give us access to more of your corporate data, and we will do more work for you,” Nudge Security co-founder and CEO Russell Spitler told The Register. “Agents need to live inside the existing ecosystem of where that data lives, and that means that they need to live within the existing authentication and access infrastructure that SaaS providers already provide to access your data.”
This means AI agents using OAuth tokens to access someone’s Gmail or OneDrive containing corporate files, or repository access tokens to interact with a GitHub repo that holds source code.
We’re letting things happen that we would have never let happen with our human employees
“In order to provide value, agents need to get the data from the things that already have the data, and there are existing pathways to get that data,” Spitler said.
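To make those pathways concrete, here's a minimal sketch, in Python, of what delegated access can look like from the agent's side. It assumes the agent has been handed a Gmail OAuth access token and a GitHub personal access token (both placeholders below); the endpoints are the standard Gmail and GitHub REST APIs, and nothing in either request distinguishes the bot from the employee who delegated the credential.

```python
# Minimal sketch: an agent reusing a human's delegated credentials.
# Token values are placeholders; in practice they come from an OAuth consent
# flow or a personal access token the employee created for the agent.
import requests

GMAIL_OAUTH_TOKEN = "ya29.placeholder"   # delegated Gmail OAuth access token
GITHUB_PAT = "ghp_placeholder"           # GitHub personal access token

def list_recent_mail():
    # Gmail REST API: the agent sees whatever the token's scopes allow
    resp = requests.get(
        "https://gmail.googleapis.com/gmail/v1/users/me/messages",
        headers={"Authorization": f"Bearer {GMAIL_OAUTH_TOKEN}"},
        params={"maxResults": 10},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("messages", [])

def list_repos():
    # GitHub REST API: the agent sees every repo the token's owner can see
    resp = requests.get(
        "https://api.github.com/user/repos",
        headers={
            "Authorization": f"Bearer {GITHUB_PAT}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [repo["full_name"] for repo in resp.json()]
```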
Plus, the plethora of coding tools makes it super easy for individual employees to create AI agents, delegate access to their accounts and data, and then ask the agents to do certain jobs to make the humans’ lives easier.
Spitler calls this AI’s “hyper-consumerized consumption model.”
“Those two pieces are what gives rise to the challenges that people have from a security perspective,” he said.
Everything all the time
These challenges can lead to disastrous consequences, as researchers and red teams have repeatedly shown. For example, an AI agent with broad access to sensitive data and systems effectively becomes a “superuser,” able to chain together access to sensitive applications and resources and then use that access to steal information or remotely execute malicious code.
As Dave Treat, CTO of global education and training company Pearson, recently noted: AI agents “tend to want to please,” and this presents a security problem when they are granted expansive access to highly sensitive corporate info.
“How are we creating and tuning these agents to be suspicious and not be fooled by the same ploys and tactics that humans are fooled with?” he asked.
Block discovered during an internal red-teaming exercise that its AI agent could be manipulated via prompt injection to deploy information-stealing malware on an employee laptop. The company says the issue has since been fixed.
These security risks aren’t shrinking anytime soon. According to Gartner’s estimates, 40 percent of all enterprise applications will integrate with task-specific AI agents by 2026, up from less than 5 percent in 2025.
Considering many companies today don’t know how many AI agents have access to their apps and data, the challenges are significant.
Tal explains that the first thing his company does with its customers is a discovery scan. “And there is always this jaw-dropping moment when they realize the thousands of identities that are already out there,” he said.
It’s important to note: these are not only agentic identities but also human and machine identities. As of last spring, however, machine identities already outnumbered human identities by a ratio of 82 to one.
When Cyata scans corporate environments, “we are seeing anywhere from one agent per employee to 17 per employee,” Tal said. While some roles – specifically research and development and engineering – tend to adopt AI agents more quickly than the rest of their companies, “folks are adopting agents very, very quickly, and it’s happening all across the organization.”
This is causing an identity crisis of sorts, and neither Tal nor the other AI security folks The Register spoke to for this story believe that agentic identities should be lumped into the broader machine-identity count. AI agents are dynamic and context-aware – in other words, they act more like humans than machines.
“AI agents are not human, but they also do not behave like service accounts or scripts,” Teleport CEO Ev Kontsevoy told us.
“An agent functions around the clock and acts in unpredictable ways. For example, they may execute the same task with different approaches, continuously creating new access paths,” he added. “This requires accessing critical resources like MCP servers, APIs, databases, internal services, LLMs, and orchestration systems.”
It also makes securing them using traditional identity and access management (IAM) and privileged access management (PAM) tools “near impossible at scale,” Kontsevoy said. “Agents break traditional identity assumptions that legacy tools are built on, that identity is either human or machine.”
AI agents are not human, but they also do not behave like service accounts or scripts
For decades, IAM and PAM have been critical in securing and managing user identities. IAM identifies and authorizes all users across an organization, while PAM covers more privileged users and accounts, such as admins, securing and monitoring the identities whose elevated permissions reach sensitive systems and data.
While this has more or less worked for human employees with predictable roles, it doesn’t work for non-deterministic AI agents, which act autonomously and change their behavior on the fly. This can lead to security issues such as agents being granted excessive privileges, and “shadow AI.”
Meet the new shadow IT: shadow AI
“We are seeing a lot of shadow AI – someone using a personal account for ChatGPT or Cursor or Claude Code or any of these productivity tools,” Tal said, adding that this can lead to “blast-radius issues.”
“What I mean by that: they’re risky agents,” he said, explaining that some are essentially workflow experiments that someone in the organization created without any oversight from the IT or security departments.
“What they have done is created a super-connected AI agent that is connected to every MCP server and every data source the company has,” Tal said. “We’ve seen the problem of rogue MCP servers over and over again, where they compromise an agent and steal all of its tokens.”
Fixing this requires visibility into both IT-sanctioned and unsanctioned AI agents, so they can be continuously monitored for misconfigurations and other threats.
“We do a risk assessment for each and every identity that we discover,” Tal said. “We look at its configuration, its connectivity, the permissions that it has. We look at its activity or history – journals, logs that we collect – so we can maintain a profile for each of these agents. After that, we want to put posture guardrails in place.”
These are mitigating controls that stop an agent from taking a risky action or touching sensitive data it shouldn't. Sometimes improving security is as simple as talking to the human behind an agent about the risk they unknowingly introduced and what the agent can access.
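Cyata hasn't published its scoring logic, but the general shape of a per-identity risk assessment and posture guardrail is easy to sketch. The Python below is purely illustrative: the field names, weights, scope strings, and thresholds are invented for the example.

```python
# Hypothetical per-identity risk assessment and posture checks, in the spirit
# of what Tal describes. All fields, weights, and thresholds are invented.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    owner: str                                      # human who provisioned it
    scopes: list = field(default_factory=list)      # permissions it holds
    connected_systems: list = field(default_factory=list)
    days_since_last_activity: int = 0

SENSITIVE_SCOPES = {"mail.readwrite", "repo.admin", "prod.db.write"}

def risk_score(agent: AgentIdentity) -> int:
    """Crude score: broad permissions, wide connectivity, and dormancy add risk."""
    score = 10 * len(SENSITIVE_SCOPES & set(agent.scopes))
    score += 2 * len(agent.connected_systems)
    if agent.days_since_last_activity > 30:
        score += 5   # dormant but still-credentialed agents are easy to forget
    return score

def guardrail_violations(agent: AgentIdentity) -> list:
    """Posture checks that would flag or block the agent before it acts."""
    problems = []
    if not agent.owner:
        problems.append("no accountable human owner")
    if "prod.db.write" in agent.scopes:
        problems.append("write access to production data")
    if len(agent.connected_systems) > 10:
        problems.append("connected to an unusually large number of systems")
    return problems
```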
“We need to pop this bubble that agents come out of immaculate conception – a human is creating them, a human is provisioning their access,” Spitler said. “We need to associate tightly those agents with the human who created it, or the humans who work on it. We need to know who proxied their access to these other platforms, and what roles those accounts have in these platforms, so we understand the scope of access and potential impact of that agent’s access in the wild.”
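What that tight association might look like in practice is, at minimum, a record tying each agent identity to the human who created it and the accounts whose access it proxies. The sketch below is hypothetical; every field and value is invented for illustration.

```python
# Hypothetical sketch of tying an agent back to its human creator and the
# access that human proxied to it. Names, IDs, and roles are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentOwnership:
    agent_id: str               # e.g. an OAuth client ID or token fingerprint
    created_by: str             # employee who provisioned the agent
    delegated_accounts: tuple   # (platform, account, role) the agent acts as

# Toy registry assembled from discovery scans plus IdP and SaaS audit logs
REGISTRY = {
    "agent-042": AgentOwnership(
        agent_id="agent-042",
        created_by="jane.doe@example.com",
        delegated_accounts=(
            ("github", "jane-doe", "maintainer"),
            ("google-workspace", "jane.doe@example.com", "mailbox owner"),
        ),
    ),
}

def blast_radius(agent_id: str) -> list:
    """Answer the basic question: whose access does this agent carry, and where?"""
    record = REGISTRY.get(agent_id)
    if record is None:
        return ["unknown agent: no accountable owner on file"]
    return [f"{platform}:{account} ({role})"
            for platform, account, role in record.delegated_accounts]
```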
Spitler says that’s “ground zero” for managing and securing agentic identities. “You need to know who your agents are.” ®