New research finds that 86% of cybersecurity professionals say AI agents and autonomous systems cannot be trusted without unique, dynamic digital identities

CLEVELAND, Jan. 29, 2026 /PRNewswire/ — Keyfactor, the industry leader in digital trust for modern enterprises, today announced findings from its Digital Trust Digest: The AI Identity Edition, conducted in partnership with Wakefield Research. Keyfactor’s research reveals the significant gap between the push toward agentic AI adoption and organizations’ ability to securely authenticate, govern, and trust autonomous systems operating inside their environments.

As AI agents gain autonomy, initiating actions, accessing systems, and interacting with other agents without direct human oversight, traditional security models begin to break down. Organizations are deploying increasingly powerful AI agents without the identity foundations required to verify who or what is acting, what it is allowed to do, and how its behavior can be stopped or audited if something goes wrong.

This widening trust gap is quickly becoming a security risk. The report finds that more than two-thirds (69%) of cybersecurity professionals believe that vulnerabilities in AI agents and autonomous systems pose a greater threat to their company’s security and identity systems than human misuse of AI.

At the root of this risk is identity. An overwhelming 86% of cybersecurity professionals agree that without unique, dynamic digital identities, AI agents and autonomous systems cannot be fully trusted. Yet despite this recognition, many organizations lack the identity infrastructure and governance models needed to securely manage AI agents at scale. This gap is becoming more urgent as adoption accelerates: 85% of cybersecurity professionals expect digital identities for AI agents to be as common as human and machine identities within five years.
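To make the idea of a unique, dynamic digital identity concrete: one common pattern, sketched below in Python with the open-source cryptography library, is to issue each agent its own short-lived X.509 certificate that expires quickly and must be re-issued, so a compromised or retired agent loses access by default. The agent name, SPIFFE-style URI, and 15-minute lifetime are illustrative assumptions, not details drawn from the Keyfactor report.

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

AGENT_ID = "agent-7f3a"  # hypothetical agent identifier

# Keys for the issuing CA and the agent. In a real deployment the CA key
# would live in an HSM behind a certificate lifecycle management platform.
ca_key = ec.generate_private_key(ec.SECP256R1())
agent_key = ec.generate_private_key(ec.SECP256R1())

now = datetime.now(timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, AGENT_ID)]))
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Agent Issuing CA")]))
    .public_key(agent_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    # Short lifetime: the identity is dynamic, so expiry (plus revocation)
    # acts as the kill switch if the agent misbehaves.
    .not_valid_after(now + timedelta(minutes=15))
    .add_extension(
        x509.SubjectAlternativeName(
            [x509.UniformResourceIdentifier(f"spiffe://example.org/agent/{AGENT_ID}")]
        ),
        critical=False,
    )
    .sign(ca_key, hashes.SHA256())
)
print(cert.subject.rfc4514_string())
```

Because each agent holds its own credential, its actions can be authenticated, scoped, and audited individually, and trust can be withdrawn simply by declining to re-issue.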

The Agentic AI Security Recognition-Action Gap

While cybersecurity professionals acknowledge AI-based vulnerabilities, only half have implemented governance frameworks to address them, and just 28% believe they can actually prevent a rogue agent from causing damage. Agentic AI security should be a board-level priority, yet 55% of security leaders say their C-suite is not taking agentic AI risks seriously enough, creating a recognition-action gap that leaves organizations vulnerable.

“As businesses race to deploy autonomous AI systems, the security infrastructure to protect them is falling dangerously behind,” said Jordan Rackie, CEO of Keyfactor. “The C-suite must provide the resources and support security teams need to enable their organizations to trust what their AI is doing, prove it, and stop it when needed. This is the next frontier of digital trust, and identity will define the winners in the next decade of AI.”

The Identity Crisis of Enterprise AI

Given the overwhelming agreement that digital identities for AI agents will be as common as human and machine identities in the coming years, the shift to secure those identities appropriately needs to happen now. Without proper controls, AI agents cannot reach their full potential.

“Our identity, security, and governance models were built for a world where software did not act on its own behalf, and the result is an emerging identity crisis at the center of enterprise AI. You can’t have AI agents running autonomously without clear traceability and controls in place to manage their identities,” said Ellen Boehm, SVP of IoT & AI Identity Innovation at Keyfactor. “Teams must put the identity infrastructure in place now to handle the coming influx of agents, or they risk severe security consequences for their organization.”

The Approaching Security Cliff of Vibe Coding

As vibe coding gains momentum in software development, a critical security gap is emerging: more than two-thirds of organizations (68%) lack full visibility or governance over AI-generated code contributions. This creates an untenable risk as AI assistants write increasingly large portions of enterprise codebases without the fundamental safeguards that make code trustworthy.

“Vibe coding offers tremendous benefits for DevSecOps teams, but also significant risk if not secured appropriately,” added Boehm. “The solution is clear: every AI contribution carries a cryptographic fingerprint, every code path has auditable provenance, every commit links to an attributable identity, and every AI agent operates with enforceable boundaries and revocable credentials. Without those safeguards, teams will never be able to verify who — or what — wrote critical pieces of their software. Identity, cryptographic provenance, and governance solve that.”
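As a minimal sketch of the pattern Boehm describes, assuming each AI coding agent holds a per-agent Ed25519 signing key issued at deploy time (and revocable by removing its public key from a registry), the Python snippet below gives an AI-generated diff a cryptographic fingerprint and an attributable signature. The diff content and key handling here are simplified illustrations, not Keyfactor’s implementation.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical: each AI coding agent holds its own revocable signing key.
agent_key = ed25519.Ed25519PrivateKey.generate()
agent_pub = agent_key.public_key()  # published to a key registry at deploy time

diff = b"--- a/app.py\n+++ b/app.py\n+    return sanitize(user_input)\n"

# Cryptographic fingerprint of the contribution, then a signature over it
# that ties the change to exactly one agent identity.
fingerprint = hashlib.sha256(diff).digest()
signature = agent_key.sign(fingerprint)

# Verification at review or merge time: any tampering with the diff, or a
# signature from an unknown or revoked key, fails the check.
try:
    agent_pub.verify(signature, fingerprint)
    print("contribution verified:", fingerprint.hex()[:16])
except InvalidSignature:
    print("reject: unverifiable AI contribution")
```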

The study, conducted by Wakefield Research on behalf of Keyfactor, includes responses from 450 cybersecurity professionals in North America and Europe at companies with at least 1,000 employees.

To view the complete findings and download the Digital Trust Digest: The AI Identity Edition, please visit: https://www.keyfactor.com/digital-trust-digest-ai-identity

About Keyfactor
Keyfactor brings digital trust to the hyper-connected world by empowering organizations to build and maintain secure, trusted connections across every device, workload, and machine. By simplifying PKI, automating certificate lifecycle management, and enabling crypto-agility, Keyfactor helps organizations move fast to establish digital trust at scale. With Keyfactor, businesses can tackle today’s challenges, like growing certificate volumes, manual processes, and new standards and regulations, while laying the groundwork for a successful transition to post-quantum cryptography. For more, visit keyfactor.com.

SOURCE Keyfactor