Agentic AI attacks now move faster than any human security team can respond. IBM's answer is autonomous AI defence, but it raises questions nobody is answering yet.

There is a number worth sitting with before discussing IBM's latest cybersecurity move: 27 seconds. That is the fastest eCrime breakout time recorded in 2025, according to CrowdStrike's 2026 Global Threat Report: the window between an attacker's initial access and full lateral movement across a network. The average is 29 minutes, 65% faster than in 2024.

AI-enabled adversaries drove an 89% year-over-year surge in attack volume across that same period. Against that backdrop, IBM's announcement of IBM Autonomous Security on April 15 is perhaps less a product launch and more an acknowledgement of where the threat has arrived. The service, composed of coordinated AI agents, is designed to analyse software exposures, enforce security policies, detect anomalies, and contain threats with minimal human intervention, operating, as IBM puts it, at machine speed.

The logic is straightforward: if attackers are moving at a pace no human SOC team can match, defenders need systems that do not have to wait for a human to respond. The announcement is grounded in IBM’s own threat data. IBM’s 2026 X-Force Threat Intelligence Index, published in February, found a 44% increase in attacks beginning with the exploitation of public-facing applications, driven largely by AI-enabled vulnerability discovery. 

Vulnerability exploitation became the leading cause of attacks overall, accounting for 40% of all incidents X-Force observed in 2025. “Frontier models are creating a new category of enterprise threat that is fast-moving, systemic and increasingly autonomous,” said Mark Hughes, Global Managing Partner of Cybersecurity Services, IBM Consulting. “Meeting that threat requires a systemic defence. AI-powered offence demands AI-powered defence.”

The agentic AI threat is not a future problem

What makes IBM’s move significant is the context in which it arrives. The same frontier AI models enabling enterprise productivity are being actively weaponised. IBM X-Force has documented attackers injecting malicious prompts into GenAI tools, with infostealer malware alone exposing over 300,000 ChatGPT credentials in 2025. 

The attack surface is no longer just endpoints and applications. The AI systems enterprises are deploying are now targets themselves. IBM Autonomous Security is designed to operate across this expanded surface. It brings together what IBM describes as interoperable, vendor-agnostic digital workers that function across an organisation’s full security stack, connecting into identity, risk, and governance functions alongside IT and operational technology environments. 

The aim is to make security programs act as a coordinated system rather than a collection of disconnected tools, a problem most enterprise environments have struggled with for years.

Alongside the service, IBM Consulting is also offering an Enterprise Cybersecurity Assessment for Frontier Model Threats, designed to help organisations understand where AI-specific exposures exist before an attack exploits them. 

Given that IBM’s own research shows enterprises often have no clear picture of their AI attack surface, that starting point matters as much as any response capability.

The question IBM’s announcement leaves open

Here is where the picture gets more complicated. IBM is not alone in this space. TrendAI launched its Agentic Governance Gateway in March. Cisco unveiled a Zero Trust architecture for autonomous AI agents at the RSA Conference in April. The defensive AI market is projected to reach US$44.24 billion in 2026. Every major security vendor is essentially making the same argument: the only way to defend against AI-speed attacks is with AI-speed defence.

That is probably true. But it also means enterprises are now managing AI systems on both sides of the risk equation simultaneously: deploying AI for productivity, defending against AI-powered attacks, and now running AI to govern their security posture. Each layer adds complexity, and complexity has historically been where attackers find their best opportunities.

The accountability question is also unsettled. When an autonomous security system makes a wrong call, whether it blocks a legitimate process, misidentifies a threat, or is itself compromised, who owns that failure? IBM's announcement does not address this directly, and neither does the broader vendor conversation around autonomous security.

That is not a criticism of IBM’s product specifically. The speed problem is real, and the need for an automated response is genuine. But enterprises adopting autonomous security tools would do well to define, before deployment, exactly where human oversight sits in the loop and what happens when machine-speed decisions turn out to be wrong.

The arms race framing, at least, is accurate. As IBM’s own threat data makes clear, attackers are not reinventing their playbooks; they are accelerating them with AI.

IBM announced IBM Autonomous Security on April 15, 2026. The IBM X-Force Threat Intelligence Index 2026 was published on February 25, 2026. CrowdStrike’s 2026 Global Threat Report was released on February 24, 2026.