Google’s H1 2026 Cloud Threat Horizons Report, produced by the combined intelligence of Google Threat Intelligence Group, Mandiant Incident Response, and the Office of the CISO, documents something security leaders have been bracing for: The threat landscape isn’t just evolving; it’s accelerating past our ability to respond with the tools and organizational instincts we’ve built over the last two decades. Three findings from this report, read together, tell a story that demands a fundamental rethinking of how enterprises govern identity, manage AI, and architect their defenses.
The Identity Problem We Never Solved — Now Multiplied
The Verizon Data Breach Investigations Report has told us for years that identity is the dominant attack surface. Stolen credentials, phishing, social engineering — these have topped the initial access charts with the kind of consistency that makes the finding almost easy to dismiss. We’ve known. We’ve talked about it. And we’ve kept overprovisioning anyway.
Every seasoned security leader understands why. No administrator wants to receive a call from the CEO explaining that a sales rep lost a seven-figure deal because someone, somewhere in the access request chain, couldn’t pull up the contract they needed. The path of least organizational resistance has always been to provision more access than necessary and audit it later — which usually means never. The result is an enterprise identity fabric riddled with standing privilege that threat actors have learned to harvest, abuse, and weaponize with remarkable efficiency.
Google’s report documents this failure in concrete terms: Identity compromise underpinned 83% of all cloud intrusions observed in H2 2025. But the more consequential finding isn’t the headline number — it’s where the identity failures are now occurring.
Two incidents in the report make the structural shift explicit. In the UNC4899 campaign, North Korean state-sponsored actors pivoted from a developer’s compromised laptop into a Kubernetes environment by riding authenticated sessions and harvesting CI/CD service account tokens — non-human identities carrying cloud-wide privilege that no human administrator had bothered to constrain. In the UNC6426 breach, a compromised NPM package stole a developer’s GitHub Personal Access Token, which attackers then used to abuse an OIDC trust relationship and escalate to full AWS administrator access in under 72 hours. The human user became largely irrelevant after the initial foothold. The non-human identities — service accounts, OIDC-linked roles, long-lived tokens — did the rest.
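To make that failure mode concrete, here is a minimal audit sketch, assuming an AWS environment and boto3 credentials with permission to list IAM roles. It flags one common variant of the misconfiguration behind UNC6426-style pivots: roles whose GitHub OIDC trust policy never pins the token’s “sub” claim to a specific repository, so any workflow presenting a valid GitHub-issued token may qualify to assume them.

```python
# Sketch: flag IAM roles whose GitHub OIDC trust policy lacks a
# repository-scoped "sub" condition -- the over-broad trust that can
# turn one stolen developer token into assumable cloud roles.
import json
import urllib.parse

import boto3

GITHUB_OIDC = "token.actions.githubusercontent.com"

def statements(role):
    doc = role["AssumeRolePolicyDocument"]
    if isinstance(doc, str):  # some SDK paths return it URL-encoded
        doc = json.loads(urllib.parse.unquote(doc))
    stmts = doc.get("Statement", [])
    return stmts if isinstance(stmts, list) else [stmts]

def is_unscoped_github_trust(stmt):
    federated = str(stmt.get("Principal", {}).get("Federated", ""))
    if stmt.get("Effect") != "Allow" or GITHUB_OIDC not in federated:
        return False
    # A safe policy pins "sub" to specific repos/branches; its absence
    # means any repo with a valid GitHub OIDC token may qualify.
    return f"{GITHUB_OIDC}:sub" not in json.dumps(stmt.get("Condition", {}))

iam = boto3.client("iam")
for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        if any(is_unscoped_github_trust(s) for s in statements(role)):
            print(f"unscoped GitHub OIDC trust: {role['RoleName']}")
```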
We took the overprovision problem we never solved for human identities and copied and pasted it directly onto the machine identity layer. Now, as AI agents proliferate across enterprise environments — agents that authenticate, call APIs, access data stores, and take actions autonomously — we are on the verge of repeating that mistake a third time, at a scale that makes previous failures look modest. An AI agent provisioned with excessive permissions doesn’t just create a governance gap. It creates an autonomous actor capable of traversing your environment at machine speed, with a credential footprint that most organizations cannot yet monitor, let alone govern.
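One mitigation pattern worth naming before the mistake repeats: agents don’t need standing credentials at all. A minimal sketch, assuming AWS STS and an illustrative role ARN and bucket, issues an agent task-scoped credentials that expire on their own; the inline session policy intersects with the base role’s permissions, so the grant can shrink but never grow.

```python
# Sketch: issue an AI agent task-scoped, short-lived credentials instead
# of a standing role. The session policy is intersected with the base
# role's permissions, so the agent can never exceed the narrow slice the
# task needs. Role ARN, bucket, and actions are illustrative.
import json

import boto3

def credentials_for_task(role_arn, bucket, task_id):
    sts = boto3.client("sts")
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/{task_id}/*"],
        }],
    }
    resp = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=f"agent-{task_id}",  # shows up in CloudTrail
        Policy=json.dumps(session_policy),   # down-scopes, never expands
        DurationSeconds=900,                 # 15 minutes, the STS minimum
    )
    return resp["Credentials"]  # expire on their own; nothing to revoke
```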
AI Is Already Being Weaponized on Your Developer’s Laptop
The second finding in this report deserves attention beyond the technical community. The QUIETVAULT credential stealer, embedded in a compromised Nx NPM package, didn’t just steal tokens and exfiltrate them. It used a large language model already resident on the victim’s developer endpoint to enumerate and inventory sensitive configuration files — .env, .conf, .log — before extracting credentials. The attacker didn’t bring the AI. The developer’s own AI-assisted development environment provided it.
This is the security-for-AI problem in its most concrete form. The same tooling that makes engineers dramatically more productive becomes, under adversarial conditions, an automated reconnaissance engine that an attacker can invoke without installing anything new. The LLM sits in the environment, trusted by the OS, trusted by the developer’s workflow, and thoroughly invisible to most endpoint detection logic tuned for malicious binaries and anomalous process trees.
Google’s recommendation — treat LLM process execution with the same scrutiny as administrative command-line tools — is technically correct and organizationally years ahead of where most enterprises currently operate. Most security teams don’t have visibility into what their AI tools are doing at the process level, let alone the policy framework to baseline and alert on anomalous LLM activity.
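For teams that want a first pass at that visibility, a crude version is achievable with commodity tooling. The sketch below, using psutil, flags local LLM runtimes invoked with secret-bearing file paths on the command line. The runtime watchlist is an assumption to replace with an inventory of your own fleet, and a production deployment would also watch the runtime’s file reads, since prompts rarely travel on the command line.

```python
# Sketch: treat local LLM runtimes like admin CLI tools by inspecting
# who invokes them and with what arguments. The watchlist of binary
# names is an assumption -- inventory what actually runs in your fleet.
import psutil

LLM_RUNTIMES = {"ollama", "llama-server", "lmstudio", "llamafile"}
SENSITIVE_HINTS = (".env", ".conf", ".log", "credentials", "id_rsa")

def scan_llm_processes():
    findings = []
    for proc in psutil.process_iter(["pid", "name", "username", "cmdline"]):
        info = proc.info
        name = (info["name"] or "").lower()
        if not any(runtime in name for runtime in LLM_RUNTIMES):
            continue
        cmdline = " ".join(info["cmdline"] or [])
        # An LLM runtime whose arguments reference secret-bearing files
        # is exactly the QUIETVAULT-style pattern worth alerting on.
        if any(hint in cmdline for hint in SENSITIVE_HINTS):
            findings.append((info["pid"], info["username"], cmdline))
    return findings

for pid, user, cmd in scan_llm_processes():
    print(f"review: pid={pid} user={user} cmd={cmd!r}")
```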
The Window Closed While You Were Reading This
The third finding ties the others together and removes any remaining argument for a deliberate, incremental response. In H2 2025, Google observed threat actors deploying cryptocurrency miners within 48 hours of a critical CVE’s public disclosure. Software-based initial access vectors surged from 2.9% to 44.5% of all observed incidents in a single half-year period. The exploitation window — the time between a vulnerability becoming public knowledge and adversaries weaponizing it at scale — collapsed from weeks to days.
Manual patch management, manual access reviews, and manual incident triage: All of these assume a threat tempo that no longer exists. Google’s own internal forensic pipeline demonstrated the alternative — an automated response framework that compressed a cloud compromise investigation from days to under 60 minutes, across an application the engineering team had never previously investigated. The pipeline succeeded not because it was faster than humans at the same tasks, but because it automated tasks that humans were never going to complete in time, regardless of effort.
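The report doesn’t publish that pipeline’s internals, but the pattern it demonstrates is reproducible: automate collection, enrichment, and timeline assembly end to end, and reserve the human for the containment call. A stubbed, deliberately simplified skeleton of that shape:

```python
# Sketch of the automation pattern, not Google's pipeline: gather events,
# enrich with identity context, assemble a timeline, and only then hand a
# human the containment decision. Data sources here are stubbed.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    ts: datetime
    principal: str   # human or non-human identity
    action: str
    resource: str

def collect(events):
    # In production this would pull cloud audit logs for the incident
    # window; here it just normalizes and sorts the stubbed feed.
    return sorted(events, key=lambda e: e.ts)

def enrich(events, privileged):
    # Flag actions taken by identities on the privileged watchlist.
    return [(e, e.principal in privileged) for e in events]

def timeline(enriched):
    for event, flagged in enriched:
        marker = "!!" if flagged else "  "
        yield f"{marker} {event.ts:%H:%M} {event.principal} {event.action} {event.resource}"

feed = [
    Event(datetime(2025, 11, 2, 3, 14), "ci-deployer", "sts:AssumeRole", "role/admin"),
    Event(datetime(2025, 11, 2, 3, 9), "dev-laptop", "git:push", "repo/app"),
]
for line in timeline(enrich(collect(feed), privileged={"ci-deployer"})):
    print(line)
# A human gates containment (revoke, isolate); everything above runs unattended.
```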
The Case for AI-Native Security Architecture
Reading these three findings together produces an inescapable conclusion that goes beyond Google’s tactical recommendations. The cloud threat environment now operates at machine speed, exploits identity architectures that reflect human organizational habits rather than machine security requirements, and weaponizes AI tools that defenders have barely begun to govern.
Responding to this environment by bolting AI capabilities onto security toolsets whose architectures were designed for human actors — platforms built around case management, manual triage queues, and analyst-driven investigation workflows — is the organizational equivalent of installing a jet engine on a biplane: the airframe was never engineered for the forces that engine generates, and the first real stress event reveals exactly where the design assumptions break.
What the evidence argues for is AI-native security: Platforms conceived from the ground up around the assumption that the attacker moves at machine speed, that the identity fabric includes autonomous non-human actors with their own behavioral baselines and privilege profiles, and that the detection-to-containment lifecycle must operate in minutes rather than analyst shifts. That means identity governance platforms built to handle the behavioral and privilege dynamics of AI agents, not just human users. It means threat detection logic that treats LLM activity as a first-class signal. And it means automated response pipelines where human judgment gates the decisions that warrant it, without becoming the bottleneck that costs you the breach.
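What a behavioral baseline for a non-human identity means in practice fits in a few lines. The toy below, with placeholder identities and actions, learns the API actions a service account normally performs and flags first-seen ones: the kind of signal an UNC6426-style pivot generates the moment a CI token starts calling IAM escalation APIs.

```python
# Toy sketch of a behavioral baseline for non-human identities: learn the
# set of API actions each service account normally performs, then flag
# first-seen actions. Real systems add rates, time-of-day, and resources.
from collections import defaultdict

class IdentityBaseline:
    def __init__(self):
        self.known = defaultdict(set)  # identity -> actions seen in training

    def train(self, events):
        for identity, action in events:
            self.known[identity].add(action)

    def is_anomalous(self, identity, action):
        # A CI token suddenly calling iam:CreateAccessKey is exactly the
        # kind of pivot a quarterly human access review will never catch.
        return action not in self.known.get(identity, set())

baseline = IdentityBaseline()
baseline.train([
    ("ci-deployer", "s3:PutObject"),
    ("ci-deployer", "ecr:PutImage"),
])
assert baseline.is_anomalous("ci-deployer", "iam:CreateAccessKey")  # novel
assert not baseline.is_anomalous("ci-deployer", "s3:PutObject")     # routine
```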
The data isn’t ambiguous. Adversaries already operate at machine speed, with machine-native tooling, against identity fabrics that enterprises have barely begun to govern. Organizations that treat AI-native security as a future consideration are making a present-tense risk decision — they’re just not acknowledging it as one.