AI is emerging as a double-edged cyber threat, acting not only as a sophisticated engine for attacks but also as a high-value target for threat actors.

This is according to the latest report from Google Threat Intelligence Group (GTIG), which documented the shift from hackers experimenting with AI to using it on an industrial scale.

For the first time, GTIG identified a threat actor using a zero-day exploit that the group believes was developed with AI. According to the report, the threat actor behind the exploit planned to use it in a mass exploitation event, which was prevented by Google’s counter-discovery.


The zero-day vulnerability identified was in a Python script that enables a user to bypass two-factor authentication (2FA) on a web-based, open-source system administration tool. GTIG has not identified which AI model was leveraged, but says they “have high confidence that the actor likely leveraged an AI model to support the discovery and weaponisation of this vulnerability.”

The vulnerability was the result of a high-level semantic logic flaw, which frontier large language models (LLMs) excel at identifying, Google said.

“Though frontier LLMs struggle to navigate complex enterprise authorization logic, they have an increasing ability to perform contextual reasoning, effectively reading the developer’s intent to correlate the 2FA enforcement logic with the contradictions of its hardcoded exceptions,” the threat report said. “This capability can allow models to surface dormant logic errors that appear functionally correct to traditional scanners but are strategically broken from a security perspective.”
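To illustrate the class of flaw the report describes, here is a minimal, hypothetical Python sketch (it is not the actual vulnerability, which GTIG has not published, and all names in it are invented): a 2FA enforcement function whose hardcoded exception contradicts the developer’s intent, creating a dormant logic error that looks functionally correct.

```python
# Hypothetical illustration only -- not the vulnerability GTIG describes.
# A 2FA check whose hardcoded exception contradicts the enforcement intent.

LEGACY_API_CLIENTS = {"monitoring-agent"}  # assumed legacy allow-list


def requires_2fa(user: dict) -> bool:
    """Intended rule: every interactive admin login must complete 2FA."""
    if user.get("client_id") in LEGACY_API_CLIENTS:
        # Hardcoded exception meant for non-interactive API traffic.
        # Nothing stops an interactive login from supplying this client_id,
        # so the exception silently overrides the admin rule below.
        return False
    return user.get("role") == "admin"


def login(user: dict, password_ok: bool, otp_ok: bool) -> bool:
    if not password_ok:
        return False
    if requires_2fa(user) and not otp_ok:
        return False
    return True


# A caller who can set client_id skips 2FA despite being an admin:
print(login({"role": "admin", "client_id": "monitoring-agent"},
            password_ok=True, otp_ok=False))  # True -> 2FA bypassed
```

Code like this passes review and traditional scanning because each branch is syntactically and functionally plausible; the security failure only emerges from the contradiction between the enforcement rule and its exception, which is the kind of contextual reasoning the report says frontier models are increasingly able to perform.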

AI has also accelerated the development of infrastructure suites and polymorphic malware, facilitating defense evasion.

The development of AI-enabled malware is causing a shift to autonomous attack orchestration, GTIG says, where models take the initiative to generate commands and manipulate environments based on their interpretation of target systems. GTIG’s “analysis of this malware reveals previously unreported capabilities and use cases for its integration with AI,” the group’s blog said.

As found in previous GTIG reports, threat actors continue to leverage AI as a research assistant for attacks, but are increasingly turning to agentic workflows for automation.


Threat actors also often access premium-tier AI models anonymously, using automated registration pipelines and middleware to bypass usage limits.

Supply chains continue to be pain points, as threat actors begin to target AI environments and software dependencies as an initial access vector.

“There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun. For every zero-day we can trace back to AI, there are probably many more out there. Threat actors are using AI to boost the speed, scale, and sophistication of their attacks,” said John Hultquist, Chief Analyst, Google Threat Intelligence Group.
