FILE - A woman walks by a giant screen displaying the Google logo at an event at the Paris Google Lab on the sidelines of the AI Action Summit in Paris, Feb. 9, 2025.

Thibault Camus/AP

Google said Monday that it had disrupted a criminal group’s attempt to use artificial intelligence to exploit another company’s previously unknown digital vulnerability, adding to heightened worries across government and private industry about AI’s risks for cybersecurity.

Google shared limited information about the attackers and the target, but John Hultquist, chief analyst at the tech giant’s threat intelligence arm, said it represents a moment cybersecurity experts have warned about for years: malicious hackers arming themselves with AI to supercharge their ability to break into the world’s computers.

“It’s here,” Hultquist said. “The era of AI-driven vulnerability and exploitation is already here.”

The discovery comes amid rapid leaps in AI's ability to find software vulnerabilities, including the Mythos model announced a month ago by Anthropic. Among those trying to bolster their defenses is President Donald Trump's White House, which has shifted its approach to how it plans to vet the most powerful AI models before their public release.

After following through with a campaign promise to repeal Democratic President Joe Biden’s guardrails around the fast-developing technology, the Republican administration and its allies are now sending mixed signals about the government playing a larger role in AI oversight.

“Some people don’t want there to be a regulatory response to this and others do,” said Dean Ball, a senior fellow at the Foundation for American Innovation who was previously a White House tech policy adviser and a lead author of Trump’s AI policy roadmap last year.

“I don’t like regulation,” Ball said. “I would prefer for things not to be regulated. But I think we need to in this case.”

Google says it found evidence of AI helping in cyberattack

Google said it observed a group of prominent “threat actors” planning a big operation relying on a bug they had found. The vulnerability allowed them to bypass two-factor authentication to access a popular online system administration tool, which Google declined to name.

The company called it a zero-day exploit, a cyberattack that takes advantage of a previously unknown security vulnerability. "Zero-day" refers to the fact that defenders have had zero days to develop a fix for the flaw.

Google said it notified the affected company and law enforcement and was able to disrupt the operation before it caused any damage. But as it traced the hackers’ footprints, it found evidence they had used an AI large language model — the same technology that powers popular chatbots — to discover the vulnerability.

Google didn’t reveal which AI model was used in the cyberattack, only that it was most likely not Google’s own Gemini or Anthropic’s Claude Mythos. Google also didn’t reveal which group it suspected in the attack but said there was no evidence it was tied to an adversarial government, though the company said groups tied to China and North Korea have been exploring similar techniques.

Hultquist said that compared with government spies who typically work slowly and quietly, criminal hackers have some of the most to gain from AI’s “tremendous capability for speed” in finding and weaponizing security bugs.

“There’s a race between you and them to stop them before they can essentially get whatever data they need to extort you with, or launch ransomware,” he said in an interview. “AI is going to be a huge advantage because they can move a lot faster.”

Anthropic’s Mythos has sparked panic and calls for regulation

Trump’s Commerce Department announced last week that it signed new agreements with Google, Microsoft and Elon Musk’s xAI to evaluate their most powerful AI models before their public release, building on previous agreements the Biden administration made with Anthropic and ChatGPT maker OpenAI. But the announcement later disappeared from the Commerce Department website.

It was the latest example of jumbled signals from the Trump administration in the month since Anthropic announced a new model it called Mythos that it said was so “strikingly capable” at hacking and cybersecurity work that it could only release it to a small group of trusted organizations.

Anthropic created an initiative called Project Glasswing bringing together tech giants including Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world’s critical software from “severe” fallout that the new model could pose to public safety, national security and the economy. But its relationship with the U.S. government was complicated by a public and legal fight with the Pentagon and Trump himself over military use of its AI technology.

Its top rival, OpenAI, has since introduced a similar model. The company said Friday it was releasing a specialized cybersecurity version of ChatGPT that would only be available to “defenders responsible for securing critical infrastructure” to help them find and patch vulnerabilities in their code.

Ball said he’s optimistic that, over the long term, AI tools that are increasingly good at coding will make us safer from the routine cyberattacks afflicting hospitals, schools and other organizations. In the meantime, however, he said there are “untold trillions of lines of software code” supporting the world’s computing systems that are at risk if AI tools are unleashed to exploit all of their bugs.

It could take years to harden all of that software — a process that Ball believes would be aided by coordination from the U.S. government.

In the meantime, Ball predicts a “transitional period” where cybersecurity risks rise significantly and “the world might actually be more dangerous.”