An attack late last year on a company’s cloud environment is the latest example of how threat actors are using AI technologies to accelerate their malicious operations.
Armed with exposed credentials, the hackers went from gaining initial access to obtaining administrative privileges in the Amazon Web Services (AWS) environment in under 10 minutes, according to threat researchers with Sysdig.
The incident highlighted not only the bad actors’ use of generative AI but also the mistakes the victim company made when storing credentials in the cloud.
“The attack stood out not only for its speed, but also for multiple indicators that suggest the threat actor leveraged large language models (LLMs) throughout the operation to automate reconnaissance, generate malicious code, and make real-time decisions,” Sysdig’s Alessandro Brucato and Michael Clark wrote in a report this week. “The threat actor completed the entire sequence from credential theft to successful Lambda execution in just eight minutes, including reconnaissance to identify admin users and roles.”
Brucato and Clark warned that as LLMs become more sophisticated, such AI-powered attacks will become more common and effective.
“The hallucinations observed in this operation will become rarer as offensive agents increase their accuracy and awareness of target environments,” they wrote. “Organizations must prioritize runtime detection and least-privilege enforcement to quickly defend against this accelerating threat landscape.”
Using AI to Accelerate Attacks
Hackers, since the early days of generative AI in late 2022, have been using the technology to improve their capabilities, from making phishing messages more convincing to ramping up the use of vishing and deepfakes to target victims.
More recently, threat researchers have seen AI become more integral to attackers’ operations. AI vendors like OpenAI and Microsoft have outlined ways bad actors have used their AI technologies in attacks. Anthropic had issued similar warnings, but in November 2025, it pointed to an escalation in AI use by cybercriminals.
The vendor wrote that a Chinese-nexus group had used its Claude Code agentic AI coding tool to automate 80% to 90% of the work in a cyberespionage campaign against more than two dozen organizations. Human intervention was only needed at four to six critical decision points, the Anthropic team wrote.
Last month, Check Point researchers wrote about malware dubbed “VoidLink” that was developed mostly by one person using AI in less than a week.
“It signals a broader shift in the threat landscape,” the researchers wrote. “The era of AI-generated malware development is no longer speculative. It is here, and it is evolving fast.”
From Credentials to Admin Privileges
In the case detected by Sysdig on November 28, 2025, the bad actors found credentials in public AWS S3 storage buckets, then used them to escalate privileges, injecting code into a Lambda function to gain unauthorized access and exfiltrate data. Using this tactic, they edited a function called EC2-init several times, eventually hijacking the account of a user called “frick” and gaining full administrative capabilities.
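Repeated, unauthorized edits to a Lambda function leave a trail in CloudTrail. The sketch below shows one way a defender might surface them, assuming boto3 credentials with CloudTrail read access; the 24-hour window and plain print output are illustrative choices, not Sysdig’s tooling.

```python
# Minimal sketch: list recent Lambda code updates from CloudTrail so that
# unexpected edits, like the repeated changes to EC2-init, stand out.
# Assumes boto3 credentials with cloudtrail:LookupEvents permission.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)  # illustrative lookback window

paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(StartTime=start, EndTime=end):
    for event in page["Events"]:
        # Lambda records code replacement under a versioned event name
        # (e.g. UpdateFunctionCode20150331v2), so match on the prefix.
        if event["EventName"].startswith("UpdateFunctionCode"):
            print(event["EventTime"], event.get("Username", "?"),
                  event["EventName"])
```

In practice this would feed an alerting pipeline rather than stdout, but the signal is the same: a function being rewritten by an identity that has no business touching it.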
With this, they were able to move laterally through 19 identities, including five actual users and six of their own, over 14 sessions to blend in and hide their presence. Using the admin privileges, they collected data from multiple services: secrets from Secrets Manager, SSM parameters from AWS Systems Manager, CloudWatch logs, internal data from S3 buckets, and Lambda function source code.
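Bulk reads from Secrets Manager are similarly visible in CloudTrail. Here is a hedged sketch of how that harvesting pattern might be flagged, again assuming boto3 with CloudTrail read access; the one-hour window and threshold of 20 are arbitrary values that would need tuning per environment.

```python
# Sketch: count GetSecretValue calls per identity and flag spikes, a
# common sign of the kind of secret harvesting described above.
from collections import Counter
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)  # illustrative window

reads = Counter()
pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "GetSecretValue"}
    ],
    StartTime=start,
    EndTime=end,
)
for page in pages:
    for event in page["Events"]:
        reads[event.get("Username", "unknown")] += 1

# The threshold is arbitrary; tune it to normal traffic in your account.
for identity, count in reads.most_common():
    if count > 20:
        print(f"possible secret harvesting: {identity} read {count} secrets")
```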
“The use of comments, comprehensive exception handling, and the speed at which this script was written strongly suggests LLM generation,” Brucato and Clark wrote, noting that the comments in the code were written in Serbian.
Moving the Focus to LLMs, GPUs
The threat actors also used the compromised cloud credentials in an LLMjacking attack, abusing Amazon Bedrock to invoke a range of AI models, including Anthropic’s Claude 3.5 Sonnet and Claude 3 Haiku, DeepSeek R1, Meta’s Llama 4 Scout, and Amazon’s Titan Image Generator.
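One straightforward guardrail against LLMjacking is an explicit deny on Bedrock model invocation for identities that never need it. A minimal sketch, assuming boto3 with IAM write access; the policy name is hypothetical, and the policy still has to be attached to the relevant users or roles.

```python
# Sketch: create an IAM policy that explicitly denies Bedrock model
# invocation, so stolen credentials for non-AI identities can't be
# used to run models on the victim's bill.
import json

import boto3

iam = boto3.client("iam")

deny_bedrock = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBedrockInvocation",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "*",
        }
    ],
}

iam.create_policy(
    PolicyName="deny-bedrock-invocation",  # hypothetical name
    PolicyDocument=json.dumps(deny_bedrock),
)
# Note: the policy takes effect only once attached to users, groups, or
# roles that should never call Bedrock.
```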
Their focus then turned to hijacking GPUs in EC2 instances. Once they had the infrastructure in place, the attackers tried, and failed, to launch their own P5 high-performance computing instance called “stevan-gpu-monster.” They did succeed in spinning up a smaller instance that, had it not been detected, would have cost the victim company $23,600 a month.
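Unexpected GPU capacity is cheap to audit for. Below is a rough sketch that lists running GPU-class instances so a rogue launch like “stevan-gpu-monster” stands out, assuming boto3 with EC2 read access; the instance-type patterns are an illustrative, non-exhaustive set.

```python
# Sketch: list pending/running GPU-class EC2 instances in the current
# region so unrecognized launches can be spotted quickly.
import boto3

ec2 = boto3.client("ec2")

response = ec2.describe_instances(
    Filters=[
        # Non-exhaustive, illustrative GPU instance families.
        {"Name": "instance-type", "Values": ["p5.*", "p4d.*", "g5.*", "g6.*"]},
        {"Name": "instance-state-name", "Values": ["pending", "running"]},
    ]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        print(instance["InstanceId"], instance["InstanceType"],
              tags.get("Name", ""))
```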
‘Misconfigured S3 Buckets’
In a statement to the media, AWS wrote that its services and infrastructure weren’t affected by this issue, and that the compromise happened because of misconfigured S3 buckets.
“We recommend all customers secure their cloud resources by following security, identity, and compliance best practices, including never opening up public access to S3 buckets or any storage service, least-privilege access, secure credential management, and enabling monitoring services like GuardDuty, to reduce risks of unauthorized activity,” the company wrote.
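Two of those recommendations can be codified in a few calls. A minimal sketch, assuming boto3 with S3 Control and GuardDuty permissions; the account ID is a placeholder.

```python
# Sketch: enforce account-wide S3 Block Public Access (which would have
# prevented the public buckets that exposed the credentials) and enable
# GuardDuty in the current region.
import boto3

ACCOUNT_ID = "123456789012"  # placeholder account ID

s3control = boto3.client("s3control")
s3control.put_public_access_block(
    AccountId=ACCOUNT_ID,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

guardduty = boto3.client("guardduty")
# Raises an error if a detector already exists in this region.
guardduty.create_detector(Enable=True)
```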