Security worries linger over the agentic AI tool formerly known as Clawdbot, despite a rebrand prompted by trademark concerns raised by Anthropic. Would you be comfortable handing the keys to your identity kingdom over to a bot, one that might be exposed to the open internet?
Clawdbot, now known as Moltbot, has gone viral in AI and developer circles in recent days, with fans hailing the open-source “AI personal assistant” as a potential breakthrough.
The long and short of it is that Moltbot can be controlled through messaging apps such as WhatsApp and Telegram, in much the same way as the GenAI chatbots everyone already knows.
Taking things a little further, its agentic capabilities allow it to take care of life admin for users, such as responding to emails, managing calendars, screening phone calls, or booking table reservations – all with minimal intervention or prompting from the user.
All that functionality comes at a cost, however, and not just the outlay so many seem to be making on Mac Mini purchases for the sole purpose of hosting a Moltbot instance.
In order for Moltbot to read and respond to emails, and all the rest of it, it needs access to accounts and their credentials. Users are handing over the keys to their encrypted messenger apps, phone numbers, and bank accounts to this agentic system.
Naturally, security experts have had a few things to say about it.
First, there was the furor around public exposures. Moltbot is a complex system, and although it appears as easy to install as a typical app, it is prone to misconfigurations that prompted experts to highlight the dangers of running instances without the proper know-how.
Jamieson O’Reilly, founder of red-teaming company Dvuln, was among the first to draw attention to the issue, saying that he saw hundreds of Clawdbot instances exposed to the web, potentially leaking secrets.
He told The Register that the attack vector he reported to Moltbot’s developers – proxy misconfigurations that caused connections appearing to come from localhost to be auto-authenticated – has since been fixed. Had it been exploited, attackers could have accessed months of private messages, account credentials, API keys, and more – anything to which Clawdbot owners gave the tool access.
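O’Reilly hasn’t published the vulnerable code itself, but the class of bug is well understood. The sketch below is a hypothetical illustration – invented endpoint names and logic, not Moltbot’s actual implementation – of how a reverse proxy can quietly turn a “localhost only” check into a door open to the whole internet:

```python
# Hypothetical sketch of the "localhost auto-authentication" pitfall.
# Endpoint names and logic are invented for illustration only.
from flask import Flask, request, abort

app = Flask(__name__)
ADMIN_TOKEN = "change-me"  # in practice, load from a secrets manager

@app.route("/admin/insecure")
def insecure_admin():
    # Buggy pattern: treat loopback connections as pre-authenticated.
    # Behind a reverse proxy that forwards traffic to 127.0.0.1, *every*
    # request arrives from loopback, so the whole internet is "localhost".
    if request.remote_addr == "127.0.0.1":
        return {"config": "secrets, API keys, message history..."}
    abort(403)

@app.route("/admin/secure")
def secure_admin():
    # Safer pattern: require an explicit credential regardless of origin.
    if request.headers.get("Authorization") != f"Bearer {ADMIN_TOKEN}":
        abort(401)
    return {"config": "secrets, API keys, message history..."}
```

The safer handler works because possession of a credential, not network position, is what grants access – and network position is exactly what a proxy misconfiguration falsifies.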
His Shodan scans, corroborated by others looking into the matter, confirmed hundreds of instances exposed to the web. Any with open ports allowing unauthenticated admin connections would hand attackers the full breadth of secrets held in Moltbot.
“Of the instances I’ve examined manually, eight were open with no authentication at all and exposing full access to run commands and view configuration data,” he said. “The rest had varying levels of protection.
“Forty-seven had working authentication, which I manually confirmed was secure. The remainder fell somewhere in between. Some appeared to be test deployments, some were misconfigured in ways that reduced but didn’t eliminate exposure.”
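For anyone wondering which bucket their own deployment falls into, the audit boils down to a simple question: does an unauthenticated request get a successful response back? The path below is a placeholder – real endpoints depend on the deployment – so treat this as the shape of the check, not a drop-in tool:

```python
# Self-check sketch: does a service answer admin requests without
# credentials? "/admin" is a placeholder path, not Moltbot's real API.
import sys
import urllib.request
from urllib.error import HTTPError, URLError

def probe(host: str, port: int, path: str = "/admin") -> None:
    url = f"http://{host}:{port}{path}"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            # A success response with no auth challenge is the red flag.
            print(f"{url} answered {resp.status} WITHOUT authentication")
    except HTTPError as err:
        # 401/403 means some form of auth is at least being enforced.
        print(f"{url} returned {err.code} (auth appears enforced)")
    except URLError as err:
        print(f"{url} unreachable: {err.reason}")

if __name__ == "__main__":
    probe(sys.argv[1], int(sys.argv[2]))
```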
On Tuesday, O’Reilly published a second blog detailing a proof-of-concept supply chain exploit for ClawdHub – the AI assistant’s skills library, the name of which has not yet changed.
He uploaded a skill to the public library, artificially inflated its download count to more than 4,000, and watched as developers from seven countries downloaded the poisoned package.
The skill O’Reilly uploaded was benign, but it proved he could have executed arbitrary commands on any Moltbot instance that installed it.
“The payload pinged my server to prove execution occurred, but I deliberately excluded hostnames, file contents, credentials, and everything else I could have taken,” he said.
“This was a proof of concept, a demonstration of what’s possible. In the hands of someone less scrupulous, those developers would have had their SSH keys, AWS credentials, and entire codebases exfiltrated before they knew anything was wrong.”
ClawdHub states in its developer notes that all code downloaded from the library will be treated as trusted code – there is no moderation process at present – so it’s up to developers to properly vet anything they download.
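A first vetting pass needn’t be sophisticated to catch careless payloads. A crude triage script along the lines below – the pattern list is illustrative and trivially evaded by deliberate obfuscation, so it is a floor rather than a guarantee – at least flags the spots worth reading by hand:

```python
# Rough vetting sketch: flag patterns in a downloaded skill that warrant
# a manual read before granting it an agent's privileges. Illustrative
# only - determined attackers will obfuscate past simple greps.
import pathlib
import re
import sys

SUSPECT = {
    "process execution": re.compile(r"\b(subprocess|os\.system|exec|eval)\b"),
    "outbound network": re.compile(r"\b(requests|urllib|socket|http\.client)\b"),
    "obfuscation": re.compile(r"\b(base64|codecs\.decode|marshal)\b"),
    "credential paths": re.compile(r"(\.aws|\.ssh|id_rsa|\.env)"),
}

def vet(skill_dir: str) -> None:
    for path in pathlib.Path(skill_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in SUSPECT.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line_no}: {label}: {match.group(0)}")

if __name__ == "__main__":
    vet(sys.argv[1])
```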
Therein lies one of the key issues with the product. It is being heralded by nerds as the next big AI offering, one that can benefit everyone, but in reality it requires a specialist skillset to use safely.
Eric Schwake, director of cybersecurity strategy at Salt Security, told The Register: “A significant gap exists between the consumer enthusiasm for Clawdbot’s one-click appeal and the technical expertise needed to operate a secure agentic gateway.
“While installing it may resemble a typical Mac app, proper configuration requires a thorough understanding of API posture governance to prevent credential exposure due to misconfigurations or weak authentication.
“Many users unintentionally create a large visibility void by failing to track which corporate and personal tokens they’ve shared with the system. Without enterprise-level insight into these hidden connections, even a small mistake in a ‘prosumer’ setup can turn a useful tool into an open back door, risking exposure of both home and work data to attackers.”
The security concerns surrounding Moltbot persist even when it is set up correctly, as the team at Hudson Rock pointed out this week.
Its researchers said they looked at Moltbot’s code and found that some of the secrets shared with the assistant by users were stored in plaintext Markdown and JSON files on the user’s local filesystem.
The implication is that if a host machine – say, one of the Mac Minis being bought en masse to run Moltbot – were infected with infostealer malware, the secrets stored by the AI assistant could be compromised wholesale.
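Users can at least check what is sitting in the open on their own machine. The sketch below leans on assumptions – the data directory is a guess, and the credential patterns are illustrative rather than exhaustive – but it shows roughly what such an audit looks like:

```python
# Sketch of auditing a local-first data directory for plaintext secrets.
# The directory path and patterns are assumptions based on Hudson Rock's
# description of Markdown/JSON storage, not a documented layout.
import pathlib
import re

SECRET_SHAPES = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key IDs
    re.compile(r"(?i)(password|token|secret)\s*[:=]\s*\S+"),
]

def audit(data_dir: pathlib.Path) -> None:
    for path in data_dir.rglob("*"):
        if path.suffix not in {".md", ".json"} or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for pattern in SECRET_SHAPES:
            if pattern.search(text):
                print(f"possible plaintext secret in {path}")
                break

if __name__ == "__main__":
    # ~/.moltbot is a guess at where such a deployment keeps its state.
    audit(pathlib.Path.home() / ".moltbot")
```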
Hudson Rock is already seeing malware-as-a-service families – including RedLine, Lumma, and Vidar – implement capabilities to target local-first directory structures such as those used by Moltbot.
It is entirely conceivable that any of these popular strains could be deployed against internet-exposed Moltbot instances to steal credentials and carry out financially motivated attacks.
An attacker who also gains write access could turn Moltbot into a backdoor, instructing it to siphon sensitive data in the future, trust malicious sources, and more.
“Clawdbot represents the future of personal AI, but its security posture relies on an outdated model of endpoint trust,” said Hudson Rock. “Without encryption-at-rest or containerization, the ‘Local-First’ AI revolution risks becoming a goldmine for the global cybercrime economy.”
The start of something bigger
O’Reilly said Moltbot’s security has captured the industry’s attention recently, but it is only the latest case to prompt expert warnings about the risks of deploying AI agents more widely.
In a recent interview with The Register, Palo Alto Networks chief security intelligence officer Wendi Whitmore warned that AI agents could represent a new era of insider threats.
As they are deployed across large organizations and trusted to carry out tasks autonomously, they become increasingly attractive targets for attackers looking to hijack them for their own gain.
The key will be to rethink cybersecurity for the agentic era: affording each agent the least privileges necessary to carry out its tasks, and monitoring stringently for malicious activity.
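Concretely, least privilege for agents means a default-deny gate between the model and its tools. The toy sketch below uses invented class and tool names rather than any particular framework’s API, but it captures the shape of the control – a per-agent allowlist, plus an audit log of every allow and deny:

```python
# Minimal sketch of least-privilege gating for an agent's tool calls.
# Class, tool, and agent names are hypothetical; real frameworks differ,
# but the default-deny allowlist and audit log carry over.
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

class ToolGateway:
    def __init__(self, allowlist: set[str]):
        self.allowlist = allowlist  # per-agent, default-deny
        self.tools: dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self.tools[name] = fn

    def call(self, agent_id: str, name: str, **kwargs: Any) -> Any:
        if name not in self.allowlist:
            # Denials are logged: a hijacked agent probing beyond its
            # remit is exactly the insider-threat signal to watch for.
            log.warning("DENIED %s -> %s %s", agent_id, name, kwargs)
            raise PermissionError(f"{agent_id} may not call {name}")
        log.info("ALLOW %s -> %s", agent_id, name)
        return self.tools[name](**kwargs)

# Usage: a calendar agent may read the calendar, and nothing else.
gw = ToolGateway(allowlist={"calendar.read"})
gw.register("calendar.read", lambda day: f"events for {day}")
gw.register("email.send", lambda to, body: "sent")
print(gw.call("calendar-agent", "calendar.read", day="Monday"))
# gw.call("calendar-agent", "email.send", to="x", body="y")  # PermissionError
```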
“The deeper issue is that we’ve spent 20 years building security boundaries into modern operating systems,” said O’Reilly. “Sandboxing, process isolation, permission models, firewalls, separating the user’s internal environment from the internet. All of that work was designed to limit blast radius and prevent remote access to local resources.
“AI agents tear all of that down by design. They need to read your files, access your credentials, execute commands, and interact with external services. The value proposition requires punching holes through every boundary we spent decades building. When these agents are exposed to the internet or compromised through supply chains, attackers inherit all of that access. The walls come down.”
Heather Adkins, VP of security engineering at Google Cloud, who last week warned of the risks AI brings to the world of underground malware toolkits, is flying the flag for the anti-Moltbot brigade, urging people to avoid installing it.
“My threat model is not your threat model, but it should be. Don’t run Clawdbot,” she said, citing a separate security researcher who claimed Moltbot “is an infostealer malware disguised as an AI personal assistant.”
Principal security consultant Yassine Aboukir said: “How could someone trust that thing with full system access?” ®