Token Security Labs has reported that employees are actively using the open-source AI assistant Clawdbot, also known as Moltbot, in 22% of its customer organisations.

Clawdbot is a personal AI assistant, created by Peter Steinberger, that runs on a user’s own Mac or Linux device. Users can connect it to messaging and collaboration tools including Slack, WhatsApp, Telegram, Discord, Microsoft Teams, Signal and iMessage.

Token Security described the software as part of a wider trend of “shadow AI” inside companies, and said it identified that level of adoption in less than a week of analysis.

According to Token Security, the assistant integrates with calendars, email, documents and file systems: it can read and respond to emails, manage schedules, and access local files. It can also execute terminal commands and scripts, browse the web and control browsers, retain memory across sessions, and proactively reach out to users.

Many of those features depend on broad access to user accounts and data. Token Security said Clawdbot stores credentials locally in plaintext. The firm also said employees often run it on unmanaged personal devices.

Security concerns

Token Security said the combination of broad access, local credential handling and limited oversight creates risk for corporate environments. It said that in many cases the software runs outside centralised logging and monitoring.

One scenario described by Token Security involved an employee running Clawdbot on a personal laptop or Mac Mini and using WhatsApp or iMessage as the chat interface. In that scenario, the employee connects the assistant to corporate Slack and grants it access to internal channels, direct messages, files, emails and calendars.

Token Security said that setup could create a pathway for sensitive information to move from corporate systems to a consumer messaging app outside the visibility of enterprise security teams. It cited data loss prevention controls and audit trails as controls that such a workflow could bypass.

Exposed instances

Token Security also pointed to reports of exposed Clawdbot deployments on the public internet. It cited research by Jamieson O’Reilly, who it said discovered hundreds of Clawdbot instances with no authentication and open admin dashboards.

Token Security said those dashboards could expose API keys, OAuth tokens and conversation histories. It also said some cases allowed remote code execution using stolen gateway tokens.

The firm identified additional issues tied to local storage practices: it said Clawdbot stores configuration and credentials in plaintext files under ~/.clawdbot/ and ~/clawd/, readable by any process running as the user.
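That claim is straightforward to verify locally. The sketch below, assuming only the two directory paths the article names (the function name and output format are illustrative), lists any regular files found there along with their permission bits:

```shell
# Hypothetical check for the plaintext credential files described above.
# Any regular file in these directories is readable by every process
# running as the same user, regardless of its permission bits.
scan_agent_dirs() {
  for dir in "$HOME/.clawdbot" "$HOME/clawd"; do
    [ -d "$dir" ] || continue
    # Print each file with its mode, owner and size for review.
    find "$dir" -type f -exec ls -l {} +
  done
}
```

Running `scan_agent_dirs` on an endpoint with either directory present would surface the stored files for a security team to inspect.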

Token Security said connections to corporate collaboration tools could expose internal communications and documents to an AI system outside IT visibility. It also cited prompt injection as an attack vector when the tool reads emails, documents and web pages.

Token Security also highlighted the lack of default sandboxing, citing Clawdbot’s own documentation, which states there is “no perfectly secure setup when operating an AI agent with shell access.”

Response steps

Token Security outlined steps it said organisations can take when dealing with Clawdbot adoption. It recommended discovery and visibility measures such as monitoring for characteristic access patterns and process names, and checking endpoints for “.clawdbot” directories.
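The endpoint-side checks the firm describes can be sketched in a few lines of shell. This is a minimal illustration, assuming the “.clawdbot” home directory named in the article and a process name containing “clawdbot” (the process-name pattern and function name are assumptions, not documented indicators):

```shell
# Hypothetical discovery check: look for the characteristic ~/.clawdbot
# directory and any running process whose command line mentions "clawdbot".
check_endpoint() {
  found=""
  # Directory check, per the article's recommendation.
  [ -d "$HOME/.clawdbot" ] && found="${found}dir "
  # Process-name check; -f matches against the full command line.
  pgrep -f "clawdbot" >/dev/null 2>&1 && found="${found}proc"
  echo "${found:-clean}"
}
```

In a managed fleet the same logic would typically run through an EDR or MDM query rather than an ad-hoc script.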

The company also pointed to permission and access controls. It recommended reviews of OAuth grants and API tokens connected to systems such as Slack, Microsoft Teams, Google Workspace and email. It said security teams should identify and revoke unauthorised integrations that provide broad access.

It also called for clearer policies on personal AI agent usage. Token Security said employees may not understand the risk of granting an AI agent full system access.

The firm also described network-level actions. It recommended blocking or monitoring connections to Clawdbot-related infrastructure, ensuring employees do not expose gateway ports to the internet, and watching for inbound connections to internal systems from that infrastructure.

Token Security also advocated providing approved enterprise alternatives: since some employees want AI automation, it said, companies should offer tools with security controls, audit logging and IT oversight.

Token Security position

Token Security said it provides visibility and control over AI agent activity and access across an organisation. It said it can identify Clawdbot running on corporate endpoints by correlating cloud identities with third-party service connections.

It said it monitors OAuth apps, tokens and service accounts for access from Clawdbot infrastructure. It also said it can apply network policies and “right-sizing” for access, and use remediation steps to lock down identities that it finds in use.

Token Security said security teams need visibility into these tools and control over the access given to each AI agent, including the ability to lock down that access when faced with a security vulnerability or risk.