Agentic AI Risk
By Jonathan Armstrong
As autonomous AI agents reshape how work gets done, businesses must urgently address the growing risks around control, security, and accountability before innovation outpaces governance.

Agentic AI seems to be the buzzword in business right now. For some it is the future: organisations of all shapes and sizes are adopting Agentic AI to automate tasks and decision-making. But businesses need to understand the technology, and their risk, too. Recent reports exposing widespread vulnerabilities in tools like OpenClaw highlight how quickly risks can materialise at scale. With adoption accelerating, businesses must understand, manage, and mitigate these threats before they become embedded across critical systems and operations.

AI shifts from content creation to autonomous action

Businesses are rapidly moving beyond using Gen AI to create content, and are now deploying AI to execute tasks. This next phase in the evolution of AI, known as Agentic AI, is gaining pace. According to Gartner Inc., 40% of enterprise applications will be integrated with task-specific AI agents by the end of 2026, up from less than 5% in 2025.

As organisations accelerate their digital transformation efforts, Gartner Inc. highlights that Agentic AI will extend far beyond individual productivity gains. Instead, it is set to redefine how work gets done, enabling more sophisticated collaboration between humans and AI, and reshaping workflows across large-scale software systems.

Agentic AI enables organisations to delegate tasks to interconnected AI systems. Rather than relying on a single tool, one AI application can coordinate multiple agents, selecting tools, making decisions, and executing processes with minimal human input. In some cases, these systems operate autonomously, determining how best to complete a task without direct oversight.
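
To make that pattern concrete, the short sketch below shows one coordinating application delegating sub-tasks to specialised agents. It is purely illustrative: the Orchestrator and Agent names are hypothetical, and a real deployment would call a language model and live tool APIs rather than returning strings.

# A minimal, purely illustrative sketch of the agentic pattern described
# above: one coordinating application selects an agent and hands over a
# task with no human in the loop. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    tools: list[str]  # the tools this agent is permitted to use

    def run(self, task: str) -> str:
        # A real agent would call an LLM and execute tool actions;
        # here we simply record what it would do.
        return f"{self.name} handled '{task}' using {self.tools}"

@dataclass
class Orchestrator:
    agents: dict[str, Agent] = field(default_factory=dict)

    def register(self, capability: str, agent: Agent) -> None:
        self.agents[capability] = agent

    def delegate(self, capability: str, task: str) -> str:
        # The orchestrator, not a human, chooses the agent -- this
        # autonomy is exactly where oversight becomes difficult.
        return self.agents[capability].run(task)

orchestrator = Orchestrator()
orchestrator.register("email", Agent("mail-agent", ["email"]))
orchestrator.register("scheduling", Agent("calendar-agent", ["calendar"]))
print(orchestrator.delegate("email", "draft a reply to the supplier"))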

While this promises significant efficiency gains, it also introduces a new layer of complexity and risk. Governance, oversight, and accountability become far more challenging when decision-making is distributed across multiple autonomous agents.

Until recently, many of these risks remained largely theoretical. However, emerging security concerns surrounding the tool OpenClaw demonstrate how quickly theoretical risk can become real-world vulnerability, highlighting the urgent need for businesses to better understand and manage the implications of Agentic AI.

The OpenClaw exposure

OpenClaw, created in late 2025 by developer Peter Steinberger as a “weekend project”, quickly gained traction for its ability to connect multiple AI agents and give them shared access to systems. Its popularity surged, with millions of developers exploring its use as part of emerging Agentic AI setups.

However, serious security concerns emerged soon after. A report in February 2026 identified more than 42,000 exposed OpenClaw control panels across 82 countries, many of which had full system access. Researchers also found nearly 50,000 devices vulnerable to remote code execution, meaning attackers could potentially take control of affected systems.

Worse still, because OpenClaw often connects to other tools and services, the risks extended beyond a single system. In some cases, vulnerabilities could allow access to emails, calendars, messaging platforms, social media accounts and browsers. A separate cybersecurity investigation also uncovered a misconfigured database exposing 1.5 million authentication tokens, along with tens of thousands of email addresses and private AI-to-AI communications.

Regulators have already begun to respond. In February 2026, the Dutch data protection authority, Autoriteit Persoonsgegevens (AP), warned organisations against using OpenClaw and similar experimental tools, particularly in environments handling sensitive data. It also addressed a common misconception, warning that a system is not secure just because it runs locally.

Managing Agentic AI risk in practice

For many organisations, addressing the risks linked to OpenClaw is unlikely to be as simple as uninstalling the software. One of the biggest challenges is visibility. In some cases, businesses may not even be aware that OpenClaw has been deployed, particularly if developers or employees have been experimenting with AI tools without formal approval.

This so-called Shadow AI risk is already significant. A Microsoft study from October 2025 suggested that 71% of UK employees admitted using unapproved AI tools at work. Given the rapid adoption of AI since then, the true figure could now be higher.

The issue is further complicated by OpenClaw’s ability to integrate with widely used platforms such as WhatsApp, Telegram, Discord, Slack and Teams. Where these connections exist, removing the risk is not straightforward: resetting credentials and access tokens across multiple systems can be time-consuming and complex, as the sketch below illustrates.
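
To see why clean-up is rarely a one-step job, the rough sketch below walks through rotating credentials for each connected service. Everything in it is hypothetical: the service list simply mirrors the platforms named above, and revoke_token and issue_token stand in for each platform’s own admin API calls.

# A hypothetical sketch of credential rotation after a suspected
# exposure. Each connected platform has its own admin API, so
# revoke_token and issue_token are placeholders for per-platform
# calls -- which is what makes clean-up slow in practice.
CONNECTED_SERVICES = ["whatsapp", "telegram", "discord", "slack", "teams"]

def revoke_token(service: str) -> None:
    # Placeholder: call the platform's real revocation endpoint here.
    print(f"[{service}] old token revoked")

def issue_token(service: str) -> str:
    # Placeholder: request a fresh, least-privilege replacement token.
    print(f"[{service}] new token issued")
    return f"{service}-new-token"

def rotate_all(services: list[str]) -> dict[str, str]:
    # Every integration must be rotated and re-tested one by one;
    # missing even one leaves a live credential in circulation.
    fresh = {}
    for service in services:
        revoke_token(service)
        fresh[service] = issue_token(service)
    return fresh

rotate_all(CONNECTED_SERVICES)

Against this backdrop, there are steps that businesses can implement now to mitigate the risks.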

Practical steps for businesses

Look at technical settings: Organisations need to restrict the use of applications like OpenClaw on their networks. Tools are available to assess Shadow AI risk; organisations that have them should add OpenClaw to their list of prohibited applications. It has been reported that it is currently not possible for users to delete an OpenClaw account, at least via the standard settings, so organisations that think they have been exposed may want to take specialist advice.
Check your socials: It has been reported that OpenClaw collects X (formerly Twitter) usernames, display names, passwords and more, so a threat actor could potentially use OpenClaw to gain access to the organisation’s social media output, which in turn creates reputational risk and exposes the organisation to phishing attacks.
Literacy is key: AI literacy has become a regulatory expectation, including under the EU AI Act, and staff need to understand both the opportunities and risks of AI systems. Proper training will be a key part of this.
Take measures to protect against Shadow AI: Whilst a literacy programme will be part of this, organisations may want to add traditional software solutions such as data loss prevention (DLP) tools, as well as specialist Shadow AI monitoring and blocking services; a simple illustration of that kind of monitoring follows this list.
Look at contracts and developer due diligence: For some organisations the issue may stem from sub-contracted developers, so they need to ensure contractual protections are in place to meet their compliance and regulatory obligations. This might also include requiring specific insurance policies, since a developer with just one or ten employees is unlikely to have the financial means to pay out when things go wrong.
Do a proper DPIA or AIIA: Carrying out a data protection impact assessment (DPIA) or AI impact assessment (AIIA) is not just common sense but may well be a legal requirement. Whilst organisations want to move quickly in the new AI world, sometimes it is necessary to step back and check that legal and compliance obligations are being taken into account.
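
As a simple illustration of the monitoring mentioned in the Shadow AI step above, the sketch below probes a range of internal hosts for an unexpected web control panel. The port number and address range are placeholders rather than real OpenClaw defaults, and dedicated Shadow AI discovery tools go far beyond a check like this.

# An illustrative Shadow AI discovery check: probe internal hosts for
# an unexpected control panel. The port and address range below are
# placeholders, not real OpenClaw defaults; substitute values from
# vendor advisories and your own network plan.
import socket

SUSPECT_PORT = 8080                              # placeholder port
HOSTS = [f"10.0.0.{i}" for i in range(1, 21)]    # placeholder range

def panel_reachable(host: str, port: int, timeout: float = 0.5) -> bool:
    # A successful TCP connect means something is listening where
    # nothing should be -- flag the host for investigation.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in (h for h in HOSTS if panel_reachable(h, SUSPECT_PORT)):
    print(f"Unexpected service on {host}:{SUSPECT_PORT} -- investigate")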

Conclusion

Agentic AI has the potential to transform how organisations operate. However, realising those benefits will depend on putting the right safeguards in place. Strengthening oversight, improving AI literacy and taking a structured approach to risk will leave businesses far better positioned to adopt these technologies with confidence.

About the Author
Jonathan Armstrong is a lawyer at Punter Southall Law working on compliance & technology. He is also a Professor at Fordham Law School. Jonathan is an acknowledged expert on AI and he serves on the NYSBA’s AI Task Force looking at the impact of AI on law & regulation.