SecuraAI, an AI security firm specializing in threat modeling and red teaming for agentic AI systems, today announced the public release of Project Feral, an independent security research initiative analyzing OpenClaw, the open-source AI agent platform that gained over 200,000+ GitHub stars in just two months.

The research represents one of the first practical applications of the OWASP Agentic Security Initiative (ASI) Top 10 (2026), Cloud Security Alliance (CSA) MAESTRO 7-layer architecture framework, and MITRE ATLASadversarial machine learning taxonomy to a real-world agentic AI system.

The OpenClaw Security Challenge

OpenClaw (formerly Clawdbot and Moltbot) grants AI agents autonomous access to shell commands, file systems, messaging platforms, and over 100 integrations through the Model Context Protocol (MCP). While the platform has revolutionized personal AI assistance, security researchers, including teams at Cisco, Palo Alto Networks, and MITRE have raised significant concerns about its security posture.

Censys tracked growth from approximately 1,000 to over 21,000 publicly exposed instances in under a week, with many leaking API keys, OAuth tokens, and private conversation histories.

Phase I Research Findings

Project Feral Phase I delivers a complete architecture-level threat model identifying:

10 enumerated threats (3 Critical, 4 High, 3 Medium)
5 multi-stage attack chains, including the “Alpha Chain” – a single crafted message leading to full remote code execution
6 trust boundaries mapped across the OpenClaw architecture
Full OWASP ASI Top 10 coverage (10/10 categories addressed)
CSA MAESTRO 7-layer mapping for each identified threat
Prioritized remediation roadmap with effort estimates

Phase I.5 Real-World Validation

Following the initial release, SecuraAI conducted a delta analysis cross-referencing the Phase I threat model against real-world incidents and OpenClaw security patches:

Validation Metric
Result

Threats validated by real-world incidents
7 of 10

Threats with MITRE ATLAS case study match
4 of 10

Threats partially mitigated by patches
6 of 10

Threats fully mitigated
0 of 10

Key incidents validating Project Feral’s findings include:

CVE-2026-25253 (CVSS 8.8): One-click RCE via token exfiltration directly maps to OC-T01 (Prompt Injection) and OC-T02 (Unsandboxed Execution)
ClawHavoc Campaign: 335 malicious skills identified in ClawHub marketplace validates OC-T05 (Supply Chain), upgraded from HIGH to CRITICAL severity
MITRE ATLAS Investigation (February 2026): 4 new case studies (AML.CS0048–0051) and 7 new agentic AI techniques added to ATLAS framework

Tri-Framework Approach

Project Feral uniquely combines three complementary security frameworks:

OWASP ASI Top 10 (2026) – Risk taxonomy for agentic AI applications
CSA MAESTRO – 7-layer architectural decomposition for multi-agent systems
MITRE ATLAS – Adversarial tactics, techniques, and procedures for AI/ML systems

This tri-framework approach enables security teams to communicate findings in the language of established industry standards while capturing the unique risks of agentic architectures.

Free Access for Educators and Researchers

All Phase I and Phase I.5 materials are published under CC BY-NC-SA 4.0 and freely available for non-commercial educational use. The research is designed for:

Case studies in AI security, software security, and threat modeling courses
Student projects: extend the threat model, build detection tools, or analyze mitigations
Research baselines: cite or build upon for academic publications
Framework validation: one of the first real-world applications of OWASP ASI Top 10

Research Portal: https://projectferal.securaai.com

Phase II: Call for Participation

SecuraAI is now recruiting academic collaborators, security researchers, and industry partners for Phase II – hands-on security testing against an isolated OpenClaw environment:

Red Teaming: Adversarial testing of identified attack chains
Vulnerability Scanning: Automated discovery and validation
Defensive Tooling: Detection rules, monitoring dashboards, and hardening scripts
Comparative Analysis: Apply methodology to other agentic AI platforms

Phase II offers an opportunity for students and researchers to work on real-world agentic AI security challenges alongside industry practitioners.

Phase II Registration: https://joinferalph2.securaai.comty purposes.