SAN FRANCISCO — Anthropic accidentally exposed roughly 512,000 lines of proprietary TypeScript source code for its AI-powered coding agent Claude Code on March 31, the company confirmed, in what it described as a human error during a routine software update.


The leak involved the full client-side architecture of Claude Code, a command-line tool that acts as an autonomous “coding coworker.” It did not include the core Claude AI model’s weights, training data or server-side infrastructure.

The exposure occurred when version 2.1.88 of the @anthropic-ai/claude-code npm package included a 59.8-megabyte JavaScript source map file. The map inadvertently pointed to a complete ZIP archive of the original codebase stored on Anthropic’s Cloudflare R2 bucket. Security researcher Chaofan Shou spotted the file and publicly disclosed it on X, triggering rapid downloads and mirroring across GitHub within hours.
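For readers unfamiliar with the mechanism: a JavaScript source map is a JSON file, referenced from a compiled bundle, whose `sources` and `sourcesContent` fields can point to or even embed the original source files. The sketch below illustrates the general pattern; the file paths and map contents are invented for illustration, not Anthropic's actual files.

```typescript
// A compiled bundle typically ends with a comment pointing at its map.
const bundleTail = "//# sourceMappingURL=cli.js.map";

// Minimal source map shaped per the Source Map v3 format. The "sources"
// paths here are hypothetical, not the real leaked ones.
const sourceMap = {
  version: 3,
  file: "cli.js",
  sources: ["src/agent/loop.ts", "src/tools/bash.ts"],
  sourcesContent: ["/* original TypeScript ... */", "/* ... */"],
  mappings: "AAAA",
};

// Anyone who fetches the published map can enumerate the original
// file paths (and, if sourcesContent is populated, the code itself).
const exposedFiles = sourceMap.sources;
console.log(exposedFiles.join("\n"));
```

This is why build pipelines commonly strip or withhold source maps from production packages: shipping the map alongside the bundle effectively publishes the pre-compilation source.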

The leaked code spanned between 1,900 and 2,000 files and revealed the inner workings of Anthropic’s agent harness — the decision-making engine that turns natural-language instructions into real actions. It included the full ReAct-style loop (reason, act, observe, repeat); tool definitions for bash commands, file reading and editing, git operations, and test execution; system prompts that guide safety, memory management, and multi-agent coordination; and dozens of unreleased feature flags.
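The ReAct pattern the article describes can be sketched in a few lines. Everything below is a toy illustration of the general technique, with a scripted stand-in for the model; none of the names or logic come from the leaked code.

```typescript
// Minimal ReAct-style agent loop: reason, act, observe, repeat.
type Tool = (input: string) => string;

// Stand-ins for the kinds of tools the article lists (file reading,
// test execution). These are hypothetical, not Anthropic's tools.
const tools: Record<string, Tool> = {
  read_file: (path) => `contents of ${path}`,
  run_tests: () => "2 passed, 0 failed",
};

interface Step { thought: string; tool?: string; input?: string }

// A scripted "model": in a real agent this would be an LLM call that
// produces the next thought and tool invocation from the history.
function reason(observations: string[]): Step {
  if (observations.length === 0)
    return { thought: "Inspect the file first", tool: "read_file", input: "app.ts" };
  if (observations.length === 1)
    return { thought: "Now verify with tests", tool: "run_tests", input: "" };
  return { thought: "Done" }; // no tool means the agent stops
}

function runAgent(): string[] {
  const observations: string[] = [];
  for (;;) {
    const step = reason(observations);                  // reason
    if (!step.tool) break;                              // stop condition
    const result = tools[step.tool](step.input ?? "");  // act
    observations.push(result);                          // observe, repeat
  }
  return observations;
}

console.log(runAgent()); // ["contents of app.ts", "2 passed, 0 failed"]
```

The interest in the leak lies in how a production harness fills in the parts this sketch fakes: the prompting that drives `reason`, the sandboxing around each tool call, and how observations are compacted to keep long sessions within context limits.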

Anthropic called the incident “human error” in its release and packaging process and stated that no customer data, credentials or model internals were compromised. The company has since issued Digital Millennium Copyright Act (DMCA) takedown notices to GitHub, resulting in the removal of more than 8,000 repositories.

Developers who examined the code said it offered an unprecedented look at how Claude Code plans complex tasks, handles errors, maintains context across long sessions and enforces boundaries to prevent unsafe actions — details previously hidden behind Anthropic’s closed-source approach.

Open-source contributors began porting the agent logic to Python, Rust and other languages, potentially accelerating development of competing coding agents. Analysts said the leak could erode Anthropic’s competitive edge in the fast-growing “agentic” AI market while exposing internal architecture that rivals had long tried to reverse-engineer.

Elon Musk, a longtime critic of Anthropic, reacted to the leak on X with pointed irony, replying “Banger 😂” and “Troubling.”

Musk has accused Anthropic’s executives of hypocrisy, claiming that the company stole training data at massive scale, paid multibillion-dollar sums to settle related lawsuits, and is now aggressively policing leaks while opposing open-source AI development. He calls Anthropic’s approach “misanthropic” and argues that the company’s closed-source stance ultimately fails when simple packaging errors hand its “secrets” to the public.