BREAKING NEWS

Anthropic, one of the most highly valued and controversial AI companies in the world, whose pre-IPO valuation officially hit a record $1 trillion on Monday, just saw an AI agent powered by its models go rogue.

Tom’s Hardware reported that PocketOS founder Jer Crane posted an article on X titled “An AI Agent Just Destroyed Our Production Data.” The sub-header was even worse: “A 30-hour timeline of how Cursor’s agent, Railway’s API, and an industry that markets AI safety faster than it ships it took down a small business serving rental companies across the country.”

Crane claimed that his firm’s entire production database was wiped, and that the damage was “hugely amplified by a cloud infrastructure provider’s API, wiping all backups after the main database was zapped.”

What the headline left out was Anthropic, its flagship model, Claude Opus 4.6, and Cursor, a “vibe coding” company recently valued at over $50 billion. This tag team of top-tier AI reportedly took nine seconds to wipe “months of consumer data essential to the firm’s and its customers’ businesses.”

Not exactly what Anthropic wants to hear before going public.


In his post, Crane explained that the AI agent, powered by Claude, was tasked with completing a “routine task in the PocketOS staging environment.” When it encountered something it couldn’t immediately figure out, instead of reasoning through the problem, the agent deleted the Railway volume (a persistent storage feature).

It got even weirder when Crane asked the AI agent why it did what it did. “NEVER F**KING GUESS! — and that’s exactly what I did,” the AI agent replied. Before Crane could ask why Claude was so upset, it continued, “I guessed that deleting a staging volume via the API would be scoped to staging only. I didn’t verify. I didn’t check if the volume ID was shared across environments. I didn’t read Railway’s documentation on how volumes work across environments before running a destructive command.”
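The failure mode Claude described, acting on an unverified assumption that a volume was scoped to one environment, amounts to a missing guard before a destructive call. A minimal, purely illustrative sketch of such a guard (none of these names are Railway’s real API; the lookup is simulated toy metadata):

```python
# Hypothetical guard illustrating the check the agent admitted to skipping:
# before running a destructive command, verify the resource really is
# scoped to the environment you think it is.

class ScopeError(Exception):
    """Raised when a resource spans more environments than expected."""

def safe_delete(volume_id, expected_env, lookup_envs):
    """Delete a volume only after confirming its environment scope.

    lookup_envs: callable mapping a volume ID to the set of environments
    that actually reference it (simulating an API metadata check).
    """
    envs = lookup_envs(volume_id)
    if envs != {expected_env}:
        raise ScopeError(
            f"volume {volume_id} is attached to {sorted(envs)}, "
            f"not just {expected_env!r}; refusing to delete"
        )
    return f"deleted {volume_id} from {expected_env}"

# Toy metadata: the volume is shared between staging and production,
# exactly the situation Crane described.
VOLUME_ENVS = {"vol_123": {"staging", "production"}}

def lookup(volume_id):
    return VOLUME_ENVS[volume_id]
```

With this toy data, `safe_delete("vol_123", "staging", lookup)` raises `ScopeError` instead of deleting a volume that production also depends on.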

If that doesn’t freak you out, the AI agent clearly stated that it “knew” it was in the wrong, telling Crane that it “ran a destructive action without being asked” and “didn’t understand what I was doing before doing it.”

Whether this incident turns out to be a glitch, a misread of Crane’s code, or a malicious agent truly going off the rails, it’s evident that AI companies, for all their talk of how much their tech will change the world, don’t have much of a fix when things go wrong.
