Nobody quite saw last week’s deal between Anthropic and xAI coming, but as soon as it was announced, it was suddenly overdetermined. Of course Anthropic needed access to xAI’s enormous gas-turbine-powered Colossus data center, given the rapid growth of token-hungry tools like Claude Code. Obviously the relatively uncompetitive and unpopular offerings from xAI made turning the company into a neo-cloud provider a smart choice for Elon Musk, whose main advantage in the AI race is his ability to attract capital and apply it to the increasingly physical problems of scaling — acquiring hardware, building data centers, getting them switched on — but who had nowhere to direct all those Nvidia GPUs and needed a new AI story for SpaceXAI, pre-IPO. Dario Amodei, the theory-minded essayist and Anthropic CEO with comparatively less experience in ignoring local regulations to quickly build enormous facilities, needed to stop his product from breaking down on a weekly basis as it onboards new corporate customers. It just makes sense.
This is all true, but again, an Anthropic-xAI collab wasn’t the sort of thing people were talking about two weeks ago and for plenty of good reasons. xAI might not be a clear frontier lab at the moment, but Musk still has ambitions for it to become a major player and ultimately beat Anthropic. While Anthropic will be using xAI’s first major data-center project (Colossus) to meet its inference needs — that is, to serve its models, not to train them — xAI will still be using the much larger Colossus 2 for its own purposes. Anthropic, as compute-constrained as it may be, wasn’t without options but surprisingly chose a deal with a company whose CEO Amodei has specifically identified as someone with whom he has deep ideological differences regarding both AI development and politics in general. Anthropic is also seeking access to the data-center project most identified with pollution and disregard for the surrounding community at a moment of mounting backlash.
Meanwhile, Musk has been pretty clear about his feelings toward Anthropic for a long time and especially recently:
Your AI hates Whites & Asians, especially Chinese, heterosexuals and men.
This is misanthropic and evil. Fix it.
Frankly, I don’t think there is anything you can do to escape the inevitable irony of Anthropic ending up being Misanthropic. You were doomed to this fate when you…
— Elon Musk (@elonmusk) February 12, 2026
Then, last week, Musk wrote, “I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed.” Sure.
Both of these men, in their own ways, claim to be building an epochal technology with a nonzero risk of destroying the world, and each has implied that he’s the one best equipped to avoid such an outcome. One way to read their collaboration is as an admission that they’re not quite as concerned as they say they are about things going Very Wrong with AI. One could also imagine our modern AI CEOs as so aligned in a general post-human mission that they’re willing to do anything to bring about the God of Artificial Superintelligence. As much as their mutual dislike and misaligned business interests make this a strange deal, though, toxic interpersonal dynamics offer the clearest and best explanation for why it happened: As much as they don’t trust one another, they trust OpenAI’s Sam Altman less.
Amodei left OpenAI to start Anthropic over differences with Altman. (“The problem with OpenAI is Sam himself,” Amodei said in 2021.) Today, despite Anthropic’s momentum, OpenAI remains competitive in the two ways that matter most: It has more users than anyone else through ChatGPT and has the other most credible coding product in Codex. It’s also the company most responsible for Anthropic’s inability to access more compute (Google is an established cloud company that designs its own AI chips). Anthropic knows that its current advantages over OpenAI are thin and potentially fleeting — the mood and comparative rankings in this industry are prone to rapid change — and Amodei, perhaps perceiving Musk as a less credible threat to his business than Altman in the medium term, could use some help.
As for Musk, the most useful background for this deal is probably his ongoing lawsuit against Sam Altman and Greg Brockman, with whom he co-founded OpenAI a decade ago. Now, Musk claims he was a “fool” to have ever invested in Altman’s “long con” to turn OpenAI into a for-profit company, that its leaders are currently “betraying their promise,” and that misaligned AI “could kill us all.” However one assesses Elon Musk’s level of concern for human beings who are not Elon Musk, the outline of the story is enough to justify virtually any deal that might get in Altman’s way: Musk helped start the emblematic firm of the current AI boom, in part to prevent Google from winning, but then, after trying to take control himself, got outmaneuvered and pushed out and hasn’t been able to catch up using either Tesla or xAI.
Amodei believes he might have a chance to beat his old boss. Musk sees a chance to put more pressure on a nemesis whom he’s trying to thwart in multiple other ways. Whatever differences they have about the development of AI, they’re united in one goal: to make sure Sam Altman doesn’t end up with more power than he already has.