Elon Musk has backed an X (formerly Twitter) post supporting the Pentagon’s ban on the use of Anthropic’s AI tool Claude in the military. The discussion began when X user Oilfield Rando (@Oilfield_Rando) wrote: “Claude is doing amazing things for coding, but it will flat out refuse to do further work on your project if it thinks you’re going to use it to promote things like nativism,” adding, “I 100% support the government banning its use in the military. Grok is critically important”. Musk replied to the post with a “Yes”. Notably, this is not the first time Musk has taken aim at Anthropic. In a previous post, the tech billionaire wrote “Anthropic hates Western Civilization”. Just weeks ago, he called the company’s AI “misanthropic and evil” and personally attacked Anthropic philosopher Amanda Askell over her role in shaping Claude’s ethics.
Anthropic vs Pentagon
Last month, the US Department of War designated Anthropic “a supply chain risk”, a label that has historically been applied only to foreign companies. The conflict started when Anthropic refused a Pentagon ultimatum for “full, unrestricted access” to its AI tool Claude, citing ethical concerns about mass surveillance and fully autonomous weapons. In a sharply critical Truth Social post, President Donald Trump described the company’s leadership as “Leftwing nut jobs”, signalling a deepening rift between the White House and key AI suppliers to the Pentagon. “THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!” Trump wrote in the post.
Anthropic challenges Pentagon’s ban
In response, Anthropic challenged the ban by filing a lawsuit. On March 26, US District Judge Rita Lin granted the AI company’s request for a preliminary injunction, halting both the presidential directive ordering federal agencies to stop using Anthropic’s technology and Defence Secretary Pete Hegseth’s designation of the company as a “supply chain risk”. Judge Lin criticised the government’s actions, saying they looked like “an attempt to cripple Anthropic” rather than a straightforward decision to stop using its AI tool Claude. She noted that one amicus brief described the Pentagon’s move as “attempted corporate murder.”
