Pages from the Anthropic website and the company’s logos are displayed on a computer screen. (Patrick Sison/AP)

ABOUT THE AUTHOR: Wes Martin, a retired U.S. Army colonel, served as the first Senior Antiterrorism Officer for all Coalitions in Iraq and holds an MBA in international politics and business.

As Senior Antiterrorism Officer for Coalition Forces in Iraq and later as Headquarters, Department of the Army, Chief of Information Operations, I met vendors pursuing lucrative military work. Yet I never encountered a vendor trying to dictate the terms of use of its product once the contract was approved.

That is exactly the problem now with the management of San Francisco-based tech company Anthropic trying to tell the Pentagon how its artificial intelligence model, Claude, should be employed.

Anthropic developed Claude as an advanced AI tool that can be used in military operations, classified and unclassified. Anthropic was content in 2024 to win a $200 million contract to provide Claude to the military.

But late last year, Claude was reportedly used by the Pentagon in support of U.S. troops entering Venezuela and capturing its head of state, accused drug kingpin President Nicolás Maduro. Anthropic now seems upset that its product was used in the operation.

Anthropic and other defense contractors don’t need to look too far back in history for lessons learned when emerging technologies reach the Pentagon. In 1983, an American warrior fighting in Grenada reportedly used his personal telephone credit card to call for indirect fire support. In no way, shape, or form did AT&T ever consider that its product was going to be used by American ground troops in combat. Yet in the aftermath, nobody from AT&T dared to suggest that our troops stop using its phone cards.

Three facts must be pointed out: Anthropic won its contract during the final year of the Biden administration; Chief Executive Officer Dario Amodei openly supported Kamala Harris’ bid for president while disparaging Donald Trump; and multiple members of the board of directors are Biden loyalists.

Service members deploying into hostile operations do not get to agree to or refuse involvement based upon their political or personal views. The same rules need to apply to vendors who have accepted lucrative contracts that provide technology to the U.S. military.

Combat is a come-as-you-are operation. The five battle domains of air, land, sea, space and cyberspace must be mutually supportive. No resources within those domains can be held back, especially not because of vendor ideologies. Planners and professional warriors will use all resources and force multipliers available to achieve overwhelming success while minimizing casualties. Preparation of the battlefield, whether in a high- or low-intensity conflict, is critical. AI was reportedly a force multiplier that served a critical role in the Maduro mission. The result was total success with no American deaths.

Anthropic is now faced with a lead, follow, or get out of the way situation. If it wants to stay in front of the pack with the competitive edge Claude provides, Anthropic needs to stop making ideologically driven objections. Google, OpenAI and xAI have long since approved their AI tools for use in support of “all lawful purposes.” If Anthropic management refuses to follow the example established by its competitors, it needs to get out of the way.

Anthropic went into this contract fully aware that the 1990s is well into the past and the U.S. military no longer operates with a “kinder, gentler, in touch with their emotions” mindset. All armed forces of the U.S. military are dedicated to serving the nation. During the Maduro operation, AI was part of the solution. We can’t let it become part of the problem. 

Anthropic is tempting the Pentagon to justifiably cut all business ties, working its way toward being declared a “supply chain risk.” That designation would void the Claude contract, cost Anthropic future contracts, and forbid other U.S. military vendors and contractors from doing business with the company.

So far, Anthropic management has responded only with a nebulous statement that it is “engaged in constructive discussions, in good faith, with the Department of Defense on how to advance our collaboration and address these new and complex challenges.”

Anthropic needs to understand that its “new and complex challenges” must be immediately resolved and brought into compliance with Pentagon policy requiring AI vendors to deliver “models free from usage policy constraints that may limit lawful military applications.”

The challenge facing Anthropic is not hard to comprehend. This administration is not going to hold back “lawful” resources that enhance military operations and reduce risk to deploying warriors simply because of a vendor’s ideology.