Rapid developments in artificial intelligence are changing modern warfare and raising a new batch of ethical questions.

The U.S. military reportedly used Anthropic’s AI model Claude in strikes in Iran and Venezuela. But the Pentagon has now ended its contract with the tech company after a dispute over how the system could be deployed.

Anthropic wanted to put guardrails on Claude’s use – banning mass surveillance and fully autonomous weapons that remove a human from the so-called “kill chain.” Military officials said they would abide by lawful uses but would not accept restrictions imposed by a private company. As a result, Anthropic has been blacklisted, and the Pentagon has struck a deal with OpenAI instead.

So, how advanced are AI-powered weapons today? What risks come with the technology? Who decides the limits? And should the way this conversation is playing out in other countries, like China, affect our thinking in the U.S.? Can we afford to lose the AI arms race?

Guests:

Paul Scharre, executive vice president at the Center for a New American Security. He led the working group that drafted the current Pentagon policy on autonomous weapons and is a former Army Ranger.
Mieke Eoyang, former deputy assistant secretary of defense for cyber policy and a non-resident senior fellow at the Carnegie Mellon Institute for Strategy and Technology.