EUROPE SAYS
Arizona State University researcher warns against overtrusting AI in Iran strikes

2026-04-02

PHOENIX (AZFamily) — The U.S. military’s AI-powered battlefield intelligence system can compress targeting decisions that once took days into minutes or seconds. But in that push for speed, a preliminary inquiry by the Pentagon found the U.S. relied on outdated intelligence and struck an Iranian school, killing about 170 people, mostly children.

It turns out there’s a lot of research on what happens when humans deploy AI in battlefield settings and why things can go wrong.

“AI is not ready for prime time,” said Nancy Cooke, director of ASU’s Center for Human, AI, and Robot Teaming, on the latest episode of Generation AI. “It is unreliable. It can do unexpected things. And humans may have the tendency to overtrust it.”

Cooke has spent years studying what happens when humans team up with artificial intelligence in high-stakes scenarios. In her research on simulated drone pilot teams, she’s watched AI perform its assigned tasks flawlessly while simultaneously making the humans perform worse.

AI-powered tools like the Maven Smart System, the Pentagon’s battlefield intelligence platform that identifies and prioritizes targets, create a risk for over-reliance on AI recommendations, she said.

Large language models appear deceptively human-like, Cooke explained, but “they’re very much not like human intelligence, although people may think so and then overtrust them as a result.”

Three-person drone experiment

Cooke’s research team created simulated three-person drone teams, then substituted AI for one human pilot. The AI executed its core functions without error, controlling airspeed, heading and altitude.

But something unexpected happened.

“[The AI pilot] acted like there was no one else on the team,” Cooke said. “It did not anticipate the information needs of its fellow team members. And as a result, the coordination of the whole team broke down.”

The humans changed their behavior, too. Thinking they were working with a superior AI, the research subjects decided to follow the machine’s lead. “AI isn’t anticipating information needs. So, I’m going to stop doing that too,” seemed to be their subconscious logic.

The result: teams with an AI pilot obtained reconnaissance photos more slowly than all-human teams, despite the AI's superior individual performance.

“Even though AI may be fast, the combination of AI working with humans may be slow and bad,” Cooke said.

“It Shouldn’t Be Trusted”

Both over-reliance on AI and under-trust of it pose challenges on the battlefield, but Cooke is convinced one error is more serious.

“Definitely over-trusting is worse. Because it shouldn’t be trusted. It’s going to give you bad information a lot of the time. Not all of the time. And it’s going to be fast, but that’s not necessarily better,” she said.

The Maven Smart System represents exactly what worries her most. The Pentagon has praised the system for combining eight or nine different intelligence systems into one, condensing targeting decisions from days or hours into minutes.

“So many things can go wrong,” Cooke said. “You have all these different system components that haven’t been tested. They have no safeguards on them. We don’t know how they play off of each other and work together. It’s just a recipe for disaster.”

The Anthropic precedent

Some AI companies are drawing their own red lines. The Pentagon labeled Anthropic a supply chain risk in March after the company refused to license its products to the military for "any lawful purpose," that is, without restrictions on domestic mass surveillance or autonomous lethal weaponry.

Anthropic CEO Dario Amodei said he objected, in part, because he did not believe the company’s models could reliably handle such grave tasks.

“Anthropic was spot on. They’re not ready,” Cooke said. “And I don’t know that they’re going to be ready in a very long time.”

Her position goes further than timing concerns. Some decisions, she argues, should remain exclusively human: “decisions to target something, decisions to shoot.”

Information overload

Cooke’s wildfire research reveals another dimension of the challenge of partnering humans with AI. Drones can collect vast amounts of reconnaissance data, but processing it remains “a complex cognitive task to go over reels and reels of video.”

Her research found that too much information creates its own problems, leading to decision paralysis and worse outcomes: the opposite of what AI integration promises to deliver.

The pattern holds across domains: AI excels at narrow technical tasks but struggles with the contextual awareness and anticipation that effective teamwork requires, she said.

“I think you have to make sure that people realize that this is not human intelligence and humans have a lot to offer,” Cooke said. “The best combination would be good human intelligence coupled with good technology.”

The escalation question

Critics argue that moral qualms about autonomous weapons put the U.S. at a disadvantage against adversaries like China or Russia, who might deploy fully autonomous systems.

They worry about next-generation weapons that can decide to fire on their own. In a world where milliseconds might be the difference between life and death, these critics argue human-in-the-loop weapons won’t be able to keep up.

Cooke sees it differently: she thinks autonomous systems run the risk of friendly fire and may be vulnerable to foreign hacking, turning advanced weapons into threats against their own operators.

More broadly, she views the AI arms race as inherently escalatory, potentially raising the risk of countries opting for a weapon of last resort: a nuclear bomb. “People are pushing to, you know, move fast and break things. And indeed, we will.”


Copyright 2026 KTVK/KPHO. All rights reserved.

  • Tags:
  • AI
  • AI battlefield decision-making risks
  • AI battlefield intelligence
  • artificial intelligence
  • Autonomous weapons
  • azfamily
  • Maven Smart System Pentagon
  • Nancy Cooke ASU AI research
  • Pentagon AI targeting system
  • phoenix news
www.europesays.com