WASHINGTON — Pentagon planners and military staffs made unprecedented use of artificial intelligence during the 38-day air war against Iran, according to officials and newly disclosed usage figures for the DoD’s Maven Smart System (MSS).

“Operation Epic Fury leveraged Palantir’s Maven Smart System in order to conduct strike missions across the entire battle space, 13,000 targets in 38 days,” the Pentagon’s Chief Digital & AI Officer (CDAO), Cameron Stanley, told the SCSP AI+Expo Thursday, adding that troops have shown an “insatiable appetite” for the tech. “[AI tools] allow us to take all of this data, synthesize the data, and make better decisions, faster, on the battlefield.”

MSS is an AI tool suite that evolved from the original Project Maven experiment into a multi-purpose military planning tool built by contractor Palantir. (A separate offshoot of Maven is run by the National Geospatial-Intelligence Agency.)

During Epic Fury, month-over-month unclassified usage of MSS surged by 38 percent and classified usage by 89 percent, a Pentagon spokesperson told Breaking Defense. Measured in "tokens," the chunks of text that generative AI models process as input and output, peak daily usage rose 4,425 percent.

At one point, the spokesperson told Breaking Defense, daily usage hit approximately 20 billion tokens. For comparison, in civilian contexts, a single question-and-answer with a chatbot can require several hundred tokens, while an individual power user with a high-level paid account — someone using AI to write code, for instance — can burn up to a quarter-million tokens a day.
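A quick back-of-the-envelope check puts those figures in perspective. The sketch below uses only the numbers reported in this article; the arithmetic is illustrative, not an official Pentagon calculation.

```python
# Back-of-the-envelope check of the reported MSS usage figures.
# All inputs are the article's numbers; the math is illustrative only.

PEAK_DAILY_TOKENS = 20e9    # reported peak daily usage (~20 billion tokens)
PEAK_INCREASE_PCT = 4_425   # reported rise in peak daily usage

# A 4,425 percent rise means the new peak is (1 + 44.25) times the prior
# level, implying a pre-surge peak of roughly:
implied_baseline = PEAK_DAILY_TOKENS / (1 + PEAK_INCREASE_PCT / 100)
print(f"Implied pre-surge peak: ~{implied_baseline / 1e6:.0f} million tokens/day")

# Comparison point from the article: a heavy individual user burning
# up to ~250,000 tokens per day.
POWER_USER_DAILY = 250_000
print(f"Peak equals ~{PEAK_DAILY_TOKENS / POWER_USER_DAILY:,.0f} "
      f"power users' worth of daily consumption")
```

By that rough math, the wartime peak was about 80,000 times one heavy civilian user's daily consumption, up from an implied pre-surge peak of roughly 440 million tokens a day.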

From its humble beginnings as "a small cross-functional team of about 45 people in the Pentagon, [Maven] has exploded into the best AI-enabled C2 capability on the planet," Stanley told the SCSP conference.

So what is all this activity actually producing? The Pentagon declined to provide specific examples from the Iran conflict, but much like publicly available AI toolkits, the military versions include "low-code/no-code" tools to help non-programmers build their own software applications, or even semi-autonomous AI "agents" to perform routine tasks on their behalf. Maven Smart System also performs some distinctively military functions.

The core of the original Project Maven, for instance, was spotting potential terrorist targets in mind-numbing hours of drone surveillance video. But over time, the toolkit has expanded into what Palantir calls "a live, synchronized view of the battlespace" that "enables synchronized mission planning and execution" and features "automated object detection" and "centralized target identification and management."

The tech analyzes complex combinations of video, satellite imagery, and other highly technical sources of intelligence, according to Palantir and published reports. Maven Smart System can also sift through vast libraries of intelligence reports and draft summaries or potential future courses of action, the Pentagon spokesperson told Breaking Defense.

All this activity takes an ever-growing amount of computing power, Stanley said. “My biggest fear is really not the adversary at this point,” he told the SCSP conference. “My biggest fear is, can we keep up?”

“We are seeing a dramatic increase, not only in the utilization of our systems, but also the amount of compute that’s required,” he went on. “The Department right now is looking at a variety of different ways that we can increase our capacity in every domain, in every classification level, to make sure that that insatiable appetite is satisfied, from the lowest possible operator to the most senior commander.”

RELATED: Can tech reduce civilian deaths in conflict? Mark Milley isn’t so sure.

Accelerating AI this way isn't without risk, Stanley acknowledged. He didn't explicitly discuss collateral damage during the Iran strikes, whose first day killed at least 175 people at a school adjoining an Iranian Revolutionary Guard Corps base, reportedly due to out-of-date intelligence. (Such deadly errors, of course, happened in the pre-AI era as well.) But he emphasized CDAO was diligently testing its AI algorithms. And in the long run, he argued, the right combination of human and machine, each cross-checking the other for mistakes, could reduce errors and save lives.

“While the pace has increased, the one thing that I am very worried about with war is trying to minimize mistakes, especially when it comes to operational decision-making,” Stanley said. “And as we know, unfortunately, in our long history, humans alone make mistakes. Machines alone make mistakes too. … What I am trying to implement is the best human-machine team possible.”

“You leverage technology to do what it’s good at, very fast processing of data,” he said, “but you always have the human who will analyze the situation, apply operational art, context, and legal intent, to drastically reduce the errors and assure decision superiority on the battlefield.”