By the end of January Grok, the artificial intelligence program developed by Elon Musk’s xAI and deployed on his X platform, will be plunging its digital tentacles into some of the Pentagon’s most heavily classified computer systems and intelligence databases, harvesting “all appropriate data” to provide American war planners with fresh insights.

Opening a back door into the world’s most powerful military for Musk — whose Grok tool is being investigated by the European Union for generating sexual deepfake images — may sound eccentric at best and foolhardy at worst.

But it is only one of the conundrums of a profound but little-heralded revolution that is unfolding at the top of armed forces across the West and beyond.

After years of abstract and largely futile discussion about the prospect of autonomous weapons being used in war, the Russian onslaught in Ukraine has made AI-assisted drones an indispensable tool. And they are only the most visible part of something much broader.

Advances in AI, network technology and computing power have finally made it possible to start implementing an idea known in military theory as the “intelligent kill web”. This is where the commander notionally sits like a spider at the centre of a vast array of sensors and weapons that speak to each other faster than the speed of thought. Even the French army, which has traditionally relied on homegrown military software, is moving its command systems to SitaWare, a platform that builds a real-time picture of the battlefield using AI analytics.

Today’s battlefields are crawling with tens of thousands of electronic devices that feed an overwhelming volume of data back to headquarters. The ability to combine, analyse and act on this torrent of information faster and more effectively than the enemy is decisive.

“You need to be able to collect information, to process information, to write and disseminate your order faster than your opponent,” said Yvan Gouriou, a newly retired French army general who is now a strategy adviser to Systematic, a Danish defence software firm.

The increasingly sophisticated use of AI for these purposes raises the prospect of wars in the near future that unfold at unprecedented speed, with “kill chains” — the process of finding a target, assessing it, hitting it and examining the results — contracting to a matter of seconds.

This is the dawn of the age of algorithmic warfare, according to a dozen current and former western military officers, defence industry sources and analysts who have spoken to The Sunday Times over the past few weeks.


An augmented reality table map is used to scan enemy bases

GETTY IMAGES

The introduction of AI to defence is in a league “akin to the introduction of electricity”, said one figure at a European arms manufacturer. However, it also raises questions about how much control human commanders will one day exercise over a battlefield bristling with AI-driven autonomy.

The technology is moving beyond merely sifting through the data. It is starting to sit on commanders’ shoulders as a digital intelligence analyst and tactical adviser. It identifies and suggests targets. It provides minute assessments of the battlefield terrain based on the weather. It can even assess plans of action against the other side’s movements in real time. “You can ask it to check if your plan for the enemy is corroborated by the intel that is coming back to you,” said Gouriou.

Until now, the US has led the field. Its armed forces routinely describe data as the “new ammo” and give some military units a great deal of latitude to tinker with new uses for AI.

To the untutored eye, recent exercises on the scrubby plains of Colorado by the US army’s 4th Infantry Division would have looked pretty much like any other land warfare drills in the past few years. Drones circled, artillery roared and an assortment of old Soviet T-72 tanks were reduced to superannuated scrap metal.

But the revolutionary element of the wargames, given the name “Ivy Sting”, was incorporeal: a lattice of interlaced AI modules that detected the enemy, identified and labelled potential targets for the gunners and even compared before-and-after pictures to determine whether the strikes had been successful.


The 4th Infantry Division during Ivy Sting

This was the first test of the $100 million prototype for the army’s next-generation command and control (NGC2) system, assembled by Anduril and incorporating software from the aristocracy of American defence tech, including Microsoft and Palantir.


The wreckage of a Shahed drone, fired by Russia at Kharkiv last year

SUNDAY TIMES PHOTOGRAPHER JACK HILL

The US air force has gone further still with its “Dash” experiments, which challenge various AI chatbot-style tools to plan an aerial attack better than seasoned human officers.

As recently as last summer, the results were still modest. The chatbots were significantly faster but tended to produce subtle errors such as choosing the wrong sensors for the weather conditions.

In early January, however, the air force presented new data showing that the best machines were not only well over a hundred times quicker than the officers but also performed drastically better, achieving a 97 per cent “tactical viability” rate compared with the humans’ 48 per cent.

Military and arms industry figures insist there will always be a “human in the loop” who calls the shots. Under intense time pressure and amid overwhelming volumes of incoming data, though, the potential for mistakes is significant.

“The problem with this is we know the ‘human in the loop’ doesn’t work in a high-speed environment, because you just can’t keep up,” said Thomas “TX” Hammes, who served in the US Marine Corps for 30 years and researches the future of war at the National Defense University’s Institute for National Strategic Studies in Washington.

This is a patch of military terra incognita where even the war in Ukraine — a frontier for new military technology — provides little guidance. Since the middle of 2022, the Ukrainian army has been using a situational awareness system called Delta, which fuses information from a huge variety of sources ranging from American spy satellites to first-person-view drones.

But Kyiv’s military has yet to exploit the full potential of the technology, and any future Nato-Russia war would probably unfold at a far more dizzying pace, according to one military adviser who has spent long stretches in Ukraine.

In the jargon of the armed forces, the new technology is all about drastically shortening the “Ooda” (observe, orient, decide, act) loop — the precious window of time from spotting a development to doing something about it.

The transformation has been in the making for the best part of a decade. In 2017 the first Trump administration started a programme known as Project Maven, under which Silicon Valley tech companies were invited to bring machine-learning techniques to bear on warfare.

The objective was to create a “single pane of glass”, a system that could use AI to process input from tens or hundreds of thousands of sensors so effectively that it would condense the results into a usable form on one screen.


Machine-learning techniques are already beginning to transform warfare

GETTY IMAGES

The project has been supercharged with gigantic amounts of battlefield data from Ukraine. At the same time, smaller units routinely carry “edge” devices — tablets or other computers that constantly exchange information with the mothership in headquarters.

One recently retired US military commander likened the results to playing a computer game. On one naval air defence mission, he said, he was able to track the ammunition stocks on warships halfway around the world in real time.

Now, though, militaries are starting to bolt other kinds of AI on to the basic command and control (C2) and intelligence-management systems.

One example is drones. In September, Systematic, the Danish software firm, signed a deal to wire intelligent drone swarms made by the German-British AI company Helsing into its SitaWare C2 system. This software is used across the higher command levels of the British Army and many of its European allies, and is soon to be adopted by the French army. The Helsing link would, in effect, pave the way for flocks of autonomous drones to be controlled by a commander alongside all the other conventional assets at their disposal.


A drone pilot operating a Helsing system

ALAMY

One of the most striking areas of experimentation, however, is in prediction. The value of being able to forecast your logistical needs (what supplies may be needed and where) — or, indeed, the enemy’s — can hardly be overstated.

Systematic and other European companies are also looking at using AI to try to anticipate the other side’s movements. “Based on some of the information you might have about your enemy, such as … his normal ways of working, you can look at what else you’re likely to find and where it would be relative to your sensors or equipment,” said Andrew Graham, a senior vice-president at Systematic.

Against the backdrop of Trump’s second term and the recent crisis over Greenland, European countries are moving to build their own capabilities and move away from reliance on Palantir Technologies, a US firm which has a vast share of the market. Some European sources expressed reservations about how much control Palantir is really willing to cede over its systems, and deep misgivings about the ideology and ambitions of the figures behind the company, including the billionaire Peter Thiel, who is connected with the Trump administration.

Insiders say the UK has also quietly been catching up through an AI-assisted warfare programme known as Project Asgard, which has been tested by British troops on the Nato eastern flank in Estonia. It is an integral part of the army’s ambition to increase its “lethality” — crudely, the damage it can inflict on the enemy for each unit or asset deployed — by 900 per cent over the next decade.

There are some important caveats to the hype. One is that while these processes might sound seamless in theory, structuring such prodigious and diverse data is often challenging in practice.

A senior figure at Palantir said predicting logistics involved so many variables that it tested the limits of the firm’s software.

Graham said Systematic faced similar problems: “This is a constant challenge because the amount of information is growing, not shrinking, and it will continue to grow with the number of sensors and automated systems.”

Another difficulty is that no matter how good the software is, it often runs up against military computer systems created 30 or 40 years ago that make data hard to digest.

One source in the German military complained that some information had to be transferred between databases by hand.

Kateryna Bondar, an expert on defence AI at the Center for Strategic and International Studies think tank in Washington, said this was also the biggest hurdle for the US. “A lot of this stuff is obsolete, from the beginning of the Nineties,” she said. “The biggest challenge is not to analyse this data or develop a model and use AI. It is to actually make the systems talk to each other.”

A deeper and potentially more troubling problem is the risk that AI could end up exercising ever more influence over military decisions. Some security figures are asking what keeping a “human in the loop” will actually mean once vast parts of the analysis are automated. Commanders facing snap decisions under intense time pressure will be flooded with information.

Some sources are unperturbed, arguing that high-ranking officers have always had to do this. But others have been rattled by the recent Israeli campaign in Gaza, where a series of deadly strikes on civilians has been blamed on the military’s use of AI systems called Gospel and Lavender to generate lists of potential targets.

There is a danger that the “human in the loop” may ultimately amount to little more than a rubber stamp on a plan conceived and drawn up almost entirely by AI.