In the hours after India struck Nur Khan Airbase in Pakistan on May 10, 2025, social media erupted with more than images of missile strikes and military mobilization: it became the epicenter of a digital counter-offensive. AI-created satellite images, deepfake videos of explosions and airfield damage, and fake audio messages in which a senior Pakistani commander appeared to declare a nuclear alert spread widely, stoking panic across the region as the world watched. Cyberspace, once a peripheral battleground, has become central to conflict escalation: a place where one synthesized clip might trigger a real-world nuclear miscalculation.

South Asia's strategic deterrence systems rely on quick response times and tight command and control. Officially, India maintains a no-first-use policy, while Pakistan's Full Spectrum Deterrence doctrine permits the early use of nuclear weapons in response to a conventional attack. Compressed decision windows during crises, however, give misinformation room to distort threat perceptions and push leaders toward choices built on false assumptions.

Deepfakes, AI-generated images, videos, or audio, have evolved from crudely crafted digital fakes into near-photographic forgeries that can impersonate world leaders and military commanders. A DISA report shows that deepfake tools are now readily available to anyone with a smartphone and minimal technical experience. Casual users can already clone voices, lip-sync audio to video convincingly, even in real time, and render battlefield footage that looks authentic. When emotionally charged disinformation spreads on social media, it can bypass analytical filters and pressure leaders to act before official verification is possible.

In the panic of May, observers saw how synthetic material exacerbated the crisis. A deepfake of Pakistan's prime minister, seemingly conceding defeat and questioning the nation's resolve, went viral before it was debunked. At the same time, Indian television channels aired recycled or AI-generated clips, drones exploding at close range or penetrating Pakistani territory, presented without verification as genuine battlefield footage. The London Story and CSOH were among the first to document how X and Facebook filled with nationalist imagery and emotionally manipulative disinformation, pushing both countries "closer to war."

Adversarial digital combat raises the danger of misjudgment. A forged statement by a military figure, reported as real, can set off a chain reaction: civilians panic, the news amplifies the alarm, military command demands an explanation, and politicians, responding to public fear, press for action. AI-powered deceptions can easily outpace fact-checkers and deepen the erosion of trust between the two sides.

This is not only a South Asian phenomenon. In Ukraine, a deepfake video purporting to show President Zelensky calling for surrender spread before it was debunked. Synthetic audio has claimed victims too: in 2019, a CEO's cloned voice was used to steal $243,000 from a UK company. These examples show that synthetic media can upend public sentiment and national policy.

South Asia already confronts formidable AI security challenges. India has introduced AI-based surveillance systems, and Pakistan's military has created training and emerging-technology units with an eye toward AI. The accidental firing of a BrahMos missile from Indian territory in 2022 shows how technical and procedural failures can prove fatal. Military AI systems without human oversight could likewise mistake provocative acts for actual attacks, setting off an irreversible chain of escalation.

Yet even as governments fold AI into their military arsenals, few confidence-building measures address the risks of synthetic media. Neither India nor Pakistan has set up the infrastructure to share suspected AI-generated content, cross-check potentially false reports with the other side, or quarantine misinformation before it reaches the public, a foundational step in limiting information distortion that ought to be codified. Even Pakistan's official crisis-communication SOPs include no mechanisms to rapidly detect and counter deepfakes as an emerging threat.

Two measures are urgently needed to reduce the risk of nuclear escalation driven by AI-generated disinformation. First, create a bilateral digital crisis-management mechanism. This could include hotlines for reporting suspected deepfakes, rapid-response collaboration on fact-checking, and shared technical standards for identifying AI-generated content. India has already developed a deepfake-detection tool, "Vastav AI," and a shared or interoperable version could be designed for use between India and Pakistan.

Second, initiate Track II dialogues with security academics, technologists, and AI ethicists on both sides. Such forums should be modeled on early nuclear-era confidence-building dialogues but focus on synthetic media and cyber-driven disinformation. They could help shape standards for labeling deepfakes, joint monitoring of manipulated content, and protocols for alerting the public whenever such material circulates.

Such measures should be incorporated into existing crisis-communication mechanisms, such as back-channel military "hotlines" and foreign-office contact points. They are more likely to be effective if embedded in the structures of third parties such as the Shanghai Cooperation Organization or the International Atomic Energy Agency, forums in which India and Pakistan already participate.

As the marketplace for synthetic media grows, the information battlefield can be as dangerous as missile silos. In the May crisis, the speed of disinformation was itself a weapon. Combating the "weaponization of AI by mainstream and social media" was strategically foundational, officials said. For its part, Pakistan's national security establishment acknowledged its own lack of structural resilience against AI-fueled textual attacks, warning that "unattended coordinated AI strikes" pose a menacing threat.

If the deterrence theory of the past revolved around missiles and command posts, it must now reckon with algorithms and the perceptions AI creates. Both India and Pakistan acknowledge the need to regulate military AI, yet the synthetic-media domain remains perilously unregulated. If digital trust infrastructures are not integrated into crisis frameworks, the next flashpoint may be ignited not by bombs but by bytes.

The urgency is real. A single piece of well-timed disinformation, whether a deepfake of public capitulation, a cloned voice ordering preparations for nuclear war, or a fake video of battlefield mayhem, could nudge one or both countries past the point of no return. The task now is to modernize deterrence around information as well as weapons.

To guard against this new frontier of escalation, India and Pakistan must now fight nuclear fire with digital fire, pursuing their nuclear doctrines on two tracks: strategic stability through arsenals and resilience in the digital domain. In the age of AI, the fastest weapon is digital disruption: like the cyberattacks attributed to Russia, it is so cheap and easily deployed that you may not even know who is firing at you. And illusions can be as dangerous as warheads.