The AI-generated deepfake is becoming a familiar feature of political debates, but also a potential national security threat, as US Secretary of State Marco Rubio recently discovered.
An imitation of Rubio’s voice was used in an attempt to solicit information from foreign counterparts, highlighting how sophisticated and convincing AI voice scams have become. The deception adds to concern about the weaponisation of AI in geopolitics, whether through misinformation or algorithmic manipulation. It is not only about influencing elections. Imagine the severe consequences that could flow from targeting high-level officials with access to nuclear controls.
Misinformation and deepfakes are not new, but AI makes the danger faster, sharper and harder to untangle. The risk is especially acute during a crisis.
For example, the conflict between India and Pakistan in May saw two nuclear-armed rivals trading blows while watching each other through a fog of AI-generated half-truths. Propaganda has long been a feature of war, but early claims about the destruction of Pakistani military assets and cities, and the capture of soldiers, were quickly exposed as a disinformation campaign, drawing widespread international criticism.
But the risk did not end with the ceasefire. As AI systems develop, the false narratives could feed machine learning as biased or false data, compounding known problems with AI “hallucinations” and fuelling future crises.
International standards and regulation for AI are needed, but global cooperation to date has been hamstrung by competition, particularly between the United States and China, and by the difficulty of wrangling an industry premised on seamlessly crossing state borders. China recently announced comprehensive regulations requiring any AI-generated content on the internet to be labelled from 1 September 2025. But the problem is too big for any one country to handle alone.
The development of nuclear arms controls offers a guide, especially given the threat AI poses to the nuclear non-proliferation regime itself. Those agreements came about when both major powers, the United States and the USSR, realised the technology was too destructive to remain unregulated.
As the late Henry Kissinger warned, the fear is that AI will not just change warfare but blur the line between reality and illusion. If a global AI arms race goes unchecked, suspect phone calls may be seen as a quaint yet dangerous beginning.