The UN General Assembly adopted its first resolution addressing the risks of AI in nuclear weapons systems this December. Spearheaded by Mexico, along with Austria, El Salvador, Kazakhstan, Kiribati, and Malta, the resolution focuses on the potential for AI to increase the risk of accidental detonations or unauthorized military decisions.

The Mexican-led proposal was adopted with 118 votes in favor, nine against, and 44 abstentions. The initiative underscores growing international interest in maintaining human control over nuclear command, control, and communications (NC3) architectures as new technologies enter the military sphere.

The resolution notes systemic risks associated with AI integration, including the compression of decision-making timelines and the introduction of misperceptions or cognitive biases. While many states endorse the principle of “meaningful human control,” the resolution signals concern that AI could inadvertently escalate crises even when humans retain formal authority.

Nuclear command and control involves a complex network of radar, satellites, and computer systems monitored by humans. US policy currently requires “dual phenomenology,” confirmation of an attack by both satellite and radar, before a retaliatory strike can be ordered. Experts question whether AI can be trusted to serve as one of these confirming phenomena.
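To make the logic of dual phenomenology concrete, the sketch below shows how a two-sensor confirmation rule might look in code. This is a hypothetical illustration only, not the actual NC3 software or US warning architecture; the names SensorReport, dual_phenomenology_check, and the example data are invented for clarity.

```python
# Hypothetical sketch of a dual-phenomenology rule: a warning is escalated to
# human decision-makers only when two independent sensor types (e.g. satellite
# and ground radar) both confirm the same track. Not actual NC3 logic.
from dataclasses import dataclass

@dataclass
class SensorReport:
    sensor_type: str   # e.g. "satellite" or "radar"
    track_id: str      # identifier of the detected object
    confirmed: bool    # whether this sensor flags the track as a threat

def dual_phenomenology_check(reports: list[SensorReport], track_id: str) -> bool:
    """True only if at least two *different* sensor types confirm the track."""
    confirming_types = {
        r.sensor_type for r in reports
        if r.track_id == track_id and r.confirmed
    }
    return len(confirming_types) >= 2

# Example: a satellite cue alone is not enough; radar must independently agree.
reports = [
    SensorReport("satellite", "track-7", True),
    SensorReport("radar", "track-7", False),
]
if dual_phenomenology_check(reports, "track-7"):
    print("Escalate warning to human decision-makers")
else:
    print("No independent confirmation: do not escalate")
```

The point of the rule is redundancy across independent physical phenomena: a fault or spoofing attack on one sensor type should not, by itself, trigger escalation, which is why experts hesitate to let an AI model stand in for one of the confirming channels.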

“What I worry about is that somebody will say we need to automate this system and parts of it, and that will create vulnerabilities that an adversary can exploit,” says Jon Wolfsthal, director of global risk at the Federation of American Scientists, in an interview with Wired. “It will produce data or recommendations that people are not equipped to understand, and that will lead to bad decisions.”

Geopolitical Divides and Technical Challenges

The voting pattern revealed a divide between nuclear-armed states and the Global South. While non-nuclear-weapon states viewed AI as an additional layer of risk in a fragile system, some nuclear-armed states, including Russia, the United States, and China, emphasized AI’s potential operational advantages, such as improved early warning and situational awareness.

Technical hurdles also remain, including a lack of agreed-upon definitions for “AI” or “meaningful human control.” Some experts suggest that the “black box” nature of many AI systems makes them unsuitable for the high-stakes environment of nuclear deterrence.

The resolution coincides with the 80th anniversary of both the United Nations and the dawn of the nuclear age in 1945. Mexico’s position aligns with its long-standing commitment to nuclear disarmament, reflected in the Treaty of Tlatelolco and the Treaty on the Prohibition of Nuclear Weapons (TPNW).

Recent policy rhetoric in the United States has cast the rush toward AI as an arms race. The Department of Energy recently characterized AI as “the next Manhattan Project.” Some specialists, however, have criticized the comparison.

Retired US Air Force Major General Bob Latiff compared the inevitability of AI integration to basic infrastructure: “It is like electricity. It is going to find its way into everything.”

The adoption of this resolution establishes a diplomatic foundation for future discussions, including the 2026 Review Conference of the Nuclear Non-Proliferation Treaty (NPT), as the international community seeks to define safeguards and transparency measures for the AI-nuclear nexus.