Editors’ note: This post is part of a series that features presentations at this year’s 17th International Conference on Cyber Conflict (CyCon) in Tallinn, Estonia. Its subject will be explored further as part of a chapter in the forthcoming book International Law and Artificial Intelligence in Armed Conflict: The AI-Cyber Interplay. Kubo Mačák’s introductory post is available here.

Conversations among States about technological developments in how wars are fought invariably involve a discussion of the lawfulness or otherwise of those technologies under international law. During the drafting of the 1977 Additional Protocol I to the Geneva Conventions (AP I), some States proposed the creation of a special international mechanism to assess the lawfulness of new weapons. This proposal did not meet with widespread support. Instead, Article 36 of AP I now obliges High Contracting Parties to conduct national legal reviews of new weapons, means and methods of warfare.

In the lead-up to the AP I negotiations, meetings of governmental experts were concerned with future developments such as “geophysical, ecological, electronic and radiological warfare as well as with devices generating radiation, microwaves, infrasonic waves, light flashes and laser beams.” They were also mindful of the prospect that technological change “leads to the automation of the battlefield in which the soldier plays an increasingly less important role.” The rapid development and adoption of artificial intelligence (AI) in the military domain over the past few years testifies to the prescience of the experts. It also demonstrates a continuation of the pattern of legal concerns accompanying technological change.

States that adopt AI for military purposes may need to vary their usual approach to legal reviews to account for the specific characteristics of this technology. In this post we discuss the characteristics of AI that may necessitate a tailored approach to legal reviews, and share, non-exhaustively, some ways in which States can enhance the effectiveness of legal reviews of AI-enabled military capabilities. The post anticipates the forthcoming Good Practices in the Legal Review of Weapons, developed together with Dr. Damian Copeland, Dr. Natalia Jevglevskaja, Dr. Lauren Sanders, and Dr. Renato Wolf, and reflects the presentation that Netta Goussac delivered at CyCon 2025 as part of a panel titled “International Law Perspectives on AI in Armed Conflict: Emerging Issues.”

Legal Reviews are Central to Governance of Military AI

Effective legal reviews are important because they can be a potent safeguard against the development and adoption of AI capabilities that are incapable of being used in compliance with international law regulating warfare. A key mechanism for implementing international humanitarian law (IHL) at the national level, legal reviews are a legal obligation for parties to AP I (art. 36) but can also be a part of a State’s general efforts to ensure the effective implementation of its IHL obligations, chiefly the treaty and customary rules relating to means and methods of warfare. Moreover, openness about the way in which legal reviews are carried out—even if their results cannot be revealed—serves as an important transparency and confidence-building measure.

The centrality of legal reviews in how States think about military AI is reflected in the frequent reference to them in recent national statements, as well as in collective statements such as the Blueprint for Action adopted at the 2024 Summit on Responsible AI in the Military Domain (para. 11) and the Political Declaration on Responsible Military Use of AI and Autonomy (para. B).

The Need For a Tailored Approach

The term “AI” is frequently used to denote computing techniques that perform tasks that would otherwise require human intelligence. It is a poorly understood yet ubiquitous umbrella term. Moreover, the “AI effect” or “odd paradox” means that what is first (aspirationally) considered AI becomes simply “software” as soon as it usefully performs a function. The development, acquisition and employment of AI capabilities by militaries may therefore be seen as an “old problem.” Militaries have been using software for a variety of tasks for decades, without significant concerns regarding legality. However, some characteristics of AI as a technology, of the applications in which it is used, and of the way those applications are acquired and employed by militaries set it apart. This has implications for how States can review whether the AI-enabled capabilities they develop or acquire can be used lawfully by their militaries. In this post we highlight four of these characteristics, though there are more.

The first characteristic relates to the wide range of applications of AI that are or may be of use to militaries. This means that States need to make decisions about what kinds of military AI applications will be subjected to legal reviews and the rules against which such capabilities will be assessed. Areas where AI has generated important opportunities for militaries include intelligence, surveillance and reconnaissance (ISR), maintenance and logistics, command and control (including targeting), information and electronic warfare, and autonomous systems. While some of these applications align neatly with categories of weapons, means and methods of warfare that States subject to legal reviews (e.g. autonomous weapon systems that rely on AI to identify, select or apply force to targets), others don’t (e.g. AI-enabled vessels or vehicles that are designed for, say, ISR but not the infliction of physical harm).

Relatedly, the wide range of applications means that the use of AI by militaries engages a broader range of international law rules. Traditionally, the IHL rules relating to means and methods of warfare (general and specific prohibitions), and rules of arms control and disarmament law, have been at the centre of legal reviews. When it comes to military AI applications, other norms of IHL may become relevant (especially rules and principles on the conduct of hostilities, i.e. distinction, proportionality and precautions), as well as other branches of international law, such as international human rights law, international environmental law, law of the sea, space law, and the law on the use of force.

The second characteristic relates to the reliability of military AI applications. The ability to carry out a legal review requires a full understanding of the relevant capability, including the ability to foresee its effects in the normal or expected circumstances of use. But the technical performance of an AI capability can be unreliable and difficult to evaluate. The lack of transparency in how AI systems, and particularly machine learning systems, function complicates the traditional and important task of testing and evaluation. In the absence of an explanation for how a system reaches its output from a given input, assessing the system’s reliability and foreseeing the consequences of its use can be difficult, if not impossible. This complicates the task of those conducting legal reviews in advance of employment of the capabilities, as well as of legal advisers supporting military operations in real time.

Third, the development and acquisition of AI-based systems demands an iterative approach. This is a characteristic of the industry in which AI capabilities are being developed, as well as of the technology itself, which requires changes over time to maintain or improve safety and operational effectiveness. The acquisition and employment of some AI-enabled military capabilities is therefore more akin to how militaries procure services than to how they procure goods, potentially complicating the linear process of legal reviews within the acquisition/procurement process.

The final characteristic of military AI that we will mention here is the role played by industry actors, that is, entities outside of a State’s government or military. The obligation to legally review new weapons, means and methods of warfare remains with States, but industry actors play a crucial role in the process, particularly in the testing, evaluation, verification, and validation of AI capabilities. As we pointed out in a report we co-authored in 2024, “having designed and developed a particular weapon or weapon system, industry may have extensive amounts of performance data available that could be used in a legal review.” When information and expertise about AI-enabled military capabilities sits outside of the military itself, it becomes critical for States to plan whether and how to make use of this information and expertise, including as part of legal reviews. Our research indicates there are some barriers to information sharing between States and industry, including contractual and proprietary issues, that States will need to think through. 

Enhancing Effectiveness Through a Tailored Approach

To fulfil the potential of legal reviews as a safeguard against AI-enabled military capabilities that cannot be used in compliance with international law, and to facilitate compliance with their international obligations when developing and using military AI capabilities, States will need to adapt their approach to the characteristics of AI. This adaptation is relevant to all States that are developing or using AI-enabled military capabilities, whether they already conduct legal reviews and wish to strengthen their process, or are still considering whether and how to conduct legal reviews of military AI capabilities.

In our forthcoming publication, we compile a list of good practices that can enhance the efficacy of legal reviews. Here, we preview some of the observations underpinning those good practices that are most relevant to States developing or acquiring military AI capabilities.

Legal Reviews are Part of a Decision-Making Process

While the publicly available State practice is relatively limited, our research—which draws on submissions made to the Group of Governmental Experts on lethal autonomous weapon systems, and consultations with governmental, industry, civil society and academic experts—indicates that States that conduct legal reviews view them as part of their broader process for design, development and acquisition of military capabilities. In general, the output of a legal review (in effect, legal advice) is complemented by advice from other sources, including technical, operational, strategic, policy or ethical advice.

Where a legal review concludes that the normal or expected use of an AI-enabled military capability is unlawful, such advice should overrule any advice militating towards employment of the capability. However, where a legal review concludes that the normal or expected use of an AI capability is lawful, or lawful under certain circumstances or conditions, a decision-maker may have regard to additional inputs when deciding whether to authorise the relevant stage of development or acquisition.

While this observation is not novel to AI-enabled military capabilities, integration of legal reviews into a broader decision-making process is particularly important when it comes to the development and use of such capabilities and has triggered the creation of new policy frameworks in some States.

The Decision Whether and How to Conduct a Legal Review Need Not be Legalistic

A State could conduct a legal review of a military AI capability because of the State’s specific obligation under international law to undertake the review (e.g. AP I, art. 36), the State’s interpretation of its general obligations to implement IHL, or the State’s national law or policy. For reasons that we do not have the space to mention here, the language of Article 36 remains a useful guidepost for States in conducting legal reviews, no matter the basis upon which a State conducts them. Efforts to fulfil the requirements of Article 36 may lead States and experts to interpret the text to determine whether a particular military AI capability is “new,” or is a “weapon, means or method of warfare,” and whether a legal review should be limited to the question of whether a capability is “prohibited” under international law, as compared with the question of the circumstances under which it can be used lawfully.

In our view, an overly narrow or legalistic approach to whether a legal review is required and how it is to be conducted may limit the utility of this important tool. As Damian Copeland wrote for Articles of War in 2024, States can take a functional approach to legal reviews. This would mean analysing the functions of a particular AI-enabled capability to determine whether those functions are regulated by international law in any way, and assessing the capability against those rules to ensure that it can be used in compliance with them.

Legal Reviews as Part of Accelerated Processes

States may be adapting (or considering whether to adapt) procurement pathways in response to pressure to accelerate the procurement, deployment, and scaling of military AI capabilities. A key challenge for States, and one which we think could be a line of effort within initiatives to govern the development and use of AI-enabled military capabilities, is how to manage the time and resources needed to integrate legal reviews into a non-linear and iterative development and acquisition process in which reliability is difficult to assess. It is critical that States carefully and systematically locate and synchronise legal reviews within these pathways, so that such reviews can complement and feed into broader policy processes and continue to be an effective safeguard against the development and adoption of AI-enabled capabilities that are incapable of being used in compliance with international law.

Realising the Potential of Legal Reviews as a Safeguard and Mechanism

Military adoption of AI-enabled capabilities, like the adoption of earlier technologies in warfare, has prompted a discussion of the lawfulness of such capabilities under international law and of how to assess whether a capability can be used in compliance with a State’s obligations. Legal reviews are a potent tool for preventing the employment of unlawful capabilities in armed conflict as well as for facilitating compliance with IHL and other relevant international law rules.

Conducting legal reviews at the national level does not perform the same role, nor have the same effect, as adoption of governance or regulatory measures at the international level. However, legal reviews can (and will inevitably) support the implementation of policy measures (of any legal status) adopted at the international level. This is particularly true while there is no agreed verification regime among States with respect to their military AI capabilities.

To make full use of this tool, States will need to tailor how they conduct legal reviews at the national level, to acknowledge both persistent challenges in conducting legal reviews and novel challenges associated with the characteristics of AI. At the international level, openness about how legal reviews of AI-enabled capabilities are carried out (if not the outcomes of specific reviews) should be considered in initiatives to govern the development and use of AI-enabled military capabilities.

***

Netta Goussac is a Senior Researcher in the SIPRI Governance of Artificial Intelligence Programme.

Rain Liivoja is a Professor of Law at the University of Queensland, and a Senior Fellow with the Lieber Institute for Law and Land Warfare at West Point.

The views expressed are those of the authors, and do not necessarily reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense. 

Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy West Point.

Photo credit: Getty Images via Unsplash