Postgame: The U.N. Debate on Lethal Robots

Samsung’s SGR-A1 autonomous sentry turret (patent image)

By Kelley Sayler

This post was published in conjunction with the Center for a New American Security, which has an ongoing project on Ethical Autonomy. CNAS’ statement to the CCW can be found here. For our backgrounder on the UN CCW meeting, click here.

From April 13 to 17, states parties to the UN Convention on Certain Conventional Weapons (CCW) convened a meeting of experts on lethal autonomous weapon systems (LAWS), attended by over 90 states and 15 non-governmental organizations (NGOs). The meeting was intended to build upon the first round of expert meetings, held in May 2014, and to foster discussion of the technical, ethical, and legal issues related to LAWS. It featured a combination of expert presentations and statements from participants, as well as daily side events hosted by NGOs.

“It was clear that states held widely divergent views on the nature of autonomous weapons.”

States were more engaged in the proceedings, and discussion was more focused, than during the 2014 meeting; even so, it was clear that states held widely divergent views on the nature of autonomous weapons. For some, many existing drones qualify as LAWS. For others, only systems capable of autonomous target selection and engagement qualify. For still others, only adaptive learning systems that exhibit human-level cognition would qualify as LAWS.

The lack of a common understanding of the very topic under discussion made it difficult for states to have a coherent, crosscutting conversation about their respective national policies. For example, both the U.K. and Cuban delegations unequivocally stated that their countries would not pursue LAWS; however, Cuba’s definition of LAWS, which appeared to include existing systems, was significantly more expansive than that of the U.K., which held that autonomous systems “do not and may never exist.” Clarifying the definition of LAWS will thus be critical to ensuring productive discussions in the future.

MBDA’s Brimstone “Fire and Forget” missile system.

States and NGOs also expressed a broad range of views on both the inherent legality of LAWS and the potential benefits and threats they pose. The Pakistani delegation, for example, expressed concern that LAWS would lead to increases in the use of force and ultimately lower the threshold for going to war. The delegation further stated that LAWS are, by their very nature, unethical and unlikely to be able to meet the standards of international humanitarian law, particularly with regard to the principles of distinction, proportionality, precaution, humanity, and military necessity. Similarly, the International Committee of the Red Cross and others noted that weapons with autonomy in their critical functions might have difficulty complying with international humanitarian law if employed in dynamic, rapidly changing environments, and some warned that LAWS could be vulnerable to hacking or hijacking by malicious actors. Ireland expressed an additional concern that LAWS could be used outside of a traditional military context, for example in law enforcement or domestic policing.

“No state expressed support for removing humans entirely from decisions over life and death.”

In contrast, some state delegations, such as those from South Korea, Switzerland, and Canada, advocated examining the acceptability of LAWS in the context of their use (that is, based on the characteristics of the particular system, the types and locations of targets to be engaged, the concept of employment, the ability to comply with commander’s intent, etc.). Others, such as Japan, expressed concern that a pre-emptive ban on LAWS, an approach favored by some states and NGOs, could inhibit the development of beneficial civilian or dual-use applications of autonomy. Going one step further, the Israeli delegation asserted that prudent use of LAWS could potentially promote greater compliance with international humanitarian law. Several expert panelists additionally noted that, given their speed and precision, LAWS could be helpful in executing tasks at which humans would otherwise fail.

The CCW in session (credit: Campaign to Stop Killer Robots)

Despite the absence of agreement on the definition, nature, and implications of LAWS, states broadly agreed that some level of human involvement will continue to be necessary in any use of force. While states and NGOs characterized this involvement using different terms, including “meaningful human control” and “appropriate human judgment,” no state expressed support for removing humans entirely from decisions over life and death, citing, at a minimum, the need to ensure accountability on the battlefield. France, however, notably objected to the dominant concept of “meaningful human control,” terming it “too vague to be useful,” and suggested that states instead focus on the predictability of LAWS.

Ultimately, states did not reach a common understanding of the degree of human involvement or the amount of information required to make acceptable targeting decisions. For example, while there was widespread agreement that a human who pushes a button every time a light comes on is not sufficiently involved in or informed of targeting decisions, states generally declined to articulate more specific requirements. Speaking as an expert panelist on technical issues and citing his work with Michael Horowitz, Paul Scharre noted that perfect information is not realistic in war and thus should not be required to establish “meaningful human control.” Instead, there should simply be sufficient information and human involvement to ensure the lawfulness of a given action.

While discussions during last week’s meetings were useful in supporting an open dialogue among states, it was evident that there is a significant need, prior to the next meeting of experts, for states to reach a common understanding of LAWS and to clarify the threshold for achieving a satisfactory level of human involvement in lethal targeting decisions. Failure to do so will likely stall discussions and prevent states from making progress on what is a critical issue for the future of international security.

Kelley Sayler (@kelleysayler) is a Research Associate at the Center for a New American Security. 

For updates, news, and commentary, follow us on Twitter.
