This paper is published with permission from the Center for a New American Security.
By Paul Scharre, Michael C. Horowitz, and Kelly Sayler
On April 13-17, 2015, delegates to the United Nations Convention on Certain Conventional Weapons (CCW) will discuss lethal autonomous weapon systems, an emerging technology that raises significant legal, moral, ethical, and policy issues. This meeting builds on discussions held in May 2014, and states should use this opportunity to further their understanding of this important issue.
What are Lethal Autonomous Weapon Systems (LAWS)?
LAWS are weapon systems that, once activated, are intended to select and engage targets on their own, also known as operating without a human “in the loop.”
- LAWS would be different from today’s drones, where a human decides whether to fire weapons at any given target.
Why Discuss LAWS?
Increasing autonomy in systems across militaries and the commercial sector suggests it is important to consider this topic now:
- Rapid advances in computer technology have raised the prospect of future development of autonomous systems in many applications.
- It is important to distinguish between trends toward greater autonomy in systems in general, such as self-driving cars, military robots, or missiles with advanced navigation features, and autonomous weapon systems that would select and engage targets on their own.
- Some simple forms of autonomous weapons already exist, although they are generally limited to systems supervised by humans that protect vehicles and military bases from attacks.
- Some issues may be unique to LAWS relative to other weapon systems, some are exacerbated by LAWS, and some apply to any weapon, including LAWS.
What Issues Should Delegates Consider?
While intelligent, humanoid robots are likely to remain firmly in the realm of science fiction, simple autonomous weapons are possible today. Understanding the technological range of the possible is an important task.
- Autonomous weapons require careful thinking about issues of accountability, to ensure that any weapons are used in compliance with the law of war.
- Autonomous weapons raise important questions about strategic stability. As multiple countries pursue these technologies, they could affect crisis dynamics, particularly in cases where states fear adversaries might attempt to deny them situational awareness.
These effects could vary for different types of LAWS, making further discussion important to understanding how LAWS might affect crisis stability.
Do LAWS Exist Today?
Many weapons incorporate a high degree of automation, but are not “autonomous weapon systems.” A human still decides which specific targets are to be engaged. These systems include air-to-air homing missiles, torpedoes, and precision-guided weapons.
- However, over 30 nations already operate human-supervised autonomous weapon systems to defend bases or vehicles against attacks from mortars, rockets, or missiles. Systems in this category include automated air and missile defense systems as well as active protection systems for ground vehicles.
These systems are essential for responding to short-warning threats, where there is not sufficient time for a human “in the loop” to respond. To date, these systems have been used narrowly: they have defended human-occupied vehicles and bases, and they retain a person “on the loop” who supervises operation and can intervene if necessary.
Are LAWS Illegal?
There are no specific provisions in international humanitarian law (IHL) that prohibit LAWS. Like all weapons, any use of LAWS must comply with IHL principles, including distinction and proportionality, among others.
- Some uses of LAWS might therefore be illegal while others might be compliant with IHL and therefore lawful.
What is an Autonomous vs. a Semi-Autonomous Weapon?
- An autonomous weapon system is a weapon system that, once activated, is intended to select and engage targets where a human has not decided those specific targets are to be engaged.
- A human-supervised autonomous weapon system is a weapon system with the characteristics of an autonomous weapon system, but with the ability for human operators to monitor the weapon system’s performance and intervene to halt its operation, if necessary.
- A semi-autonomous weapon is a weapon system that incorporates autonomy into one or more targeting functions and, once activated, is intended to only engage individual targets or specific groups of targets that a human has decided are to be engaged (see the illustrative sketch below).
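To make these distinctions concrete, the following is a minimal, purely illustrative Python sketch that encodes the three categories by asking who selects the specific targets and whether a supervising human can intervene. The enum names and boolean inputs are our own shorthand, not terms drawn from any official definition.

```python
# Illustrative encoding of the three categories above. The names and the
# two boolean inputs are shorthand assumptions, not official terminology.
from enum import Enum

class WeaponCategory(Enum):
    SEMI_AUTONOMOUS = "human selects the specific targets to be engaged"
    HUMAN_SUPERVISED_AUTONOMOUS = "system selects targets; human can halt it"
    AUTONOMOUS = "system selects targets; no human intervention possible"

def categorize(human_selects_targets: bool, human_can_intervene: bool) -> WeaponCategory:
    """Map two human-role questions onto the three categories."""
    if human_selects_targets:
        return WeaponCategory.SEMI_AUTONOMOUS
    if human_can_intervene:
        return WeaponCategory.HUMAN_SUPERVISED_AUTONOMOUS
    return WeaponCategory.AUTONOMOUS

# e.g., an automated defensive gun whose operator can halt its operation:
print(categorize(human_selects_targets=False, human_can_intervene=True))
```

The point of the encoding is that the categories turn on the human’s role in target selection and intervention, not on how sophisticated the underlying technology is.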
For more information, see: An Introduction to Autonomy in Weapon Systems, Center for a New American Security (February 2015).
What is “Meaningful Human Control”?
Some have suggested the concept of “meaningful human control” as one way to address the challenge of increased autonomy in weapon systems. There are three essential components of meaningful human control:
- Human operators are making informed, conscious decisions about the use of weapons.
- Human operators have sufficient information to ensure the lawfulness of the action they are taking, given what they know about the target, the weapon, and the context for action.
- The weapon is designed and tested, and human operators are properly trained, to ensure effective control over the use of the weapon. These standards help ensure accountability, moral responsibility, and the ability to safely control the weapon.
For more information, see: Meaningful Human Control in Weapon Systems: A Primer, Center for a New American Security (March 2015).
Do LAWS Violate Human Dignity?
- IHL gives certain persons protected status in war, such as civilians and combatants who are hors de combat.
- However, there is no requirement to give combatants a “right to a dignified death.” Nor has allowing combatants a “dignified death” been customary practice in war.
Are LAWS Immoral or Unethical?
LAWS raise a number of important moral and ethical issues that are not explicitly addressed in international humanitarian law, but nevertheless should be considered.
- Some have suggested that LAWS could be more precise and discriminate than humans, thus reducing civilian casualties in war.
- Even if that were true, it is possible that LAWS could lead to an off-loading of moral responsibility for killing, leading to greater use of LAWS and more killing overall.
- There are also ample examples of situations in conflict in which it was lawful to kill, but humans refrained from doing so. In theory, LAWS may not have this restraint, and their use could therefore lead to more killing in war.
- Conversely, there are also many examples of situations in conflict in which humans have committed war crimes or other emotionally driven acts of violence. If programmed to act in accordance with IHL, LAWS could therefore lead to less killing in war.
Are LAWS Unpredictable?
Autonomous systems, such as self-driving cars or airplane autopilots, can introduce new challenges when operating in uncertain and unknown environments.
- Autonomous systems will follow their programming every time. In situations where the environment is known, this can lead to more precise and predictable behavior than humans exhibit.
- If faced with situations outside the bounds of what they were programmed for, however, autonomous systems may lack the flexibility and adaptability of humans in reacting to novel situations, as the sketch below illustrates.
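As a purely hypothetical illustration of both points, the sketch below shows a deterministic rule-based filter: on inputs it was programmed for it behaves identically every time, but on a novel input it has no judgment to fall back on, so a safe fallback must be designed in explicitly. All signature names and categories here are invented.

```python
# Hypothetical illustration: a rule-based "engagement" filter that is fully
# deterministic on inputs it was designed for, but brittle on novel ones.
# All signatures and categories are invented for illustration only.

KNOWN_SIGNATURES = {
    "radar_type_A": "hostile_missile",    # assumed known threat profile
    "radar_type_B": "friendly_aircraft",  # assumed known friendly profile
}

def classify(signature: str) -> str:
    """Return a target class for a sensor signature.

    Deterministic: the same input always yields the same output.
    """
    if signature in KNOWN_SIGNATURES:
        return KNOWN_SIGNATURES[signature]
    # Outside its programmed envelope the system has no human-like judgment
    # to fall back on; a safe design must make the fallback explicit.
    return "unknown_do_not_engage"

if __name__ == "__main__":
    for sig in ["radar_type_A", "radar_type_B", "decoy_type_C"]:
        print(sig, "->", classify(sig))
```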
Are LAWS Stabilizing or Destabilizing?
Discussions of LAWS to date have focused largely on the effect LAWS would have on the conduct of war, and in particular on humanitarian concerns. These are important issues, but LAWS also raise important considerations for proliferation and crisis stability.
- Many nations might have strategic or reputational incentives to pursue autonomous weapons, raising the prospect of proliferation.
- The interaction of complex, autonomous systems in real-world environments could lead to unanticipated behavior. In a crisis, this could potentially lead to unwanted or unintended escalation between parties.
- The use of autonomous trading systems in financial markets points to some of the risks associated with autonomous systems interacting in uncontrolled environments.
On May 6, 2010, the interaction of a large automated sell order with high-frequency trading algorithms led to a “flash crash,” in which U.S. stock markets lost nearly 10 percent of their value in a matter of minutes. While prices recovered quickly, the event led to the introduction of “circuit breakers” to prevent future flash crashes.
- States could consider fail-safe measures to limit the potential consequences of unanticipated behaviors by autonomous systems, such as “human circuit breakers” (see the sketch after this list).
- At the same time, some forms of LAWS, such as automated air and missile defense systems, could strengthen international stability by reducing incentives for attack and heightening deterrence.
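One way to picture a “human circuit breaker” is the minimal sketch below, loosely modeled on the post-2010 market-halt idea: an automated process is halted when its behavior exceeds a preset bound, and only a human supervisor can re-enable it. The class name, threshold, and rate-based trip condition are illustrative assumptions, not a description of any fielded system.

```python
# Minimal sketch of a "human circuit breaker" fail-safe, loosely modeled on
# the market halts introduced after the 2010 flash crash. All names and
# thresholds are hypothetical, for illustration only.
import time

class CircuitBreaker:
    """Halt an automated process when its behavior exceeds a preset bound,
    and require an explicit human decision before it may resume."""

    def __init__(self, max_actions_per_second: float):
        self.max_rate = max_actions_per_second
        self.halted = False
        self._last_action = 0.0

    def authorize(self, now: float) -> bool:
        if self.halted:
            return False
        # Trip the breaker if actions come faster than the design envelope,
        # a crude proxy for "unanticipated behavior."
        if now - self._last_action < 1.0 / self.max_rate:
            self.halted = True
            return False
        self._last_action = now
        return True

    def human_reset(self):
        # Only a human supervisor ("on the loop") may re-enable operation.
        self.halted = False

breaker = CircuitBreaker(max_actions_per_second=2.0)
for _ in range(5):
    print("authorized" if breaker.authorize(time.monotonic()) else "HALTED")
    time.sleep(0.1)  # faster than the allowed rate -> the breaker trips
```

The design choice worth noting is that the breaker fails toward inaction and places the resumption decision with a human “on the loop,” mirroring how existing human-supervised defensive systems are operated.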
Recommendations
States parties to the CCW should use these discussions to build a better understanding of LAWS and the potential challenges they bring.
- States should follow these discussions with a more focused examination of the strategic stability issues surrounding LAWS, perhaps in the form of a working group.
The full brief is available at http://dronecenter.bard.edu/files/2015/04/Autonomous-Weapons-at-the-UN_040615_FINAL.pdf.