What You Need to Know About “Killer Robots”

By Dan Gettinger, @GettDan

In 2014, Lethal Autonomous Weapons Systems (LAWS) became a hot topic of debate. Military strategists are planning for a future in which autonomous unmanned systems such as aerial drones are capable of making targeting decisions without any human input. Meanwhile, the United Nations is considering a preemptive international moratorium on such systems. But what exactly are autonomous weapons? Which companies are making these systems? And what are the arguments for banning them? Here’s what you need to know about LAWS.

  • The first thing you need to know is that just because a platform is unmanned does not mean that it is a “robot.” Most drones have limited operational autonomy and require the input of many human actors to operate the vehicle, conduct targeting, and fire the weapons. However, as one New York Times story noted last month, weapons with greater autonomy could become a more common sight on the battlefield in the near future. Read On: “Fearing Bombs That Can Pick Whom to Kill,” by John Markoff (New York Times)
  • On November 14, states parties to the Convention on Certain Conventional Weapons (CCW) met in Geneva to debate lethal autonomous weapons systems, otherwise known as “killer robots.” The conference followed up on the CCW Meeting of Experts in May, a forum for delegates and civil society advocates to discuss related issues. In his report on the May meeting, Ambassador Jean-Hugues Simon-Michel, the French chair, identified key areas that he believes require further study with regard to LAWS, such as the role of autonomy in machines and the morality of lethal autonomous machines.
  • The definition of what makes a platform a LAWS and what doesn’t creates the normative boundaries for the battlefield role of “killer robots.” At the UN’s CCW Meeting of Experts on LAWS in May 2014, presentations on the technical definitions of LAWS took up the greatest amount of time. Broadly speaking, the definition of a LAWS boils down to a single question: can the machine, when receiving and interpreting information from sensors, discriminate among targets and commit to using lethal force without a human actor intervening? (A schematic sketch of this decision boundary appears after this list.) As these weapons become more advanced, they might be able to learn from experience and communicate and cooperate with other robots. Read On: “On the Concept of Autonomy,” a presentation by Dr. Raja Chatila, Centre National de la Recherche Scientifique. And: “Defining the Scope of Autonomy,” by Nicholas Marsh (Peace Research Institute Oslo).
  • The MBDA Brimstone missile is one of the few truly lethal autonomous weapons systems in use today, because it can navigate an unstructured environment. The missile is “fire and forget”: once launched, it can decide where to go and what or whom to kill within a designated area. Like the Phalanx CIWS, described in the next item, the Brimstone uses radar to find its targets and, once locked on, it decides on the optimal place to strike. The Brimstone was designed as an anti-tank missile; it saw its first operational use in 2005, but it wasn’t until 2011, during Operation ELLAMY in Libya, that the missile was used extensively in combat. It shares characteristics with the Hellfire anti-tank missile and has been tested on Reaper drones. Read On: Brimstone Datasheet (MBDA)
  • Other weapons systems in use today embody elements similar to those of the Brimstone. The Phalanx Close-In Weapon System is a ship defense system that is ubiquitous on U.S. Navy vessels. Designed by General Dynamics (now part of Raytheon) and equipped with a Gatling gun, the Phalanx is intended to protect ships from incoming missiles and artillery. It is computer-operated and uses radar to track incoming ordnance, automatically detecting, evaluating, tracking, and targeting objects for destruction; the second sketch after this list illustrates this cycle. Unlike the Brimstone, the Phalanx is a stationary weapon and so might not be considered a LAWS.
  • Some experts are not convinced that LAWS are immoral, or that a ban on these weapons would prevent nations from developing and using them. They argue that automation in weapons is inevitable, and that semi-autonomous weapons, like the Phalanx or even Israel’s Iron Dome, are already on the battlefield, saving lives and resources. Furthermore, Georgia Institute of Technology Professor Ron Arkin argues that autonomous weapons will in fact be more humane: by identifying appropriate military targets, these “smart” weapons will avoid civilian casualties and adhere more closely to the laws of war than “dumb” bombs. Read On: “Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can,” by Kenneth Anderson and Matthew C. Waxman.
  • Among the organizations pushing for an outright ban on LAWS are the Campaign to Stop Killer Robots, the International Committee for Robot Arms Control, and Reaching Critical Will. While these organizations have different missions and aims, the case against LAWS rests on the principle that weapons that act without human control are necessarily inhumane. A major ally of these campaigns is Christof Heyns, a South African lawyer and the UN’s Special Rapporteur on extrajudicial, summary, or arbitrary executions. In an April 2013 report to the UN Human Rights Council, Heyns advocated a moratorium on the development of lethal robots until international rules could be established. “Tireless war machines, ready for deployment at the push of a button, pose the danger of permanent (if low-level) armed conflict, obviating the opportunity for post-war reconstruction,” Heyns writes in the “Report of the Special Rapporteur.”
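To make the definitional question in the list above concrete, here is a minimal, purely illustrative Python sketch of the decision boundary the delegates are debating: whether a machine commits to lethal force with or without a human intervening. Every name in it (SensorTrack, is_valid_military_target, engage) is hypothetical; it does not model any real weapons system.

```python
# Illustrative sketch only: all names are hypothetical, and no real
# weapons system is modeled. The point is the branch at the end.

from dataclasses import dataclass

@dataclass
class SensorTrack:
    """A candidate object detected by onboard sensors."""
    signature: str         # e.g. a radar return classified as "tank", "car", ...
    inside_kill_box: bool  # within the operator-designated engagement area

def is_valid_military_target(track: SensorTrack) -> bool:
    # The discrimination step: can the machine itself tell a lawful
    # military target from everything else?
    return track.signature == "tank" and track.inside_kill_box

def engage(track: SensorTrack, human_in_the_loop: bool) -> str:
    if not is_valid_military_target(track):
        return "hold fire"
    if human_in_the_loop:
        # Semi-autonomous: a person must confirm before weapons release.
        return "request human authorization"
    # Fully autonomous: the machine commits to lethal force on its own.
    # This branch is what the definitional debate is about.
    return "fire"

print(engage(SensorTrack("tank", True), human_in_the_loop=False))  # "fire"
```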
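The detect-evaluate-track-engage cycle attributed to close-in defense systems like the Phalanx can be sketched the same way. Again, this is an illustration of the concept only; the ten-second threshold, the radar model, and all names are invented rather than drawn from any real system.

```python
# Conceptual sketch of a detect-evaluate-track-engage cycle with no
# human in the loop. Thresholds and names are invented for illustration.

import math
from typing import NamedTuple, Optional

class RadarContact(NamedTuple):
    range_m: float     # distance to the contact in meters
    closing_ms: float  # closing speed in m/s (positive = inbound)

def time_to_impact(c: RadarContact) -> float:
    return c.range_m / c.closing_ms if c.closing_ms > 0 else math.inf

def defense_cycle(contacts: list[RadarContact]) -> Optional[RadarContact]:
    """Detect incoming objects, evaluate the threat, engage the most urgent."""
    # Evaluate: keep only contacts that would arrive within ten seconds.
    threats = [c for c in contacts if time_to_impact(c) < 10.0]
    if not threats:
        return None  # nothing to engage this cycle
    # Prioritize: engage whichever contact would arrive first.
    return min(threats, key=time_to_impact)

# An inbound missile 2 km out closing at 300 m/s is selected; a slow,
# distant contact is ignored.
print(defense_cycle([RadarContact(2000.0, 300.0), RadarContact(9000.0, 50.0)]))
```

The stationary, purely defensive character of such a loop is exactly why, as noted above, the Phalanx might not be counted as a LAWS even though no human intervenes in the cycle.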
