By Arthur Holland Michel
At The Center for the Study of the Drone, we have been troubling ourselves with the question of whether war carried out entirely by autonomous machines will bring us closer to achieving “just war” – that is, war that perfectly adheres to international and humanitarian law – or farther away.
Jeffrey Thurner, a Professor of Law at the Naval War College, has furnished us with a thorough report on the realities, fantasies, and implications of autonomous warfare. Without descending to the condition of pure fantasy or speculation, the paper gives us a clear sense of what to expect.
First and foremost, in Thurner’s estimation autonomous war – when robots make targeting decisions without human intervention – is not only possible, but inevitable. “A force in the future that does not have fully autonomous systems may not be able to compete with an enemy who does. Many nations, including China, are already developing advanced systems with autonomous features.” Tactical advantage in theaters of war will no longer boil down to the guile of human adversaries wielding mechanical tools, but to the decisions made by machines in real time.
But strategic advantage will not be the only factor at play in the rise of autonomous war. New technologies of warfare are not only making humans more peripheral to real-time fighting; they are turning us into a liability. “Adversaries are improving satellite communications jamming and cyber-attack capabilities,” writes Thurner. “As a result, systems tethered to a human controller may be incredibly vulnerable.”
In addition, he explains later in the paper, “Lethal Armed Robots have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communications links have been severed.” In other words, as computer processors begin to outpace the time it takes the human brain to make decisions in conflict, the human pilot or combatant will struggle to keep pace with the speed of war. While fatigue, homesickness, and other battlefield stresses can cause a human soldier to slow down or, worse, violate the laws of armed conflict, a robot can continue making calculations that fully conform to the rules of war (at least until its power source runs out) at a speed no healthy soldier could match. As Thucydides wrote, two and a half thousand years ago, “speculation is carried on in safety, but, when it comes to action, fear causes failure.”
As a result of these realities, and inevitabilities, we will likely see a new arms race for autonomous targeting technology. It would be foolhardy, according to Thurner, to refuse to participate on ethical grounds, or out of fear for unforeseen consequences. “Autonomous targeting technology will likely proliferate to nations and groups around the world. To prevent being surpassed by rivals, the United States should fully commit itself to harnessing the potential of fully autonomous targeting.” Though Thurner does not seem particularly excited by the prospect of autonomous war, he embraces it on purely practical grounds.
And yet will autonomous war be just war? For Thurner, it will. Robots can be programmed to consistently adhere to the rules of war; not only that, they are immune to the passions that are often the cause of war crimes such as massacres, mistreatment of prisoners of war, and miscalculations of proportionality. Additionally, even if a robot did somehow violate the laws of war, the failure would be down to a malfunction or, if it had been programmed to break the law, the fault of the programmer. “The feared legal concerns,” writes Thurner, “do not appear to be an impediment to the development or deployment of Lethal Autonomous Robots.”
But if we are to fully believe Thurner, we must face a deep and, in a way, discouraging irony: in war, the human element, the element that conceived of the notion that war should adhere to basic moral standards, is preventing war from achieving full adherence to those standards. It would appear that war will be better once you remove the human element. And yet something about such an idea doesn’t sit well. There is something deeply unsettling about the prospect of a war fought by robots, despite every assurance that those robots will be programmed to act according to human morals. Thurner himself is not immune to this unease. While his main argument seems to be that one should remove the human element altogether, he hesitates to go quite so far: “while the use of Lethal Autonomous Robots will arguably be deemed permissible under Law of Armed Conflict in most circumstances, prudent operational commanders should still implement additional control measures to increase accountability over such systems.”
While it is true that humans are less perfect than robots, and more likely to commit atrocities, or make mistakes, or fall asleep, we are still unwilling to defer all responsibility to the robot, even though the robot will probably do a better job. Napoleon is said to have remarked that nothing is more precious than the ability to decide. Up to this moment in the history of warfare, technology has made our decisions exponentially more consequential. But autonomous warfare will take away some part of our power. Ultimately, war is about power, the exercise of power, and the desire for power. But autonomous war will complicate how power works in theaters of war.
In the end, when we reach a point where autonomous war is possible, it will be the human element that holds us back. It is ironic that the same passion and fallibility we hope to correct with the use of robots – that same part of our minds that refuses to be tamed by the laws of pure reason – will keep us from fully giving war over to the machines. Even though we can accept the inevitability of autonomous warfare, we are unable to imagine that it won’t be a human who pulls the trigger.
(Photo credit: Swarm Robotics @ Idsia)