Ryan Calo is an Assistant Professor of Law at the University of Washington. His work addresses a daunting question: “How should our legal system respond to the inevitable proliferation of autonomous robots in domestic life?” Calo is widely regarded as a leading expert on the legal implications of domestic drone use. Whenever a big news story breaks in the domestic drone sphere, such as when Jeff Bezos announced Amazon’s plans for a drone delivery service, national news outlets turn to Calo for his take. Calo’s numerous publications, not to mention his regular contributions to Forbes, The Huffington Post and several other outlets, have steered the national conversation about the FAA’s drone policy, driverless cars and the implications that these systems have for our safety and privacy.
Calo is the author of “Open Robotics” and “The Drone as Privacy Catalyst,” among numerous other works. His forthcoming paper, “Robotics and the New Cyberlaw,” examines the ways in which the introduction and proliferation of robotic systems will challenge existing cyberlaw.
Interview by Arthur Holland Michel
Center for the Study of the Drone Is there a line between fears based in rational concerns and the basic human fear of the unknown and the uncanny, as you have put it, fears based on the “disquieting” effect of these machines? How much of the fear of autonomous systems is rational, and how much of it is irrational?
Ryan Calo I would not say that fear of robots is irrational, exactly. It is more that people tend to have more visceral reactions to robotics than to other technologies. Thus, I have argued that drone surveillance helps citizens form a more accurate mental model of privacy harm than internet tracking does. But I have also pointed out that an overblown reaction to an accident involving a driverless car may lead us not to adopt this technology, at the cost of more human-error accidents overall.
Drone You have called for “open robotics” as a means of fostering innovation and progress in the field. Do you see any potential danger in allowing private individuals to program capable robotic platforms to carry out potentially harmful tasks? Are there spaces for abuse?
Calo I write about this issue in my paper “Open Robotics,” from 2011. As with PCs and smartphones, open robots that allow third parties to write code will be much more useful than closed robotic systems that only do one thing. But this raises the prospect that people will do dumb or malicious things with multi-purpose robots, and, further, that the manufacturer will be left holding the bag should there be an injury and a lawsuit. I end up arguing that we should immunize manufacturers of open robotic systems for what users do with those systems. To deal with the inevitable harm, I argue that owners of robots should take out insurance—an admittedly imperfect solution.
Drone The proliferation of robots in civilian spaces means, also, the proliferation of sensors. These sensors will also be storing data, just like computer browsers. Is this a legitimate concern? From a legal perspective, could this data be accessed by, say, the NSA? Could it be sold to third parties, the way our browsing data is sold?
Calo I argue in my book chapter “Robots & Privacy” that robotics implicates privacy in three ways: direct surveillance, increased access and social meaning. Drones cut the costs of surveillance to worrisomely low levels. As you point out, law enforcement can gain access to anything a home robot records with process, and the NSA can gain access to video streams over the internet with less or no process. The real difference, however, is how we are hardwired to react to anthropomorphic machines as though a person were really there. Robots feel social to us, such that introducing them into spaces historically reserved for solitude implicates privacy even where the machine does not collect any information.
Drone Do you agree with those who argue that we have to be realistic about the fact that true privacy, in the traditional sense, is a thing of the past?
Calo I believe that surveillance technology has outpaced privacy law in general, and that robotics presents just one example.
Drone Could you see any lethal autonomous systems potentially finding their way into domestic law enforcement? And if so, what are some of the legal and ethical concerns that this might raise?
Calo When it comes to arming drones and other domestic robots, I think even non-lethal weapons are a bad idea. The problem is that officers lack situational awareness—not being physically there, they do not necessarily appreciate when the use of force would be appropriate. This effect is, if anything, exacerbated when officers don’t think the force will be lethal—although obviously the stakes are, in a sense, lower.
Drone What are some of the policy responses that you would like to see with respect to robotics in the civilian space? How can individuals effectively prepare for/respond to proliferation?
Calo I argue in a forthcoming article entitled “Robotics and the New Cyberlaw” that the mainstreaming of robotics will present a specific set of challenges. Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument. The law is not particularly well equipped to deal with these three qualities of robots, although I believe cyberlaw (internet law) represents a good starting point for the law to begin to address them.