
Killer Robots Have Been Approved to Fight Crime. What Are the Legal, Ethical Concerns?

Q. First, what is your reaction to San Francisco’s decision?

A. I’m surprised to see that coming from San Francisco, which is generally a very liberal jurisdiction, rather than from a city known for being “tough on crime!” But it’s also important to understand what capabilities San Francisco’s robots have and don’t have. These are not autonomous systems that could independently choose against whom to use force. Police officers will drive them, albeit remotely and from a distance. So calling them “killer robots” might be a bit misleading.

Q. When a police officer uses deadly force, he or she is ultimately responsible for the decision. What complications might arise legally if a robot delivered that lethal force?

A. According to the Washington Post, the San Francisco Police Department does not plan to equip its robots with firearms. Instead, this policy seems to envision a situation in which police could equip a robot with something like an explosive device, a taser, or a smoke grenade. San Francisco's policy would still keep a human being "in the loop": an officer controls the robot remotely, directs where it goes, and decides if and when it should detonate explosives or incapacitate a suspect. The connection between a human decision maker and the use of lethal force would therefore still be easy to identify.

Where it can get more complicated is if the robot stops working as intended and accidentally harms someone through no fault of the operator. If the victim or her family sues, there may be questions about whether the manufacturer, the police department, or both should be held responsible. But that question is not so different from what happens when a police officer's gun accidentally fires and injures someone because of a manufacturing defect.

Q. Aside from the legal questions, what are the ethical questions society faces when robots take life? Or are the legal and ethical questions intertwined?

A. The legal and ethical questions are related. Ideally, the legal rules that states and territories enact will reflect careful thought about ethics, as well as the Constitution, federal and state laws, and smart policy choices. On one side of the scale are the significant benefits that come from tools that help protect police officers and innocent citizens from harm. Since many uses of lethal force occur because officers fear for their lives, properly regulated and carefully used robots can reduce the use of lethal force because they can reduce the number of situations in which officers are in danger.

On the other hand are concerns about making police departments more willing to use force, even when it is not a last resort; about accidents that can occur if the robot systems are not carefully tested or the users are not well trained; and about whether using robots in this way opens the door to the future use of systems with more autonomy in law enforcement decision-making.

One new question that may arise is whether police departments should adopt more cautious use-of-force policies when it is a robot delivering that force, because the robot itself cannot be killed or harmed by the suspect. In other words, we may not want to allow robots to use violence to defend themselves.
