If a robot breaks the law, who will face the penalties? On the surface it sounds like a silly question. After all, it's not like we have robots roaming the streets picking pockets and raising a ruckus. There are, however, self-driving cars hitting the streets in many areas, and they are going to become more and more common as time goes by. And traffic laws are among the laws most commonly broken by human drivers, if not the most.
Take the November 12 case in Mountain View, California, where police pulled over a Google self-driving car for driving too slowly and impeding traffic. The officer was surprised to find that the vehicle he had detained had no driver, only a passenger. He issued no ticket, however, as it turned out that the prototype vehicles are currently capped at a top speed of 25 miles per hour and no law had been broken.
It does raise interesting questions about how to handle such situations. To be sure, the passengers in these vehicles do have ultimate control over how they drive and would likely be liable, though Google has said it is willing to pick up the tab in cases involving its cars. But what happens when the programming breaks the law without the operator's knowledge, or because the car must choose between obeying one law and another, or break the law in the name of safety, such as running a stop sign or red light to avoid a collision? Will the vehicles automatically pull over when an officer signals them, even if no violation has occurred? Could a vehicle be programmed to kill, even its own passenger, if need be?
These may all sound like theoretical questions, but they matter a great deal in the new world of driverless vehicles. For example, if an officer pulls a car over without any violation because he suspects a passenger of a crime, that could constitute a violation of the passenger's constitutional rights. It may sound like a far-fetched concern, but given the current state of policing in this country, it almost certainly will come up at some point.
Then there is the concern that following the law too closely can be more of a hazard than breaking it. That was certainly the reason this vehicle was pulled over: driving too slowly can cause traffic jams and hard braking, as well as rear-end collisions like those that occurred this summer. To be sure, the other drivers were at fault in those accidents, but the question remains whether they would have occurred with a human behind the wheel. In other words, humans are used to driving alongside other humans, who rely on instinct and make mistakes that can be anticipated and accounted for. Driverless vehicles have no such instincts; they obey the rules even when doing so is not logically the best course of action.
For its part, Google is touting the incident as proof that autonomous cars are superior to human drivers. The company boasts that humans rarely get pulled over for driving too slowly (a questionable assertion, but a tongue-in-cheek one) and says the stop only proves how safe its cars are.
The reality is that either driverless cars will have to start driving more like humans, or humans will have to start driving more like robots, to ensure that the roads are not only safe but can also be traversed at a relatively smooth and reasonable pace. As with any new technology, there will be a period of growing pains and adaptation, but the end result could make it all worthwhile.