Can Self-Driving Cars Make Ethical Decisions?

February 15, 2016 (updated April 1, 2020) | Automotive, Telematics

Picture this. You are in your brand new 2022 self-driving automobile when a large piece of cargo falls off the truck in front of you. The car cannot stop in time to avoid colliding with the heavy object and hurting you. But the car can swerve to the right, crashing into an open-air sidewalk café and injuring some of the patrons enjoying the afternoon sun, including a family with young children. Or the car can turn the other way, switching lanes quickly and hitting a motorcyclist.

What should the car do?

Self-Driving Cars Are Here

Experimental self-driving cars are already cruising along California’s highways, equipped with an impressive array of image recognition, collision avoidance, navigation and robotic technologies. New technologies that improve driver safety and convenience, such as back-up cameras, crash-imminent braking, self-parking and lane-departure warning systems, are being introduced in new vehicles at a growing pace, at times almost unnoticeably. Some are becoming mandatory, thereby accelerating the maturation and reducing the cost of the building blocks of autonomous driving technology.

Indeed, the impressive accomplishments of self-driving cars from Delphi, Waymo (Google) and others give the impression that the future is practically here.

We envision cars that don’t crash, saving tens of thousands of lives every year. Cars that are able to drive themselves anywhere they are instructed to go, providing mobility to the elderly and the disabled. Cars that reduce traffic congestion and make commuting less stressful and more productive.

Are self-driving cars capable of undertaking the complex task of safe driving? Can they become an integral part of our everyday mobility experience? Are we ready to accept them?

Helping Self-Driving Cars Make Ethical Decisions

It’s tempting to offer general guidelines to define the expected behavior of autonomous vehicles. This notion evokes the Three Laws of Robotics that science fiction author Isaac Asimov introduced in his 1942 short story Runaround. Interestingly, the story was later collected in I, Robot in 1950, the same year Alan Turing published his famous Turing Test for determining whether or not a computer is capable of human-like thinking.

The Three Laws, quoted as being from the Handbook of Robotics, 56th Edition, 2058 A.D., are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In the 1940s, this view into the far future was extremely intriguing (and we still have more than 40 years to go). But codifying complex human behavior as a series of general rules is unsatisfactory and fraught with ethical dilemmas.

For example, one way to approach the dilemma described in the opening of this article could be to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing five. But how does this dry and clear-cut equation work in practice? Do we equip the car with an algorithm to decide between killing the car’s owner, three people at a café, or a single motorcycle rider? Is a family of four more valuable than a single person?
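To see how quickly such an “algorithm” runs into trouble, here is a deliberately naive sketch, in Python, of a purely utilitarian rule that minimizes expected casualties. The names (Outcome, expected_casualties) and the numbers are invented for illustration; no vendor has published such a rule.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and the harm it is predicted to cause."""
    maneuver: str
    expected_casualties: float  # predicted number of people seriously hurt

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    # Purely utilitarian rule: pick the option predicted to hurt the
    # fewest people, regardless of who those people are.
    return min(outcomes, key=lambda o: o.expected_casualties)

# The opening scenario, reduced to three bad options (numbers invented).
options = [
    Outcome("brake and hit the cargo", expected_casualties=1.0),     # the owner
    Outcome("swerve right into the café", expected_casualties=4.0),  # the patrons
    Outcome("swerve left into the motorcyclist", expected_casualties=1.0),
]

print(choose_maneuver(options).maneuver)  # -> "brake and hit the cargo"
```

Note that even this toy rule has to break the tie between hitting the cargo and hitting the motorcyclist somehow; here it silently keeps the first option it was given, which is itself an ethical choice hidden in the code.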

If we were driving the car, whichever instinctive split-second decision we made would be understood as an imperfect human reaction. Even if, in hindsight, we made the wrong decision, it would at most be considered negligence (and we might be punished for it). But if a car is designed and programmed in advance to choose between different outcomes, all of them bad, then the decision might be considered premeditated. And whose fault is it?

With foresight and boldness not typical of the automotive industry, some automakers, including Google, Mercedes-Benz and Volvo, have publicly accepted full liability when their cars are in autonomous driving mode.

Mixed Company

In a 2015 shareholder meeting, Google’s Sergey Brin said: “We don’t claim that [our] cars are going to be perfect. Our goal is to beat human drivers.” Considering the number of human-caused crashes, injuries and fatalities, and the state of the art of self-driving technology, we seem to be getting closer to achieving this goal.

Robotic cars will eliminate human errors and reduce the number of crashes, injuries and property damage. But accidents will happen, especially as long as human-driven cars are still around and share the roads and streets with self-driving cars.

Mixed operation will put human drivers on edge. Imagine a self-driving car making a quick, safe lane change to avoid hitting a child who has run into the street. A human driver might anticipate that a child is likely to follow a ball that rolls into the street, and start braking before the child even appears; a robot might take no action until it actually detects the child. The resulting last-second maneuver is too quick for the human driver following the autonomous vehicle, who ends up hitting the child.

Making Progress

Bryant Walker Smith, an assistant professor at the University of South Carolina, argues that given the number of fatal traffic accidents that involve human error today, it is imperative to accelerate the evolution of self-driving technology: “The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”

Any algorithm for resolving such a dilemma ultimately makes its decision by favoring or discriminating against a person or a group based on factors outside the driving task itself. The behavior of a self-driving car is codified into software instructions that reflect decisions made in advance and shaped by lawmakers, regulators and programmers.
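To make that concrete, here is a hypothetical extension of the earlier sketch: the relative “weight” given to each class of road user is fixed in a table long before the car ever drives, so the run-time “decision” merely reads back a value judgment made at design time. All names and weights below are invented for illustration.

```python
# Hypothetical policy table, fixed at design time -- not computed on the road.
# Changing any of these numbers is a premeditated value judgment.
HARM_WEIGHTS = {
    "occupant": 1.0,
    "pedestrian": 1.0,
    "motorcyclist": 1.0,
}

def weighted_harm(affected: dict[str, int]) -> float:
    """Policy-weighted count of the people a maneuver would harm."""
    return sum(HARM_WEIGHTS[group] * count for group, count in affected.items())

def resolve_dilemma(maneuvers: dict[str, dict[str, int]]) -> str:
    """Return the maneuver with the lowest policy-weighted harm."""
    return min(maneuvers, key=lambda name: weighted_harm(maneuvers[name]))

choice = resolve_dilemma({
    "emergency brake": {"occupant": 1},
    "swerve right": {"pedestrian": 4},
    "swerve left": {"motorcyclist": 1},
})
print(choice)  # the "right" answer is dictated entirely by the table above
```

Who gets to fill in that table, and whether it should exist at all, is precisely the question regulators and courts have yet to answer.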

However, we do not have the precedents, regulations and case law to define the expected behavior of self-driving cars when they face complex dilemmas such as the one described at the start of this article. Worse, we do not know how society will approach such issues. In most cases, society tends to be very forgiving towards human drivers who make wrong split-second decisions. Will we forgive a software algorithm that made a wrong choice? The programmers who made a coding error?

Philosophers, artificial intelligence experts, legal scholars and the general public will debate these topics for years to come.

One simple intermediate step that would mature the technology and reduce the pressure of ethics-laden decisions is to let autonomous cars operate more freely in safer, constrained zones: dedicated highway lanes, shuttles between airport terminals and rental car offices, or transportation within a large company campus. Such deployments would do much to improve the technology and enhance public trust in it.


Image: Future Car (Arthur Radebaugh)