Can Driverless Cars Make Ethical Decisions?

Fully autonomous driverless cars may still be years away, but the vision is getting more vivid by the day. Great progress is being made in demonstrating autonomous driving capabilities and in enacting legislation to allow road testing, both of which improve the technology and inspire consumer confidence in, and acceptance of, new mobility models.

But before autonomous cars are available at your local dealership or can be summoned from your smartphone app, there are a number of significant hurdles that industry and governments need to overcome. The purpose of this short article is not to contest the viability of autonomous cars, but rather to highlight some of the interesting and difficult problems engineers, legislators, and ethicists (yes, ethicists; this is not a typo) are working on.

Human Behavior and Interaction

Mary Barra, GM’s CEO, said in a recent interview: “I can put [an] autonomous car out now, but the streets of New York are a great example [of the challenges]: the jogger, the dog, the baby carriage.” Recognizing static obstacles, and even humans and small animals crossing the road, is likely to improve considerably in the near future.

However, recognizing and responding to the complex and tacit interaction between drivers, cyclists, and pedestrians will be much more difficult. Consider, for example, a police officer signaling traffic to stop so a pedestrian can cross the street. A driverless car that doesn’t recognize the signal proceeds through a green light and is unable to stop in time when the pedestrian steps into its path.

Weather

Daytime driving in the perfectly clear weather of California and Nevada doesn’t represent the average conditions autonomous vehicles will face in most other regions. Cars will have to be able to follow traffic lanes on snow-covered roads, evade ice patches and potholes (I live in Boston!), and quickly recover when blinded by the sun.

Ethics

Making the right choice when there are conflicting objectives and restrictions is extremely difficult for systems based on advanced artificial intelligence. An autonomous driving system is designed to obey traffic rules, but will it be able to “break the law” to avoid hitting a pedestrian?
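One way engineers frame this kind of conflict is as a weighted cost function inside the motion planner, where collision risk is penalized far more heavily than a traffic-rule violation, so the planner will "break the law" when doing so sharply reduces the chance of hitting someone. The Python sketch below is purely illustrative: the candidate maneuvers, probabilities, and weights are invented for the example, and no real vehicle API is assumed.

```python
# Illustrative sketch: choosing a maneuver by weighted cost, where
# collision risk outweighs legality. All numbers are hypothetical.

# Each candidate: (name, probability of hitting the pedestrian,
# severity of the traffic-rule violation, where 0.0 = fully legal).
CANDIDATES = [
    ("brake_in_lane",      0.60, 0.0),  # legal, but may not stop in time
    ("swerve_across_line", 0.05, 1.0),  # crosses a solid line: illegal
    ("continue_at_speed",  0.95, 0.0),  # legal and clearly unacceptable
]

# Safety is weighted far above legality.
W_COLLISION = 100.0
W_VIOLATION = 1.0

def choose_maneuver(candidates):
    """Return the name of the maneuver with the lowest weighted cost."""
    def cost(candidate):
        _, p_collision, violation = candidate
        return W_COLLISION * p_collision + W_VIOLATION * violation
    return min(candidates, key=cost)[0]

print(choose_maneuver(CANDIDATES))  # -> swerve_across_line
```

With these weights the illegal swerve (cost 6.0) beats braking in lane (cost 60.0), which is exactly the "break the law to avoid a pedestrian" trade-off; the hard part in practice is that no one agrees on what the weights should be.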

In fact, even if driverless cars eventually do possess advanced “moral judgment,” will all autonomous cars made by highly competitive OEMs employ identical decision “ethics” and collision avoidance strategies, or are they likely to make opposing decisions and stop in their tracks instead of evading each other?

Law and Liability

Issues surrounding accountability for damage caused by autonomous, and possibly even driver-assisted, driving are only beginning to emerge. For instance, if a car’s software performed as advertised but failed to prevent an accident, is the carmaker responsible for the damage? Will we see a wave of drivers arguing that the car’s software failed to provide adequate lane departure or blind spot warnings?

As GM’s Barra acknowledged: “[it’s a] huge responsibility whether you are steering or not.”

Cost and Critical Mass

Even as the technology improves and consumer doubts eventually abate, the additional cost of sensor and computing technologies may keep autonomous cars out of reach of mainstream buyers for a long time. Of course, the cost will drop dramatically once autonomous cars enter volume production, but this will not happen until consumer acceptance is high enough.

This is a chicken-and-egg problem because the full potential of autonomous vehicles, and connected cars in general, will not be realized until the number of such vehicles reaches a critical mass. But reaching this point will require that enough consumers buy into advanced connected car and autonomous driving technologies.