Ethical and Societal Implications of Automated Vehicles


(The following is the transcript of my opening remarks at the Ethical and Societal Implications of Automated Vehicles Panel at the Autonomous Vehicle Symposium.)

Whenever we discuss autonomous vehicles (What are the desired behavioral models? How can we design them to make the right decisions?), we place the car at the center of the problem definition: Can it avoid crashing? Can it be programmed to make “ethical” decisions? What if it doesn’t follow “ethics” rules? What happens when two autonomous cars on a collision course obey conflicting decision rules?

The recent incidents involving Tesla’s cars operating semi-autonomously in Autopilot mode led to renewed interest and, not surprisingly, concerns about the technology. Tesla’s public response was to explain the technical reasons the car failed to detect and avoid a fatal collision, again putting the car at the center of the argument.

These incidents, and the ensuing public discourse, shape the public’s perception and, ultimately, the broad adoption of autonomous vehicles.

So let’s look briefly at public perception. Market research in this area, especially on people’s expectations and willingness to use autonomous vehicles, is inconclusive. For example, a Texas A&M Transportation Institute report published in April 2016 looked at “intent to use.” These were the findings, at a high level:

  • Rejecters (extremely unlikely to use): 18%
  • Traditionalists (somewhat unlikely to use): 32%
  • Pragmatists (somewhat likely to use): 36%
  • Enthusiasts (extremely likely to use): 14%

This distribution of opinions is typical of an early market, in which the public is aware of the technology but not very knowledgeable about it. The majority of the respondents—nearly 70%—may be classified as a wait-and-see group. Other studies reflect a similar sentiment. And dramatic headlines about successes and failures swing the pendulum of public opinion (and Tesla’s stock price) up and down.

By keeping the car at the center of the debate, are we saying that the problem is simply a matter of identifying the right rules? Can we assume that if the car behaves according to a predetermined set of specifications, then we are done?

Will we be satisfied with “reasonable” behavior—which is all we can expect from humans—or do we demand a precise and predictable response every time, a goal that we now realize is very difficult to attain?

The answer to what constitutes “correct” behavior cannot be settled entirely in conferences like this one, in R&D labs, or by legislatures and the courts. Not initially, not entirely. It must evolve Darwinistically, reaching an equilibrium between the expectations and benefits of this technology on one hand, and its risks and penalties on the other.

What Drives Technology Adoption?

Obviously, there needs to be a perception of some value. We can probably agree that the potential value of autonomous driving to society is clear enough:

  • Significant reduction in crashes and fatalities
  • Availability of mobility to the elderly and the disabled
  • And the hope of a reduction in traffic congestion and pollution. I say “hope” because a counterargument holds that the demand for mobility always exceeds the supply: the road capacity freed by fewer cars and optimized traffic patterns will be filled immediately by more people on the roads.

But, as many market research studies show, the public’s perception of value is greatly offset by concerns about the viability and safety of autonomous driving. And while you could argue that, statistically speaking, today’s autonomous driving is, under most circumstances, as good as—actually better than—the average driver, this may not be enough to evoke trust.

So I’d like to bring the notion of trust in technology into this discourse.

To drive adoption, the public—not only drivers, but also pedestrians, motorcyclists, etc.—needs to trust that the car will always:

  • Obey the rules of physics
  • Obey the rules of the road
  • Obey the “rules” of society, i.e., exhibit “acceptable” behavioral norms
  • Respond “correctly” to random events generated by human-driven cars, pedestrians, and obstacles, while still obeying all of the above

But as we know, these goals are intrinsically incompatible, so trust is predicated upon the perception of correct dynamic prioritization of conflicting constraints.
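
To make “dynamic prioritization of conflicting constraints” concrete, here is a minimal sketch in Python. It is purely illustrative: the rule hierarchy, the candidate maneuvers, and their violation sets are all invented for this talk, and no production system works from a table this simple. What it captures is lexicographic prioritization, where violating any number of lower-priority rules is preferred to violating a single higher-priority one:

    # Hypothetical rule hierarchy: lower level = more important.
    RULES = {
        "avoid collision with a person": 0,      # a "rule" of society
        "avoid collision with a vehicle": 1,
        "stay in lane / obey road markings": 2,  # a rule of the road
        "maintain passenger comfort": 3,
    }

    # Hypothetical candidate maneuvers, tagged with the rules each violates.
    CANDIDATES = {
        "continue at current speed": {"avoid collision with a person"},
        "brake hard in lane": {"maintain passenger comfort"},
        "swerve across the lane line": {"stay in lane / obey road markings"},
    }

    def cost(violations):
        """Lexicographic cost: one violation at a high-priority level
        outweighs any number of violations at lower levels."""
        levels = [0] * (max(RULES.values()) + 1)
        for rule, level in RULES.items():
            if rule in violations:
                levels[level] += 1
        return tuple(levels)  # tuples compare element by element

    # Pick the maneuver whose violations are least severe.
    best = min(CANDIDATES, key=lambda m: cost(CANDIDATES[m]))
    print(best)  # -> "brake hard in lane"

Even in a toy like this, the hard question is not the code. It is who decides the ordering of the hierarchy, and whether the public perceives that ordering as correct.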

The research shows that the majority of the public is still lukewarm about autonomous driving. We need to put the public at the center of the discourse. We need to understand what will compel people to trust driverless cars enough (and to accept the inevitable failures of this technology) before they let the robots take the kids to soccer practice or an aging parent to a doctor’s appointment.


Image: Philosopher Illuminated by the Light of the Moon and the Setting Sun, by Salvador Dali (1939)