To Crash is Human
It has been established that humans, not cars or the environment, are responsible for auto accidents. The data from the National Motor Vehicle Crash Causation Survey (NMVCCS), conducted by the National Highway Traffic Safety Administration, makes this abundantly clear.
Automakers and their suppliers invest heavily in improving crash survivability and in active safety features such as lane-departure warnings, blind-spot alerts, and adaptive cruise control. As the industry inches towards semi- and fully-autonomous vehicles, we expect the number of crashes and fatalities to decline significantly. Even before we reach a critical mass of crash-avoiding cars, we should see a gradual reduction in the number and severity of car crashes.
But the ability of automakers to mature active safety technologies and get them to market is frequently challenged by quality issues and security breaches that plague the industry at an alarming rate. The prime culprit is the sophisticated software driving active safety features.
Some like to boast about the number of lines of software code in modern cars, wittily implying that because fighter-jet software has fewer lines of code, car software must be more advanced and sophisticated. In reality, the problem facing automakers is not driven simply by the volume of software in these cars. The challenge in developing the software is managing the complex functionality of many product configurations and variants.
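To see why variants, not raw line counts, dominate the validation burden, consider a purely hypothetical sketch: even a handful of independent option choices multiplies into a large number of distinct configurations, each of which must be built and tested. The feature names and values below are illustrative assumptions, not any real OEM's option list.

```python
from itertools import product

# Hypothetical option axes for a vehicle software line. Each axis is
# independent, so the number of configurations is the product of the sizes.
features = {
    "region": ["NA", "EU", "APAC"],          # market-specific regulations
    "powertrain": ["ICE", "hybrid", "EV"],   # affects braking/energy logic
    "acc": [False, True],                    # adaptive cruise control fitted?
    "lane_keep": [False, True],              # lane-keeping assist fitted?
    "blind_spot": [False, True],             # blind-spot alerts fitted?
}

# Enumerate every buildable configuration.
variants = list(product(*features.values()))
print(len(variants))  # 3 * 3 * 2 * 2 * 2 = 72 configurations
```

Five option axes already yield 72 configurations; real vehicle lines have far more axes, which is why managing and validating variants dwarfs the "lines of code" comparison.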
In-vehicle software for active safety and autonomous operation will continue to push boundaries. It will not only increase the burden of assuring robust quality, but will also bring to the forefront uncharted challenges in defining and validating the correct and safe operation of these advanced features.
Whose Fault is it?
As more driving tasks traditionally entrusted to human drivers, such as navigation, steering and avoiding crashes, are delegated to software, new concerns about safe operation and questions about liability will arise.
If the car’s software performs worse than a human driver under similar circumstances, we naturally blame the automaker. We might even hold the OEM liable if its vehicle failed to prevent an accident in circumstances in which other cars would have prevented it.
But what if, on average, the accident prevention capabilities of the autonomous car of the future are superior to those of human drivers, yet such a car caused damage?
And some of the most interesting questions involve the type of dynamic judgment and decision-making informed by legal and ethical considerations. Speeding or swerving into oncoming traffic lanes in order to avoid hitting a pedestrian is probably justifiable. But is breaking the law and risking hitting another car justified if the obstacle is a small animal? This type of moral and ethical dilemma is often deliberated using the now classic Trolley Problem.
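One way to make the dilemma concrete is a toy cost model: the planner swerves only when the harm avoided outweighs the risk the evasive maneuver itself creates. This is a deliberately simplistic illustration with made-up harm weights, not a description of any real vehicle's decision logic.

```python
# Illustrative harm weights (entirely assumed for this sketch): the relative
# cost of striking each obstacle type, and the risk created by swerving
# into an oncoming lane.
HARM = {"pedestrian": 100, "vehicle": 50, "small_animal": 2}
SWERVE_RISK = 30  # assumed cost of entering oncoming traffic

def choose_maneuver(obstacle: str) -> str:
    """Swerve only if the harm avoided exceeds the risk the swerve creates."""
    return "swerve" if HARM[obstacle] > SWERVE_RISK else "brake_in_lane"

print(choose_maneuver("pedestrian"))    # swerve
print(choose_maneuver("small_animal"))  # brake_in_lane
```

Even this trivial rule forces someone to pick the numbers, which is exactly the point: encoding the weights turns an implicit split-second human judgment into an explicit, auditable design decision.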
Blame the Software, Not the Driver
As software’s functionality becomes more interactive and “behavioral”, incorporating predictive models of the behavior of both drivers and pedestrians, defining and validating correct, even ethical, functionality is going to be extremely complex.
Human drivers may be forgiven for making bad split-second decisions; automakers will not be given that luxury. We are already witnessing an increasing number of quality, safety, and cybersecurity problems in vehicle software, and with those, a growing sentiment of blaming software for driver mistakes and car crashes, some of it, perhaps, unjustified.
As drivers rely more on in-vehicle software, automakers will have to improve both the quality and the functional fidelity of the software. OEMs are not oblivious to the heft of this challenge. As GM’s Mary Barra said: “[it’s a] huge responsibility whether you are steering or not.”
(Photo: Continental Connected Car. Source: Times of India)