Artificial intelligence (AI) technology continues its aggressive foray into nearly every aspect of our lives, from the seemingly human-like (most of the time) interactions with Alexa to fully autonomous cars that, in the not-so-distant future, will handle complex navigation and steering scenarios better than most human drivers.
Conversations about robot cars that make split-second life-and-death decisions involving car occupants and pedestrians inevitably invoke Isaac Asimov’s famous Three Laws of Robotics, which first appeared in his 1942 short story “Runaround”:
- Law One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Law Two: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Law Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
- Asimov later added the “Zeroth Law” above the first three: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Asimov’s three (or four) laws are frequently offered as a framework for the design of self-directed machines and autonomous decision-making systems that interact and collaborate with humans. Proponents believe this form of hierarchical decision making provides unifying principles and a foundation for designing acceptable, even ethical, machine-behavior algorithms. In other words, if we could only make machines that follow these rules faithfully, the outcome would be consistently ethical, fair and acceptable to most humans.
But these rules, which appear with some variation in other Asimov stories, were intended as a literary plot device. They are inherently adversarial, designed to create the conflicts that drive a story forward, and they are wholly inadequate as a framework for governing the automatons of the future.
An inherent problem with Asimov-like methods is that they are binary, linear and unidirectional. For instance, Law One decrees that a robot may not harm a human being. But what if that human is Osama Bin Laden? Or what if the person commanding the robot and giving it orders is Dr. Evil? These rules are restrictive, and they will invariably lead a robot into deadlock when the restrictions conflict.
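To see why, it helps to write the laws down as code. The following toy sketch (every label and scenario is invented, and nothing here resembles a real control system) encodes the three laws as ordered, binary checks and shows how a trolley-style bind leaves the machine with no permissible action at all:

```python
# Toy illustration: Asimov-style laws as ordered, binary checks over
# hand-labelled candidate actions. The labels are hypothetical; a real
# system would have to compute them, which is the genuinely hard part.

def permitted(action):
    if action["harms_human"]:                                               # Law One
        return False
    if action["disobeys_order"] and not action["order_violates_law_one"]:  # Law Two
        return False
    if action["damages_self"] and not action["needed_for_higher_laws"]:    # Law Three
        return False
    return True

def choose(actions):
    allowed = [a for a in actions if permitted(a)]
    if not allowed:
        # Deadlock: every option, including "do nothing", breaks some law,
        # so a purely rule-based agent simply has no answer.
        raise RuntimeError("no permissible action under the rule hierarchy")
    return allowed[0]["name"]

# A trolley-style bind: swerving harms a human, and so does doing nothing.
options = [
    {"name": "swerve", "harms_human": True, "disobeys_order": False,
     "order_violates_law_one": False, "damages_self": True,
     "needed_for_higher_laws": True},
    {"name": "do_nothing", "harms_human": True, "disobeys_order": False,
     "order_violates_law_one": False, "damages_self": False,
     "needed_for_higher_laws": False},
]

try:
    choose(options)
except RuntimeError as err:
    print(err)  # the hierarchy deadlocks, exactly as described above
```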
An article in a recent issue of Autonomous Vehicle Engineering (available in print only at the time this article is posted) builds upon Asimov’s Three Laws and suggests an extension in the form of a hierarchy of guidelines for inflicting (acceptable) harm, sketched in code after the list:
- Do not harm people
- Do not harm animals
- Do not damage self
- Do not damage property
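Read as a strict priority ordering, the hierarchy is easy to encode. The sketch below (the maneuver names and harm labels are invented for illustration, not taken from the article) simply ranks candidate maneuvers by harm to people first, then animals, then the vehicle itself, then property:

```python
# Hedged sketch of the proposed harm hierarchy as a lexicographic comparison.
# The candidate maneuvers and harm labels below are invented for illustration.

HIERARCHY = ("harm_people", "harm_animals", "damage_self", "damage_property")

def harm_vector(outcome):
    # Python compares tuples element by element, so any harm to people
    # outweighs any amount of harm further down the hierarchy.
    return tuple(outcome.get(level, 0) for level in HIERARCHY)

candidates = {
    "brake_hard":        {"harm_people": 0, "harm_animals": 1, "damage_self": 1},
    "swerve_into_fence": {"harm_people": 0, "damage_self": 1, "damage_property": 1},
    "hold_course":       {"harm_people": 1},
}

best = min(candidates, key=lambda name: harm_vector(candidates[name]))
print(best)  # "swerve_into_fence": property ranks below animals, whatever the property is
```

The sketch also exposes the weakness discussed next: the ordering has no notion of how much property, or which property, is being traded away.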
Again, any attempt to attach value, monetary or otherwise, is fraught with complicated questions about who makes the determination of value and whether such value can and should change depending on circumstances and context. For instance, does the robot come before property under all circumstances? What if the property being threatened is the Eiffel Tower?
One might consider consulting parallel practices that attach monetary value to human lives and bodies, such as the ghoulish list of the value of bodily appendages delineated in accidental death and dismemberment insurance policies. However, such lists are intended to quantify loss resulting from accidental dismemberment, not to serve as a set of premeditated algorithmic rules that guide future decisions.
And this is before we even begin the discussion about choices, ethics and the proverbial Trolley Problem: a paradoxical no-win dilemma.
Asimov’s Three Laws of Robotics were an early recognition—even if only in fiction—of the challenges facing engineers and society at large, and the need to ensure the safe behavior of autonomous machines. Asimov believed that the Three Laws could actually work. In a 1982 article in Compute!, he wrote:
“I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior. My answer is, ‘Yes, the Three Laws are the only way in which rational human beings can deal with robots — or with anything else.’”
Behavioral models such as Asimov’s Three Laws rest on an ethical theory in which the morality of an action depends on whether the action itself is right or wrong as defined by a set of rules, rather than on the consequences of the action. Rules in themselves will not make a machine moral, because morality blends intent and action and is judged not only by intent but also by outcome. And, as we saw earlier, reaching an acceptable, let alone ethical, outcome is difficult (the trolley problem, again).
Instead of hierarchically structured policies and predefined rules that restrict their behavior, robots should be designed to optimize their actions toward the best attainable outcome in any given scenario. Driverless cars should be designed to operate in situations for which there is no identifiably perfect solution by evaluating scenarios, exploring options within contexts they (or their designers) have never experienced before, and responding in a way that keeps humans as safe as possible.
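One hedged way to read that prescription in code, assuming some upstream model can attach rough probabilities and severities to each maneuver’s possible outcomes (every number and name below is invented), is to score candidates by expected harm and pick the least bad one:

```python
# Sketch of outcome optimization rather than rule following: score each candidate
# maneuver by the expected severity of its simulated outcomes and pick the least
# harmful. Probabilities, severities and maneuver names are all invented.

from typing import Dict, List, Tuple

# Each maneuver maps to (probability, severity) pairs for its possible outcomes,
# where severity folds occupant and pedestrian risk onto one assumed common scale.
Outcome = Tuple[float, float]

def expected_harm(outcomes: List[Outcome]) -> float:
    return sum(p * severity for p, severity in outcomes)

def safest_maneuver(candidates: Dict[str, List[Outcome]]) -> str:
    return min(candidates, key=lambda m: expected_harm(candidates[m]))

candidates = {
    "brake_hard":  [(0.7, 0.0), (0.3, 4.0)],  # likely clean stop, possible rear impact
    "swerve_left": [(0.5, 1.0), (0.5, 6.0)],  # may clip a barrier or lose control
    "hold_course": [(1.0, 8.0)],              # near-certain collision ahead
}

print(safest_maneuver(candidates))  # "brake_hard" under these made-up numbers
```

Everything difficult, of course, hides inside those probabilities and severities, which is precisely where the value judgments discussed above reappear.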
But a robot’s superior speed and accuracy may not guarantee the right decision, one whose results we will always accept after the fact. However, if the robot performs no worse than a human equivalent, would that be good enough? We forgive a human for making a wrong split-second decision in a tough no-win situation such as the one described by the trolley problem. If we create a suitable framework for robots, will we be able to forgive them? Should we?
Ethical rules, to the extent they work, are built upon human instincts and intuitions and continue to evolve as guidance for defining responsibilities and liabilities. The effort to understand, codify and program our ethics into machines will provide helpful insights into many ethical issues and improve our ability to design useful human-robot partnerships.