Explainable AI – Is Everybody Reading the Same Articles?


Everybody is Reading the Same Articles on Artificial Intelligence and Machine Learning

In a Scientific American article titled Ethics in the Age of Artificial Intelligence, the author poses the following question: “If we don’t know how AIs make decisions, how can we trust what they decide?”

The author claims “AI also took away the transparency, explainability, predictability, teachability and auditability of the human move, replacing it with opacity.” And makers of certain classes of AI systems are quick to agree. Tesla, for instance, once claimed it had ‘no way of knowing’ whether its Autopilot self-driving system was in use during a fatal crash.

I, too, have expressed concerns about biases and algorithmic opacity in deep learning algorithms and suggested that industry must engage proactively in promoting algorithmic transparency and taking responsibility for the outcomes its software produces.

At times it seems, however, that many journalists, self-proclaimed experts and opinion bearers are reading the same articles and relying on the same sources that incorrectly treat artificial intelligence and machine learning as a monolithic and somewhat mysterious technology. Frequently, the explanations offered in popular write-ups about the causes and side effects of algorithmic opacity are incomplete and rife with myths, misconceptions and inaccuracies. And some miss the point completely and claim, again incorrectly, that demanding explainability is akin to asking a company to disclose its source code. Nothing could be further from the truth.

What much of the mainstream writing on AI tends to overlook is that there are robust AI technologies that are transparent and explainable and, unlike many machine learning systems, produce output that can be predicted and validated before they are fielded in mission-critical applications. Some technologies that fall under the general area of explainable artificial intelligence (XAI) are:

Rule-based Systems. Often dismissed as too simplistic or, conversely, as impossible to maintain in large-scale systems, this approach is commonly used in lexical analysis in natural language processing (NLP), robotic process automation (RPA) and control systems. Rule-based systems are intrinsically explainable and easy to build and deploy, so long as the size of the rule base and the regression-testing effort are kept in check; a minimal sketch follows.
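To make that concrete, here is a minimal sketch of a rule-based screen in Python. The rules, thresholds and applicant fields are hypothetical, invented purely for illustration; the point is that the “explanation” is simply the list of rules that fired.

```python
# Minimal sketch of an explainable rule-based screen (hypothetical rules,
# not from the article): each fired rule is recorded so the decision can
# be traced back to the exact condition that produced it.

RULES = [
    ("R1: income below 30k",        lambda a: a["income"] < 30_000,     "decline"),
    ("R2: debt ratio above 0.45",   lambda a: a["debt_ratio"] > 0.45,   "decline"),
    ("R3: credit score 700 or more", lambda a: a["credit_score"] >= 700, "approve"),
]

def screen(applicant):
    """Apply rules in order; return the decision and the rule that fired."""
    fired = []
    decision = "refer to human reviewer"   # default when no rule matches
    for name, condition, outcome in RULES:
        if condition(applicant):
            fired.append(name)
            decision = outcome
            break                          # first matching rule wins
    return decision, fired

decision, trace = screen({"income": 52_000, "debt_ratio": 0.31, "credit_score": 715})
print(decision)   # approve
print(trace)      # ['R3: credit score 700 or more'] -- the explanation is the fired rule
```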

Model-based Reasoning (MBR). For AI purists, model-based systems are nirvana. Deep heuristics, complex Bayesian models, entropy calculations: what’s not to like? When it comes to explainability, MBR systems can be surprisingly revealing, often outperforming human decision makers in complex situations. On the other hand (and this is an important caveat), these systems can be difficult to build and maintain at scale; the sketch below shows why their reasoning is so inspectable.
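As an illustration of why an explicit model is inspectable, here is a toy Bayesian update in Python. The prior and likelihoods are made-up numbers for a hypothetical bearing-wear diagnosis, not drawn from any real system; the point is that every figure in the conclusion traces back to a stated assumption rather than to opaque learned weights.

```python
# Minimal sketch of model-based reasoning with a tiny Bayesian model
# (hypothetical probabilities, for illustration only).

P_WORN = 0.05                    # prior belief that a machine bearing is worn
P_VIBRATION_GIVEN_WORN = 0.90    # chance of abnormal vibration if worn
P_VIBRATION_GIVEN_OK = 0.10      # chance of abnormal vibration if healthy

def posterior_worn(vibration_observed: bool) -> float:
    """Bayes' rule: P(worn | evidence), computed from the explicit model."""
    if vibration_observed:
        p_e_worn, p_e_ok = P_VIBRATION_GIVEN_WORN, P_VIBRATION_GIVEN_OK
    else:
        p_e_worn, p_e_ok = 1 - P_VIBRATION_GIVEN_WORN, 1 - P_VIBRATION_GIVEN_OK
    evidence = p_e_worn * P_WORN + p_e_ok * (1 - P_WORN)
    return p_e_worn * P_WORN / evidence

p = posterior_worn(vibration_observed=True)
print(f"P(bearing worn | abnormal vibration) = {p:.2f}")  # ~0.32
# The explanation: a 5% prior, raised by a 9:1 likelihood ratio for the
# observed vibration -- every step of the update can be read off the model.
```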

Case-based Reasoning (CBR). I have always found case-based reasoning to be a pragmatic approach that uses just enough heuristics to get the job done. CBR does not suffer from the complexity of deep-knowledge model-based reasoning, and its knowledge-codification process is simpler and more responsive to new information and changing conditions; see the sketch below.
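Here is a minimal sketch of the retrieval step in CBR, using a hypothetical case base of maintenance records. The explanation a CBR system offers is essentially a precedent: this answer comes from the most similar past case.

```python
# Minimal sketch of case-based reasoning (hypothetical case base): a new
# problem is matched to the most similar stored case, and the explanation
# is the precedent itself.

import math

# Past cases: (feature vector, recorded outcome, short description)
CASE_BASE = [
    ((0.9, 0.2), "replace valve",        "high pressure drop, low temperature"),
    ((0.1, 0.8), "flush coolant",        "low pressure drop, high temperature"),
    ((0.5, 0.5), "schedule inspection",  "moderate readings on both sensors"),
]

def retrieve(query):
    """Return the outcome of the closest stored case plus its description."""
    best = min(CASE_BASE, key=lambda case: math.dist(case[0], query))
    return best[1], best[2]

outcome, precedent = retrieve((0.85, 0.3))
print(outcome)                                             # replace valve
print(f"Because the most similar past case was: {precedent}")
```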

Having spent more than a decade developing model-based and case-based AI systems, I would be the first to admit that some heuristic approaches can be quite onerous. Knowledge engineering, the process of formalizing, codifying and validating knowledge, can be resource intensive, especially when compared to the low-cost magic of self-learning systems (if you believe in it).

Explainable AI

Smart algorithms with increasingly more reasoning power are shaping many aspects of how we work, live and play. They monitor machinery, predict failures and optimize operations. Wearable devices track our daily activities and offer guidance on everything from nutrition to physical activity and sleep. Software robots screen job applications and loan requests. And AI medical assistants review X-rays and offer diagnoses.

Effective cooperation between powerful artificial intelligence agents and humans depends on trust. Humans need to know that these agents are unfailingly correct and unbiased. Developing this trust is predicated upon having methods and techniques that guarantee algorithmic and operational transparency and ensure the output of the AI system is interpretable and explainable.

To a great extent, this trust, whether in diagnosing a tumor, rejecting a loan application or flagging abusive online content, is predicated upon the ability of the system to explain the root causes and the rationale behind its recommendation.


Image: The Explanation (René Magritte, 1952)