Artificial Intelligence and Real Bias

December 12, 2017 | Artificial Intelligence and Machine Learning

Is There a Need to Regulate Machine-Learning Algorithms?

We live in a world that is increasingly shaped by machine intelligence. Conversational bots, AI-based automatons, self-learning software, and other seemingly intelligent systems are appearing in our cars, our businesses, and our homes.

Thanks to advances in artificial intelligence and the virtually limitless availability of cloud-based storage and computing power, computer algorithms will inevitably be used extensively throughout our economy and society, with an increasingly far-reaching impact on every aspect of our everyday lives, from basic assistive tasks to complete decision authority in education, healthcare, finance, and employment.

But as Pamela McCorduck observed, no novel science or technology of such magnitude arrives without disadvantages, even perils. The more sophisticated and powerful artificial intelligence algorithms become, the more reason to pay attention to the risks associated with the design and operation of these algorithms, and to curtail errors and biases from the outset.

Big Data is Biased

Until a decade ago, most of the world’s data was created by scientific, industrial, and administrative sources. Today, those sources have been overtaken by the everyday activities of billions of people worldwide. As digitally connected people exchange emails and text messages, transmit data from wearable and mobile devices, and share photos, they unwittingly leave behind a trail of personal information from which sophisticated machine-learning algorithms extract and interpret patterns.

Artificial intelligence will automate many aspects of decision-making and scientific research, especially in domains rich with recorded information, where statistics, AI, and operations research are highly effective at discovering patterns and analyzing performance and outcomes. In healthcare, for example, automated image analysis will make cancer screening faster and potentially more accurate, while analysis of complex medical data to find relationships between regimens and patient outcomes will help personalize treatment and identify novel uses for existing drugs.

But big data isn’t some obscure and amorphous technical concept, isolated from society and human behavior. Institutional data repositories aren’t perfect: they may not cover a sufficient range of cases, and they often encode patterns of bad habits and poor decisions that have lingered, undetected, for years. Worse, enterprise data, and even more so social-media interactions, are very likely to be plagued by social biases, whether intentional or not. It is easy to see how datasets of human decisions can embed cultural, educational, gender, racial, and other biases, which self-learning systems will propagate and amplify, potentially resulting in incorrect actions and harmful discrimination.
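
To make that propagation concrete, here is a minimal sketch, using synthetic data and scikit-learn, of how a model trained on biased historical hiring decisions can reproduce the bias through an innocent-looking proxy feature. The feature names, numbers, and bias strength are invented purely for illustration.

```python
# Minimal, hypothetical sketch: a model trained on historically biased hiring
# decisions reproduces the bias through a proxy feature, even though the
# protected attribute itself is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" data
group = rng.integers(0, 2, n)                    # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)                      # genuinely job-relevant signal
neighborhood = group + rng.normal(0, 0.5, n)     # proxy correlated with group

# Past human decisions penalized group 1 regardless of skill
past_hire = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train only on "neutral-looking" features: skill and neighborhood
X = np.column_stack([skill, neighborhood])
model = LogisticRegression().fit(X, past_hire)
pred = model.predict(X)

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"predicted hire rate for group {g}: {rate:.2%}")
```

On this toy data the predicted hiring rates for the two groups diverge sharply even though the group label is never a model input; the neighborhood proxy carries the historical bias forward.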

IBM Watson’s Potty Mouth

Setting machine-learning algorithms free to scour the web for information to enrich their knowledge about the world has its perils. IBM researchers wanted the machine-learning program Watson to sound more human, so they introduced it to the web’s Urban Dictionary. Watson learned plenty of human slang from the site, but it also picked up the site’s plethora of profane slang and swear words. Unable to distinguish proper vocabulary from profanity, Watson developed a potty mouth and had to be banned from surfing the web unsupervised.

This isn’t Funny!

The anecdote of potty-mouthed Watson isn’t funny.

Tay was a Twitter bot capable of “conversational understanding,” as Microsoft described it: the more you chatted with Tay, the smarter it would get, learning to engage people through “casual and playful conversation.” Unfortunately, Tay’s conversations didn’t stay playful for long. It took less than 24 hours for Twitter trolls to find the bot, which was soon happily repeating misogynistic, racist, and Nazi-sympathizing remarks to its new followers.

When Harvard University professor Latanya Sweeney entered her name in Google, an ad appeared: “Latanya Sweeney, Arrested? 1) Enter name and state 2) Access full background checks instantly.” Upon further analysis, Sweeney discovered that searches for African American-sounding names were as much as 25 percent more likely to be served arrest-related advertising.

Last year, Beauty.AI hosted the “first international beauty contest judged by artificial intelligence.” The winners, chosen from photographs submitted by 6,000 people from more than 100 countries, were overwhelmingly white: out of 44 winners, nearly all were white, a handful were Asian, and only one had dark skin.

Now consider that algorithms of this kind are being used to review and approve loans, process job applications, and predict crime.

Algorithmic Opacity is Dangerous

Certain types of algorithms used in machine-learning systems are like black boxes: they cannot “explain” how they reached a particular conclusion or recommendation, and even their designers are unable to determine when their outputs may be biased or erroneous. And, as we now know, AI-based automatons are not only highly susceptible to bias, they are equally diligent in propagating it indiscriminately.

The irony is that many people suffer from automation bias: they believe that automated computer systems are both objective and predictable. That is obviously not true.
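
One common, if partial, response to this opacity is to approximate an opaque model with a small, interpretable “surrogate” model. The sketch below, built on synthetic data with scikit-learn, illustrates only that general idea; it is not a complete explanation technique, and the models and features are stand-ins.

```python
# Hypothetical sketch: approximate an opaque model with a shallow,
# interpretable "surrogate" tree to get a rough view of what drives it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=6, random_state=0)

# The "black box" we cannot inspect directly
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit a small tree to mimic the black box's *predictions*, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate agrees with black box on {fidelity:.1%} of inputs")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```

If the surrogate agrees with the black box on most inputs, its printed decision rules give at least a coarse view of what drives the opaque model; where agreement is low, the explanation should not be trusted.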

Unmasking Bias

Granted, bias is not created by intelligent machines. But machines are effective at discovering, amplifying, and disseminating it. The very same learning algorithms that can misbehave and become bigots overnight can also be instrumental in identifying abnormal behaviors and correlations that may indicate social biases, statistical aberrations, and other common ills found in raw big-data training sets.
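
As one minimal illustration of that auditing idea, the sketch below uses pandas to compare each group’s historical positive-outcome rate against the best-off group’s, in the spirit of the “four-fifths” rule of thumb used in disparate-impact analysis. The column names and toy records are assumptions, not a real schema.

```python
# Hypothetical sketch: audit a historical decision dataset for group-level
# disparities before it is used as training data. Column names ("group",
# "approved") are assumptions, not a real schema.
import pandas as pd

def disparate_impact_report(df: pd.DataFrame, group_col: str, outcome_col: str,
                            threshold: float = 0.8) -> pd.DataFrame:
    """Compare each group's positive-outcome rate to the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    report = rates.to_frame("positive_rate")
    report["ratio_vs_best"] = report["positive_rate"] / report["positive_rate"].max()
    report["flagged"] = report["ratio_vs_best"] < threshold  # "four-fifths" rule of thumb
    return report

# Toy historical records standing in for real data
history = pd.DataFrame({
    "group":    ["A"] * 6 + ["B"] * 6,
    "approved": [1, 1, 1, 0, 1, 1,   1, 0, 0, 0, 1, 0],
})
print(disparate_impact_report(history, "group", "approved"))
```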

A Call for Algorithmic Transparency and Accountability

Artificial intelligence and various forms of machine-learning systems will have a sweeping, disruptive impact on individuals and society, creating a multitude of economic benefits, both tangible, such as scientific research, consumer convenience, and time savings, and intangible, with indirect effects on society and the economy.

But along with the many benefits of intelligent systems, the accelerated adoption of algorithms for automated decision-making can result in incorrect decisions and harmful discrimination that are difficult to discover and correct.

These concerns are prompting some to call for greater transparency of algorithms and training datasets, and to debate the merit of regulating algorithms themselves.

Should policymakers hold institutions using AI-based systems to the same standards as institutions where humans have traditionally made decisions? Should developers be required to prove algorithms respond to inputs the same way people would?

Joy Buolamwini, a researcher in the Civic Media group at the MIT Media Lab, has established the Algorithmic Justice League to fight bias in machine-learning algorithms.

At the World Economic Forum in Davos in January 2017, IBM’s CEO Ginni Rometty announced foundational principles for Transparency and Trust in the Cognitive Era to guide the development of AI and cognitive systems at IBM.

On May 25, 2017, the Association for Computing Machinery (ACM) published its Principles for Algorithmic Transparency and Accountability, intended to support the benefits of algorithmic decision-making while addressing these concerns. The principles are:

  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public. (A minimal example of such a routine test follows this list.)
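
As a minimal illustration of the Validation and Testing principle, here is a sketch of a routine fairness check written as a pytest-style test. The synthetic model, the independent group labels, and the 0.8 threshold are stand-ins for illustration, not a prescription.

```python
# Hypothetical sketch of a routine validation test (pytest style): fail the
# check if the model's selection rates across groups drift too far apart.
# The model, data, and 0.8 threshold here are stand-ins for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def selection_rate_ratio(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Smallest group selection rate divided by the largest."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

def test_no_disparate_impact():
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 4))
    y = (X[:, 0] + rng.normal(scale=0.5, size=2000)) > 0
    groups = rng.integers(0, 2, 2000)          # stand-in for a protected attribute
    model = LogisticRegression().fit(X, y)     # stand-in for the deployed model
    decisions = model.predict(X)
    ratio = selection_rate_ratio(decisions, groups)
    assert ratio >= 0.8, f"selection-rate ratio {ratio:.2f} below four-fifths threshold"

if __name__ == "__main__":
    test_no_disparate_impact()
    print("fairness check passed")
```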

Image: Pandora’s Box (Bezt of Etam Cru, 2011)
This article was updated on July 2, 2019