Hypothesis: AI and Machine Learning Are Inherently Biased — It Can’t Be Otherwise

There’s an excellent interview out this morning in the New York Times tech newsletter in which the always perceptive Shira Ovide interviews Cade Metz about AI. In the interview, Metz observes “The original sin of the A.I. pioneers was that they called it artificial intelligence.”

I couldn’t agree more. What machines do beyond traditional programming isn’t intelligence. They run clever algorithms often thought (usually wrongly) to mimic the way the human brain works. At absolute best, they are models of a narrow segment of the brain at work; at worst, they are just algorithms.

Unfortunately, the umbrella marketing of these algorithms as “AI” has led to serious abuses and to serious concern that machine algorithms reach biased conclusions. Except we are told it’s not the machine’s fault, because it is assumed a machine cannot be biased; it’s the creator, trainer, or manager of the algorithm who is biased.

I begin to disagree. Machine results ARE biased, because the analysis activated inside a machine is inherently biased.

I state this hypothesis strongly in order to be clear. I’m sure there are many legitimate disclaimers which would be appropriate, and I would love to engage in a spirited discussion about the following idea.

The Hypothesis:  Machines are Inherently Biased — They Cannot NOT be Biased

Iain McGilchrist suggests the human mind roughly works in two hemispheres (NOT the old right brain/left brain dichotomy, which has been appropriately abandoned). His conclusions come from a lifetime of studying humans, especially studies where one or the other hemisphere is disabled by physical accident or medical intervention. What he finds is this:

  • The left hemisphere is primarily involved with concepts, abstractions, classifications, and idealizations of the world. The left hemisphere is the primary hemisphere of language: the talkative one demanding its way.
  • The right hemisphere is primarily involved with a set of immediacies: aware of the environment and the world around us, scanning for danger and opportunity found in the specifics of THIS situation we find ourselves in today. The right hemisphere sees with depth and sees extraordinary things which are hard to articulate, yet often more important than what the left articulates.

McGilchrist’s view is that the human mind, to work effectively, needs a robust left hemisphere with the right hemisphere “in charge.” The left hemisphere is the portion of the brain we would want to be the emissary referred to in the title of his book, The Master and His Emissary.

Using this model of the human brain, today’s AI can do no more than attempt to replicate our left hemispheres: the hemisphere of stereotypes.

What trips up algorithms are things which the human mind could only process with active intervention from the right hemisphere. For example, it is difficult for machine detection to accurately recognize all the possible situations where a car needs to stop at a stop sign. However, the human mind does this easily and well — because the specifics of “the situation in front of us” are the most important at that time.

The human mind knows how to scan the specifics and make judgement calls drawing on a lifetime of learning and experience. A machine does not have that ability.

Today, AI Systems Are Systems of Stereotype

The programming and intent of today’s machine learning is to create what in a human we would call “stereotypes”: taking the real world, which includes a broad range of individuals and individual circumstances, and classifying them into a reduced set of categories. These categories are then used to make quick decisions which dismiss a tremendous amount of the context within which the decision is made.
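To make that concrete, here is a minimal sketch of what classification looks like when reduced to code. It is entirely hypothetical; the record fields and rules are my own invention rather than any real hiring system. Notice how much of the individual simply disappears at decision time:

```python
# Hypothetical sketch: classification reduces a rich individual record to one
# of a few buckets, and the decision then ignores everything else about them.

from dataclasses import dataclass

@dataclass
class Applicant:
    years_experience: float
    zip_code: str            # proxy features like this are where bias creeps in
    employment_gap: bool
    cover_letter: str        # rich context the classifier never reads

def categorize(applicant: Applicant) -> str:
    """Reduce a whole person to one of three buckets."""
    if applicant.employment_gap or applicant.years_experience < 2:
        return "reject"
    if applicant.zip_code.startswith("972"):   # an arbitrary learned correlation
        return "interview"
    return "maybe"

# The bucket is all that survives; the individual circumstances are gone.
print(categorize(Applicant(8.0, "97203", True,
                           "Took two years off to care for a parent...")))
# Prints "reject": eight years of experience and the reason for the gap never mattered.
```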

Even when trained on millions of photos, machine learning is about classification. I suppose we’ve all given machine learning a free pass on stereotyping because of the “millions of photos.” Except humans who stereotype also learn from seeing a great many things, then classify the millions (or billions) as “all alike” from a classification point of view. Remember what kinds of classifications result in humans:

  • All ___ are lazy.
  • All ___ are uppity if they want a raise or promotion.
  • People who wear hoodies with the hood up are dangerous.
  • Anyone non-white walking in a white neighborhood at night must be a criminal.

Today’s algorithms express findings with tremendously more tact, often stated with the trappings of probabilities and statistical magic in order to sound “sophisticated.” But let’s remember that eugenics was once embraced by the highly intelligent and respected for its own trappings of science. These trappings do NOT prove any lack of bias.

Now, it may be that machines don’t see literal skin color, gender, preferences, or personalities. But even as I write those phrases, I can hear in them the human resources AI system telling bosses which individual to hire. I can hear machines advising police on arrests, advising judges on setting bail, and recommending what kind of sentence person X should receive as opposed to person Y. The reality of machine bias will be quite subtle, at least until it’s not.

The Machine Error and Bias

When my brother, Stan Garnett, was District Attorney for Boulder County in Colorado, he encountered AI systems for setting bail and was troubled.

Garnett said his office is especially worried about the risk of re-offending for defendants in sexual assault, domestic violence, and repeat DUI cases. One recent example he cited was Nathanal Lobato, who is accused of sexually assaulting a teenager and is now accused of assaulting his girlfriend while he was out on bond.

After his sexual assault arrest, Lobato was given a $10,000 bond with a $1,000 cash option, which he posted. After his second arrest, he was given a $15,000 bond with a $1,500 cash option. He was also able to post that amount, and currently is free on a total of $2,500 bond for the two cases.

“Lobato is a good example of a guy who comes in, gets a bond of $10,000, which he is immediately able to make, then turns around and allegedly commits another serious offense and then gets a bond of only $15,000, which he is also able to immediately make,” Garnett said. “In hindsight, the (first bond) was not set high enough to protect the public.”

These errors are NOT benign. And biases? It’s quite easy for a prejudice against people of color to end up in the data used by this algorithm: data which might lead to people who fit the machine’s bias going free and to others receiving high bail solely because of that bias.

Data is Never Neutral

Let’s take a look at a massive data lake inside a company: the kind of large pool of data companies use to justify the erroneous belief that turning decisions over to a machine makes them color blind. Usually those pools are augmented with data from outside sources, say a credit bureau. But credit bureau data is well known to include prejudices. Even the data you collect within your own limited world has bias built in.

Why do you choose to keep one point of data while not keeping another? Because someone believes that the first datapoint is important and the second one is not. Inherently there is never a situation where data is neutral.
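Here is a small illustration of that point, sketched in code with made-up column names rather than any real company’s data lake. The judgment is already baked in before any algorithm runs:

```python
# Hypothetical sketch: the schema itself is a set of human judgments.
# Every column kept (or dropped) reflects someone's belief about what matters.

import csv
import io

RAW = """name,arrests,zip_code,volunteer_hours,caregiving_gap
A. Smith,0,97209,120,yes
B. Jones,1,97203,0,no
"""

# Someone decided these fields are "important" ...
KEPT_COLUMNS = ["arrests", "zip_code"]
# ... and that these are not. That choice is bias built into the data lake.
DROPPED_COLUMNS = ["volunteer_hours", "caregiving_gap"]

rows = list(csv.DictReader(io.StringIO(RAW)))
dataset = [{col: row[col] for col in KEPT_COLUMNS} for row in rows]

# The model will only ever "know" what survived this filter.
print(dataset)
```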

So let’s stop kidding ourselves about neutrality in data — it doesn’t exist and won’t ever exist. I begin to yearn for at least a bit of the bad old days when real people made flawed decisions — and the flaws were evident.

Machine Bias Can Be Far Worse Than Human Bias

If you aren’t familiar with Wiener’s Laws — the work of aviation accident guru Earl Wiener — you should be. Here’s just one:

Digital devices tune out small errors while creating opportunities for large errors.

Today, machine bias can be far more damaging than human bias. At a minimum, it starts with the idea that some company owns the algorithm and is therefore legally allowed to refuse to let anyone know what biases have been built into it. We’ve seen this with machine algorithms for setting bail in criminal cases as well as for rewarding or dismissing teachers based on test scores. And the outliers here are so absurd that it’s kind of a shock anyone trusts a machine to do anything.

In other words, even when we know machine bias exists, we aren’t allowed to find out WHERE it is biased nor challenge decisions made by machines.

Instead of decisions being made behind closed doors in a cigar-smoke-filled room, they are made deep inside machines in tiny electrically charged rooms. At least a recording of the smoke-filled room could be understood. Machine biases mostly can never be understood, partly because we are not allowed access to the algorithms and partly because most machine learning cannot be deconstructed to reveal what a bias is based on.

Some of the Bias is Quite Funny

Melanie Mitchell of the Santa Fe Institute and Portland State University writes in her recent book on the state of AI that her team at PSU had trained a machine learning algorithm to recognize animals in photographs, and it had become quite accurate. So they decided to dig in and sort out HOW it was making its choices.

It turned out, the algorithm knew nothing about animals. What it knew was that it was most often “correct” when it said there was an animal in a photo with a blurred background and most often correct when it said there was no animal in a photo with a crisp background. Not a bad assumption.

But let me say it again:  The machine algorithm knew NOTHING about animals.
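For fun, here is a toy version of the kind of shortcut Mitchell describes. This is my own illustration, not her team’s code, and the images and threshold are made up. It shows how a “detector” can look accurate while knowing nothing about its supposed subject:

```python
# Toy sketch of shortcut learning: an "animal detector" that only measures
# how blurred an image is, because blur happened to correlate with animals.

import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Mean squared difference between neighboring pixels: high = crisp, low = blurred."""
    return float(np.mean(np.diff(image, axis=0) ** 2) +
                 np.mean(np.diff(image, axis=1) ** 2))

def contains_animal(image: np.ndarray, threshold: float = 0.05) -> bool:
    # The "model" never looks for fur, eyes, or legs. It only asks whether the
    # background is blurry, like a wildlife photo with shallow depth of field.
    return sharpness(image) < threshold

rng = np.random.default_rng(0)
crisp_landscape = rng.random((64, 64))                              # lots of pixel detail
blurred_portrait = np.cumsum(rng.random((64, 64)), axis=0) / 64.0   # smooth gradient

print(contains_animal(crisp_landscape))    # False: "no animal" for the wrong reason
print(contains_animal(blurred_portrait))   # True: "animal found" though none exists
```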

Stereotypes — What Computers are Good At

In her book, Melanie Mitchell also relates how her mentor Douglas Hofstadter (author of Gödel, Escher, Bach) had been shocked in the 1990s to find an algorithm constructed to create new pieces of music in the style of Chopin. He even had the results of the algorithm tested with a live audience of musicologists and other experts: two pieces, one by Chopin himself and one by the algorithm, were performed. When asked which was originally written by Chopin, the audience chose the one the algorithm had created.

We can choose to be shocked by this idea. But take a minute and really think about it. What did Chopin himself do, and what did the algorithm do? Chopin developed from scratch a style and approach which was entirely his, a style which continued to evolve and grow throughout his life. The algorithm scanned back over the entirety of Chopin’s music and…made a great stereotype of Chopin’s music.

In fact, it made a stereotype so strong that it fit OUR stereotypes. If you will, it was “more like Chopin than Chopin himself.”

What we fail to realize is that what makes Chopin great is that he was always trying and discarding ideas. So ANY piece he might have written would contain threads he would later discard, threads of which today we’d say “that’s not Chopin-like.” His REAL music has those threads; the work of an algorithm never will.

On the other hand, a computer can make a stereotype of Chopin so good that it can fool people. Is that impressive? I don’t think nearly as impressive as many want to think.

What Next?

There are systems which learn from objective criteria, like systems for choosing investments. These systems are further removed from the risk of bias hurting society, yet AI and data experts need to remain continually aware, as the good ones already do, of the risk of bias from the machine.

Based on what I’ve written I welcome further discussion. I am not deeply involved with algorithms yet remain enough of a software engineer to comprehend how these algorithms work. That said, perhaps there are very useful examples that would show something I’ve missed.

Supposing that what I’ve suggested mostly holds together, it is desperately important that science, politics, ethics, and society call out what needs to be called out: no algorithm can ever be bias free.

And once we know that, we need both legal changes and moral changes to rein in the abuses perpetuated by software as it creates and imposes stereotypes — damaging to individuals and society.

The evolution of a wide variety of algorithms out of what is falsely labeled AI offers some interesting potential. But in order to reach that potential AND for society to emerge relatively unharmed from the change, we must come to grips with the inherent tendency of computers to stereotype.

Remember, it’s what they’re best at.

Be well. Let’s enjoy a springtime when vaccinations are increasing and the hope of new independence later this year.

©2021 Doug Garnett — All Rights Reserved


Through my company Protonik, LLC, based in Portland, Oregon, I consult with companies on their efforts around new and innovative products and explore what marketers should learn from the field of complexity science. An adjunct instructor at Portland State University, I also teach marketing, consumer behavior, and advertising.
As a specialty, I also advise a select group of clients attempting to bring new life to Shelf Potatoes or taking existing products to new markets. We also produce marketing materials for artists, including documentaries.
You can read more about these services and my unusual background (math, aerospace, supercomputers, consumer goods & national TV ads) at www.Protonik.net. 

Categories:   Business and Strategy

Comments

  • Posted: March 19, 2021 02:22

    Marcia Chapman

    A few thoughts: The choice of which data to review is perhaps the greatest creator of bias. You cannot reasonably model all the possible variables, so your model is inherently limited by the variables that are chosen. If the computer chooses the variables, that choice is based on decisions built into the underlying code.
  • Posted: March 19, 2021 02:25

    Marcia Chapman

    The investment algorithms may not be so benign, as they were a significant factor in the 2008 crash. Also, they decide which stocks win or lose but miss certain more subjective factors, such as employee loyalty to a company that does not lay off employees during a downturn.