The Bias of AI
by Miki Saxon

I’ve written before that AI is biased for the same reason children grow up biased — they both learn from their parents.
In AI’s case its “parents” are the datasets used to train the algorithms.
The datasets are a collection of millions of bits of historical information focused on the particular subject being taught.
In other words, the AI learns to “think,” evaluate information, and make judgments based on what has been done in the past.
And what was done in the past was heavily biased.
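How this happens can be shown with a toy sketch. The data, groups, and decisions below are entirely hypothetical (not from any real dataset): a “model” that learns by memorizing the majority past decision for each group will faithfully reproduce whatever bias its history contains.

```python
# Toy illustration: a model trained on biased historical decisions
# reproduces them. All data here is made up for the example.
from collections import Counter, defaultdict

# Hypothetical historical records: (group, past decision).
# Group "A" was historically approved far more often than group "B".
history = [("A", "approve")] * 90 + [("A", "deny")] * 10 \
        + [("B", "approve")] * 40 + [("B", "deny")] * 60

def train(records):
    """'Learn' by memorizing the majority past decision per group."""
    by_group = defaultdict(Counter)
    for group, decision in records:
        by_group[group][decision] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(history)
print(model)  # the learned rule mirrors the historical bias
# {'A': 'approve', 'B': 'deny'}
```

Real systems use far more sophisticated models, but the principle is the same: with no signal other than past decisions, the optimal fit to the data is the bias itself.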
What does that mean to us?
In healthcare, AI will downgrade complaints from women and people of color, as doctors have always done.
And AI will really trash you if you are also fat. Seriously.
“We all have cultural biases, and health care providers are people, too,” DeJoy says. Studies have indicated that doctors across all specialties are more likely to consider an overweight patient uncooperative, less compliant and even less intelligent than a thinner counterpart.
AI is contributing significantly to the racial bias common in the courts and law enforcement.
Modern-day risk assessment tools are often driven by algorithms trained on historical crime data. (…) Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.
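A minimal simulation makes the feedback loop concrete. The numbers are invented, and the true offense rate is assumed identical across both groups; the only difference is the biased starting allocation of policing. A score trained on arrest data then locks that disparity in, cycle after cycle.

```python
# Sketch of the feedback loop: arrests reflect policing intensity, risk
# scores are trained on arrests, and scores redirect policing.
# All numbers are hypothetical; both groups offend at the same true rate.
true_offense_rate = {"low_income": 0.05, "high_income": 0.05}
patrol_share = {"low_income": 0.8, "high_income": 0.2}  # biased start

for step in range(3):
    # Recorded arrests depend on where police look, not just on offending.
    arrests = {g: true_offense_rate[g] * patrol_share[g] for g in patrol_share}
    total = sum(arrests.values())
    # "Risk scores" trained on arrest data redirect patrols proportionally.
    patrol_share = {g: arrests[g] / total for g in arrests}

print(patrol_share)  # the 80/20 disparity persists despite identical behavior
```

Even this crude model never self-corrects: because the arrest records encode the patrol bias, the scores trained on them simply hand the bias back.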
Facial recognition also runs on biased AI.
Nearly 35 percent of images of darker-skinned women produced errors on facial recognition software, according to a study by the Massachusetts Institute of Technology. By comparison, lighter-skinned males faced an error rate of only around 1 percent.
While healthcare, law and policing are furthest along, bias is oozing out of every nook and cranny that AI penetrates.
As usual, the problem was recognized only after the genie was out of the bottle.
There’s a lot of talk about how to correct the problem, but how much will actually be done, and when, is questionable.
This is especially true since the bias in AI mirrors that of the people using it, so it’s unlikely they will consider it a problem.
Image credit: Mike MacKenzie