AI As Blunt Force Trauma
by Miki Saxon
While AI can do some things on its own, it’s a blunt instrument, ignorant of nuance, yet it embraces all the biases, prejudices, bigotry and downright stupidity of past generations thanks to its training data.
Using AI to make judgement calls that are implemented sans human involvement is like using a five-pound sledgehammer on a thumbtack.
Yesterday’s post looked at what AI can miss in hiring situations, but candidates at least have more choice than others do.
AI is being used extensively around the world by government and law enforcement where its bias is especially hard on people of color.
The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.
Human judgement may be flawed, and it does have the same prejudices, but it’s not inflexible, whereas AI is.
As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.
Now schools are jumping on the bandwagon, claiming that facial recognition will make them safer, but not everyone agrees.
“Subjecting 5-year-olds to this technology will not make anyone safer, and we can’t allow invasive surveillance to become the norm in our public spaces,” said Stefanie Coyle, deputy director of the Education Policy Center for the New York Civil Liberties Union. (…)
Critics of the technology, including Mr. Shultz and the New York Civil Liberties Union, point to the growing evidence of racial bias in facial recognition systems. In December, the federal government released a study, one of the largest of its kind, that found that most commercial facial recognition systems exhibited bias, falsely identifying African-American and Asian faces 10 to 100 times more than Caucasian faces. Another federal study found a higher rate of mistaken matches among children.
So what do the kids think?
Students 13 and older are invited to comment. All comments are moderated by the Learning Network staff…
Read the Q&A to find out.
Image credit: Mike MacKenzie