The chatter about how AI will change the world, take your job, out-consult the consultants, displace management, perform reviews, identify potential criminals and reoffenders, diagnose illnesses, etc., especially etc., is never ending.
AI is supposed to bring true objectivity to its many applications, creating longed-for change.
Instead, AI is good at amplifying bias in the name of efficiency and objectivity.
It is even better at automating the loss of privacy and increasing surveillance in the name of safety.
Long before AI got hot, Lou Gerstner knew the solution.
Computers are magnificent tools for the realization of our dreams, but no machine can replace the human spark of spirit, compassion, love, and understanding.
Something tech has forgotten in its love affair with data and its warped view of progress.
The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.
Human judgement may be flawed, and it does have the same prejudices, but it’s not inflexible, whereas AI is.
As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.
“Subjecting 5-year-olds to this technology will not make anyone safer, and we can’t allow invasive surveillance to become the norm in our public spaces,” said Stefanie Coyle, deputy director of the Education Policy Center for the New York Civil Liberties Union. (…)
Critics of the technology, including Mr. Shultz and the New York Civil Liberties Union, point to the growing evidence of racial bias in facial recognition systems. In December, the federal government released a study, one of the largest of its kind, that found that most commercial facial recognition systems exhibited bias, falsely identifying African-American and Asian faces 10 to 100 times more than Caucasian faces. Another federal study found a higher rate of mistaken matches among children.
So what do the kids think?
Students 13 and older are invited to comment. All comments are moderated by the Learning Network staff…
Algorithms actually do a lousy job of screening resumes and companies that rely on them miss a lot of great hires.
Why?
Because the only thing an algorithm can do is match keywords and experience descriptions. Based on 13 years of tech recruiting experience, I can tell you that rarely does anyone change jobs in order to do the same thing somewhere else, unless they hate their manager or the culture.
Not things that an algorithm is going to pick up on. Nor is the initial phone screen, which is usually done not by the hiring manager, but by someone who knows little about the job beyond matching the candidate’s responses to a list of “preferred” answers.
No discretionary knowledge based on the manager’s experience or the candidate’s potential.
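To make that concrete, here’s a minimal sketch of how keyword-based screening typically works; the keywords, weights, and pass threshold below are invented for illustration, not any vendor’s actual system.

```python
# Minimal sketch of keyword-based resume screening (illustrative only;
# the keywords, weights, and threshold are invented for this example).

REQUIRED_KEYWORDS = {"python": 3, "kubernetes": 2, "agile": 1}
PASS_THRESHOLD = 4

def score_resume(resume_text: str) -> int:
    """Sum weighted keyword hits -- all the screener 'knows' about a person."""
    text = resume_text.lower()
    return sum(weight for kw, weight in REQUIRED_KEYWORDS.items() if kw in text)

def passes_screen(resume_text: str) -> bool:
    # No notion of why the candidate wants to move, what their manager
    # was like, or what they could grow into -- just string matching.
    return score_resume(resume_text) >= PASS_THRESHOLD

print(passes_screen("Led a team rebuilding deployment tooling; mentored juniors."))  # False
print(passes_screen("Python Python Kubernetes agile agile synergy."))                # True
```

The keyword-stuffed resume sails through while the stronger candidate, who used different words, is filtered out, which is exactly how great hires get missed.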
We all know that management loves to save money, and many managers feel that AI will let them reduce the most expensive item in their fixed costs, people — including managers.
Imagine an app giving you a quarterly evaluation—without a manager or HR rep in sight—and you have an idea of where this is potentially going.
What management forgets is that a company isn’t an entity at all. It’s a group of people, with shared values, all moving in the same direction, united in a shared vision and their efforts to reach a common goal.
It exists only as long as people are willing to join and are happy enough to stay — excessive turnover does not foster success.
So what do workers think about the use of AI/algorithms?
However, workers don’t necessarily like the idea of code taking over management functions—or hiring, for that matter. Pew research shows 57 percent of respondents think using algorithms for résumé screening is “unacceptable,” and 58 percent believe that bots taught by humans will always contain some bias. Nearly half (48 percent) of workers between the ages of 18 and 29 have some distrust of A.I. in hiring, showing that this negative perception isn’t going away anytime soon.
This is bad news for companies looking to “increase efficiency,” but great news for companies that recognize they aren’t hiring “resources” or “talent,” but people, with their infinite potential and inherent messiness.
Yesterday’s redux was about the importance of liberal arts in a tech-gone-crazy world.
New studies, with hard salary data, bear out this truth.
Yes, tech starting salaries are higher, but that difference goes away relatively quickly.
Not only that, but the tech skills needed today, especially the “hot” skills, didn’t exist 10 years ago, or even three to five years ago, so a tech career requires a willingness to constantly learn the newest whatever that comes along.
That translates to 40 years of racing to keep up with the newly minted competition.
Even staying current won’t assure a good career path, since going higher requires more soft skills, such as written and verbal communication.
And in case you are part of my millennial-and-under audience, written skills don’t refer to proficient texting, while verbal skills mean competently carrying on face-to-face conversations.
Liberal arts can (should) open your mind to other experiences and viewpoints, increasing your EQ and SQ, which are critical to getting ahead (and getting along).
There’s another reason liberal arts is even more important now and in the future — AI.
Techies are so enamored with the technology they haven’t given much thought to the fact that AI is best at repetitive functions — like coding.
AI apps like Bayou, DeepCoder, and Commit Assistant automate some tedious parts of coding. They can produce a few lines of code but they can’t yet write programs on their own, nor can they interpret business value and prioritize features.
The stuff AI can’t do isn’t found in a tech education, but liberal arts provides the foundation to do it.
Sometimes a cliché is useful. The bottom line is that an education combining tech skills for the short term with liberal arts for both the short and long term is the real career winner.
(Note: Although the image above says liberal arts is for sales and marketing, it’s even more crucial for techies.)
The world knows about tech’s love affair with, and misuse of, personal data, and its continual ignoring, minimizing, and excusing of hate speech, revenge porn, fake news, bullying, etc.
Then there is its totally irrational attitude/belief that people will be kind and good to each other online no matter what they are like in the real world.
Given the prevailing attitude, would a hot tech startup have a conscience?
So would a founder, a self-described “technology enthusiast,” create an AI app that went viral and then shut it down because of the way it was being used?
DeepNude was built on Pix2Pix, an open-source algorithm used for “image-to-image translation.” The app can create a naked image from any picture of a woman with just a couple of clicks. Anti-revenge-porn activists said the app was “absolutely terrifying.”
As to the above question, the answer is “yes.”
The DeepNude team was horrified, believing “the probability that people will misuse it is too high.”
“We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it,” DeepNude wrote in a tweet. “The world is not yet ready for DeepNude.”
Pix2Pix was developed by a team of scientists, who now believe the industry needs to do better and not just release their work to the world at large.
“We have seen some wonderful uses of our work, by doctors, artists, cartographers, musicians, and more,” the MIT professor Phillip Isola, who helped create Pix2Pix, told Business Insider in an email. “We as a scientific community should engage in serious discussion on how best to move our field forward while putting reasonable safeguards in place to better ensure that we can benefit from the positive use-cases while mitigating abuse.”
One can only hope that the scientific community does, indeed, find a way to do good while avoiding the worst of the negative fallout from discoveries.
And hats off to the DeepNude team.
It’s really inspiring to see such a concrete example of doing the right thing, with no shilly-shallying or dancing around the decision.
But I do wonder what would have happened if either the developers or the scientists were beholden to investors.
Now scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research are making faking it even simpler.
In the latest example of deepfake technology, researchers have shown off new software that uses machine learning to let users edit the text transcript of a video to add, delete, or change the words coming right out of somebody’s mouth.
The result is that almost anyone can make anyone say anything.
Just type in the new script.
Adobe, of course, plans to consumerize the tech, with a focus on how to generate the best revenue stream from it.
It’s not their problem how it will be used or by whom.
Yet another genie out of the box and out of control.
You can’t believe what you read or hear; it’s been ages since you could believe pictures, and now you won’t be able to believe the videos you see.
“We all have cultural biases, and health care providers are people, too,” DeJoy says. Studies have indicated that doctors across all specialties are more likely to consider an overweight patient uncooperative, less compliant and even less intelligent than a thinner counterpart.
Modern-day risk assessment tools are often driven by algorithms trained on historical crime data. (…) Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.
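A toy simulation makes that vicious cycle concrete; everything here is invented for illustration, and the model is deliberately oversimplified.

```python
# Toy feedback-loop simulation (invented numbers, oversimplified model).
# Two neighborhoods have the SAME underlying crime rate, but neighborhood A
# starts out more heavily patrolled, so more of its crime gets recorded.

true_crime = [10.0, 10.0]  # identical underlying crime in A and B
patrols = [0.7, 0.3]       # historical policing skew toward A

for year in range(3):
    # Recorded crime depends on how hard you look, not just on what happens.
    recorded = [c * p for c, p in zip(true_crime, patrols)]
    # "Predictive" step: allocate next year's patrols by recorded crime.
    total = sum(recorded)
    patrols = [r / total for r in recorded]
    print(f"year {year}: recorded {recorded} -> next patrols {patrols}")

# The 70/30 split reproduces itself every year: skewed data locks in skewed
# predictions, even though the two neighborhoods are identical in reality.
```

Real systems are far more complicated, but the dynamic is the same: the model learns from a record of past enforcement, not from crime itself.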
Nearly 35 percent of images of darker-skinned women faced errors on facial recognition software, according to a study by the Massachusetts Institute of Technology. By comparison, lighter-skinned males faced an error rate of only around 1 percent.
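For anyone who wants to see what numbers like those mean operationally, here’s a minimal sketch of the kind of per-group audit that produces them; the data below is an invented toy stand-in, not the MIT study’s dataset.

```python
# Minimal sketch of a per-group error-rate audit (toy, invented data).
from collections import defaultdict

# (group, prediction_correct) pairs standing in for audit results.
results = [
    ("darker_skinned_female", False), ("darker_skinned_female", False),
    ("darker_skinned_female", True),
    ("lighter_skinned_male", True), ("lighter_skinned_male", True),
    ("lighter_skinned_male", True),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.0%} errors "
          f"over {totals[group]} samples")

# A system can look accurate "overall" while failing badly for one group,
# which is why audits have to break errors out by demographic.
```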
While healthcare, law and policing are furthest along, bias is oozing out of every nook and cranny that AI penetrates.
As usual, the problem was recognized after the genie was out of the box.
There’s a lot of talk about how to correct the problem, but how much will actually be done and when is questionable.
This is especially true since the bias in AI is the same as that of the people using it, so it’s unlikely they will consider it a problem.
Poking through 13+ years of posts, I find information that’s as useful now as when it was written.
Golden Oldies is a collection of the most relevant and timeless posts from those years.
KG wrote this five years ago and, sadly, many of the concerns he mentioned are happening. AI bias is rampant and, as usual with emerging tech, most people don’t know/understand/care about the danger it represents.
A few months ago I read the book Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. It was a tremendously interesting book and confirmed many of the concerns I’ve been having about my own industry for some time. Subsequently, there has been a slate of articles wondering about AI and how the industry is progressing.
One of the book’s premises was that we need to take a step back and think about the moral and ethical basis of what we’re doing and how and what we’re imparting to these machines.
I believe that it will be difficult, or impossible, for the AI industry to change direction midstream and start being concerned about morality and ethics. Most of the funding for AI comes from DARPA and other such institutions that are part of the military and affiliated organizations. Finance is the second-largest funding source.
Most of the people who are concerned about AI (including James Barrat) worry about when machines gain human-level intelligence. I am much more concerned about what happens before that. Today it is said that the most sophisticated AI has the intelligence of a cockroach. This is no small feat, but it also brings with it some clear implications – cockroaches have important drives and instincts that guide their behavior. Survival, resource acquisition, reproduction, etc. are all things that cockroaches do. How far away are we from when our AI exhibits these characteristics? What about when we get to rat-level intelligence?
At that point machines will be very powerful and control many of the essential functions of society. Imagine a frightened rat (or a 6-month-old toddler) with infinite power – what actions would they take to protect themselves or get what they perceive they want or need? How would they react if we stood in their way? How concerned would they be with the consequences of their actions? Even most adults don’t weigh consequences well today.
Before we achieve human-level intelligence in machines, we’ll have to deal with less intelligent and probably more dangerous and powerful entities. More dangerous because they will not have the knowledge or processing power to think of consequences, and also because they will be controlling our cars, airplanes, electricity grids, public transportation, and many other systems.
Most AI optimists ignore the dangerous “lower mammal, toddler and childhood” stages of AI development and only see the potential benefits at the end. But we need to think about the path there and what we can do to prepare as individuals and as a society.
Not to mention that once we reach human-level intelligence in AI, we’ll be dealing with an intelligence so alien to anything we know (after all, we have lots of experience with cockroaches, rats, and toddlers) that we’ll have no way of knowing what its motives are. But that will be left for another discussion.
Psychologists from Northwestern University have found that children as young as four show signs of racial bias, suggesting they pick up on cues to act intolerant from the adults around them from a very early age.
The digital world is an incredibly biased place. Geographically, linguistically, demographically, economically and culturally, the technological revolution has skewed heavily towards a small number of very economically privileged slices of society.
Knowing that the datasets for both are biased for the same reason, the wise boss, from team leader to CEO, takes time to learn their own biases and to understand the various biases of their team.
Only then can they develop approaches and work-arounds.
The bottom line in business is that you don’t have to change minds, you just have to create processes that neutralize the effects.
Introducing Re:scam – an artificially intelligent email bot made to reply to scam emails. Re:scam wastes scammers time with a never-ending series of questions and anecdotes so that scammers have less time to pursue real people. (…) Instead of junking or deleting a scam email, you can now forward it to Re:scam who will continue the conversation indefinitely – or until the scammer stops replying.
Add me@rescam.org to your address book and make sticking it to spammers effortless.
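Conceptually, the bot’s trick is simple; here’s a toy sketch of the idea (nothing to do with Re:scam’s actual code), replying to every incoming message with another time-wasting question.

```python
# Toy sketch of the "waste the scammer's time" idea; this is not
# Re:scam's actual code, just an illustration of the loop.
import random

STALLING_REPLIES = [
    "This sounds amazing! Can you explain the fee structure once more?",
    "My nephew handles my finances. Could you resend the details?",
    "I tried to wire the money but the bank asked for a form. Which one?",
    "Before I proceed, how did you get my address? Just curious!",
]

def reply_to_scam(incoming_message: str) -> str:
    # Every reply invites another round-trip, burning the scammer's time.
    return random.choice(STALLING_REPLIES)

# Each exchange a scammer spends here is one they can't spend on a real victim.
print(reply_to_scam("Dear friend, I have 10 million USD for you..."))
```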
Entrepreneurs face difficulties that are hard for most people to imagine, let alone understand. You can find anonymous help and connections that do understand at 7 cups of tea.
Crises never end.
$10 really does make a difference and you’ll never miss it.