AI is Not Society’s Savior

Tuesday, March 31st, 2020


The chatter about how AI will change the world, take your job, out-consult the consultants, displace management, perform reviews, identify potential criminals and reoffenders, diagnose illnesses, etc., especially etc., is never ending.

AI is supposed to bring true objectivity to its many applications, creating longed-for change.

Yet it’s been proven over and over that AI contains the same biases that created our unfair, prejudiced world, not just in the US, but around the world.

AI is good at increasing bias in the name of efficiency and objectivity.

It is even better at automating the loss of privacy and increasing surveillance in the name of safety.

Long before AI got hot, Lou Gerstner knew the solution.

Computers are magnificent tools for the realization of our dreams, but no machine can replace the human spark of spirit, compassion, love, and understanding.

Something tech has forgotten in its love affair with data and its warped view of progress.

And, of course, profit.

Image credit: safwat sayed

AI As Blunt Force Trauma

Wednesday, February 12th, 2020


While AI can do some things on its own, it’s a blunt force, ignorant of nuance, but embracing all the biases, prejudices, bigotry and downright stupidity of past generations, thanks to its training.

Using AI to make judgment calls that are implemented sans human involvement is like using a five-pound sledgehammer on a thumbtack.

Yesterday’s post looked at what AI can miss in hiring situations, but candidates at least have more choice than others do.

AI is being used extensively around the world by government and law enforcement where its bias is especially hard on people of color.

The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.

Human judgment may be flawed, and it does have the same prejudices, but it’s not inflexible, whereas AI is.

As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.

Now schools are jumping on the bandwagon claiming that facial recognition will make schools safer, but not everyone agrees.

“Subjecting 5-year-olds to this technology will not make anyone safer, and we can’t allow invasive surveillance to become the norm in our public spaces,” said Stefanie Coyle, deputy director of the Education Policy Center for the New York Civil Liberties Union. (…)

Critics of the technology, including Mr. Shultz and the New York Civil Liberties Union, point to the growing evidence of racial bias in facial recognition systems. In December, the federal government released a study, one of the largest of its kind, that found that most commercial facial recognition systems exhibited bias, falsely identifying African-American and Asian faces 10 to 100 times more than Caucasian faces. Another federal study found a higher rate of mistaken matches among children.

So what do the kids think?

Students 13 and older are invited to comment. All comments are moderated by the Learning Network staff…

Read the Q&A to find out.

Image credit: Mike MacKenzie

How AI Can Kill Your Company

Tuesday, February 11th, 2020


Yesterday included a post about how tech has sold itself as the silver bullet solution to hiring people.

Algorithms actually do a lousy job of screening resumes and companies that rely on them miss a lot of great hires.

Why?

Because the only thing an algorithm can do is match keywords and experience descriptions. Based on 13 years of tech recruiting experience, I can tell you that rarely does anyone change jobs in order to do the same thing somewhere else, unless they hate their manager or the culture.

Those aren’t things an algorithm is going to pick up on. Nor is the initial phone call, which is usually made not by the hiring manager, but by someone who knows little about the job other than how to match the candidate’s responses to a list of “preferred” answers.

No discretionary knowledge based on the manager’s experience or the candidate’s potential.
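To make the limitation concrete, here’s a minimal sketch of the kind of literal keyword matching described above; the job requirements and résumés are entirely hypothetical:

```python
# A minimal sketch of literal keyword screening; all data is hypothetical.

REQUIRED_KEYWORDS = {"python", "kubernetes", "microservices", "ci/cd"}

def keyword_score(resume_text: str) -> float:
    """Return the fraction of required keywords literally present."""
    text = resume_text.lower()
    return sum(kw in text for kw in REQUIRED_KEYWORDS) / len(REQUIRED_KEYWORDS)

# Two candidates with essentially the same experience, worded differently.
resume_a = "Built Python microservices on Kubernetes with CI/CD pipelines."
resume_b = "Designed containerized services and automated release workflows."

print(keyword_score(resume_a))  # 1.0
print(keyword_score(resume_b))  # 0.0 -- same experience, wrong words
```

The matcher has no way of recognizing that the second résumé describes the same skills, which is exactly the kind of miss a human screener with domain knowledge would catch.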

We all know that management loves to save money and many of them feel that AI will allow them to reduce the most expensive item of their fixed costs, people — including managers.

Imagine an app giving you a quarterly evaluation—without a manager or HR rep in sight—and you have an idea of where this is potentially going.

What management forgets is that a company isn’t an entity at all. It’s a group of people, with shared values, all moving in the same direction, united in a shared vision and their efforts to reach a common goal.

It exists only as long as people are willing to join and are happy enough to stay — excessive turnover does not foster success.

So what do workers think about the use of AI/algorithms?

However, workers don’t necessarily like the idea of code taking over management functions—or hiring, for that matter. Pew research shows 57 percent of respondents think using algorithms for résumé screening is “unacceptable,” and 58 percent believe that bots taught by humans will always contain some bias. Nearly half (48 percent) of workers between the ages of 18 and 29 have some distrust of A.I. in hiring, showing that this negative perception isn’t going away anytime soon.

They are right to be distrustful: since AI is trained on historical datasets, its “intelligence” includes all the biases, prejudices, bigotry and downright stupidity of past generations.

This is bad news for companies looking to “increase efficiency,” but great news for companies that recognize they aren’t hiring “resources” or “talent,” but people, with their infinite potential and inherent messiness.

Image credit: Mike MacKenzie

Why Liberal Arts Boost Tech Careers

Tuesday, October 8th, 2019


Yesterday’s redux was about the importance of liberal arts in a tech-gone-crazy world.

New studies, with hard salary data, bear out this truth.

Yes, tech starting salaries are higher, but that difference goes away relatively quickly.

Not only that, but the tech skills needed today, especially the “hot” skills, didn’t exist 10 years ago, or even three to five years ago, so a tech career requires a willingness to constantly learn the newest whatever that comes along.

That translates to 40 years of racing to keep up with the newly minted competition.

Even staying current won’t assure a good career path, since moving higher requires more soft skills, such as written and verbal communication.

And in case you are part of my millennial-and-under audience, written skills don’t refer to proficient texting, and verbal skills mean competently carrying on face-to-face conversations.

Liberal arts can (should) open your mind to other experiences and viewpoints, increasing your EQ and SQ, which are critical to getting ahead (and getting along).

There’s another reason liberal arts is even more important now and in the future — AI.

Techies are so enamored with the technology they haven’t given much thought to the fact that AI is best at repetitive functions — like coding.

AI apps like Bayou, DeepCoder, and Commit Assistant automate some tedious parts of coding. They can produce a few lines of code but they can’t yet write programs on their own, nor can they interpret business value and prioritize features.

The stuff AI can’t do isn’t found in a tech education, but liberal arts provides the foundation to do them.

Sometimes a cliché is useful. The bottom line is an education that combines tech skills for the short-term and liberal arts for both short and long-term is the real career winner.

(Note: Although the image above says liberal arts is for sales and marketing, it’s even more crucial for techies.)

Image credit: Abhijit Bhaduri

Tech with a Conscience

Tuesday, July 2nd, 2019


Sounds like an oxymoron.

The world knows about tech’s love affair with, and misuse of, personal data. The continual ignoring, minimizing and excusing of hate speech, revenge porn, fake news, bullying, etc.

Then there is its totally irrational attitude/belief that people will be kind and good to each other online no matter what they are like in the real world.

Given the prevailing attitude, would a hot tech startup have a conscience?

So would a founder, a self-described “technology enthusiast,” create an AI app that went viral and then shut it down because of the way it was being used?

DeepNude was built on Pix2Pix, an open-source algorithm used for “image-to-image translation.” The app can create a naked image from any picture of a woman with just a couple of clicks. Revenge porn activists said the app was “absolutely terrifying.”

As to the above question, the answer is “yes.”

The DeepNude team was horrified, believing “the probability that people will misuse it is too high.”

“We don’t want to make money this way. Surely some copies of DeepNude will be shared on the web, but we don’t want to be the ones who sell it,” DeepNude wrote in a tweet. “The world is not yet ready for DeepNude.”

—deepnudeapp (@deepnudeapp) June 27, 2019

Pix2Pix was developed by a team of scientists, who now believe the industry needs to do better and not just release their work to the world at large.

“We have seen some wonderful uses of our work, by doctors, artists, cartographers, musicians, and more,” the MIT professor Phillip Isola, who helped create Pix2Pix, told Business Insider in an email. “We as a scientific community should engage in serious discussion on how best to move our field forward while putting reasonable safeguards in place to better ensure that we can benefit from the positive use-cases while mitigating abuse.”

One can only hope that the scientific community does, indeed, find a way to do good while avoiding the worst of the negative fallout from discoveries.

And hats off to the DeepNude team.

It’s really inspiring to see such a concrete example of doing the right thing, with no shilly-shallying or dancing around the decision.

But I do wonder what would have happened if either the developers or the scientists were beholden to investors.

Image credit: deepnudeapp via Twitter

Say What?

Wednesday, June 26th, 2019


Every day seems to bring more bad news from the AI front.

Google gives away tools for DIY AI, with no consideration for who uses them or for what.

One result is the proliferation of deepfakes.

Now scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research are making faking it even simpler.

In the latest example of deepfake technology, researchers have shown off new software that uses machine learning to let users edit the text transcript of a video to add, delete, or change the words coming right out of somebody’s mouth.

The result is that almost anyone can make anyone say anything.

Just type in the new script.

Adobe, of course, plans to consumerize the tech, with a focus on how to generate the best revenue stream from it.

It’s not their problem how it will be used or by whom.

Yet another genie out of the box and out of control.

You can’t believe what you read; you can’t believe what you hear; it’s been ages since you could believe pictures, and now you won’t be able to believe the videos you see.

All thanks to totally amoral tech.

Werner Vogels, Amazon’s chief technology officer, spelled out tech’s attitude in no uncertain terms.

It’s in society’s direction to actually decide which technology is applicable under which conditions.

“It’s a societal discourse and decision – and policy-making – that needs to happen to decide where you can apply technologies.”

Decisions and policies that happen long after the tech is deployed — if at all.

Welcome to the future.

Image credit: Marion Paul Baylado

The Bias of AI

Tuesday, June 25th, 2019


I’ve written before that AI is biased for the same reason children grow up biased — they both learn from their parents.

In AI’s case, its “parents” are the datasets used to train the algorithms.

The datasets are a collection of millions of bits of historical information focused on the particular subject being taught.

In other words, the AI learns to “think”, evaluate information and make judgments based on what has been done in the past.

And what was done in the past was heavily biased.

What does that mean to us?

In healthcare, AI will downgrade complaints from women and people of color, as doctors have always done.

And AI will really trash you if you are also fat. Seriously.

“We all have cultural biases, and health care providers are people, too,” DeJoy says. Studies have indicated that doctors across all specialties are more likely to consider an overweight patient uncooperative, less compliant and even less intelligent than a thinner counterpart.

AI is contributing significantly to the racial bias common in the courts and law enforcement.

Modern-day risk assessment tools are often driven by algorithms trained on historical crime data. (…) Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.
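The “vicious cycle” in that quote can be illustrated with a toy simulation (all numbers invented for illustration): two neighborhoods with identical true offense rates, one of which starts out slightly more heavily patrolled.

```python
# Toy model of the feedback loop described above: recorded arrests track
# patrol intensity, risk scores track recorded arrests, and patrols follow
# the risk scores. All numbers are invented for illustration.

def simulate(rounds: int) -> list[float]:
    """Return the recorded-arrest ratio (B vs. A) over time for two
    neighborhoods with the SAME underlying offense rate."""
    offense_rate = 0.05
    patrols = {"A": 1.0, "B": 1.2}  # B starts slightly over-patrolled
    ratios = []
    for _ in range(rounds):
        # Recorded arrests reflect where police look, not true offending.
        recorded = {k: offense_rate * p for k, p in patrols.items()}
        # "Risk-based" allocation sends more patrols where arrests were
        # recorded, compounding the initial imbalance.
        patrols = {k: p * (1 + recorded[k]) for k, p in patrols.items()}
        ratios.append(recorded["B"] / recorded["A"])
    return ratios

ratios = simulate(10)
print(ratios[0], ratios[-1])  # the gap between identical areas only widens
```

The point of the sketch is that the algorithm never needs to be “wrong” about any single arrest; feeding its own outputs back in as training data is enough to amplify the starting disparity.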

Facial recognition also runs on biased AI.

Nearly 35 percent of images for darker-skinned women faced errors on facial recognition software, according to a study by Massachusetts Institute of Technology. Comparatively lighter-skinned males only faced an error rate of around 1 percent.
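Findings like those come from measuring error rates per demographic group rather than reporting one overall accuracy number. A rough sketch of that disaggregation, with made-up records:

```python
# How disparities like those above are surfaced: disaggregate error
# rates by group instead of reporting one overall accuracy.
# The records below are made up for illustration.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_match, actual_match)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += predicted != actual
    return {g: errors[g] / totals[g] for g in totals}

sample = [
    ("group_x", True, False),   # a false match
    ("group_x", True, True),
    ("group_y", True, True),
    ("group_y", False, False),
]
print(error_rates_by_group(sample))  # {'group_x': 0.5, 'group_y': 0.0}
```

A single blended accuracy figure would hide exactly the gap the MIT study found; the per-group breakdown is what exposes it.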

While healthcare, law and policing are furthest along, bias is oozing out of every nook and cranny that AI penetrates.

As usual, the problem was recognized after the genie was out of the box.

There’s a lot of talk about how to correct the problem, but how much will actually be done and when is questionable.

This is especially true since the bias in AI is the same as that of the people using it; it’s unlikely they will consider it a problem.

Image credit: Mike MacKenzie

Golden Oldies: KG on AI and Its Implications

Monday, June 24th, 2019

Poking through 13+ years of posts I find information that’s as useful now as when it was written.

Golden Oldies is a collection of the most relevant and timeless posts during that time.

KG wrote this five years ago and, sadly, many of the concerns he mentioned are happening. AI bias is rampant and, as usual with emerging tech, most people don’t know/understand/care about the danger it represents.

Read other Golden Oldies here.

A few months ago I read the book Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. It was a tremendously interesting book and confirmed many of the concerns I’ve had about my own industry for some time. Subsequently there has been a slate of articles wondering about AI and how the industry is progressing.

One of the book’s premises was that we need to take a step back and think about the moral and ethical basis of what we’re doing and how and what we’re imparting to these machines.

I believe that it will be difficult, or impossible, for the AI industry to change direction midstream and start being concerned about morality and ethics. Most of the funding for AI comes from DARPA and other such institutions that are part of the military and affiliated organizations. Finance is the second-largest funding source.

Most of the people who are concerned about AI (including James Barrat) worry about when machines gain human-level intelligence. I am much more concerned about what happens before that. Today it is said that the most sophisticated AI has the intelligence of a cockroach. This is no small feat, but it also brings with it some clear implications – cockroaches have important drives and instincts that guide their behavior. Survival, resource acquisition, reproduction, etc. are all things that cockroaches do. How far away are we from when our AIs exhibit these characteristics? What about when we get to rat-level intelligence?

At that point machines will be very powerful and control many of the essential functions of society. Imagine a frightened rat (or 6-month-old toddler) with infinite power – what actions would they take to protect themselves or get what they perceive they want or need? How would they react if we stood in their way? How concerned would they be with the consequences of their actions? Even most adults don’t consider consequences today.

Before we achieve human level intelligence in machines, we’ll have to deal with less intelligent and probably more dangerous and powerful entities.  More dangerous because they will not have the knowledge or processing power to think of consequences, and also because they will be controlling our cars, airplanes, electricity grids, public transportation and many other systems.

Most AI optimists ignore the dangerous “lower mammal, toddler and childhood” stages of AI development and only see the potential benefits at the end.  But we need to think about the path there and what we can do to prepare as individuals and as a society.

Not to mention the fact that once we reach human-level intelligence in AI, we’ll be dealing with an intelligence so alien to anything we know (after all, we have lots of experience with cockroaches, rats and toddlers) that we’ll have no way of knowing what its motives are. But that will be left for another discussion.

Ducks in a Row: Biased Learning

Tuesday, February 12th, 2019

Have you ever wondered why bias is so deeply ingrained and prevalent?

The answer is simple.

The datasets are biased.

For humans:

Psychologists from Northwestern University have found that children as young as four show signs of racial bias, suggesting they pick up on cues to act intolerant from the adults around them from a very early age.

For AI:

The digital world is an incredibly biased place. Geographically, linguistically, demographically, economically and culturally, the technological revolution has skewed heavily towards a small number of very economically privileged slices of society.

Knowing the datasets for both are biased for the same reason, it is the wise boss, from team leader to CEO, who takes time to learn their own biases and also understand the various biases of their team.

Only then can they develop approaches and work-arounds.

The bottom line in business is that you don’t have to change minds, you just have to create processes that neutralize the effects.

Image credit: Paul Downey

Stick It To A Spammer

Wednesday, December 13th, 2017

The last few weeks have focused on some pretty depressing topics, so it’s high time for something that’s

  • interesting,
  • useful,
  • creative, and
  • fun for both of us.

Re:scam fulfills all four criteria.

Introducing Re:scam – an artificially intelligent email bot made to reply to scam emails. Re:scam wastes scammers time with a never-ending series of questions and anecdotes so that scammers have less time to pursue real people. (…) Instead of junking or deleting a scam email, you can now forward it to Re:scam who will continue the conversation indefinitely – or until the scammer stops replying.

Add me@rescam.org to your address book and make sticking it to spammers effortless.
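The core idea is simple enough to sketch. This is not Re:scam’s actual implementation, just an illustration of an auto-responder that always answers with another time-wasting question:

```python
# Illustration of the Re:scam idea (not its actual implementation):
# an auto-responder that answers every scam email with yet another
# question, so the thread never resolves and keeps burning the
# scammer's time.

STALL_REPLIES = [
    "That sounds wonderful! Could you explain the fees once more?",
    "My nephew handles my banking. What should I tell him?",
    "I tried to fill in the form but my printer jammed. What now?",
]

def next_reply(thread_length: int) -> str:
    """Cycle through stalling questions; never close the conversation."""
    return STALL_REPLIES[thread_length % len(STALL_REPLIES)]

# Every incoming message just earns the scammer another question.
for i in range(4):
    print(f"Reply {i + 1}: {next_reply(i)}")
```

Because the responder cycles forever, the conversation only ends when the scammer gives up, which is the whole point.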

Video credit: Re:scam
