AI is Not Society’s Savior

Tuesday, March 31st, 2020


The chatter about how AI will change the world, take your job, out-consult the consultants, displace management, perform reviews, identify potential criminals and reoffenders, diagnose illnesses, etc., especially etc., is never ending.

AI is supposed to bring true objectivity to its many applications, creating longed-for change.

Yet it’s been proven over and over that AI contains the same biases that created our unfair, prejudiced world; not just in the US, but around the world.

AI is good at increasing bias in the name of efficiency and objectivity.

It is even better at automating the loss of privacy and increasing surveillance in the name of safety.

Long before AI got hot, Lou Gerstner knew the solution.

Computers are magnificent tools for the realization of our dreams, but no machine can replace the human spark of spirit, compassion, love, and understanding.

Something tech has forgotten in its love affair with data and its warped view of progress.

And, of course, profit.

Image credit: safwat sayed

Fighting Tech

Wednesday, February 19th, 2020

Maybe it takes tech to beat tech.

Or founders who plan to walk their talk even after they become successful, unlike the “don’t be evil” guys.

More entrepreneurs are pursuing social or environmental goals, said Greg Brown, a professor of finance at the Kenan Institute of Private Enterprise at the University of North Carolina.

Companies like Toms, Warby Parker and Uncommon Goods have pushed this concept into the mainstream by creating successful business models built around helping others. This trend has led to the rise of B Corporations, a certification for companies that meet high standards of social responsibility. The program started in 2007, and now more than 2,500 companies have been certified in more than 50 countries.

Including Afghanistan.

Not all these startups make it, and many are choosing to do it sans investors, who often start pushing for growth and revenue, social mission be damned.

And they are slowly succeeding.

Companies like Moka are a reflection of how consumers think as well, Professor Brown said. As people’s wealth increases, they think more about quality and less about quantity. They also consider the social context of what they’re buying.

Others are developing tech to defend against tech.

The “bracelet of silence” is not the first device invented by researchers to stuff up digital assistants’ ears. In 2018, two designers created Project Alias, an appendage that can be placed over a smart speaker to deafen it. But Ms. Zheng argues that a jammer should be portable to protect people as they move through different environments, given that you don’t always know where a microphone is lurking.

These may not be the solution, assuming there is one, but this definitely isn’t.

Rather than building individual defenses, Mr. Hartzog believes, we need policymakers to pass laws that more effectively guard our privacy and give us control over our data.

You have only to consider tech’s actions in Europe to know that laws don’t stop tech.

There’s another potential positive brewing in tech — actually a disruption of sorts.

That’s the long-time-coming move away from current ageist thinking.

As brilliant as young coders are, though, the industry can’t survive on technical chops alone. Last year, Harvard Business Review shared that the average age of a successful startup founder isn’t 25 or 30—it’s 45 years old.

Call it a miracle, but investors, the majority over 40, are starting to value the experience that comes with age.

Hopefully, in the long-run, the potential for success will outweigh the hang-up on age.

As a whole, entrepreneurial communities also need to do more to bring diverse groups to meet-ups, panels and speaking engagements. The importance of having more voices at the table can’t be diminished.

Let’s just hope it isn’t too long.

Image credit: Ron Mader

AI As Blunt Force Trauma

Wednesday, February 12th, 2020


While AI can do some things on its own, it’s a blunt force, ignorant of nuance, but embracing all the bias, prejudices, bigotry and downright stupidity of past generations thanks to its training.

Using AI to make judgement calls that are implemented sans human involvement is like using a five-pound sledgehammer on a thumbtack.

Yesterday’s post looked at what AI can miss in hiring situations, but candidates at least have more choice than others do.

AI is being used extensively around the world by government and law enforcement where its bias is especially hard on people of color.

The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.

Human judgement may be flawed and it does have the same prejudices, but it’s not inflexible, whereas AI is.

As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.

Now schools are jumping on the bandwagon claiming that facial recognition will make schools safer, but not everyone agrees.

“Subjecting 5-year-olds to this technology will not make anyone safer, and we can’t allow invasive surveillance to become the norm in our public spaces,” said Stefanie Coyle, deputy director of the Education Policy Center for the New York Civil Liberties Union. (…)

Critics of the technology, including Mr. Shultz and the New York Civil Liberties Union, point to the growing evidence of racial bias in facial recognition systems. In December, the federal government released a study, one of the largest of its kind, that found that most commercial facial recognition systems exhibited bias, falsely identifying African-American and Asian faces 10 to 100 times more than Caucasian faces. Another federal study found a higher rate of mistaken matches among children.

So what do the kids think?

Students 13 and older are invited to comment. All comments are moderated by the Learning Network staff…

Read the Q&A to find out.

Image credit: Mike MacKenzie

How AI Can Kill Your Company

Tuesday, February 11th, 2020


Yesterday included a post about how tech has sold itself as the silver bullet solution to hiring people.

Algorithms actually do a lousy job of screening resumes and companies that rely on them miss a lot of great hires.

Why?

Because the only thing an algorithm can do is match key words and experience descriptions. Based on 13 years of tech recruiter experience I can tell you that rarely does anyone change jobs in order to do the same thing somewhere else, unless they hate their manager or the culture.

Not things that an algorithm is going to pick up on. Nor will the initial phone call, usually made not by the hiring manager but by someone who knows little about the job other than to match the candidate’s responses to a list of “preferred” answers.

No discretionary knowledge based on the manager’s experience or the candidate’s potential.
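The keyword matching described above can be sketched in a few lines. This is a deliberately naive illustration, not any vendor’s actual algorithm, and the job keywords and resumes are invented:

```python
# A naive keyword screener: scores a resume purely by overlap with the
# job posting's keyword list -- the kind of matching described above.

def keyword_score(resume_text, keywords):
    """Fraction of required keywords appearing verbatim in the resume."""
    words = set(resume_text.lower().split())
    hits = [kw for kw in keywords if kw.lower() in words]
    return len(hits) / len(keywords)

keywords = ["kubernetes", "golang", "terraform", "aws"]

# Candidate A: exact buzzword match, no context.
resume_a = "Listed skills: kubernetes golang terraform aws"

# Candidate B: strong transferable experience, different vocabulary.
resume_b = ("Led migration of on-prem services to cloud infrastructure, "
            "built container orchestration and infrastructure-as-code "
            "pipelines, mentored a team of eight engineers")

print(keyword_score(resume_a, keywords))  # 1.0 -- passes the screen
print(keyword_score(resume_b, keywords))  # 0.0 -- filtered out
```

The candidate with the real track record never makes it past the screen, because nothing in the score captures experience, potential or context.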

We all know that management loves to save money and many of them feel that AI will allow them to reduce the most expensive item of their fixed costs, people — including managers.

Imagine an app giving you a quarterly evaluation—without a manager or HR rep in sight—and you have an idea of where this is potentially going.

What management forgets is that a company isn’t an entity at all. It’s a group of people, with shared values, all moving in the same direction, united in a shared vision and their efforts to reach a common goal.

It exists only as long as people are willing to join and are happy enough to stay — excessive turnover does not foster success.

So what do workers think about the use of AI/algorithms?

However, workers don’t necessarily like the idea of code taking over management functions—or hiring, for that matter. Pew research shows 57 percent of respondents think using algorithms for résumé screening is “unacceptable,” and 58 percent believe that bots taught by humans will always contain some bias. Nearly half (48 percent) of workers between the ages of 18 and 29 have some distrust of A.I. in hiring, showing that this negative perception isn’t going away anytime soon.

They are right to be distrustful: since AI is trained on historical datasets, its “intelligence” includes all the bias, prejudices, bigotry and downright stupidity of past generations.

This is bad news for companies looking to “increase efficiency,” but great news for companies that recognize they aren’t hiring “resources” or “talent,” but people, with their infinite potential and inherent messiness.

Image credit: Mike MacKenzie

The Power of Early Adopters

Tuesday, October 22nd, 2019


Have you ever wondered what makes a new app fly?

Have you heard of early adopters?

Would it surprise you to know that they make up only 13.5% of the population?

But that small percentage dictates which new products and services you will be able to use on your phone, tablet and computer.

Not 100%, obviously, but close, especially if you are an entrepreneur without “connections.”

Doubly so if you are a woman and triple (or more) for a person of color.

That 13.5% dates back to 2012. Two years later it had doubled to 28%, according to the Pew Research Center.

Still not much considering the outsize impact.

Image credit: Pew Research Center

Golden Oldies: How Well Do You Hear Past What You See?

Monday, July 8th, 2019

Poking through 13+ years of posts I find information that’s as useful now as when it was written.

Golden Oldies is a collection of the most relevant and timeless posts during that time.

We all have visual prejudices that have nothing to do with race, ethnicity, gender, or anything obvious. It’s important to know your own or you can’t hear past them. I worked hard to be aware of mine. I had no choice, because, back when I was a recruiter, I occasionally met my candidates. I vividly remember two of them, because if I had met them before I presented them and set up interviews I wouldn’t have, which would have cost me dearly, since both were hired (different companies). Why not? Because they both hit my visual prejudices.

Read other Golden Oldies here.

Discrimination comes in many forms.

All of them are grounded in stupidity, but it’s age and appearance that I want to focus on today.

Layoffs are always a time when age is in the limelight, but this time it’s working in reverse.

“The share of older Americans who have jobs has risen during the recession, while the share of younger Americans with jobs has plunged.”

It seems that at least parts of corporate America have learned to see past the obvious.

“…employees whom companies have invested in most and who have “demonstrated track records…tend to be more experienced and are often older.””

So some companies have discovered that years of experience have substantial value when it comes to the success of the company.

But what about appearance? How much is hearing influenced by how someone looks at first take?

What better venue in which to consider this than Britain’s Got Talent, where the contestants are mostly young, generally good-looking and always bust their tails to make an impression.

How well do you think a slightly frumpy-looking 47-year-old woman would fare under the scathing tongue of Simon Cowell?

How much do you think talent would offset the obvious visual assumptions made by both the judges and the audience?

Watch the judges and audience reaction carefully before Susan Boyle performs and how quickly it changes when she starts singing (embedding is disabled on this video); check out some of the more than 50 thousand comments.

Think about what happens when a “Susan” comes to interview; how well do you hear past her (or his) appearance?

Then come back and share your thoughts with us.

PS: For a fascinating look at Susan, read this article in the NY Times.

Image credit: cwsillero on sxc.hu

Say What?

Wednesday, June 26th, 2019


Every day seems to bring more bad news from the AI front.

Google gives away tools for DIY AI, with no consideration for who uses them or for what.

One result is the proliferation of deepfakes.

Now scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research are making faking it even simpler.

In the latest example of deepfake technology, researchers have shown off new software that uses machine learning to let users edit the text transcript of a video to add, delete, or change the words coming right out of somebody’s mouth.

The result is that almost anyone can make anyone say anything.

Just type in the new script.

Adobe, of course, plans to consumerize the tech, with a focus on how to generate the best revenue stream from it.

It’s not their problem how it will be used or by whom.

Yet another genie out of the box and out of control.

You can’t believe what you read; you can’t believe what you hear; it’s been ages since you could believe pictures, and now you won’t be able to believe videos you see.

All thanks to totally amoral tech.

Werner Vogels, Amazon’s chief technology officer, spelled out tech’s attitude in no uncertain terms.

It’s society’s job, in his view, to decide which technology is applicable under which conditions.

“It’s a societal discourse and decision – and policy-making – that needs to happen to decide where you can apply technologies.”

Decisions and policies that happen long after the tech is deployed — if at all.

Welcome to the future.

Image credit: Marion Paul Baylado

The Bias of AI

Tuesday, June 25th, 2019


I’ve written before that AI is biased for the same reason children grow up biased — they both learn from their parents.

In AI’s case its “parents” are the datasets used to train the algorithms.

The datasets are a collection of millions of bits of historical information focused on the particular subject being taught.

In other words, the AI learns to “think”, evaluate information and make judgments based on what has been done in the past.

And what was done in the past was heavily biased.
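The mechanism can be shown with a deliberately tiny, invented example: a “model” that bases its decisions on historical outcomes simply replays whatever bias those records contain. The loan data below is made up for illustration; real training pipelines are far more complex, but the feedback loop is the same:

```python
# Toy illustration: a "model" that learns decision rules from historical
# outcomes reproduces whatever bias those outcomes contain.
# All data below is invented for illustration.

from collections import defaultdict

# Historical loan decisions: identical qualifications, different groups.
history = [
    {"group": "A", "qualified": True,  "approved": True},
    {"group": "A", "qualified": True,  "approved": True},
    {"group": "B", "qualified": True,  "approved": False},  # biased past
    {"group": "B", "qualified": True,  "approved": False},
]

# "Training": record the historical approval rate per group.
rates = defaultdict(list)
for record in history:
    rates[record["group"]].append(record["approved"])

def predict(group):
    """Approve if the group's historical approval rate exceeds 50%."""
    past = rates[group]
    return sum(past) / len(past) > 0.5

# Two equally qualified applicants get opposite decisions.
print(predict("A"))  # True
print(predict("B"))  # False
```

Every qualification in the data is identical; the only signal the “model” ever learned was the prejudice of the people who made the past decisions.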

What does that mean to us?

In healthcare, AI will downgrade complaints from women and people of color, as doctors have always done.

And AI will really trash you if you are also fat. Seriously.

“We all have cultural biases, and health care providers are people, too,” DeJoy says. Studies have indicated that doctors across all specialties are more likely to consider an overweight patient uncooperative, less compliant and even less intelligent than a thinner counterpart.

AI is contributing significantly to the racial bias common in the courts and law enforcement.

Modern-day risk assessment tools are often driven by algorithms trained on historical crime data. (…) Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.

Facial recognition also runs on biased AI.

Nearly 35 percent of images for darker-skinned women faced errors on facial recognition software, according to a study by Massachusetts Institute of Technology. Comparatively lighter-skinned males only faced an error rate of around 1 percent.

While healthcare, law and policing are furthest along, bias is oozing out of every nook and cranny that AI penetrates.

As usual, the problem was recognized after the genie was out of the box.

There’s a lot of talk about how to correct the problem, but how much will actually be done and when is questionable.

This is especially true since the bias in AI is the same as that of the people using it, making it unlikely they will consider it a problem.

Image credit: Mike MacKenzie

Golden Oldies: Ducks in a Row: Politically Correct is a False Positive

Monday, May 13th, 2019

Poking through 12+ years of posts I find information that’s as useful now as when it was written.

Golden Oldies is a collection of the most relevant and timeless posts during that time.

While politically correct has made a lot of noise since its rise in the media, it hasn’t made any real difference. Join me tomorrow for a look at the problem with many progressives and why it will undermine many of the changes they champion.

Read other Golden Oldies here.

I sent an article about the “frat house” (AKA, sexist) culture prevalent in ZocDoc’s sales department to “Kevin”, a good friend who works in sales.

While agreeing about problematic sales cultures, he had a different take on culture in general.

His viewpoint, from someone who has been there/done that, may not be socially acceptable and could probably get him in trouble if posted on social media, but I can share it here — anonymously.

Whether you’re a nigger or a bitch, this is the shit you have to deal with. I prefer environments where it’s obvious what the culture is, like this, than politically correct cultures where bigotry is the norm, but you never know for sure why you didn’t get the bonus, promotion or accolade with superior performance. Screw political correctness!

I believe it’s important to know where you stand, because then you can make informed choices. Give me this culture anytime – when I enter, I will know what the rules are. If I stay, it’s to accomplish a particular personal goal. When I leave (if not immediately), I will know why I stayed, left, and what I gained. I’m richer, they are poorer.

There is no such thing as “politically correct”. The term itself is an oxymoron that implies consensus building, popular sentiment or sinister machinations. Politics is about popularity — we never let others know where we stand or what we stand for in order to win a popularity contest. It is giving in to the tyranny of the mob, not daring to have unpopular opinions or stances, because one will not be popular.

Being a black man, I prefer a racist that’s honest about who he is and what he is. I prefer working for such a person because I know what to expect. I presume it would be the same for you as a woman regarding sexists. These days no one is a racist, we just have “unconscious biases” that prevent us from taking unpopular positions and that ensure that the powerful can continue to exclude the less powerful.

Politically correct environments rob me of information, choice, and the ability to navigate astutely to attain my objectives.

I agree with Kevin; even in those instances where bias has its basis in neuroscience, it’s better to know.

Flickr image credit: Zaskoda

Job Titles

Tuesday, May 7th, 2019


One of the dumbest (stupidest?) trends during the original dot com boom was two-fold.

The first was title inflation, with larger companies taking a leaf from the financial services industry where customer-facing positions, such as brokers and non-teller positions, were often VPs.

Second, bigger titles were often handed out in lieu of promotions and raises, while in the startup community titles bore little-to-no relationship to the person’s skills or experience.

Both created major problems for candidates when interviewing at new companies, especially for those who bought into their titles. It came as a shock that the skills required to be a VP in a “real” company are seriously different from those needed in a startup.

That was then, but what’s happening now?

I got the answer in a list from CB Insights of tech’s silliest job titles.

It’s gotten worse.

Aside from confusing their customers and vendors, the titles sound totally idiotic to all but a very small slice of the tech world.

However, the titles do do a great job of strengthening gender bias and turning off women.

What more could any bro want?

Image credit: JJ Merelo

Creative Commons License
This work is licensed under a Creative Commons Attribution-NoDerivs 2.5 License.