Archive for June, 2019

Say What?

Wednesday, June 26th, 2019


Every day seems to bring more bad news from the AI front.

Google gives away tools for DIY AI, with no consideration for who uses them or for what.

One result is the proliferation of deepfakes.

Now scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research are making faking it even simpler.

In the latest example of deepfake technology, researchers have shown off new software that uses machine learning to let users edit the text transcript of a video to add, delete, or change the words coming right out of somebody’s mouth.

The result is that almost anyone can make anyone say anything.

Just type in the new script.

Adobe, of course, plans to consumerize the tech, with a focus on how to generate the best revenue stream from it.

It’s not their problem how it will be used or by whom.

Yet another genie out of the box and out of control.

You can’t believe what you read or hear; it’s been ages since you could believe pictures, and now you won’t be able to believe the videos you see.

All thanks to totally amoral tech.

Werner Vogels, Amazon’s chief technology officer, spelled out tech’s attitude in no uncertain terms.

“It’s in society’s direction to actually decide which technology is applicable under which conditions. It’s a societal discourse and decision – and policy-making – that needs to happen to decide where you can apply technologies.”

Decisions and policies that happen long after the tech is deployed — if at all.

Welcome to the future.

Image credit: Marion Paul Baylado

The Bias of AI

Tuesday, June 25th, 2019


I’ve written before that AI is biased for the same reason children grow up biased — they both learn from their parents.

In AI’s case its “parents” are the datasets used to train the algorithms.

The datasets are a collection of millions of bits of historical information focused on the particular subject being taught.

In other words, the AI learns to “think”, evaluate information and make judgments based on what has been done in the past.

And what was done in the past was heavily biased.
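To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (not from any of the systems discussed here; the groups, numbers, and the predict function are all invented for illustration). It shows how a “model” that learns only from past decisions faithfully reproduces whatever bias those decisions contain, even when the applicants are identical on paper.

```python
from collections import defaultdict

# Hypothetical historical decisions: identical qualifications, different outcomes.
history = [
    {"group": "A", "qualified": True, "approved": True},
    {"group": "A", "qualified": True, "approved": True},
    {"group": "A", "qualified": True, "approved": True},
    {"group": "B", "qualified": True, "approved": True},
    {"group": "B", "qualified": True, "approved": False},
    {"group": "B", "qualified": True, "approved": False},
]

# "Training": record how often each group was approved in the past.
approval_rate = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for record in history:
    approval_rate[record["group"]][0] += record["approved"]
    approval_rate[record["group"]][1] += 1

def predict(group):
    """Approve if the historical approval rate for this group is at least 50%."""
    approved, total = approval_rate[group]
    return approved / total >= 0.5

# Two equally qualified applicants get different answers,
# purely because of who was approved in the past.
print(predict("A"))  # True
print(predict("B"))  # False
```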

What does that mean to us?

In healthcare, AI will downgrade complaints from women and people of color, as doctors have always done.

And AI will really trash you if you are also fat. Seriously.

“We all have cultural biases, and health care providers are people, too,” DeJoy says. Studies have indicated that doctors across all specialties are more likely to consider an overweight patient uncooperative, less compliant and even less intelligent than a thinner counterpart.

AI is contributing significantly to the racial bias common in the courts and law enforcement.

Modern-day risk assessment tools are often driven by algorithms trained on historical crime data. (…) Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.

Facial recognition also runs on biased AI.

Nearly 35 percent of images of darker-skinned women faced errors on facial recognition software, according to a study by the Massachusetts Institute of Technology. Comparatively, lighter-skinned males faced an error rate of only around 1 percent.

While healthcare, law and policing are furthest along, bias is oozing out of every nook and cranny that AI penetrates.

As usual, the problem was recognized after the genie was out of the box.

There’s a lot of talk about how to correct the problem, but how much will actually be done and when is questionable.

This is especially true since the bias in AI is the same as that of the people using it, so it’s unlikely they will consider it a problem.

Image credit: Mike MacKenzie

Golden Oldies: KG on AI and Its Implications

Monday, June 24th, 2019

Poking through 13+ years of posts I find information that’s as useful now as when it was written.

Golden Oldies is a collection of the most relevant and timeless posts during that time.

KG wrote this five years ago and, sadly, many of the concerns he mentioned are happening. AI bias is rampant and, as usual with emerging tech, most people don’t know/understand/care about the danger it represents.

Read other Golden Oldies here.

A few months ago I read the book Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. It was a tremendously interesting book and confirmed many of the concerns I’ve been having about my own industry for some time. Subsequently, there has been a slate of articles wondering about AI and how the industry is progressing.

One of the book’s premises was that we need to take a step back and think about the moral and ethical basis of what we’re doing and how and what we’re imparting to these machines.

I believe that it will be difficult, or impossible, for the AI industry to change direction midstream and start being concerned about morality and ethics. Most of the funding for AI comes from DARPA and other such institutions that are part of the military and affiliated organizations. Finance is the second-largest funding source.

Most of the people who are concerned about AI (including James Barrat) worry about when machines gain human level intelligence. I am much more concerned about what happens before that. Today it is said that the most sophisticated AI has the intelligence of a cockroach. This is no small feat, but it also brings with it some clear implications – cockroaches have important drives and instincts that guide their behavior. Survival, resource acquisition, reproduction, etc. are all things that cockroaches do. How far away are we from when our AI exhibits these characteristics? What about when we get to rat-level intelligence?

At that point machines will be very powerful and control many of the essential functions of society. Imagine a frightened rat (or 6-month-old toddler) with infinite power – what actions would they take to protect themselves or get what they perceive they want or need? How would they react if we stood in their way? How concerned would they be with the consequences of their actions? Even most adults don’t do that today.

Before we achieve human level intelligence in machines, we’ll have to deal with less intelligent and probably more dangerous and powerful entities.  More dangerous because they will not have the knowledge or processing power to think of consequences, and also because they will be controlling our cars, airplanes, electricity grids, public transportation and many other systems.

Most AI optimists ignore the dangerous “lower mammal, toddler and childhood” stages of AI development and only see the potential benefits at the end.  But we need to think about the path there and what we can do to prepare as individuals and as a society.

Not to mention that once we reach human level intelligence in AI, we’ll be dealing with an intelligence so alien to anything we know (after all, we have lots of experience with cockroaches, rats, and toddlers) that we’ll have no way of knowing what its motives are. But that will be left for another discussion.

Amazon’s Terrifying Power

Wednesday, June 19th, 2019


Every day when I look through the headlines there’s always another story about Facebook, Google, or another tech company abusing its users, offering the same old platitudes about how important user privacy is to them, or being investigated/fined by the Feds, the European Union, or some other country.

Ho-hum, business as usual.

There is still a certain amount of choice about using Facebook, Google-Android, various apps, and smart products, such as Samsung’s smart TV, all of which can be hacked. And while it takes effort, to some extent you can protect yourself and your privacy.

But even Facebook’s and Google’s efforts to dominate, like the dreams of power of every despot, politico, religious zealot, and military organization, pale in comparison to the future Amazon sees for itself.

Amazon’s incredible, sophisticated systems are no longer being used just to serve up good deals, fast delivery times, or cheap web storage. Its big data capabilities are now the tool of police forces, and maybe soon the military. In the corporate world, Amazon is positioning itself to be the “brains” behind just about everything.

Add to that Amazon’s belief that it has no responsibility for how its tech is used.

Rekognition, Amazon’s facial recognition software, is a good example.

Civil rights groups have called it “perhaps the most dangerous surveillance technology ever developed”, and called for Amazon to stop selling it to government agencies, particularly police forces. City supervisors in San Francisco banned its use, saying the software is not only intrusive, but biased – it’s better at recognising white people than black and Asian people. (…) Werner Vogels, Amazon’s CTO,  doesn’t feel it’s Amazon’s responsibility to make sure Rekognition is used accurately or ethically.

In one form or another, “with great power comes great responsibility” has been a byword from the Bible down through the ages to Spiderman.

When a company wields the power to bring the modern world to its knees one can only hope it will take that to heart.

Image credit: judon / aparagraph.com

Tech is Full of Isht

Tuesday, June 18th, 2019

From the blog of Maciej Cegłowski, an SF white hat techie:

Writing in the New York Times last month, Google CEO Sundar Pichai argued that it is “vital for companies to give people clear, individual choices around how their data is used.” Like all Times opinion pieces, his editorial included multiple Google tracking scripts served without the reader’s knowledge or consent. Had he wanted to, Mr. Pichai could have learned down to the second when a particular reader had read his assurance that Google “stayed focused on the products and features that make privacy a reality.”

Writing in a similar vein in the Washington Post this March, Facebook CEO Mark Zuckerberg called for Congress to pass privacy laws modeled on the European General Data Protection Regulation (GDPR). That editorial was served to readers with a similar bouquet of non-consensual tracking scripts that violated both the letter and spirit of the law Mr. Zuckerberg wants Congress to enact.

TOS for new apps aren’t improving (paywall). Consider this from Ovia, an app women use to track their pregnancies.

An Ovia spokeswoman said the company does not sell aggregate data for advertising purposes. But women who use Ovia must consent to its 6,000-word “terms of use,” which grant the company a “royalty-free, perpetual, and irrevocable license, throughout the universe” to “utilize and exploit” their de-identified personal information for scientific research and “external and internal marketing purposes.” Ovia may also “sell, lease or lend aggregated Personal Information to third parties,” the document adds.

Good grief. As any search will tell you, “de-identified” is a joke, since it’s no big deal to put a name to so-called anonymous data.

By now you should know that tech talks privacy, but walks data collection.

That means it’s up to you to do what you can, starting with always adjusting all default privacy settings.

 

Golden Oldies: Entrepreneurs: Tech vs. Responsibility And Accountability

Monday, June 17th, 2019

Poking through 13+ years of posts I find information that’s as useful now as when it was written.

Golden Oldies is a collection of the most relevant and timeless posts during that time.

This post and the quote from the FTC date back to 2015. Nothing on the government side has changed; the Feds are still investigating and Congress is still talking. And as we saw in last week’s posts, the company executives are more arrogant and their actions are much worse. One can only hope that the US government will follow in the footsteps of European countries and rein them in.

Read other Golden Oldies here.

Entrepreneurs are notorious for ignoring security — black hat hackers are a myth — until something bad happens, which, sooner or later, always does.

They go their merry way, tying all manner of things to the internet, even contraceptives and cars, and inventing search engines like Shodan to find them, with nary a thought or worry about hacking.

Concerns are pooh-poohed by the digerati and those voicing them are considered Luddites, anti-progress or worse.

Now Edith Ramirez, chairwoman of the Federal Trade Commission, voiced those concerns at CES, the biggest Internet of Things showcase.

“Any device that is connected to the Internet is at risk of being hijacked,” said Ms. Ramirez, who added that the large number of Internet-connected devices would “increase the number of access points” for hackers.

Interesting when you think about the millions of baby monitors, fitness trackers, glucose monitors, thermostats, and dozens of other common items already available, and the hundreds more being dreamed up daily by both startups and enterprises.

She also confronted the self-serving attitude of tech (led by Google and Facebook) toward collecting and keeping huge amounts of personal data that was (supposedly) the basis of future innovation.

“I question the notion that we must put sensitive consumer data at risk on the off chance a company might someday discover a valuable use for the information.”

At least someone in a responsible position has finally voiced these concerns — but whether or not she can do anything against tech’s growing political clout/money/lobbying power remains to be seen.

Image credit: centralasian

Bad Boys Facebook and Google

Wednesday, June 12th, 2019

You’d have to be living on another planet not to be aware of the isht pulled by Facebook. Where do I start?

With the fact that Facebook is getting fined for storing millions of passwords in plain text, or that it “unintentionally” uploaded the email contacts of a million and a half new members? Or that user data, such as friends, relationships, and photos, was used to reward partners and fight rivals? Or might it bother you more to know that your posts, photos, updates, etc., whether public or private, are labeled and categorized by hand by outsourced workers in India? Nastier still is Facebook sharing/selling your data to cell phone carriers.

Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.

Facebook owns Instagram, so it should come as no surprise that the private phone numbers and email addresses of millions of celebrities and influencers were scraped by a partner company.

Then there is Google, which dumps location data from millions of devices, not just Android, into a database called Sensorvault and makes it available for search to law enforcement, among others. On May 7 Google claimed it had found privacy religion, but CNBC reported that Gmail tracks and saves every digital receipt, not just for things, but for services and, of course, for Amazon. Enterprise G Suite customers don’t fare much better: their user passwords were kept unencrypted on an internal server for years. Not hacked, but still…

YouTube is in constant trouble for the way it interprets its constantly changing Terms Of Service.

The lists for all of them go on and on.

The European Union is far ahead of the US in terms of privacy, anticompetitive actions, etc., but US consumers are finally waking up. So-called Big Tech is no longer popular politically and the Justice Department is opening an antitrust investigation of Google (Europe already fined it nearly 3 billion in 2017 for anticompetitive actions).

Can Facebook be far behind?

A bit more next week.

Image credit: MySign AG

The Doings of Amazon and Apple

Tuesday, June 11th, 2019

As promised yesterday, I’m updating the “don’t trust them, they lie” list (in mostly alphabetical order) with new links to the nefarious doings of your favorite “can’t live without ‘em” companies.

First up: Amazon. Anyone who has bought from Amazon is aware of how it uses your buying data to suggest additional purchases, as do all ecommerce sites. And there have been multiple stories about Alexa listening and responding even when it’s supposedly not on. But did you know that those supposedly anonymous recordings are discussed for amusement in Amazon employee chatrooms?

On a far more serious note, Ring, the video doorbell company Amazon acquired, is teaming up with police departments to offer free or discounted smart doorbells. And although it supposedly goes against Ring’s own policy, some of those PDs are adding to the terms of service the right to look at the saved video footage sans subpoena.

Sadly, Apple is on the nefarious list, in spite of its famous “What happens on your iPhone stays on your iPhone” philosophy. But, as with other companies, the facts are more complicated — the thieves are in the apps.

More tomorrow.

Image credit: MySign AG

Golden Oldies: You the Product

Monday, June 10th, 2019


Poking through 13+ years of posts I find information that’s as useful now as when it was written.

Golden Oldies is a collection of the most relevant and timeless posts during that time.

For years I’ve written about the lie/cheat/steal attitude of social media sites, such as Facebook, Google, and Amazon; the list goes on and on. This post is only a year old, but I thought it could use some updating. What I can tell you today is that nothing has improved, in fact it has gotten much worse — as you’ll see over the next two days.

Read other Golden Oldies here.

Have you ever been to a post-holiday potluck? As the name implies, it’s held within two days of any holiday that involves food, with a capital F, such as Thanksgiving, Christmas and, of course, Easter. Our group has only three rules: the food must be leftovers, the conversation must be interesting, and phones must be turned off. They are always great parties, with amazing food, and Monday’s was no exception.

The unexpected happened when a few of them came down on me for a recent post terming Mark Zuckerberg a hypocrite. They said that it wasn’t Facebook’s or Google’s fault that a few bad actors were abusing the sites and causing problems. They went on to say that the companies were doing their best and that I should cut them some slack.

Rather than arguing my personal opinions I said I would provide some third party info that I couldn’t quote off the top of my head and then whoever was interested could get together and argue the subject over a bottle or two of wine.

I did ask them to think about one item that stuck in my mind.

How quickly would they provide the location and routine of their kids to the world at large and the perverts who inhabit it? That’s exactly what GPS-tagged photos do.

I thought the info would be of interest to other readers, so I’m sharing it here.

Facebook actively facilitates scammers.

The Berlin conference was hosted by an online forum called Stack That Money, but a newcomer could be forgiven for wondering if it was somehow sponsored by Facebook Inc. Saleswomen from the company held court onstage, introducing speakers and moderating panel discussions. After the show, Facebook representatives flew to Ibiza on a plane rented by Stack That Money to party with some of the top affiliates.

Granted anonymity, affiliates were happy to detail their tricks. They told me that Facebook had revolutionized scamming. The company built tools with its trove of user data (…) Affiliates hijacked them. Facebook’s targeting algorithm is so powerful, they said, they don’t need to identify suckers themselves—Facebook does it automatically. And they boasted that Russia’s dezinformatsiya agents were using tactics their community had pioneered.

Scraping Android.

Android owners were displeased to discover that Facebook had been scraping their text-message and phone-call metadata, in some cases for years, an operation hidden in the fine print of a user agreement clause until Ars Technica reported. Facebook was quick to defend the practice as entirely aboveboard—small comfort to those who are beginning to realize that, because Facebook is a free service, they and their data are by necessity the products.

I’m not just picking on Facebook; Amazon and Google are right there with it.

Digital eavesdropping

Amazon and Google, the leading sellers of such devices, say the assistants record and process audio only after users trigger them by pushing a button or uttering a phrase like “Hey, Alexa” or “O.K., Google.” But each company has filed patent applications, many of them still under consideration, that outline an array of possibilities for how devices like these could monitor more of what users say and do. That information could then be used to identify a person’s desires or interests, which could be mined for ads and product recommendations. (…) Facebook, in fact, had planned to unveil its new internet-connected home products at a developer conference in May, according to Bloomberg News, which reported that the company had scuttled that idea partly in response to the recent fallout.

Zuckerberg’s ego knows no bounds.

Zuckerberg, positioning himself as the benevolent ruler of a state-like entity, counters that everything is going to be fine—because ultimately he controls Facebook.

There are dozens more, but you can use search as well as I.

What can you do?

Thank Firefox for a simple containerized solution to Facebook’s tracking (stalking) you while surfing.

Facebook is (supposedly) making it easier to manage your privacy settings.

There are additional things you can do.

How to delete Facebook, but save your content.

The bad news is that even if you are willing to spend the effort, you can’t really delete yourself from social media.

All this has caused a rupture in techdom.

I could go on almost forever, but if you’re interested you’ll have no trouble finding more.

Image credit: weisunc

Educational Fraud

Tuesday, June 4th, 2019

Have you ever wondered how much smarter VCs, money managers, corporate CEOs, and the super wealthy really are? (They’re not.)

What “due diligence” actually involves? (Not what HP did.)

Do they really fall for scams and do stupid stuff like the rest of us? (Absolutely.)

CB Insights recently shared 17 Of The Biggest Startup Frauds Of All Time.

I found it hilarious (I have a warped sense of humor) and well worth reading.

Click the link (or save it for later) and all your questions will be answered.

Image credit: Beatnik Photos
