Golden Oldies: KG on AI and Its Implications
Monday, June 24th, 2019

Poking through 13+ years of posts, I find information that's as useful now as when it was written.
Golden Oldies is a collection of the most relevant and timeless posts from those years.
KG wrote this five years ago and, sadly, many of the concerns he raised are now coming true. AI bias is rampant and, as usual with emerging tech, most people don't know about, understand, or care about the danger it represents.
Read other Golden Oldies here.
A few months ago I read the book Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. It was a tremendously interesting book and confirmed many of the concerns I've had about my own industry for some time. Since then, there has been a slate of articles questioning AI and how the industry is progressing.
One of the book's premises is that we need to take a step back and think about the moral and ethical basis of what we're doing, and about what we're imparting to these machines and how.
I believe it will be difficult, or impossible, for the AI industry to change direction midstream and start concerning itself with morality and ethics. Most AI funding comes from DARPA and similar institutions that are part of the military and its affiliated organizations. Finance is the second-largest funding source.
Most of the people who are concerned about AI (including James Barrat) worry about the moment machines gain human-level intelligence. I am much more concerned about what happens before that. Today it is said that the most sophisticated AI has the intelligence of a cockroach. This is no small feat, but it also carries clear implications: cockroaches have important drives and instincts that guide their behavior. Survival, resource acquisition, reproduction, and so on are all things cockroaches pursue. How far are we from the point when our AIs exhibit these characteristics? What about when we get to rat-level intelligence?
At that point machines will be very powerful and will control many of the essential functions of society. Imagine a frightened rat (or a toddler) with infinite power: what actions would it take to protect itself or get what it perceives it wants or needs? How would it react if we stood in its way? How concerned would it be with the consequences of its actions? Even most adults don't weigh consequences carefully today.
Before we achieve human-level intelligence in machines, we'll have to deal with less intelligent, and probably more dangerous, powerful entities. More dangerous because they will not have the knowledge or processing power to think through consequences, and because they will be controlling our cars, airplanes, electrical grids, public transportation, and many other critical systems.
Most AI optimists ignore the dangerous "lower mammal, toddler, and childhood" stages of AI development and see only the potential benefits at the end. But we need to think about the path there and about what we can do to prepare, as individuals and as a society.
Not to mention that once AI reaches human-level intelligence, we'll be dealing with an intelligence utterly alien to anything we know (after all, we have plenty of experience with cockroaches, rats, and toddlers, but none with machine minds), with no way of knowing what its motives are. But that will be left for another discussion.