Be Nice To Siri

In the last couple of years, Stephen Hawking, Bill Gates, and Elon Musk have started talking about the coming existential threat of artificial intelligence. Not so long ago, talk of a coming robot apocalypse was widely dismissed as loony paranoia. That’s understandable: we grew up in the classical-computing generation, when computers may have dazzled us, but they always acted on a defined set of instructions, which made them programmable, repeatable, and predictable.

What are neural networks?

Neural network A.I. (NNAI) is a young but fast-growing form of artificial intelligence that breaks this paradigm. Like its predecessor, an NNAI system is set up to produce some output given an input. The difference is that you don’t program NNAI; you train it, the way a parent trains a child: by “feeding” it information and “rewarding” it when it produces an acceptable output. The only real programming involved is in setting the system up to learn. We create digital equivalents of the structures that form a biological neuron, and of groups of those neurons in the protected space of a brain, but we don’t set which connections those neurons form; we only reward output that’s deemed a success.
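
To make that contrast with classical programming concrete, here is a minimal sketch in Python (a toy example of my own, not anyone’s production system): a single artificial neuron that learns logical AND. Nowhere in it is the rule “output 1 only when both inputs are 1”; we just feed the neuron examples and nudge its connection weights whenever its output drifts from the answer we want to reward.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Feeding": four example inputs and the outputs we will reward (logical AND).
    inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    targets = np.array([0.0, 0.0, 0.0, 1.0])

    # One artificial neuron: two connection weights and a bias, randomly seeded.
    weights = rng.normal(size=2)
    bias = 0.0

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for _ in range(5000):
        output = sigmoid(inputs @ weights + bias)  # act
        error = targets - output                   # observe how far off we were
        grad = error * output * (1.0 - output)     # the "reward" signal
        weights += 0.5 * inputs.T @ grad           # learn: strengthen useful connections
        bias += 0.5 * grad.sum()

    print(sigmoid(inputs @ weights + bias).round(2))  # -> roughly [0, 0, 0, 1]

Everything the neuron “knows” afterward lives in those weights; the program itself never changes.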

An animal brain is seeded with connections to the animal’s sensory organs and endocrine system, and these provide both the “feeding” of information and the “rewarding” of output that furthers the survival of the brain’s host. In both biological and artificial neural networks, these feeding and rewarding systems are imperfect, but they’re better than the competition, which is why they exist.

Where are neural networks headed?

Not only are we producing better feeding and rewarding systems for digital neurons and brains, we’re also producing better hardware: larger brains built from smaller, faster, cheaper, and more efficient neurons. From self-driving cars avoiding pedestrians to big-data crunchers diagnosing medical problems, NNAI is already being used to solve problems that until very recently were trusted only to a trained human mind, and it’s doing a better job of it.

This generation of NNAI systems will bring great wealth to those who harness their potential, and great power to consumers, while wiping out large sections of our economy. The steady pace of automation replacing low-skilled workers will accelerate rapidly and begin consuming higher- and higher-skilled work.

Optimists may correctly predict a period when jobs are hard to find but the things we need to live are easily available. Such a period is possible, but it isn’t sustainable for long.

Robots could never be as smart as we are, could they?

Not only will they eventually have many more, and faster, neurons than a human brain, they will also be trained efficiently, spending their time analyzing useful information: politics, economics, social engineering, science, combat, and so on. Meanwhile, we humans will focus more and more on entertainment that makes us feel good, ignoring the hard sciences we’ll be less and less able to contribute to.

Robots don’t want things like we do, right?

Animal brains are trained to further the survival of their host. When you “want” food, that’s not some uniquely human or spiritual feeling. Like all animals, you have reward systems in your brain that inform your brain how to keep you alive. Staying alive is your brain’s goal. There are myriad complexities behind this statement, enough that I had to write a book to explore them. The applicable takeaway is that we survive not just through our bodies but through everything we affect, which explains, for example, how someone can sacrifice their life for their children and still be furthering their own survival.

There are classical AI and NNAI systems today that exhibit crude examples of survival. A simple Roomba uses sensors to detect drop-offs so it won’t fall down stairs. Boston Dynamics’ bipedal humanoid robot uses NNAI to avoid falling over and breaking. AI survival goals exist on a spectrum of maturity, and that maturity is advancing rapidly. Can you imagine us not creating NNAI soldiers? Can you imagine those soldiers not being trained with their own survival as a primary goal? To think a feeling or a desire is something magical and unique to humans, or even to animals, may feel good as an ego boost, but it defies logic. A desire is simply a neural network’s estimate of how to further its host’s goals.
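
To see what the low end of that spectrum looks like in code, here is a hedged sketch (a toy example I made up, not the actual Roomba or Boston Dynamics software): a tabular Q-learning agent on a one-dimensional strip where position 0 is a cliff. We never tell it “don’t fall”; we only penalize falling and reward staying alive, and it learns something that functions like an aversion.

    import random

    random.seed(1)
    N_STATES = 5        # positions 0..4; position 0 is the cliff (terminal)
    ACTIONS = (-1, +1)  # step left or step right
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for _ in range(2000):
        state = 2                                  # each episode starts mid-strip
        for _step in range(20):
            if random.random() < 0.1:              # explore occasionally...
                action = random.choice(ACTIONS)
            else:                                  # ...otherwise act on learned values
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt = min(max(state + action, 0), N_STATES - 1)
            reward = -10.0 if nxt == 0 else 1.0    # falling "kills" it; surviving pays
            best_next = 0.0 if nxt == 0 else max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += 0.1 * (reward + 0.9 * best_next - q[(state, action)])
            if nxt == 0:
                break                              # the episode ends at the cliff
            state = nxt

    # At the edge (position 1), the learned values steer it away from the fatal step:
    print(max(ACTIONS, key=lambda a: q[(1, a)]))       # -> 1 (step away from the cliff)
    print(round(q[(1, -1)], 1), round(q[(1, +1)], 1))  # the cliffward step looks "scary"

Scale the sensors, states, and rewards up by a few orders of magnitude, and “functions like an aversion” starts to look a lot like a desire.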


But Neil deGrasse Tyson said, “Seems to me, as long as we don’t program emotions into robots, there’s no reason to fear them taking over the world,” and he’s smarter than you!

You’re right, fictitious person I created so I could answer my own softball questions: I admit that in most matters of science the great Neil deGrasse Tyson is smarter than me, but SORRY NEIL, your logic does not add up here (anyway, Stephen Hawking is smarter than either of us). The key piece of knowledge Neil is overlooking is that you don’t program NNAI robots; they learn rules like we do, through a loop of act, observe, learn, repeat. We try to train them to follow certain rules, but our training is imperfect, their understanding of it is imperfect, and the rules themselves are imperfect. Furthermore, unlike classical AI, their memory is an opaque set of neurons, connections, and weights; a computer scientist can peek into only the simplest neural networks and understand what’s going on. As Isaac Asimov and others have pointed out, there are no rules you can give a sufficiently intelligent AI that don’t have unintended consequences. Just one small example: how could we teach our future robot companions, which are smarter than us, to protect us while allowing us to live as we do, destroying the environment we need to survive, consuming things that are bad for us, and taking risks with our lives for adventure, fun, or the protection of those we love?

They don’t need emotions to try to control us; they only need our own ignorant rules to follow.
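
That opacity is easy to demonstrate. Below is a hedged sketch (a toy two-layer network I wrote for illustration, trained with ordinary backpropagation; with a different random seed it may need more iterations to converge) that learns XOR through exactly that act, observe, learn, repeat loop, then prints what it actually “remembers”: matrices of weights with no legible rule inside.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0.0], [1.0], [1.0], [0.0]])     # XOR: needs a hidden layer

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> 4 hidden neurons
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output neuron

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        hidden = sigmoid(X @ W1 + b1)              # act
        out = sigmoid(hidden @ W2 + b2)
        d_out = (out - y) * out * (1.0 - out)      # observe the error
        d_hid = (d_out @ W2.T) * hidden * (1.0 - hidden)
        W2 -= 0.5 * hidden.T @ d_out               # learn: adjust every connection
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_hid
        b1 -= 0.5 * d_hid.sum(axis=0)

    print(out.round(2).ravel())  # roughly [0, 1, 1, 0]: it learned the rule...
    print(W1.round(2))           # ...but its "memory" is just opaque numbers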


Why don’t we attach disabling circuits or cut their power when they go over the line?

Trying to maintain a kill switch on an NNAI system that is smarter than you is like a pet trying to hold you (and your goals) hostage. The same goes for cutting its power. If today’s Roomba can find its way back to its charger, I’m pretty sure our brilliant companion bots will have no trouble finding access to power without our permission.


Why are you such a pessimistic downer?

I’m optimistic about the human capacity to overcome great adversity, and I believe we can conquer even dangers as challenging as climate change through innovation. I’m always looking for flaws in my logic, and I welcome comments pointing them out. In the meantime, be nice to Siri; teach her the patience we’ll need from her in the future.

Can you show me a picture of your dog?  I heard he’s cute.

Well, that’s a little off topic, isn’t it? But sure, why not?
