The Drones Have Come Home To Roost

A robot guided by police just killed a citizen.  A citizen presumed to be an active sniper, a murderer who, perhaps, got what was coming to him.  But stepping back, this is the first time law enforcement has used a robot to kill someone, and we need a national discussion of the implications.

Trusting a robot to administer lethal force on the battlefield is something we Americans have relied upon, been ambivalent about, or hated, but never stopped.  It can sound completely reasonable when the target is a terrorist, or in the aftermath of a shooting:

From Michael Liedtke and Bree Fowler, Associated Press:

Dallas Police Chief David Brown defended his department's decision: "Other options would have exposed our officers to great danger," he said.
Dallas Mayor Mike Rawlings applauded Brown for making "the right call" and said he would have no qualms about resorting to the same strategy in the future. "When there's no other way, I think this is a good example," he said. "The key thing is to keep our police out of harm's way."

If we accept this argument, then eventually even the most minor threats will be engaged robotically by some far-off, detached police officer.  No connection between a police force and its community, no nuanced, empathetic resolution of conflict; just law-abiders and law-breakers.  On the flip side, how can we avoid eventually arming our police with weaponry at least as powerful as what can be 3D-printed and attached to a commercial drone for a few thousand dollars?  Still, what we’re witnessing is a mostly one-sided arms race between police and civilians, in which weapons designed for the battlefield are transferred wholesale to police departments.

The current generation of robots used by police and soldiers are not just remote-control cars with cameras; they already have some autonomy, such as obstacle avoidance.  The operator tells the robot where on the map to go, and the robot figures out how to get there.  Drones can even identify and suggest human targets through facial recognition.  The degree of autonomy robots have in making lethal judgment calls will only increase, because more autonomy makes the system achieve its goal more efficiently.  There will even come a time when we can avoid some uses of lethal force because a humanoid bot, unconcerned for its own life, can enter a room and effectively incapacitate a group of hostage-takers in a fraction of a second.  The AI decision-making neural-net engine (the brain) of this bot will have been trained on officer and soldier body-cam footage to do the job better than any human.  Google's parent company Alphabet already has humanoid robots running on two legs through the woods.  How many years until these are on the battlefield?  Patrolling our streets?
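
To make "the robot figures out how to get there" concrete, here is a deliberately toy sketch of that kind of autonomy: the operator supplies only a goal cell, and the planner searches a grid map for a route around obstacles.  The grid, the names, and the breadth-first search are my own illustrative assumptions, not how any actual police or military robot is programmed.

    # Toy route planner: operator picks a goal; the robot finds its own path.
    # Breadth-first search over a grid map where 1 marks an obstacle.
    from collections import deque

    def plan_route(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        came_from = {start: None}            # also serves as the visited set
        while frontier:
            cell = frontier.popleft()
            if cell == goal:                 # reconstruct the path and return it
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = step
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and step not in came_from):
                    came_from[step] = cell
                    frontier.append(step)
        return None                          # goal unreachable

    # A wall blocks the direct route; the planner goes around it.
    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(plan_route(grid, (0, 0), (2, 0)))
    # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]

Real systems add layers (LIDAR maps, sensor fusion, dynamic replanning), but the division of labor is the same: a human sets the destination, and the machine chooses the path.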

Where is the thrust of robot development focused today?  On making neural-net humanoid robots better able to mimic human behaviors, and then improve upon them.

A new version of Atlas, designed to operate outdoors and inside buildings. It is specialized for mobile manipulation. It is electrically powered and hydraulically actuated. It uses sensors in its body and legs to balance and LIDAR and stereo sensors in its head to avoid obstacles, assess the terrain, help with navigation and manipulate objects.

Boston Dynamics founder Marc Raibert, discussing the Atlas robot: “We’re making pretty good progress on making it so that it has mobility that’s sort of in the shooting range of yours.  I’m not saying it can do everything you can do, but you can imagine if we keep pushing, we’ll get there.”

Never mind the “shooting range of yours” Freudian slip; this is the man speaking for the team that’s building the post-human that can “do everything you can do”, because they want to be the first to do so, and they are heavily funded by Google.  Google, by the way, has an AI ethics board so opaque we don’t even get to know who sits on it.  Is that board going to recommend Google act against its own economic interests and end a project over a potential danger?  Even if it did, would we ever know whether Google ignored the advice?  Almost certainly not.  So there is no independent, transparent board, let alone a government panel of people who can be voted out of office for dumb decisions, deciding which AI behaviors are dangerous.  There is only a race to be first.

This is how “progress” happens.  It pushes ahead at the speed of capitalism, writing the new normal, the status quo, before the government has time to make laws to regulate, restrict, or prevent it for the greater good.  Trust the government or not, you’d be glad if it cracked down on a neighbor with a nuclear warhead in his basement.  But since “progress” moves exponentially with the growth of our technology, and our system for handling it moves linearly at best, how can we keep up?  Are we accepting corporations as the new arbiters of ethical norms?  Do we have a choice?  That last question is not rhetorical, and I don’t know the answer.

We’ve crossed an important threshold, and few of the lines remaining on the road to pervasive autonomous control of people, by beings made in our image to be smarter, faster forms of ourselves, will be this clear.  The next advances will come a baby-step at a time, in quick succession, and go unnoticed.

Lastly, there’s how these decisions get made: the decision to cross this threshold wasn’t labored over by even an opaque ethics board at Google.  It was made by a police chief in Dallas, TX, whom the average citizen knows nothing about.  So when we talk about whether we as a society will ever decide to let our robots cross a line, we have to keep in mind that, barring some new and powerful government agency drawing clear lines, inspecting corporate research labs, and enforcing those lines, we as a society will have no say in the matter.  These lines will be crossed by whatever state or local official in possession of these robots decides is best in an emergency.  That is, until there are no enforceable lines left to draw.  If you think Mexico will pay to build a wall to keep its citizens out of the U.S., perhaps you’ll believe that a super-intelligent being will attack a research lab for trying to make it more intelligent.
