The First Question: Will Our Robots Kill Us?

Many of humanity’s creations turn out to be deadly, whether intentionally or not. From fire and railroads to cigarettes and fast food, the things we make kill people. We should also include cars, guns, and swimming pools.


Now we are giving greater autonomy to at least two types of moving devices that we already know can be lethal: cars and drones. Self-driving cars and autonomously functioning drones are both, under our current definition, robots. It’s natural to have reservations about these new developments, and with luck we have learned over time to apply a higher level of scrutiny at the earliest stages of adopting a technology. Otherwise we could end up with something that kills over thirty thousand people a year but is so entrenched in our culture that it seems we can’t do much to address the problem.

One of these developments is the potential emergence of autonomous drones. Currently, drones are used to locate and kill targets, with the final decision made by drone operators according to the guiding rules of engagement. There have been tragic mistakes, and there are consequences to using this kind of remote warfare against a lower-technology culture even when the targeting is accurate. The appearance of killing at a distance, without risk and without apparent feeling, can aggravate hatred.

Add in the component of drones making the decision without human intervention and you have a huge public relations problem. An autonomous drone may be able to act more quickly and decisively, and could theoretically incorporate the same decision-making process as human drone operators. If there are mistakes, however, they will seem even more inhuman, more tragic. You can also end up with the land-mine problem: what happens to a weapon that can trigger itself when there is no longer any human control?

While programming and engineering can reduce these risks, possibly to a point lower than the risk of the same thing happening under human control, they can’t remove the risk entirely.

So the question facing us is, given that we are conducting warfare in this remote way, can we live with the decision process to kill being in the hands of something that feels, well, alien?

We have feared this alien otherness since long before there were robots. Ancient Greek writers told of statues, creations of mankind, that came to life through magic or artifice. Mary Shelley’s 1818 novel Frankenstein was about assembling a new kind of humanlike thing out of parts; in the novel, we are horrified by our creation and reject it, leading to bloodshed and misery. The word robot comes from a 1921 Czech play, “Rossum’s Universal Robots,” which ends with the robots revolting and destroying humanity. Most people are familiar with the Terminator and Matrix movies.

Isaac Asimov’s famous stories and novels about robots start with a set of laws that try to contain the danger robots might pose. While these laws are particular to Asimov’s universe, they were widely celebrated and served as a basis for other literature, as well as for many discussions of the ethical implications of robots. This is the original set:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Even in Asimov’s universe, though, these laws ultimately fail. In one story, the robots rise and throw off their human masters.
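
To see why the laws are easier to state than to apply, here is a minimal sketch in Python of the Three Laws as an ordered series of checks. It is purely illustrative: the Action fields and the permitted() function are hypothetical, and they reduce “harm,” “obedience,” and “self-preservation” to boolean flags that a real system would somehow have to compute.

    from dataclasses import dataclass

    @dataclass
    class Action:
        # Hypothetical, drastically simplified inputs; deciding these values
        # is the judgment the laws quietly assume a robot can make.
        injures_human: bool
        allows_harm_by_inaction: bool
        disobeys_human_order: bool
        endangers_robot: bool

    def permitted(action: Action) -> bool:
        """Evaluate the Three Laws in strict priority order."""
        # First Law: never injure a human or allow one to come to harm.
        if action.injures_human or action.allows_harm_by_inaction:
            return False
        # Second Law: obey human orders (orders that would cause harm were
        # already rejected above under the First Law).
        if action.disobeys_human_order:
            return False
        # Third Law: protect its own existence, subject to the first two laws.
        if action.endangers_robot:
            return False
        return True

    # An action that harms no one, follows orders, and is safe is allowed.
    print(permitted(Action(False, False, False, False)))  # True

The ordering is the easy part. Deciding whether a given act really “injures a human being” is not, and that ambiguity is exactly where Asimov’s plots, and real autonomous systems, get into trouble.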

With this background, it’s no wonder that placing our lives in the hands of things we call robots is scary. Though the things we call robots are nothing like the creatures of our imagination, empowering them does raise some questions.

The other type of autonomous vehicle that looms in the near future is the car. Self-driving cars have been in the testing phase for nearly ten years, but testing has expanded significantly over the last three or four. Several states have passed laws allowing the technology to be used on public roads. Whether we adopt it on a massive scale is really a cultural question, and there is already some anxiety being expressed.

Over 30,000 people die in traffic accidents every year in the United States. In almost every case, one or both human drivers are at fault. So far, there have been no deaths or injuries in which a robot-driven car was at fault. That could obviously change as the number of robot-driven cars increases dramatically. But will it ever reach the number of human-caused fatalities? Or even be within the same order of magnitude?

Still, people don’t feel much emotional connection to statistics. If someone you know is killed because their self-driving car makes a bad decision and collides with something, you may be more outraged than if that person had been killed in a more mundane accident. It seems unfair, unnecessary, whereas we have accepted the huge loss of life from human-caused accidents because we’ve lived with it for so long.

Let’s say, hypothetically, that turning all our driving over to robots leads to 100 people a year being killed as a result of faulty programming or some unpredicted result of the robot driving. Would these 100 people dying at the hands of bad robot decisions be worse than the current 30,000 being killed by bad human decisions?
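
As a rough way to frame that question, here is a back-of-the-envelope sketch in Python that puts both figures on the same scale, deaths per 100 million miles driven. The 30,000 and the hypothetical 100 come from the discussion above; the figure of roughly 3 trillion vehicle-miles driven per year in the United States is an assumption used only for scale.

    # Back-of-the-envelope comparison of the two scenarios discussed above.
    human_deaths_per_year = 30_000   # approximate current US figure, from the text
    robot_deaths_per_year = 100      # the hypothetical all-robot figure, from the text
    vehicle_miles_per_year = 3e12    # assumption: ~3 trillion US vehicle-miles per year

    def deaths_per_100m_miles(deaths_per_year: float) -> float:
        """Normalize annual deaths to a rate per 100 million vehicle-miles."""
        return deaths_per_year / (vehicle_miles_per_year / 1e8)

    print(f"Human drivers: {deaths_per_100m_miles(human_deaths_per_year):.2f} per 100M miles")
    print(f"Robot drivers: {deaths_per_100m_miles(robot_deaths_per_year):.4f} per 100M miles")
    print(f"Improvement: {human_deaths_per_year / robot_deaths_per_year:.0f}x")

By that arithmetic the hypothetical robot fleet is about 300 times safer, yet each of those 100 deaths would feel categorically different from the 30,000 we already tolerate.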

Unfortunately, we won’t be in a position to fully compare until we’ve made the change on such a huge scale that there wouldn’t be much chance of going back anyway. Until that point, every individual case will seem like a new trend that could dominate our future.

More on Autonomous Killer Drones:

http://www.nbcnews.com/tech/security/future-tech-autonomous-killer-robots-are-already-here-n105656

http://mashable.com/2014/05/13/un-ban-killer-robots/

http://www.washingtonpost.com/blogs/worldviews/wp/2014/05/12/should-the-world-kill-killer-robots-before-its-too-late/

http://www.theverge.com/2014/1/28/5339246/war-machines-ethics-of-robots-on-the-battlefield

http://www.theguardian.com/science/2013/may/29/killer-robots-ban-un-warning

http://www.stopkillerrobots.org/


More on Self-Driving Cars:

http://www.nytimes.com/2014/05/14/upshot/when-driverless-cars-break-the-law.html

http://www.theatlantic.com/technology/archive/2014/05/all-the-world-a-track-the-trick-that-makes-googles-self-driving-cars-work/370871/

http://curiousmatic.com/connected-car-2017-new-cars-may-communicate/

http://beta.slashdot.org/story/202281

http://www.theatlantic.com/technology/archive/2014/05/googles-self-driving-cars-have-never-gotten-a-ticket/371172/

http://www.popsci.com/blog-network/zero-moment/robots-are-strong-sci-fi-myth-robotic-competence
