Saturday, May 17, 2014

Robot cars and the trolley dilemma


[Photo: Google's self-driving car]

Remember the trolley car ethical dilemma? It's ...

a thought experiment in ethics, first introduced by Philippa Foot in 1967 .... There is a runaway trolley barrelling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. Unfortunately, you notice that there is one person on the side track. You do not have the ability to operate the lever in a way that would cause the trolley to derail without loss of life (for example, holding the lever in an intermediate position so that the trolley goes between the two sets of tracks, or pulling the lever after the front wheels pass the switch, but before the rear wheels do). You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

A more fun version was shown on an episode of Stargate Atlantis :) ...



Today I saw an article about a modern version of this thought experiment ... the programming of autonomous cars.

The Robot Car of Tomorrow May Just Be Programmed to Hit You (Patrick Lin, Wired) ...

[I]magine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car? In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision .... it means striking the motorcyclist who’s wearing a helmet. A good algorithm would account for the much-higher statistical odds that the biker without a helmet would die, and surely killing someone is one of the worst things auto manufacturers desperately want to avoid.

But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person is much less responsible for not wearing a helmet, which is illegal in most U.S. states ....

Philosophers have been thinking about ethics for thousands of years, and we can apply that experience to robot cars. One classical dilemma, proposed by philosophers Philippa Foot and Judith Jarvis Thomson, is called the Trolley Problem .... This dilemma isn’t just a theoretical problem. Driverless trains today operate in many cities worldwide, including London, Paris, Tokyo, San Francisco, Chicago, New York City, and dozens more. As situational awareness improves with more advanced sensors, networking, and other technologies, a robot train might someday need to make such a decision.

Autonomous cars may face similar no-win scenarios too, and we would hope their operating programs would choose the lesser evil. But it would be an unreasonable act of faith to think that programming issues will sort themselves out without a deliberate discussion about ethics, such as which choices are better or worse than others. Is it better to save an adult or child? What about saving two (or three or ten) adults versus one child? We don’t like thinking about these uncomfortable and difficult choices, but programmers may have to do exactly that. Again, ethics by numbers alone seems naïve and incomplete; rights, duties, conflicting values, and other factors often come into play .....
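To make the article's point concrete, here is a minimal sketch (in Python, with made-up names and numbers throughout) of the "crash-optimization by the numbers" logic it describes: pick the swerve target with the lowest estimated chance of a fatality, and consider nothing else. This is the naive ethics-by-numbers approach the author argues is incomplete, not how any real car is actually programmed.

from dataclasses import dataclass

@dataclass
class Target:
    label: str
    p_death: float  # estimated probability the person dies if struck

def choose_target(targets):
    # "Ethics by numbers": minimize the expected chance of killing someone,
    # with no notion of fairness, responsibility, or rights.
    return min(targets, key=lambda t: t.p_death)

riders = [
    Target("motorcyclist with helmet", p_death=0.3),     # hypothetical odds
    Target("motorcyclist without helmet", p_death=0.8),  # hypothetical odds
]

print(choose_target(riders).label)  # -> motorcyclist with helmet

The algorithm dutifully swerves into the helmeted rider ... in other words, it penalizes the responsible party, which is exactly the injustice the article points out.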


I was really looking forward to robot cars because seeing-challenged me would finally be able to drive again ... now I'm not so sure. OK, I'm depressed and need to re-watch this video from a past post that ends with a glimpse of a self-driving car :) ...

