Trolley

In the world of ethics and moral philosophy, one of the most venerable thought experiments is the so-called ‘trolley problem,’ the most basic version of which is:

A runaway trolley is careening down a railway track. Up ahead, five people are tied to the tracks, unable to move. Next to you is a lever; if you pull it, the trolley will switch to a different track. You notice that one person is standing on that second track. Is it more ethical for you to do nothing, and let the trolley kill the five people on the main track, or to pull the lever, and send the trolley over to kill one person instead?

Over the last fifty years, philosophers have debated the implications of the problem, complicated the question with a huge number of incrementally muddier variants, explored the neurobiology of how our brains consider such a choice, and polled vast swaths of respondents about what they might decide if faced with the (original or muddier) situation in real life.

But until recently, the thought experiment remained largely academic. In the past few years, however, with the rise of self-driving cars, it has moved very much into the realm of practical concern.

While human drivers react too slowly to reason through hard choices in the moments before an accident, an artificially intelligent computer driver would have plenty of processor cycles to more fully consider its actions. Should it swerve your car away from a kid in the street to instead hit an older adult? How about away from that kid and into two older adults? Or away from that kid and into a concrete wall, even if it killed you, the driver, in the process?

Of course, technology tends to far outpace legislation, and on the rare occasions when we do legislate quickly around emerging technologies, the ‘solutions’ we bake into law often create problems far worse than the ones we intended to solve. So, for the near term, I suspect we’ll be living in a world where private companies get to determine the ‘right’ answers to various trolley-problem scenarios.

Which means, by basic game theory, that car companies will all default to solutions that save the driver, no matter what. (Consider choosing between two cars you might purchase: one has an ‘ethical’ decision algorithm that might kill you, while the other has a more selfish algorithm that will always save your own ass; even though it may take some rationalization about why you’re not a jerk for doing so, you’re buying that second, selfishly programmed car.)
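To make that game-theoretic intuition a little more concrete, here’s a minimal sketch in Python. The survival probabilities and the two algorithm names are entirely made up for illustration; the only point is that, whatever the other manufacturers ship, each individual buyer maximizes their own odds by picking the driver-first car, so driver-first is the dominant strategy every manufacturer converges on.

```python
# Toy illustration of why 'protect the driver' wins the market.
# All numbers are invented; the structure of the choice is what matters.

from dataclasses import dataclass

@dataclass
class CarAlgorithm:
    name: str
    p_driver_survives: float      # hypothetical odds for you, the buyer
    p_bystanders_survive: float   # hypothetical odds for everyone else

ethical = CarAlgorithm("utilitarian / 'ethical'",
                       p_driver_survives=0.90, p_bystanders_survive=0.99)
selfish = CarAlgorithm("driver-first / 'selfish'",
                       p_driver_survives=0.99, p_bystanders_survive=0.90)

def buyer_choice(options):
    # Each buyer maximizes their own survival odds, regardless of what
    # anyone else buys -- which is what makes 'driver-first' dominant.
    return max(options, key=lambda car: car.p_driver_survives)

print("The buyer picks:", buyer_choice([ethical, selfish]).name)
# -> The buyer picks: driver-first / 'selfish'
```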

That’s why we shouldn’t be surprised by a story in this month’s Car and Driver about Mercedes’ self-driving car plans, in which Mercedes became the first major manufacturer to stake out an explicitly driver-first position. As the company’s Manager of Driverless Car Safety explains:

If you know you can save at least one person, at least save that one. Save the one in the car. If all you know for sure is that one death can be prevented, then that’s your first priority.

So there you have it. As more than a handful of wags have pointed out in the days since, it’s kind of nice to know that an AI Mercedes driver will be just as much of a douchebag as a human Mercedes driver.

Going forward, however, I suspect we’ll be hearing more and more about the trolley problem, and about the countless other related and equally hard situations in which we task AIs with weighing the relative value of human lives and well-being in their decisions.

Perhaps, a few years down the road, we’ll be legislating about it, too. Though I’m not too bullish on that kind of legislation having a broad impact. Given how hard people work to crack the DRM on DVDs just to avoid paying $3 rental fees, I can only imagine the black market of car upgrades that would emerge if a hack were all it took to convert your government-mandated ‘ethical’ smart car into an ‘always put me first, no matter what’ machine.

But perhaps the inevitable popularity of that kind of hack should be comforting; whatever our differences, at the end of the day, it seems we’re all just Mercedes drivers at heart.