A pile of recent studies have used the Trolley Problem to think about the ethics of autonomous cars, and (I learned via Axios) a review has found them to be as reliable as a Ford Pinto. They mischaracterise the decisions facing drivers; they misrepresent how AI works; and they stoke public distrust of autonomous vehicles for no good reason.
That reminded me why I loathe the Trolley Problem in general, beyond its failings in this particular case. It’s not just because it’s a cliché, though it certainly is one, and anyone who’s done more than one ethics course will feel at home in The Good Place, where the Trolley Problem is used as a form of torture.
There are many variations on the Problem, but in the most common scenario it isn’t just about choosing between the death of one person or five. You ask the class to choose what they’d do (about 80% of people pull the lever so that one person dies) and then you introduce a scenario where there’s no lever, but instead you have to push a fat man off a bridge into the path of the trolley to make it stop. On a show of hands, most people choose not to push the guy, and a-ha, isn’t that morally equivalent to pulling the lever, though? Did I just blow your mind?
Something never smelled right about that, and on the fourth or fifth time around, I finally realised what it was: conservation of momentum. With some basic assumptions about the mass and speed of the trolley, a few seconds’ thought reveals that the fat man must weigh more than 50,000 tonnes.
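For the sceptical, here’s a back-of-envelope sketch of that momentum arithmetic. The trolley’s mass and speed, and the threshold for “effectively stopped”, are assumed numbers of my own, not anything from the original scenario:

```python
# Perfectly inelastic collision: the man is carried along by the trolley,
# and combined momentum is conserved. Assumed figures: 10-tonne trolley
# moving at 15 m/s; "stopped" means under a few millimetres per second.

def speed_after_impact(trolley_kg, trolley_mps, obstacle_kg):
    """Post-impact speed when the obstacle sticks to the trolley."""
    return trolley_kg * trolley_mps / (trolley_kg + obstacle_kg)

def mass_to_slow_below(trolley_kg, trolley_mps, target_mps):
    """Obstacle mass needed to bring the post-impact speed below target_mps."""
    return trolley_kg * (trolley_mps / target_mps - 1)

trolley_kg, trolley_mps = 10_000, 15.0

# A 150 kg man barely dents the trolley's speed (about 1.5% slower):
print(speed_after_impact(trolley_kg, trolley_mps, 150))  # ~14.8 m/s

# To stop the trolley essentially dead in its tracks (under 3 mm/s):
print(mass_to_slow_below(trolley_kg, trolley_mps, 0.003) / 1000)  # ~50,000 tonnes
```

Tweak the assumptions however you like; the conclusion survives any plausible choice, because a human-sized body simply carries a negligible fraction of the trolley’s momentum.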
That sounds facetious, or like deliberately missing the point, but I'm quite serious. There’s nothing wrong with a thought experiment, even one with unusual premises, provided that it involves thought. But the Trolley Problem only makes sense if you’re willing to suspend your disbelief to an extraordinary degree, and completely accept the framing you’ve been given, so that you can focus on the point it’s trying to make. In other words, the Trolley Problem trains you not to ask questions.
There’s a word for that style of teaching: unethical.
Consider another thought experiment, the Ticking Time Bomb, which showed up all the time during the torture debates 15 years ago. In that scenario, you have a terrorist in custody who knows all about a time bomb that will shortly explode. Do you torture them to find out the location?
In reality, torture doesn’t work like that, for many reasons. The most pertinent one is this: it’s very difficult, even for trained professionals, to notice when someone breaks, and therefore to sort out the truth from lies and gibberish. Beyond that, what makes us so sure that this person is a terrorist, and that they know the information we need? Why are we so confident that they’re guilty? Again, we’re encouraged not to consider that. To take the scenario seriously means taking for granted that the person in custody is guilty—without any proof beyond what someone told us—and imagining that we can choose to apply torture and have the answer to our question provided like popping a cork. This kind of self-flattering pseudo-toughness was ubiquitous during the War On Terror, and is responsible for trash like Zero Dark Thirty.
Ethics classes should draw from movies and TV instead. Moral dilemmas in fiction always come with a rich context, which can be unpacked, discussed, and debated. Should Ingrid Bergman have stayed with the love of her life in Casablanca, or returned to fight for the Resistance? After Benedict Cumberbatch cracked the Enigma code, should the Allies have kept their knowledge secret, or saved the people they knew were about to be bombed? And so on.
Those exercises don’t have the veneer of rigour that pseudo-intellectual thought exercises like the Trolley Problem do. But at least they have a decent chance of teaching students to think critically, and ethically.
Background image credit: Streetcar.org