The Trolley Problem: Dodging moral questions

The trolley problem is one of the more famous thought experiments in moral philosophy, and studies by psychologists and anthropologists suggest that the response distributions to its major permutations remain roughly the same across human cultures. Most people will permit pulling the lever to redirect the trolley so that it will kill one person rather than five, but will balk at pushing one fat person in front of the trolley to save the five if that is the only available means of stopping it.

However, in informal settings, where the dilemma is posed by a peer rather than a teacher or researcher, it has been my observation that there is another major category which accounts for a significant proportion of respondents' answers. Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright. They will attack the improbability of the premise, attempt to invent third options, or appeal to their emotional state in the provided scenario ("I would be too panicked to do anything"), or some combination of the above, in order to opt out of answering the question on its own terms.

However, in most cases, these excuses are not their true rejection. Those who tried to find third options or appeal to their emotional state will continue to reject the dilemma even when it is posed in its most inconvenient possible forms, where they have the time to collect themselves and make a reasoned choice, but no possibility of implementing alternative solutions.

Those who appealed to the unlikelihood of the scenario might appear to have the stronger objection; after all, the trolley dilemma is extremely improbable, and more inconvenient permutations of the problem might appear even less probable. However, trolley-like dilemmas are actually quite common in real life, if you take the scenario not as a case where only two options are available, but as a metaphor for any situation where all the available choices have negative repercussions, and attempting to optimize the outcome demands increased complicity in the dilemma. This method of framing the problem also tends not to cause people to reverse their rejections.

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms. They will insist that there must be superior alternatives, that external circumstances will absolve them from having to make a choice, or simply that they have no responsibility to address an artificial moral dilemma.

When the respondents feel that they can possibly opt out of answering the question, the implications of the trolley problem become even more unnerving than the results from past studies suggest. It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all. They have placed themselves in a reality too accommodating of their preferences to force them to have a system for dealing with situations with no ideal outcomes.

Comments


It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all.

Er, you're attaching too much value to hypothetical philosophical questions.

I'd have thought it obvious that they're dodging the question so as to avoid the possibility of the answer being taken out of context and used against them. Lose-lose counterfactuals are usually used for entrapment. This is a common form of hazing amongst schoolchildren and toward politicians, after all, so it's a non-zero possibility in the real world. It's the one real-world purpose contrived questions are applied to.

tl;dr: you have not given them sufficient reason to care about contrived trolley problems.

Er, you're overestimating how much value the other person attaches to hypothetical philosophical questions.

FTFY

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms. They will insist that there must be superior alternatives, that external circumstances will absolve them from having to make a choice, or simply that they have no responsibility to address an artificial moral dilemma.

Of course we do. It would be crazy to answer such a question in a social setting if there is any possibility of avoiding it. Social adversaries will take your answer out of context and spin it to make you look bad. Honesty is not the best policy and answering such questions is nearly universally an irrational decision. Even when the questions are answered the responses should not be considered to have a significant correlation to actual behaviour.

I think I have a more plausible suggestion than the "spin it to make you look bad" theory.

Think evolutionarily.

It absolutely sucks to be a psycho serial killer in public, if you are into making friends and acquaintances and likely to be a grandpa.

It sucks less to show that you would kill someone, especially if you would be the direct agent of the death.

It sucks less to show that you would only kill someone by omission, but not by action.

It sucks less if you show that your brain is so well tuned against killing people that you (truly) react with disgust even at the thought of doing it.

This is the woman I want to have a child with, the one that is not willing to say she would kill under any circumstance.

Now, you may say that in every case I simply ignored what would happen to the five other people (the skinny ones). To which I say that your brain processes both pieces of information separately, "me killing fat guy" and "people being saved by my action", and you only need the first half to trigger all the emotions of "no way I'd kill that fat guy".

Is this a nice evolutionary story that explains a fact with hindsight? Oh yes indeed.

But what really matters is that you compare this theory with the "distortion" theory that many comments suggested. Admit it, only people who enjoy chatting rationally in a blog think it so important that their arguments will be distorted. Common folks just feel bad about killing fat guys.

I'd actually argue that social signaling is probably more important to "common folk" than to a lot of the people here. Specifically, the old post "Why Nerds are Unpopular" (http://www.paulgraham.com/nerds.html) comes to mind. I'm entirely willing to say "I'm willing to kill", because I value truth above social signaling.

It also occurs to me that a big factor in my answer is that my social circle is full of people that I trust not to distort or misapply my answer. Put me in a sufficiently different social circle and eventually my "survival instincts" will get me to opt out of the problem as an excuse to avoid negative signaling.

If I just really didn't want to kill the fat guy, it'd be much easier to say "oh, goodness, I could never kill someone like that!" rather than opting out of answering by playing to the absurdity of the scenario.

The purpose of thought experiments and other forms of simulation is to teach us to do better in real life. Obviously, no simulation can be perfectly faithful to real life. But if a given simulation is not merely imperfect but actively misleading, such that training in the simulation will make your real performance worse, then rejecting the simulation is a perfectly rational thing to do.

In real life, if you think the greater good requires you to do evil, you are probably wrong. Therefore, given a thought experiment in which the greater good really does require you to do evil, rejecting the thought experiment on the grounds of being worse than useless for training purposes, is a correct answer.

The purpose of thought experiments and other forms of simulation is to teach us to do better in real life.

Not at all. That's way too broad a claim and definitely not the case for the trolley problem. The purpose of the trolley problem is to isolate and identify people's moral intuitions.

Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright.

Counterfactual resistance is pretty common with all thought experiments; indeed it is the bane of undergraduate philosophy professors everywhere. We have no evidence that resistance is more common in ethical thought experiments, or the trolley problem particularly, than in thought experiments in other subfields: brain-in-vat hypotheticals, brain-transplant/hemisphere-transplant cases, teleportation, Frankfurt cases, etc. Which is to say, most of this post is in need of citations. Maybe people just don't like convoluted thought experiments! I'm not even sure it's the case that many people do refuse to answer the question; how many instances could you possibly be basing this judgment on?

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms.

How do you know this? I'm not demanding p-values but you haven't given us a lot to go on.

I've used the trolley problem a lot, at first to show off my knowledge of moral philosophy, but later, when I realized anyone who knows any philosophy has already heard it, to shock friends that think they have a perfect and internally consistent moral system worked out. But I add a twist, which I stole from an episode of Radiolab (which got it from the last episode of MASH), that I think makes it a lot more effective; say you're the mother of a baby in a village in Vietnam, and you're hiding with the rest of the village from the Viet Cong. Your baby starts to cry, and you know if it does they'll find you and kill the whole village. But, you could smother the baby (your baby!) and save everyone else. The size of the village can be adjusted up or down to hammer in the point. Crucially, I lie at first and say this is an actual historical event that really happened.

I usually save this one for people who smugly answer both trolley questions with "they're the same, of course I'd kill one to save 5 in each case", but it's also remarkably effective at dispelling objections of implausibility and rejection of the experiment. I'm not sure why this works so well, but I think our bias toward narratives we can place ourselves in helps. Almost everyone at this point says they think they should kill the baby, but they just don't think they could, to which I respond "Doesn't the world make more sense when you realize you value thousands of complex things in a fuzzy and inconsistent manner?". Unfortunately, I have yet to make friends with any true psychopaths. I'd be interested to hear their responses.

This is only equivalent to a trolley problem if you specify that the baby (but no one else) would be spared, should the Viet Cong find you. Otherwise, the baby is going to die anyway, unlike the lone person on the second trolley track who may live if you don't flip the switch.

You could hack that in easily; surely most soldiers have qualms about killing babies.

Unfortunately, I have yet to make friends with any true psychopaths. I'd be interested to hear their responses.

They would say the same thing only with more sincerity.

I immediately thought, "Kill the baby." No hesitation.

I happen to agree with you on morality being fuzzy and inconsistent. I'm definitely not a utilitarian. I don't approve of policies of torture, for example. It's just that the village obviously matters more than a goddamn baby. The trolley problem, being more abstract, is more confusing to me.

The answer that almost everyone gives seems to be very sensible. After all, "What do I believe I would actually do?" and "What do I think I should do?" are different questions. Obviously, self-modifying until these answers coincide in as many scenarios as possible is probably a good thing, but that doesn't mean such self-modification is easy.

Most mothers would simply be incapable of doing such a thing. If they could press a button to kill their baby, more would probably do so, just as more people would flip a switch to kill than push in front of a train.

You obviously should kill the baby, but it is much more difficult to honestly say you would kill a baby than flip a switch: the distinction is not one of morality but courage.

As a side note, I prefer the trolley-problem modification where you can have an innocent, healthy young traveler killed in order to save 5 people in need of organs. Saying "fat man", at least for me, obfuscates the moral dilemma and makes it somewhat easier.

"Remember, you can't be wrong unless you take a position. Don't fall into that trap." - Scott Adams

I get frustrated by this every time someone mentions the classic short story The Cold Equations (full text here). The premise of the story is a classic trolley problem (...In Space!), where a small spaceship carrying much-needed medical supplies gets a stowaway, which throws off its mass calculations. If the stowaway is not ejected into space, the ship will crash and the people on the planet will die of a plague. So the (innocent, lovable) stowaway is killed and ejected, and the day is saved. The end.

Whenever this comes up, somebody will attack the story as contrived, pointing out that it could have been prevented by some "Keep Out" signs and a few more door locks. This is usually treated as an excuse to dismiss the premise of the story entirely -- exactly what you describe as a common reaction to maximally inconvenient trolley problems.

(By the way, I searched on Less Wrong for previous discussions of The Cold Equations, and was pleasantly surprised that people around here seem much less inclined to use the story's plot holes as an excuse to dismiss the whole idea. The nits still get picked, but not to a facepalm-worthy extent.)

When you're writing an actual story, I feel like you have to maintain higher standards for plausibility than when you're writing a straight moral dilemma. I only know The Cold Equations by its reputation, but I can certainly understand how that sort of contrivance could hurt it on a literary level.

Morality is in some ways a harder problem than friendly AI. On the plus side, humans that don't control nuclear weapons aren't that powerful. On the minus side, morality has to run at the level of 7 billion single instances of a person who may have bad information.

So it needs to have heuristics that are robust against incomplete information. There's definitely an evolutionary just-so story about the penalty of publically committing to a risky action. But even without the evolutionary social risk, there is a moral risk to permitting an interventionist murder when you aren't all-knowing.

This looks just like the Bayesian 101 example of a medical test that is 99% accurate for a disease that has a 1% occurrence rate. If you say that I'm in a very rare situation that requires me to commit murder, I have to assume that there are going to be many more situations that could be mistaken for this one. The "least convenient universe" story is tantalizing, but I think it leads astray here.
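For readers who haven't seen that Bayesian 101 example worked through, here is a minimal sketch. It assumes (my reading, not the commenter's exact words) that "99% accurate" means both 99% sensitivity and 99% specificity, and the `posterior` function name is mine:

```python
def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity                 # P(positive and diseased)
    false_pos = (1 - prior) * (1 - specificity)    # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# Disease with 1% occurrence, test 99% accurate in both directions:
print(posterior(0.01, 0.99, 0.99))  # 0.5
```

Even with a 99% accurate test, a positive result gives only even odds of disease, because healthy people vastly outnumber sick ones. The analogy: even if your judgment "this situation requires murder" is highly reliable, genuine trolley-like situations are so rare that most apparent instances will be false positives.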

Having posted lots in this thread about excellent reasons not to answer the question, I shall now pretend to be one of the students that frustrates Desrtopa so and answer. Thus cutting myself off from becoming Prime Minister, but oh well.

The key to the problem is: I don't actually know or care about any of these people. So the question is answered in terms of the consequences (legal and social) to me, not to them.

e.g. in real life, action with a negative consequence tends to attract greater penalties than lack of action. So pushing one in front to save five is right out. Actively switching to kill one instead of leaving the switch to five, that one would be tricky - I might feel it was a less bad response and hence do it, despite possible penalties for having dared take an action instead of just floundering. (There, an actual answer.)

If I actually know and like any of these people, the problem gets more complicated. If all the friends are on one branch, they win, everyone else loses. If there's options of which friends I kill (and that phrase popped into my head as "which friends I kill" rather than "which friends die" - I seem not to be shirking responsibility), then I have some tricky calculation to do.

Whatever happens, I do expect I would be extremely upset and not fully functional for a little while afterwards.

There. Is that enough not to fall at the first hurdle in Philosophy 100?

An implicit assertion underlying this post seems to be that the sorts of people who answer trolley problems rather than dodge them are more likely to take action effectively in situations that require doing harm in order to minimize harm.

Or am I misunderstanding you?

If you are implying that: why do you believe that?

even when it is posed in its most inconvenient possible forms, where they have the time to collect themselves and make a reasoned choice, but no possibility of implementing alternative solutions.

I am a conscientious "third-alternativer" on trolley problems, and to me this seems like an abuse of the least convenient possible world principle. If there is a world with no possibility of implementing alternative solutions, I will pick the outcome with the best consequences, but I don't believe there actually is a world with no possibility of alternatives - I reject the "possible" part of your least convenient possible world.

It would be like arguing to an atheist: "The least convenient possible world is one where the Christian God exists with probability 1."

I am an atheist, and I have no problem answering questions of the type "if creationism were true, would you support its teaching in schools?" or "if the Christian God exists, would you pray every day?" (both answers are yes, if that matters). What's the problem with those hypotheticals? The questions are well formed, and although they are useless in the sense that their premise is almost certainly false, the answers can still reveal something about my psychology. I don't think answering such questions would turn me into a creationist.

The top 10% of humanity accumulates 30% of the world's wealth. 20% of humanity dies from preventable, premature death (and suffers horribly).

The proposition...

10% of the top 10% have all their wealth taken from them (by a lottery selection process). They are forced to work as hard and effectively as they had previously, and are given only enough of the profits they produce to live modestly. After losing everything and working for 5 years, they receive 10% of their original wealth back, and the next 10% of the top 10% is selected. The wealth taken is used to ensure the survival of the 20% dying from preventable premature death.

In this scenario, 1% of people are forced to live modestly in order to save up to 20% of humanity. No one need kill or be killed.

It would probably be reasonable to say the top 20% of earners would be against this proposal, while the majority of the bottom 40% would be in favour. If you're reading this, you are likely one of the other 40% of humankind, who can choose to support or reject the proposal. What would you say?

I am aware there are many holes in the proposition (unintended consequences etc) however this is a hypothetical that is based on a real situation that exists now that we are all contributing to in one way or another.

There is a major flaw in your proposal: the bottom 40% would not be in favor. Some of them would be, but there is a demonstrable bias which causes people to be irrationally optimistic about their own future wealth. This bias is a major factor in the Republicans maintaining much of their base, among other things.

However, to answer your question, while I would not favor your proposal, I would favor a tax on all of that top ten percent which would garner the same revenue as your proposal.