The trolley problem is one of the more famous thought experiments in moral philosophy, and studies by psychologists and anthropologists suggest that the response distributions to its major permutations remain roughly the same across all human cultures. Most people will permit pulling the lever to redirect the trolley so that it kills one person rather than five, but will balk at pushing one fat person in front of the trolley to save the five if that is the only available means of stopping it.

However, in informal settings, where the dilemma is posed by a peer rather than a teacher or researcher, it has been my observation that there is another major category which accounts for a significant proportion of respondents' answers. Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright. They will attack the improbability of the premise, attempt to invent third options, or appeal to their emotional state in the provided scenario ("I would be too panicked to do anything"), or some combination of the above, in order to opt out of answering the question on its own terms.

However, in most cases, these excuses are not their true rejection. Those who try to find third options or appeal to their emotional state will continue to reject the dilemma even when it is posed in its most inconvenient possible forms, where they have time to collect themselves and make a reasoned choice, but no possibility of implementing alternative solutions.

Those who appealed to the unlikelihood of the scenario might appear to have the stronger objection; after all, the trolley dilemma is extremely improbable, and more inconvenient permutations of the problem might appear even less probable. However, trolley-like dilemmas are actually quite common in real life, if you take the scenario not as a case where only two options are available, but as a metaphor for any situation where all the available choices have negative repercussions, and attempting to optimize the outcome demands increased complicity in the dilemma. This method of framing the problem also tends not to cause people to reverse their rejections.

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms. They will insist that there must be superior alternatives, that external circumstances will absolve them from having to make a choice, or simply that they have no responsibility to address an artificial moral dilemma.

When the respondents feel that they can possibly opt out of answering the question, the implications of the trolley problem become even more unnerving than the results from past studies suggest. It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all. They have placed themselves in a reality too accommodating of their preferences to force them to have a system for dealing with situations with no ideal outcomes.

131 comments

It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all.

Er, you're attaching too much value to hypothetical philosophical questions.

I'd have thought it obvious that they're dodging the question so as to avoid the possibility of the answer being taken out of context and used against them. Lose-lose counterfactuals are usually used for entrapment. This is a common form of hazing amongst schoolchildren and toward politicians, after all, so it's a non-zero possibility in the real world. It's the one real-world purpose contrived questions are applied to.

tl;dr: you have not given them sufficient reason to care about contrived trolley problems.


Er, you're overestimating how much value the other person attaches to hypothetical philosophical questions.


You are, of course, correct. Thank you.
Google shrugs at this. “I wish I could understand that too”?
I Wish I Could Upvote This Twice. (Didn't quite catch on.)

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms. They will insist that there must be superior alternatives, that external circumstances will absolve them from having to make a choice, or simply that they have no responsibility to address an artificial moral dilemma.

Of course we do. It would be crazy to answer such a question in a social setting if there is any possibility of avoiding it. Social adversaries will take your answer out of context and spin it to make you look bad. Honesty is not the best policy and answering such questions is nearly universally an irrational decision. Even when the questions are answered the responses should not be considered to have a significant correlation to actual behaviour.

I think I have a more plausible suggestion than the "spin it to make you look bad" theory.

Think evolutionarily.

It absolutely sucks to be a psycho serial killer in public, if you are into making friends and acquaintances and likely to be a grandpa.

It sucks less to show that you would kill someone, even if you yourself were the agent of the death.

It sucks less to show that you would only kill someone by omission, but not by action.

It sucks less if you show that your brain is so well tuned not to kill people, that you (truly) react disgusted even to conceive of doing it.

This is the woman I want to have a child with, the one that is not willing to say she would kill under any circumstance.

Now, you may say that in every case, I simply ignored what would happen to the five other people (the skinny ones). To which I say that your brain processes both pieces of information separately, "me killing the fat guy" and "people being saved by my action", and you only need one half to trigger all the emotions of "no way I'd kill that fat guy".

Is this a nice evolutionary story that explains a fact with hindsight? Oh yes indeed.

But what really matters is that you compare this theory with the "distortion" theory that many comments suggested. Admit it, only people who enjoy chatting rationally in a blog think it so important that their arguments will be distorted. Common folks just feel bad about killing fat guys.

I'd actually argue that social signaling is probably more important to "common folk" than to a lot of the people here. Specifically, the old post about "Why Nerds are Unpopular" comes to mind here. I'm entirely willing to say "I'm willing to kill", because I value truth above social signaling. It also occurs to me that a big factor in my answer is that my social circle is full of people whom I trust not to distort or misapply my answer. Put me in a sufficiently different social circle and eventually my "survival instincts" will get me to opt out of the problem as an excuse to avoid negative signaling. If I just really didn't want to kill the fat guy, it'd be much easier to say "oh, goodness, I could never kill someone like that!" rather than opting out of answering by playing to the absurdity of the scenario.
Are you sure you can't have both?
If attempting to avoid the question will also elicit a negative response, and the person really only wants to optimize their social standing, then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from. Mere signaling fails to account for many of these cases.

then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from

No they wouldn't. Ambiguity is their ally. Both answers elicit negative responses, and they can avoid that from most people by not saying anything, so why shouldn't they shut up?

EDIT: In case it's not clear, I consider this tactic borderline Dark Arts (please note who originally said that ambiguity-ally line in HPMOR!), a purely political weapon with no role in conversations trying to be rational. I wouldn't criticize its use as a defense against some political nitwit who's trying to hurt you in front of an inexperienced audience; I would be unhappy with first-use of it as a primary political strategy.

In that case, I would expect them to reverse their rejection under sufficient peer pressure, but this is frequently not the case. Now I really do want to systematically test how people rejecting the dilemma respond to peer pressure. I've spent a great deal of time watching others deal with this particular dilemma, but my experience isn't systematically gathered or well documented. In retrospect, I should have held off on making this post until gathering that data; I wrote it up more in frustration at dealing with the same situation again than out of a desire to be informative, and I feel like I should probably have taken a karma hit for that.

I'd be interested in a trolley version of the Asch conformity experiment: line up a bunch of confederates and have them each give an answer, one way or another, and act respectfully to each other. Then see how the dodge rate of the real participant changes.

Then you could set it up so that one confederate tries to dodge, but is talked out of it. Etc.

I would too. My prediction (~80% confidence) is, given one subject and six confederates and a typical Asch setup, if all confederates give the non-safe answer (e.g., they say "I'd throw one person under the train" or whatever), you'll see a 40-60% increase in the subject's likelihood of doing the same compared to the case where they all dodge. If one confederate dodges and is chastised for it, I really don't know what to expect. If I had to guess, I'd guess that standard Asch rules apply and the effect of the local group's pressure goes out the window, and you get a 0-10% increase over the all-dodge case. But my confidence is low... call it 20%. What I'd really be interested in is whether, after going through such a setup, subjects' answers to similar questions in confidential form change.
That's not the normal Asch setup: the dissenter isn't ridiculed for it; the subject feels free to dissent because they've seen someone else dissent and 'get away with it'. I would expect that the chastisement variation on any Asch test would produce even more, rather than less, conformity.
Yeah, I can see why you say that, and you might be right, but I'm not entirely sure. I've never seen the results of an Asch study where the dissenter is chastised. And this particular example is even weirder, because the thing they're being chastised for -- dodging the question -- is itself something that we hypothesize is the result of group conformity effects. So... I dunno. As I say, my confidence in this case is low.
Unless, of course, they're willing to put up with some short-term hassling to avoid long-term problems. Given that either answer could be taken out of context and used against them by all the people currently applying that pressure, there's no point (short of, say, locking them in a room and depriving them of sleep for an extended period of time, which is really a whole different kettle of fish) where answering the question becomes preferable.

Giving either response can be harmful if you are trying to avoid the disapproval of someone who fails at conservation of expected evidence. (This failure could happen even to us rationalists who are aware of the possibility, by simply not thinking about how we would interpret the alternative response we did not observe, especially if our interpretation is influenced by a clever arguer who wants us to disapprove.)

If attempting to avoid the question will also elicit a negative response, and the person really only wants to optimize their social standing, then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from.

You appear to be saying "but they could give a perfect zinger of an answer!" Yes, they could. But refusing the question - "Homey don't play that" - is quite a sensible answer in most practical circumstances, and may discourage people from continuing to try to entrap them, which may be better than answering with a perfect zinger.

Well, it needn't be a zinger, per se. They could, for example, give an answer that at the same time signaled their deep and profound compassion for people who are run over by trolleys and their willingness to... reluctantly... after exploring all available third alternatives to the extent that time allowed... and assuming as a personal favor to the questioner that they somehow were certain of all the facts that the problem asserts, even though that state isn't epistemically reachable... and with the understanding that they'd probably be in expensive therapy for years afterwards to repair the damage to their compliant-with-social-norms-really-honest-no-fooling psyches... throw one person under the train to save five people. Wincing visibly while saying it would help, also. This both signals their alliance with the "don't throw people under trains!" social norm and their moral sophistication. This is a general truth of political answers... the most useful answer is the one that lets everyone hear what they want to hear while eliciting disapproval from nobody. (Of course, in the long term that creates a community that disapproves of ambivalence. Politics is a semantic arms race, after all.) In this vein, my usual answer to trolley questions and the like starts with "It depends: are you asking me what I think I would actually do in that setting? Or are you asking me what I think is the right thing to do in that setting? Because they're different." But, yeah, I agree that refusing to answer the question can often be more practical, especially if you don't have an artful dodge ready to hand and aren't good at creating them on-the-fly.
A non-answer is still safer. That parry, in and of itself, could be twisted into an admission that you routinely and knowingly violate your own moral code.
Not even twisted, really; it is such an admission. But entirely agreed that a non-answer is safer than such an admission. (I suppose "In this vein" is a mis-statement, then.)

If attempting to avoid the question will also elicit a negative response, and the person really only wants to optimize their social standing, then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from.

It may be easier in the short term but in the future it will come back to haunt you with sufficient probability for it to dominate your decision making. Never answer moral questions honestly, lie (to yourself first, of course). If there is no good answer to give the questioner then avoid the question. If possible, make up the question you wish they asked and answer that one instead. Don't get trapped in a hostile frame of interrogation.

Mere signaling fails to account for many of these cases.

When it comes to morality there is nothing 'mere' about signalling. Signalling accounts for all of these cases.

Do you predict, then, that if you put a person in a group where every other person disapproves more of attempts to dodge the question than of providing either answer, and makes this known, then they will never refuse to answer the question on its own terms? Also, what makes you believe that providing an answer will lead to negative repercussions? I've participated in more discussions of this topic than I could reasonably hope to count, never refused to provide my own answer, and have never observed others to revise their behavior towards me as a result. I can imagine how it might have negative repercussions for a person to provide an answer, but I've never known it to happen to anyone to a significant enough degree that they'd notice. It's possible that signaling accounts for some of these cases, but I think you're generalizing your own attitude to the entire population in a situation where it really doesn't apply.
Excuse me? The opposite is closer to the truth. I've realised that my own attitude to interpreting things primarily in the abstract isn't universal. Even a minority of people who use verbal symbols primarily politically is enough to warrant caution.
Then I'll ask again whether you predict that in a group where everyone else projected disapproval of attempts to dodge the question, nobody would refuse to answer the question on its own terms. This should not be that hard to test; with a few Less Wrong collaborators, we should at least be able to carry it out in online form.
You could certainly engineer a circumstance in which answering questions about hypothetical lose-lose scenarios is considered better than avoiding them, e.g. philosophical discussion of hypothetical lose-lose scenarios. However, your original post does not restrict itself to these scenarios, but generalises to everyone who doesn't want to play that game, with no apparent understanding of the practical reasons people here are trying to explain to you for why people might very sensibly not want to play that game.
Not to speak for wedrifid, but I agree with their main point, and I would not predict this. What I would predict is that fewer people in such a group would dodge the question (and those that did would dodge it less strenuously) than in a group where everyone projected disapproval of throwing people under trolleys. I would further predict that the reduction in dodging (DR) would be proportional to how confident the subject was that the group really did disapprove more of dodging the question than of throwing people under trolleys... that is, that the group wasn't lying, and that he wasn't misinterpreting the group norm. Given that priors strongly suggest the opposite -- that is, given that most groups are more opposed to throwing people under trolleys than avoiding a question -- I would expect obtaining significant confidence to be nontrivial. Relatedly, I predict that DR would be proportional to how certain the subject was that their answer would be kept confidential. By the way, as long as we're doing this exercise, I'd also predict that people who don't dodge the question in normal settings, but rather claim they'd throw someone under the train, are more likely to be contrarian in general -- that is, I'd expect that to correlate well with making other controversial claims. This is even more true for people who often bring up trolley problems in ordinary conversation.
Seriously? Well, sure. I for one would not dodge the question then, in case they would throw me under a trolley for it. :)
Eliezer Yudkowsky
I predict this will have a large effect on the number who refuse to answer the question, increasing with the closeness of the peer group and the level of disapproval. Enough to flip 75% nonresponse to 25% nonresponse or something like that.
Do you have any evidence that the largest negative reaction comes from actually answering the question? My feeling is that the largest negative social repercussion comes from rejecting the question. However, I'm not positive that I'm not generalizing from my own initial reaction to those who reject the question. My general feeling is that taking a stance on such questions would be respected by those who I deal with on a day-to-day basis, and dodging the question would be less respected.
I respect answering these questions more than dodging them and answer them myself whenever I know the answer (I would pull the switch; I would also push the fat man). I don't have a problem with being candid because most people whose opinions I care about prefer candidness. In one previous discussion with a bunch of non-rationalists who probably don't consider the questions often, there wasn't much dodging.

Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright.

Counterfactual resistance is pretty common with all thought experiments; indeed, it is the bane of undergraduate philosophy professors everywhere. We have no evidence that resistance is more common in ethical thought experiments, or the trolley problem particularly, than in thought experiments in other subfields: brain-in-vat hypotheticals, brain-transplant/hemisphere-transplant cases, teleportation, Frankfurt cases, etc. Which is to say, most of this post is in need of citations. Maybe people just don't like convoluted thought experiments! I'm not even sure it's the case that many people do refuse to answer the question — how many instances could you possibly be basing this judgment on?

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms.

How do you know this? I'm not demanding p-values but you haven't given us a lot to go on.

Pretty many; I certainly haven't kept close count, but going by the age at which I was first introduced to the dilemma and the approximate number of times per year it's come up, I would estimate somewhere between 100 and 150. It would have been more accurate to say that many people will dismiss the dilemma on counterfactual grounds once, and a second prompt will separate the pedants and jokers from the true rejectors, who will persist in dismissing the question regardless of how it is framed, even in the face of peer pressure. Anyway, on reflection, I feel like this post was probably not that well considered; I should have at least held off until I had reliable documentation of the phenomenon, with tests to winnow out alternate hypotheses. I still strongly suspect that a significant proportion of those rejecting the hypothetical are doing so based on a rejection of the idea that they should have any coherent moral system, but this impression rests too strongly on interpretations of what the rejectors have actually said to come across well in the post. I really didn't provide adequate data.
On the question itself, I'm not sure a coherent moral system is something it is important for people to have, though I'm hesitant to make the point since I'm not confident in my ability to make the claim convincing enough to avoid the downvotes that come from saying something that sounds so dumb at first. Morality is the product of a chaotic, random, and unguided process. There is no particular reason to expect human morality to be coherent. That isn't what evolution optimized it for. If the morality we evolved isn't coherent (a precise definition of "coherent" in this context I'll leave for later, or for someone else), what should we do? A lot of people here seem to want to cull, shape, or ignore our intuitions so that we act according to a coherent normative theory (preference utilitarianism, for example). But to me this looks just like trying to shove a square peg into a round hole. You don't get more moral by sacrificing parochial deontological rules for abstract principles. If a hodge-podge is what we got, then a hodge-podge is what we're stuck with (until we evolve a different hodge-podge). To demand that folk morality meet the demands of logic and coherence feels like a mistake to me. It also feels anti-human.

The purpose of thought experiments and other forms of simulation is to teach us to do better in real life. Obviously, no simulation can be perfectly faithful to real life. But if a given simulation is not merely imperfect but actively misleading, such that training in the simulation will make your real performance worse, then rejecting the simulation is a perfectly rational thing to do.

In real life, if you think the greater good requires you to do evil, you are probably wrong. Therefore, given a thought experiment in which the greater good really does require you to do evil, rejecting the thought experiment on the grounds of being worse than useless for training purposes, is a correct answer.


The purpose of thought experiments and other forms of simulation is to teach us to do better in real life.

Not at all. That's way too broad a claim and definitely not the case for the trolley problem. The purpose of the trolley problem is to isolate and identify people's moral intuitions.

Well, depending on what you're trying to nail down as "the purpose", that's not true. The purpose of the trolley problem was to serve as an example of the kinds of ridiculous thought experiments conceived of by moral philosophers (via Philippa Foot). But you know, Poe's Law.
I'm sure you've seen this at some point, but for others... Consider the following case:
Choose the left track, because cancer kills more people than Hitler (assuming the cure would be delayed by at least 10 years, implementing it doesn't cost more than is currently spent on cancer and a few other things).
And what is the purpose of identifying moral intuitions?
Figuring out how to manipulate those intuitions in order to increase sales of Frosted Flakes.
In which case those who neither currently want Frosted Flakes nor want to want them are still best served by not participating.
1. We just need to infiltrate the philosophy departments and get them to post to blogs to try to convince people that answering hypotheticals is what an honest thinking person should do. 2. Lots of manipulable intuitions. 3. Profit!

I've used the trolley problem a lot, at first to show off my knowledge of moral philosophy, but later, when I realized anyone who knows any philosophy has already heard it, to shock friends that think they have a perfect and internally consistent moral system worked out. But I add a twist, which I stole from an episode of Radiolab (which got it from the last episode of MASH), that I think makes it a lot more effective; say you're the mother of a baby in a village in Vietnam, and you're hiding with the rest of the village from the Viet Cong. Your baby starts to cry, and you know if it does they'll find you and kill the whole village. But, you could smother the baby (your baby!) and save everyone else. The size of the village can be adjusted up or down to hammer in the point. Crucially, I lie at first and say this is an actual historical event that really happened.

I usually save this one for people who smugly answer both trolley questions with "they're the same, of course I'd kill one to save 5 in each case", but it's also remarkably effective at dispelling objections of implausibility and rejection of the experiment. I'm not sure why this works so well, but I think our bias...

This is only equivalent to a trolley problem if you specify that the baby (but no one else) would be spared, should the Viet Cong find you. Otherwise, the baby is going to die anyway, unlike the lone person on the second trolley track who may live if you don't flip the switch.

You could hack that in easily; surely most soldiers have qualms about killing babies.
Great point. I've never thought of that, and no one I've ever tried this one on has mentioned it either. This makes it more interesting to me that some people still wouldn't kill the baby, though that may be for reasons other than real moral calculation.
For my own part: I have no idea whether I would kill the baby or not. And I have even less of an idea whether anyone else would... I certainly don't take giving answers like "I would kill the baby in this situation" as reliable evidence that the speaker would kill the baby in this situation. But I generally understand trolley problems to be asking about what I think the right thing to do in situations like this is, not asking me to predict whether I will do the right thing in them.
I agree, I can't really reliably predict my actions. I think I know the morally correct thing to do, but I'm skeptical of my (or anyone's) ability to make reliable predictions about their actions under extreme stress. As I said, I usually use this when people seem overly confident of the consistency of their morality and their ability to follow it, as well as with people who question the plausibility of the original problem. But I do recall the response distributions for this question mirroring the distribution for the second trolley problem; far fewer take the purely consequentialist view of morality than when they just have to flip a switch, even independent from their ability to act morally. I still don't find it incredibly illuminating, as all it shows is that our moral intuitions are fundamentally fuzzy, or at least that we value things other than just how many people live or die.
Maybe this can work as an analogy: Right before the massacre at My Lai, a squad of soldiers is pursuing a group of villagers. A scout sees them up ahead at a small river, and he sees that they are splitting up and going in different directions. An elderly man goes to the left of the river, and the five other villagers go to the right. The elderly man is trying to make a large trail in the jungle, so as to fool the pursuers. The scout waits for a few minutes, until the rest of his squad joins him. They are heading along the right side of the river and will probably continue that way, risking killing the five villagers. The scout signals to the others that they should go left. The party follows, and they soon capture the elderly man and bring him back to the village center, where he is shot. Should the scout instead have said nothing, or kept running forward, so that his team would have killed the five villagers instead? There are some problems with equating this to the trolley problem. First, the scout cannot know for certain beforehand that his team is going in the direction of the large group. Second, the best solution may be to try to stop the squad by faking a reason to go back to the village (saying the villagers must have run in a completely different direction).
Even then it would be rather different from a trolley problem. After all, it involves asking a mother whether she would sacrifice her own child for the 'greater good'. The only reasonable response I can think of for that question is a solid slap in the face! How dare they ask someone that!

I immediately thought, "Kill the baby." No hesitation.

I happen to agree with you on morality being fuzzy and inconsistent. I'm definitely not a utilitarian. I don't approve of policies of torture, for example. It's just that the village obviously matters more than a goddamn baby. The trolley problem, being more abstract, is more confusing to me.

They would say the same thing only with more sincerity.
The answer that almost everyone gives seems to be very sensible. After all, the questions "What do I believe I would actually do?" and "What do I think I should do?" are different. Obviously, self-modifying to the point where these answers are consistent in the largest possible subset of scenarios is probably a good thing, but that doesn't mean such self-modification is easy. Most mothers would simply be incapable of doing such a thing. If they could press a button to kill their baby, more would probably do so, just as more people would flip a switch to kill than push someone in front of a train. You obviously should kill the baby, but it is much more difficult to honestly say you would kill a baby than to say you would flip a switch: the distinction is not one of morality but of courage. As a side note, I prefer the trolley-problem modification where you can have an innocent, healthy young traveler killed in order to save 5 people in need of organs. Saying "fat man", at least for me, obfuscates the moral dilemma and makes it somewhat easier.
...weighted by the likelihood of those scenarios, and the severities of the likely consequences of behaving inconsistently in those scenarios. Most problems of this sort are phrased in ways that render the situation epistemically unreachable, which makes their likelihood so low as to be worth ignoring. Re: your side note... am I correct in understanding you to mean that you find imagining killing a fat man less uncomfortable than imagining killing a healthy young traveler?
If this were a real situation rather than an artificial moral dilemma, I'd say that if you can't silence the baby just by covering its mouth, you should shake it. It gets them to stop making noise, and while it's definitely not good for them, it'll still give the baby better odds than being smothered to death.
I would smother the baby and then feel incredibly, irrationally guilty for weeks or months. I am not a psychopath, but I am a utilitarian. I value having a consistent set of values more than I value any other factor that has come into conflict with that principle so far.
I hope I'd do the same. I've never had to kill anyone before though, much less my own baby, so I can't be totally sure I'd be capable of it.
Utilitarian specifically or consequentialist?
Consequentialist; I should know better than to be imprecise about that here, especially because there are sad things I find to have great value.
The "at this point" part is interesting. Have you ever tried asking the question without the abstract priming? I'd like to see the difference.

"Remember, you can't be wrong unless you take a position. Don't fall into that trap." - Scott Adams

An implicit assertion underlying this post seems to be that the sorts of people who answer trolley problems rather than dodge them are more likely to take action effectively in situations that require doing harm in order to minimize harm.

Or am I misunderstanding you?

If you are implying that: why do you believe that?

I wouldn't say that; just because a person can answer the question doesn't mean they have an outcome-optimizing moral system, or even that they're not simply creating post hoc rationalizations of their knee-jerk reactions, but it suggests that they believe in the value of having a comprehensive moral system. Whether anyone responding to the dilemma would take action effectively is another question entirely.
OK. I had inferred from statements like "They [question-evaders] have placed themselves in a reality too accommodating of their preferences to force them to have a system for dealing with situations with no ideal outcomes." that you were comparing them to question-answerers, who do develop such a system and consequently deal effectively with such situations. If your position is instead that whether people answer trolley questions or not in no way predicts whether they deal effectively with such situations, then what's the problem? That is: OK, they evade the question, or they answer it. Either way, why is this an "unnerving implication"?
There may be other explanations I haven't adequately considered, but the impression I get from the people with whom I've discussed the matter, and on whom I based the post, is that they haven't internalized the idea that the world is inconvenient enough to call for a systematic way of dealing with problems that lack ideal solutions. In consequentialist terms, I don't suppose that this is actually worse than constructing an ethical system that simply justifies natural non-utilitarian inclinations post hoc, but it strikes me as significantly more naive.

they haven't internalized the idea that the world is inconvenient enough to call for a systematic way of dealing with problems that lack ideal solutions.

Perhaps they have had bad experience with "a systematic way of dealing with problems that lack ideal solutions."

"Hard cases make bad law" is a well-known legal adage. There is, I think, some wisdom exhibited in resisting systematizers armed with trolley problems.

Nicely put. This seems to me a special case of the "bar bet" rule: if someone offers to bet me $20 that they can demonstrate something, I should confidently expect to lose the bet, no matter how low my priors are on expecting the thing itself. (That said, in many contexts I should take the bet anyway.)
I realize that this is off topic, but why?
It has to do with the social exchange of "bar bets" (I don't actually hang out at bars, but that's the trope; similar things happen in a lot of contexts). If I'm among friends (that is, it's an iterated arrangement) and I flatly refuse to participate just on the grounds that there has to be a catch somewhere, without being able to articulate a good theory for what the catch is, I lose status that may well be worth more to me than the bet was.
Also, if someone says to you "I'll bet you $20 I can [X]", what they're really saying is "I'm going to do [X], and it'll be super interesting and fun for all involved, especially if you put in $20 so as to add an element of risk to the proceedings". The expectation is that some other night, you can bet them $20 about some interesting thing you can do.
Agreed. I think we're kind of saying the same thing here, though your explanation is a lot more accessible. (I really should know better than to try to talk about social patterns when my head has been recently repatterned by software requirements specification.)
I suspect that they have internalized the idea that the world allows for ideal solutions, or at least non-negative solutions, because so much current fiction is based on happy endings. I wonder if people from cultures which include tragic fiction would tend to answer the trolley problem differently.
There hasn't been an extensive global survey that I'm aware of, but reasonably diverse samples have turned up approximately zero divergence between demographic groups. By the way, folks, Philippa Foot died last month. RIP.

I get frustrated by this every time someone mentions the classic short story The Cold Equations (full text here). The premise of the story is a classic trolley problem (...In Space!), where a small spaceship carrying much-needed medical supplies gets a stowaway, which throws off its mass calculations. If the stowaway is not ejected into space, the ship will crash and the people on the planet will die of a plague. So the (innocent, lovable) stowaway is killed and ejected, and the day is saved. The end.

Whenever this comes up, somebody will attack the story a... (read more)

When you're writing an actual story, I feel like you have to maintain higher standards for plausibility than when you're writing a straight moral dilemma. I only know The Cold Equations by its reputation, but I can certainly understand how that sort of contrivance could hurt it on a literary level.
(Reply to old post) The problem with "The Cold Equations" isn't just that it could have been prevented by signs and door locks. The problem is that the fact that it could have been prevented by signs and door locks turns it from "the laws of nature result in having to kill someone" into "human irresponsibility results in having to kill someone". Failing to take precautions to keep people out of a situation where they could die means the death is caused by negligence, not impersonal forces of nature.
What's frustrating about that? It doesn't make any sense, as if the fuel / weight had to be optimized that much, then they'd better damn well weigh the thing before takeoff, or whatever they need to do as a second-best option to detect stowaways / extra cargo / etc.
The frustrating thing is that people produce a specific criticism ("In this story, they could have thrown tables out the airlock, or put up more signs!") and presume they have shattered the premise of the story (there are situations where physical laws will require hard, horrifying choices, in these situations the physical laws will not bend no matter how immoral a decision it requires).
Ah. I don't think most folks would consider that very abstract notion "the premise of the story", though the author clearly thought it was the relevant detail. The characters behaved unrealistically, and shouldn't have been there in the first place. The same point is made very believably in many less contrived contexts, like stories about people trying to get on the Titanic's too-few lifeboats.
Well, the premise of the story was more to go directly against the grain of the current science fiction trend, which was clever-but-contrived escapes from seemingly physical-law-bound situations. So the author was restricted to science-fiction stories.
Actually, the author kept writing "clever-but-contrived escapes", and it was the editor, John Campbell, who wanted to go against the grain.

Morality is in some ways a harder problem than friendly AI. On the plus side, humans that don't control nuclear weapons aren't that powerful. On the minus side, morality has to run at the level of 7 billion single instances of a person who may have bad information.

So it needs to have heuristics that are robust against incomplete information. There's definitely an evolutionary just-so story about the penalty of publically committing to a risky action. But even without the evolutionary social risk, there is a moral risk to permitting an interventionist murd... (read more)

Having posted lots in this thread about excellent reasons not to answer the question, I shall now pretend to be one of the students that frustrates Desrtopa so and answer. Thus cutting myself off from becoming Prime Minister, but oh well.

The key to the problem is: I don't actually know or care about any of these people. So the question is answered in terms of the consequences (legal and social) to me, not to them.

e.g. in real life, action with a negative consequence tends to attract greater penalties than lack of action. So pushing one in front to save fiv... (read more)

As an ethicist who routinely rejects trolley problems, I feel I must respond to this.

The trolley problem was first formulated by Philippa Foot as a parody of the ridiculous ethical thought experiments developed by philosophers of the time. Its purpose was to cause the reader to observe that the thought experiment is a contrived scenario that will never occur (apparently, it serves that purpose in most untrained folks), and thus serves as an indictment of how divorced reasoning about ethics in philosophy had become from the real world of ethical decision-m... (read more)

But trolley-style problems have real application, e.g. for politicians. Someone with actual political power will frequently have lose-lose problems that aren't hypothetical, and know that they will be blamed whatever they do or don't.
If you're just genuinely curious where people would go, push come to shove, then all the creative solutions are obviously worthless data. If you're trying to get people to think about the real world, and firm up their own understanding, shouldn't we be berating the people who would blithely kill one person to save five without thinking about a creative approach? ---------------------------------------- I'd say one is occasionally, rarely, in a situation where immediate action is truly required, and the trolley problem is good for developing a "moral reflex" there - just as martial arts give one a physical reflex for a fight that allows no time for thought. However, the more common situation is the one where a creative approach, a third option, is exactly what we want. By discouraging such responses, I'd think this reinforces the rules "don't try creative solutions" and "you have no power except this little bit" - it encourages an attitude of mindless acceptance of the situation as presented, and insists that everything should be dry moral arithmetic. I'd feel most comfortable around someone whose answer is "I'd try to find a creative solution but, if push came to shove, I'd kill one to save five".
I've never heard this before, and nothing I've read on the history or uses of the problem as a tool of psychological study suggests that this is the case. Where did you hear this?
I'm not sure. It's more or less the received wisdom in virtue ethics, for which in the 20th century Foot was a foundational figure. I'll see if I can find a reference, though I'm sure I got that impression from the original text.
I believe this is the original and she seems to be using these thought experiments unironically, though I haven't read closely.

I think you are overly generalizing against people who don't like or don't understand philosophy.

even when it is posed in its most inconvenient possible forms, where they have the time to collect themselves and make a reasoned choice, but no possibility of implementing alternative solutions.

I am a conscientious "third-alternativer" on trolley problems, and to me this seems like an abuse of the least convenient possible world principle. If there is a world with no possibility of implementing alternative solutions, I will pick the outcome with the best consequences, but I don't believe there actually is a world with no possibility of altern... (read more)


I am an atheist, and I have no problem answering questions of the type "if creationism were true, would you support its teaching in schools" or "if the Christian God exists, would you pray every day" (both answers are yes, if that matters). What's the problem with those hypotheticals? The questions are well formed, and although they are useless in the sense that their premise is almost certainly false, the answers can still reveal something about my psychology. I don't think answering such questions would turn me into a creationist.

IAWTC in principle, but have noticed in practice that similarly formed questions almost always segue into an appeal to popularity or an appeal to uncertainty. Since dealing with these arguments is time-consuming and frustrating (they're clearly fallacious, but that's not obvious to most audiences), it usually works better to reject the premises at step one. Same goes for most trolleylike problems posed in casual debate.
The least convenient world for the purposes of what argument? The point of the least convenient world principle is to prevent yourself from taking outs on dilemmas that will prevent you from learning anything about your actual moral principles. The relevance of the trolley problem is not, for the most part, to situations where there are only two alternatives (of which there are few), but to situations where there are no options without negative repercussions (of which there are many).
And in that case I pick the option with the least negative repercussions. I guess that shows that I am a consequentialist in my morality. I expressed concern in the other trolley problem thread that there are in fact many situations that appear to have two negative options and no obvious alternatives; when faced with these problems, people may attempt to solve them with "trolley-problem logic" rather than looking for third alternatives, which leads to them systematically performing worse on these kinds of moral problems.
Talking to people about philosophical thought experiments seems extremely unlikely to affect their problem solving abilities in the real world. The trolley problem is a transparently unrealistic scenario, convoluted so that answers reveal part of the structure of someone's moral code. It isn't presented the way real-time crises are presented nor are participants encouraged to "solve it". Obviously looking for third options is a good idea in seemingly lose-lose scenarios but something is wrong with people if they are, in fact, incapable of accepting that for the purposes of a philosophical thought experiment there are only two choices and then making a decision between the two.
With all the literature on priming and pattern-matching (the common case of people presented with the real-world cryonics option pattern-matching it to Pascal's Wager and rejecting it), I don't think this possibility can be rejected out of hand. I don't think trolley problems are in need of censoring; I know what the purpose of trolley problems is, and I can give you that information without having to accidentally prime myself to harm one friend to stop em from harming the entire friendship group. Also, the field of decision theory seems somewhat predicated on this not being the case. Something about it mustn't be all that obvious - or maybe it's obvious in hindsight. I don't think any trolley-problem-rejector is actually incapable of accepting that. I think wedrifid is right: what happens is they come up with their answer (push the fat guy), they attempt to phrase it in a way that doesn't sound like murder (stop the cart with his ... body ...), they realise that no matter how they say it, the obvious answer is going to make them look like a cold-blooded killer (hey everyone! e just said e'd push a fat guy in front of a runaway cart!), and so they reject the question. Saying their rejection shows there's something wrong with them is the spinning-it-badly they were worried about in the first place (hey everyone! e can't even answer a simple question!).
I'm sure it isn't surprising that most people lack the typical Less Wrong poster's ability to articulate the abstract. The trolley problem is important precisely because it lets us get this information from people who aren't so articulate. Even if people aren't capable of answering a question without it priming them (which, loosely speaking, is probably true for all questions), that's a bad reason not to answer the question unless they think they're about to face some kind of crisis with a lot riding on their decision. The field of decision theory is predicated on philosophical thought experiments priming the decision making of those who engage with them? It's obvious in the abstract. I'm not convinced there are many trolley-problem-rejectors, but certainly the kind of trolley-problem-rejector the OP talks about is easily explained by wedrifid's comment (and probably by several other explanations). The thesis that all trolley-problem-rejectors are pushers who realize they're in the minority is really interesting, though. When I said something was wrong with the problem-rejectors, I meant the idea of a principled rejection, not a rejection based on peer pressure, social fears, and signaling. Incidentally, I have trouble answering the problems on an object level; I think because I've spent too much time on the meta-level questions, the object-level question no longer has a meaningful answer to me. I'd say both switching and pushing are acceptable but non-obligatory or supererogatory, but that's just an expression of my value pluralism. If you ask what I personally would do, I guess I wouldn't push the guy in front of the train, but that doesn't feel like it communicates anything meaningful about my moral intuitions.
Sorry, I meant that the field of decision theory is based on the idea that philosophical thought experiments (like the prisoner's dilemma, stag hunt, etc.) can affect your real-world problem-solving skills (i.e., improve them). If I could develop it, I would probably say something along the lines of: "The trolley problem is a cage match, deontological ethics against consequentialist. Rejectors are consequentialists who place a large weight on the consequences of breaking with deontological prescriptions. Rejecting the question is preferable to lying about one's own ethics, or breaking with one's ethical environment."
I think it's more generally explicable by lose-lose counterfactuals being in common use in the real world (politics, schoolyard) for purposes of entrapment - a rejection of lose-lose counterfactuals in general, rather than of the trolley question in particular. This would also explain why philosophy lecturers have such a hard time getting many people not to just outright reject counterfactuals, because a philosophy class will for many be the first time a lose-lose-counterfactual wasn't being used as a form of entrapment. Edit: TheOtherDave below nails it, I think: it's not just lose-lose counterfactuals, people heuristically treat any hypothetical as a possible entrapment and default to the safe option of refusing to play. If they don't know you, they aren't just being stupid.
IME this is a special case of a more general refusal to answer "hypothetical questions", even when they aren't lose-lose. I used to run into this a lot... someone says something, I ask some question about it of the form "So, are you saying that if X, then Y?" and they simply refuse to answer the question on the (sometimes unarticulated) grounds that I'm probably trying to trick them. (Tone of voice and body language are really important here; I started running into this reaction less when I became more careful to project an air of "this is interesting and I'm exploring it" rather than "this is false and I am challenging it".) This also used to infuriate me: I would react to it as an expression of distrust. It helped to explicitly understand what was going on, though... once I recognized that it actually was an expression of distrust, and that the distrust was entirely reasonable if they couldn't read my mind, I stopped getting so angry about it. (Which in turn helped with the body language and tone issues.)

I'd honestly find the far more plausible answer to be that people just have trouble with truly direct, unambiguous communication. My own experience is that either I'm very bad at such communication, or else other people are very bad at receiving it. When I ask extremely specific questions, people will usually assume a more generalized motive to asking it, and try to answer THAT question. I've had conversations with very smart people who kept re-interpreting my questions because they assumed I was trying to make a specific point or disprove some specific de... (read more)

Kind of late to get back to this, but

The Trolley scenario is a strong binary decision with perfect information and absolutely no creative thinking or alternate solution possible. Do you really think that comes up frequently in real life? If not, why not use an exercise that accommodates and praises creative solutions instead of rejecting them as being outside the binary scope of the exercise?

Real life trolleylike dilemmas are generally ones where creative thinking has already been done, but has not turned up any solutions without serious downsides. In such cases, deferring the decision for a perfect solution, when enough time has been dedicated to creative thinking that more is unlikely to deliver a new, better solution, is itself a failure condition.

The top 10% of humanity accumulates 30% of the world's wealth. 20% of humanity dies from preventable, premature death (and suffers horribly).

The proposition...

  • 10% of the top 10% have all their wealth taken from them (lottery selection process).
  • They are forced to work as hard and effectively as they had previously, and are given only enough of the profits they produce to live modestly.
  • They lose everything, work for 5 years, and receive 10% of their original wealth back.
  • The next 10% of the top 10% is selected.
  • The wealth taken is used to ensure the

... (read more)
There is a major flaw in your proposal: the bottom 40% would not be in favor. Some of them would be, but there is a demonstrable bias which causes people to be irrationally optimistic about their own future wealth. This bias is a major factor in the Republicans maintaining much of their base, among other things. However, to answer your question, while I would not favor your proposal, I would favor a tax on all of that top ten percent which would garner the same revenue as your proposal.
An increase in tax would only create an increase in product prices as the wealthy try to recoup their losses. This would adversely affect the very people you would be trying to help. The middle class, whose support you would require, would also be affected negatively, and the proposal would then be overturned. Increasing taxes would not work.
Huh? You're going to have to explain how increasing the tax (on the wealthy) would lead to increased product prices. They might try to recoup their losses. (Or they might decide it's not worth working as hard for less reward -- this is the usual assumption in Economics) But what's the mechanism that leads from that to raised prices? There is an optimal price to set to maximize profit. Raising prices past that point isn't going to increase profits, because the volume sold will be lower.
I guess I was thinking of necessities like food, water, electricity, medicine, etc., the lack of which is causing the preventable premature deaths. Passing the costs of production on to consumers (including increases in tax) in order to maintain or grow profit margins is at the heart of our economic reality. "Not worth working as hard for less reward" is the reason for the lottery for the top 10% of earners. Most of these kinds of individuals would believe that this is a lottery they would not win, and would therefore continue to work as they had. An increase for all of the top 10% (tax) would only modify their behaviour, and at least some proportion of the cost would inevitably be passed on to all consumers.
This is just too complicated a scenario to boil down to such a simple question. The efficacy of that kind of redistribution would depend on all sorts of other properties of the economy and of society. I can imagine cultures in which that would work well, and others in which it would trigger a bloodbath. I don't think it's meaningful to ask whether someone would support it "in general."
I was aware of the many possible negative consequences such an action could have (and the impossibility of it ever having a chance of happening). However, if there were majority support across a society above 75%, would the basic idea of sacrificing a small number of people to a modest lifestyle in order to save a large number of people be something you could support? Would a bloodbath be triggered with such support? I pose the question, and think it's a meaningful question, because it is in a "general" sense a decision societies and civilization as a whole (and by extension all individuals) are making every day. I spend $70 a month on entertainment. If I redirected this money I could save 7 people a month from a preventable premature death. We all make these decisions. If the question were a choice between throwing the fat person or yourself in front of the trolley in order to save people, which would you prefer? Also remember that it is the "fat person", or the wealthy, that propels the trolley into these people to varying degrees.
IIRC, the actual cost of saving a life is about $100-$1000, but certainly not $10.
Unless you're willing to save expected lives instead of having a high chance of saving currently-existing lives, of course. (In which case (IIRC) the cost of saving around 8 expected lives is $1, by Anna Salamon's estimate.)
How does she estimate $0.13 per expected life saved?
Replying to another old post, but isn't this suggestion just Omelas, except that you're replacing the one child with the 1%?

It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all.

We live in a world where most people refuse complicity in a disaster in order to "maintain a certain quality of life even though it costs many lives".

Perhaps this is the reason for opting out of answering the question: acting is just too hard. The decision and its consequences is for s... (read more)


This is still true:

  • Trolley problems make a lot of sense in deontological ethics, to test supposedly universal moral rules in extreme situations.
  • Trolley problems do not make much sense in consequentialist ethics, as optimal action for a consequentialist can differ drastically between messy complicated real world and idealized world of thought experiments.

If you're a consequentialist, trolley problems are entirely irrelevant.

The messy complicated real world never contains situations where you can sacrifice a few people to benefit many people? Or if it does, in such situations we'll figure out the optimal action using completely different considerations from those we would use in the idealized case? I don't believe either of those.
The messy complicated real world always contains people with different agendas, massive uncertainty and disagreement about likely outcomes, moral hazard, and affected people pushing to get their desired result by any means available. If you assume them away, the trolley problem has nothing to do with the real world.
Exactly. The central problems of real-world morality center around dealing with the uncertainty, bias, and signaling issues of realistic high-stakes scenarios. By assuming all that complexity away, trolley problems end up about as relevant as an economics problem without money or preferences. A more useful research program would focus on probing the effects of uncertainty and social issues on moral decision-making. But that makes for poor cocktail party conversation.
Trolley problems may be useful if you're e.g. an extremely smart person doing Philosophy, Politics and Economics at Oxford and you're destined for a career in politics where dealing with real-life lose-lose hypotheticals is going to be part of the job. Or if you want to understand such people, e.g. because you're on one of the metaphorical tracks.
Of course, it does. This is why such hypotheticals are used to entrap politicians, the ones who usually have the job of making the decision. It's not clear to me whether the avoidance or entrapment came first.
If you're a consequentialist, trolley problems are easy.
Only if you know whether or not someone is watching! That is, getting caught not acting like a deontologist is a consequence that must sometimes be avoided. This becomes relevant when considering, for example, whether to murder AGI developers with a relatively small chance of managing friendliness but a high chance of managing recursive self improvement.
Relevant, perhaps, but if you absolutely can't talk them out of it, the negative expected utility of allowing them to continue could outweigh that of being imprisoned for murder by a great deal. Of course, it would take a very atypical person to actually carry through on that choice, but if humans weren't so poorly built for utility calculations we might not even need AGI in the first place.

I think there have been posts about this before. Well, this and the "if it's not my responsibility, it's not my problem" mindset, which the trolley problem also touches on.


It dawns on me that there is a much more general tendency among most people to try to bail out of moral dilemmas or other hypotheticals. In my personal experience, sometimes I wish it were socially accepted to shout "Stop making up alternate courses of action in my thought experiments!" but alas, we all have to deal with the single inference step.

(Is there a generalization of that "take a third option" tendency on dilemmas and hypothetical situations?)

[This comment is no longer endorsed by its author]
And so they should. Moral dilemmas are a social trap! If you must answer at all, never answer directly. I just went in search of a comment thread in which myself and Eliezer both mentioned this issue, but it turns out that it was actually elsewhere in this thread.

Excellent post. Seems to me that your points about how people react to moral problems apply to decision problems as well.