Recently, the following poll was posted on Twitter:

Everyone responding to this poll chooses between a blue pill or red pill.

  • if > 50% of ppl choose blue pill, everyone lives
  • if not, red pills live and blue pills die

Which do you choose?

If we linearize the problem of how much you value the lives of other poll respondents compared to your own, there's actually a way to quantify whether you should vote red or blue depending on your beliefs about how the rest of the population will vote.

Suppose that we normalize the value of choosing the red pill at zero, and let $N$ be the total number of people responding to the poll excluding you. Then, choosing the blue pill only makes a difference in the following two cases:

  1. If the number of people who vote blue excluding you, hereafter denoted $B$, is exactly $\lceil N/2 \rceil$, then your vote saves the lives of $\lceil N/2 \rceil$ people.
  2. If $B < \lceil N/2 \rceil$, then you die and get nothing in return.

If you value the lives of other poll respondents at a constant multiple $c$ of your own life, then you should pick blue if

$$c \cdot \lceil N/2 \rceil \cdot \Pr(B = \lceil N/2 \rceil) > \Pr(B < \lceil N/2 \rceil)$$
and you should pick red if the inequality goes in the other direction.

A rough heuristic here is that if the distribution of $B$ is unimodal with a maximum to the right of $\lceil N/2 \rceil$, you expect

$$\Pr(B < \lceil N/2 \rceil) \leq \lceil N/2 \rceil \cdot \Pr(B = \lceil N/2 \rceil),$$

since each of the $\lceil N/2 \rceil$ possible values of $B$ below the threshold is then no more likely than $B = \lceil N/2 \rceil$ itself. So if $c = 1$ (meaning you value the lives of other poll respondents equally to your own) and you expect a blue majority in the precise sense defined above, you should always take the blue pill yourself. The exact cutoff on $c$ for you to take the blue pill is

$$c^* = \frac{\Pr(B < \lceil N/2 \rceil)}{\lceil N/2 \rceil \cdot \Pr(B = \lceil N/2 \rceil)}.$$

A general heuristic is that if we think a red victory is a distinct possibility, so that $\Pr(B < \lceil N/2 \rceil)$ is not small, and the distribution doesn't decay too sharply around $\lceil N/2 \rceil$, in general we'll have $\lceil N/2 \rceil \cdot \Pr(B = \lceil N/2 \rceil) = O(1)$, so that the cutoff $c^*$ ends up being of order $1$. In other words, you pick blue in realistic circumstances if you value other people's lives similarly to your own: $c$ can't be too far away from $1$ if picking blue is to make sense as an individually altruistic decision, ignoring social pressures to pick blue et cetera.
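As a concrete illustration (this sketch is mine, not part of the original post), the cutoff can be computed exactly once you commit to a probability mass function for $B$; the Beta-Binomial model and its parameters below are purely hypothetical choices for illustration:

```python
import math

def blue_cutoff(pmf_B, N):
    """c* = P(B < ceil(N/2)) / (ceil(N/2) * P(B = ceil(N/2))).
    Pick blue iff you value each other respondent's life above c* of your own."""
    k = math.ceil(N / 2)
    p_pivotal = pmf_B(k)                      # probability your vote flips the outcome
    p_lose = sum(pmf_B(b) for b in range(k))  # probability blue loses even with your vote
    return p_lose / (k * p_pivotal)

def beta_binomial_pmf(N, a, b):
    """Beta-Binomial(N, a, b): binomial voting with an uncertain blue share (a hypothetical model)."""
    def pmf(x):
        log_p = (
            math.lgamma(N + 1) - math.lgamma(x + 1) - math.lgamma(N - x + 1)  # log C(N, x)
            + math.lgamma(x + a) + math.lgamma(N - x + b) - math.lgamma(N + a + b)
            - math.lgamma(a) - math.lgamma(b) + math.lgamma(a + b)
        )
        return math.exp(log_p)
    return pmf

# Hypothetical example: 10,000 other respondents, blue share centred near 60% but quite uncertain.
N = 10_000
print(blue_cutoff(beta_binomial_pmf(N, a=12, b=8), N))
```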

Approximating the distribution of the blue vote share $B/N$ by a beta distribution for large $N$ gives the following rough results:

| Probability of a red victory | Expected share of blue votes | Cutoff value of $1/c$, i.e. minimum value of your life in units of others' lives to choose red |
| --- | --- | --- |
|  |  | 22.4 |
|  |  | 7.1 |
|  |  | 3.7 |
|  |  | 11.6 |
|  |  | 3.5 |
|  |  | 1.4 |
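The sketch below (mine, not the post's actual code) shows how rows of this kind can be produced from an assumed Beta distribution over the blue vote share; the $(a, b)$ parameters are hypothetical placeholders rather than the ones behind the table above:

```python
from scipy.stats import beta

def table_row(a, b):
    """Red-victory probability, expected blue share, and the red cutoff (1/c*)
    for a Beta(a, b) distribution over the blue vote share."""
    p_red = beta.cdf(0.5, a, b)                 # probability of a red victory
    mean_blue = a / (a + b)                     # expected share of blue votes
    # For large N, P(B = ceil(N/2)) ~ f(1/2) / N and ceil(N/2) ~ N / 2,
    # so c* ~ 2 * p_red / f(1/2); the table reports 1/c*.
    red_cutoff = beta.pdf(0.5, a, b) / (2 * p_red)
    return p_red, mean_blue, red_cutoff

for a, b in [(12, 8), (6, 4), (3, 2)]:          # hypothetical parameter choices
    p_red, mean_blue, cutoff = table_row(a, b)
    print(f"P(red win) = {p_red:.2f}, E[blue share] = {mean_blue:.2f}, red cutoff = {cutoff:.1f}")
```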

I think the degree of altruism implied by choosing blue in this context is pretty strong for plausible distributions over $B$. For that reason, I think picking red is easily defensible even from an individually altruistic point of view (would you sacrifice your life to save the lives of five strangers?).

There are higher-order considerations that are relevant beyond individual altruism, of course: society might have a set of norms to impose penalties on people who choose to take the red pill. However, the possible cost of not taking the red pill is losing your life, which suggests that such penalties would have to be quite steep to change these basic calculations as long as a non-negligible probability of a red victory remains.

I suspect that if this were a real situation, most people would choose to take the red pill when monitoring costs are high (e.g. which pill you take is a secret decision unless red wins) and social punishments are therefore difficult to enforce.

[-] arabaga

Note that another way of phrasing the poll is:

Everyone responding to this poll chooses between a blue pill or red pill.

  • if you choose red pill, you live
  • if you choose blue pill, you die unless >50% of ppl choose blue pill

Which do you choose?

I bet the poll results would be very different if it was phrased this way.

[-] jmh

A great example of why conducting polls/surveys is difficult to do well and not in some leading way.

[-] shminux

A bit less of a game-theoretical argument here.

Note that a significant consideration is that one does not want to be "morally responsible" for "those clueless bluepillers" dying. Absent uniformity or perfect coordination, PEOPLE DIE!!!111. 

Another note: people are often willing to risk their own lives in order to avoid this guilt. Survivor's guilt is a thing you can't easily logic away. These people will vote blue, because the alternative is unthinkable. 

Finally, if you care about these people, even though you realize that redpilling is the safe personal choice, bluepilling is a decision worth considering seriously.

Survivor's guilt is a thing you can't easily logic away.

Easily? Perhaps not. Possibly? Yes. See various schools of psychotherapy or counselling that proceed by getting at the objective truth of whatever happened, surfacing the client’s beliefs about it, perhaps unconscious ones, looking at them as beliefs, and deciding their truth or falsity.

But your expression “logic away” suggests to me that you are imagining straw Vulcans. The methods I just sketched always involve a lot of emotion, a lot of thinking, and a lot of attention to reality. These three things are no more separate than arms, legs, and torso. They must work harmoniously together, none subordinate to another.

Would that also be your analysis of arabaga's restatement of the problem?

This problem is not a utilitarian expected utility calculation. If you redpill and someone dies partly because you redpilled, well, some people don't care and others can't take it.

That is an argument that proves too much, because to a utilitarian, every decision problem is an expected utility calculation.

No-one dies because I redpilled. The blue-pillers committed suicide for no reason. I mean, who do the blue-pillers think they are saving? Other blue-pillers. But who are those ones saving? Other blue-pillers. And so on. Nothing of value is actually being saved.

It's like one of those scams where money is moved around in a complicated way and somehow everyone is supposed to end up with more, yet nothing of value is being created. Or a perpetual motion machine that looks really complicated, but there is no source of energy, so we know without analysing it that it can't work. Blue-pillers be like an ant mill.

Please see this analysis, which is a somewhat more complete answer to “who do the blue-pillers think they are saving?”.

Exactly my view.

I see literally zero problems with saving the lives of madmen and idiots.

Same here. I take my hat off to those who dedicate their lives to doing that, but I won't be one of them.

However, please note the recursion. Those who take the blue pill are people who risk their lives to save madmen, idiots, and people who risk their lives to save madmen and idiots. Would you risk your life to save such people? It seems, at least, less obvious that doing so is of no benefit to you.

(Of course, I would still choose the red pill.)

In real-world analogs of the conundrum, the way to deal with madmen and idiots is to keep the pills away from them. Personally, I'll respect the people who devote their lives to saving those who cannot save themselves, but not the ones who only destroy themselves in the attempt; and I will not be in either class.

Yep, makes sense.

All else being equal, do you prefer to live in a society where many members are madmen and idiots or in a society where few members are madmen and idiots?

The latter, but definitely not by means like "letting them die". Especially because, from the POV of some transhuman being with 1000 IQ, I'm surely a madman and an idiot.

No-one dies because I redpilled.

That's where bluepillers disagree. They feel they cannot hide behind the numbers. Someone WILL bluepill, humans are diverse enough for that to guarantee to happen. There is an easy way to contribute to their survival, unless you don't have empathy for people who are not like you. As you say "Nothing of value is actually being saved." --- bluepillers are not people, as far as you are concerned.

When I said “nothing of value is actually being saved”, I was referring to these facts:

Before the pills: no-one dies.

Best possible outcome: no-one dies.

Everyone is people, and the best possible outcome is merely the status quo. No value has been created, it has merely been risked for no reason.

A problem with the scenario is that it is not isomorphic to any real situation, and people tend to respond according to what it emotionally feels like, which is collective action (yay!) vs. individual action (boo!). The nearest real-world version I can think of is where there is an improvement on the status quo that can only be achieved if a large number of people unite, but smaller numbers of people striving for it will be penalised. For example, overthrowing an authoritarian regime. The obstacle there is usually lack of common knowledge of what people want and coordination on when it is to happen, which the regime tries to prevent.

But in the original example, everyone runs as fast as they can to stay in the same place.

"Before the pills" is not a valid reference point, because "not taking pills at all" is not an option in original dilemma. Default state here is "everybody uses bad decision theory and bunch of people dies".

The default state here is "everybody uses bad decision theory and a bunch of people die".

Whichever decision you think is the bad one, if everyone does it then everyone lives.

There are plenty of bad decisions, like "randomize" or "do whatever you feel like".

Imagine a situation where you already know that exactly one more person is needed to take the blue pill so that everyone lives. The correct decision here is to take the blue pill, if you value others' lives. Everything else here is a question of the probability of such a situation and various logical decision theory puzzles.

Your scenario is not clear to me. In the problem as given, if no-one takes the blue pill, everyone lives. What happens in your scenario if no-one takes the blue pill?

I mean the scenario "There are 50 red-pillers and 49 blue-pillers; if you take the blue pill, everyone lives, if not, the blue-pillers die".

Well, sure, that’s like I just have to press a button and save 49 people. I’ll do that. But no-one will be in that situation.

And still we see different results in polls.

[-] WalterL

It seems like everyone will pick red pill, so everyone will live.  Simple deciders will minimize risk to self by picking red, complex deciders will realize simple deciders exist and pick red, extravagant theorists will realize that the universal red accomplishes the same thing as universal blue.

No matter what the game theory says, a non-zero number of people will choose blue and thus die under this equilibrium. This fact - that getting to >50% blue is the only way to save absolutely everyone - is enough for me to consider choosing blue and hope that others reason the same (which, in a self-fulfilling way, strengthens the case for choosing blue).

That would be questioning the assumption that your cost function as an altruist should be linear in the number of lives lost. I'm not sure why you would question this assumption, though; it seems rather unnatural to make this a concave function, which is what you would need for your logic to work.

I'm not quite sure what you mean by that.

Unless I expect the pool of responders to be 100% rational and choose red, I should expect some to choose blue. Since I (and presumably other responders) do expect some to choose blue, that makes >50% blue the preferred outcome. Universal red is just not a realistic outcome.

Whether or not I choose blue then depends on factors like how I value the lives of others compared to mine, the number of responders, etc - as in the equations in your post.

Empirically, as GeneSmith points out, something is wrong with WalterL's suggestion that red is the obvious choice no matter your reasoning. Applying his logic would push the Twitter poll away from the realistically ideal outcome of >50% blue and closer to the worst possible outcome (51% red).

[-] Dagon

A LOT depends on how you model the counterfactual of this poll being real and having consequences.  I STRONGLY predict that 90+% of people who are given the poll, along with enough evidence that they believe the consequences are real, will pick red.  Personal safety aligns with back-of-the-envelope calculations here - unless you can be pretty sure of meeting the blue threshold, you're basically committing suicide by picking blue.  And if it's well over 50% blue without you, you may as well choose red then, too.

There IS a superrationality argument for blue, in the case where you model that you're sufficiently similar to 50%+ of people that you will naturally vote in a block due to shared priors and models.  Then voting blue to save those dissimilar people who voted red may be justified.  I don't believe this holds for myself, nor any sizeable subset of humans.

I don't share your intuition here. I think many people would see blue as the "band together" option and would have confidence that others will do the same. For the average responder, the question would reduce to "choose blue to signal trust in humanity, choose red to signal selfish cowardice".

"Innate faith in human compassion, especially in a crisis" is the co-ordination mechanism, and I think there is pretty strong support for that notion if you look at how we respond to crises in real life and how we depict them in fiction. That is the narrative we tell ourselves at least, but narrative is what's important here.

I would be surprised if blue was less than 30%, and would predict around 60%.

[-] dr_s

Both all red and all blue are rational if I can expect everyone else to follow the same logic as me. Which one you prefer depends only on the amount of disagreement you expect and the value you place on other lives compared to your own. In any world that goes "I am perfectly rational, everyone else is too, and thus they will do the same as me", it's irrelevant what you pick.

If you just look at the poll, a majority of the respondents picked blue.

So I think your theory is wrong because a lot of people are trying to be good people without actually thinking too hard.

I suppose if their lives were actually at stake they might think a bit harder and maybe that would shift the balance?

My initial reaction was to pick blue until I thought about it for a moment and realized everyone could survive if they picked red.

This entire question boils down to whether people can coordinate.

"It seems like everyone will pick red pill" 

-- but in the actual poll, they didn't! So, something has gone wrong in the reasoning here, even if there is some normative sense in which it should work. 

[-] Dagon

I suspect the self-selection of participants outweighs all other parts of the calculation.  The choices are actually red, blue, or no response (with an undefined outcome, but I'd expect that the vast majority of humans on earth chose it, or had it chosen for them because they didn't even see the poll).   Followed by the confounder that the setup is a lie - it's FAR more likely that the pollster is faffing about and nobody will be killed as a result of any outcome.  I don't think there's anything to learn from this setup - some participants may make the calculation, but the vast majority will ignore it, or choose for signaling or amusement value.

If you did this with forced participation, and actual belief that it worked as claimed, the vast majority of humans would choose red.  In fact, why would anyone choose blue?  100% red means everyone lives, and it doesn't require any trust or coordination to achieve.

If you change it so there are hostages (people who don't get to choose, but will die if the blue threshold isn't met), then it becomes interesting.  But still not that interesting - coordination and trust become the primary questions, and your setup doesn't mention anything about what mechanisms are available or what population is voting.  Basically, the "depending on your beliefs about how the rest of the population will vote" dominates, and you don't provide any evidence to update those beliefs upon.

100% red means everyone lives, and it doesn't require any trust or coordination to achieve.

 

--Yes this.

 

If you change it so there are hostages (people who don't get to choose, but will die if the blue threshold isn't met), then it becomes interesting.

-- That was actually a strongfemaleprotagonist storyline, cleaving along a difference between superheroic morality and civilian morality, then examined further as the teacher was interrogated later on.

If your true goal is "everyone lives", then the 50% blue cutoff is waaay more achievable than the 100% red one.

[-] Dagon

True, but kind of misleading.  Neither 50.1% blue nor 100% red are achievable (in my estimation, in any sizable population of real humans), but missing that goal by a bit is a LOT better for red than for blue.  

100% red means everyone lives, and it doesn't require any trust or coordination to achieve.

I don't think there are any even halfway plausible models under which >95% of humanity chooses red without prior coordination, and pretty unlikely even with prior coordination. Aiming for a majority red scenario is choosing to kill at least 400 million people, and possibly billions. You are correct that it doesn't require trust, but it absolutely requires extreme levels of coordination. For example, note that more than 10% of the population is less than 10 years old, some have impaired colour vision, and even competent adults with no other impairments make mistakes in highly stressful situations.

[-] Dagon

I think the setup is so far away from plausible that we'll never know how many people would choose blue and die.  I agree that there's probably going to be a lizardman-constant amount of blues, but I don't think there's any path to a blue-majority without a fair bit of campaigning and coordination (which would be better spent getting more reds), and even then it's not guaranteed, so my expectation is that a blue win is just a phantom option - there's no actual future universe that contains that outcome.

which, of course means that maximizing red is not "choosing to kill at least 400 million people, and possibly billions", it's minimizing the kill that whoever set up this evil scheme is responsible for.

This is basically my position as well. Without (very) strong evidence that a majority would pick blue, red is the obvious choice. I would choose red in the "real" version and red in the "fake" version as well. If there was a "practice" version so people could indicate their intentions that would be later followed by a real version, then I would pick red in the practice version and would switch to blue in the real version if blue got at least around 2/3 in the practice version.

In the fake version conducted as a twitter poll, 70% picked blue.

There are a lot of potential coordination mechanisms that could convince me to go blue.  In the case as given, where there is no such ability, I think red is the choice which maximizes total utility (by keeping more people alive than a doomed pretense that blue can win).

[-] dr_s

no response (with an undefined outcome, but I'd expect that the vast majority of humans on earth chose it, or had it chosen for them because they didn't even see the poll)

These artificial thought experiments usually require assuming that's not on the table. Otherwise, yes, please don't ingest either of the Alien Death Pills.

Embarrassingly minor nitpick I'm too neurotic to not mention: It's the ceil of N/2 instead of floor.

Yeah, that's right. Fixed.

The poll is interesting insofar as it generates discussion and analysis, but meaningless as an evaluation of human empathy. This would be true even if done in standard laboratory conditions. Stated preferences, particularly for issues of morality and self-sacrifice, map poorly to revealed preferences. A study evaluated how many people said they would donate blood freely in the UK vs how many actually did, with tragically predictable poor matching (https://onlinelibrary.wiley.com/doi/full/10.1002/hec.4246). 

The actually interesting study would be to give this poll to people who truly believed that the poll was real. How would people respond? How would parents recommend their children respond? Would the father really tell his daughter to potentially step into the blender when she could have a guaranteed survival ticket?

I would state with 95% probability that if a) those polled truly believed that the poll was real, b) there were families involved, and c) they could not communicate beforehand, then nearly all (if not all) respondents would press red. This may even be true if they could communicate beforehand.

[-] Max H

Looks about right to me. Of course, the hard part is actually estimating the probability of a red victory for a given population under specific conditions, and the even harder part is figuring out what you can do in general to create populations where this probability tends to be very low. In such populations where the probability of a red victory is negligible, choosing blue is a way of protecting the tiny fraction of people who misread or misclick, suffer from a temporary bout of insanity, or just aren't smart enough to comprehend the choice at all, at negligible risk to yourself.

Among earthlings, I think the issue would be more about a lack of coherent understanding of decision theory, of a desire to solve coordination problems, and of common knowledge that such understanding and desire were common (if they were in fact common), rather than a lack of altruism or any social pressure issues.

I would comfortably choose blue in dath ilan; on Earth under most conditions, I'd pick red for basically the reasons you give, but feel pretty sad about it.

I think my minimum condition for choosing blue would be if a supermajority of the population could pass a competence test, i.e. demonstrate understanding and independently generate valid explanations about games like this at at least the quality level in this post. 

Judging from twitter, this is a condition which doesn't hold even among rationalists, unfortunately. A further condition would be that there is no one playing who is confidently spouting wrong / crazy explanations and getting people to agree with them, whether they're arguing for red or blue. (On a binary question, there will be lots of people who arrive at the right answer by chance; but that's only relevant if they arrive at the right answer for actually-valid reasons, and if everyone can immediately see and validly refute invalid reasoning.)

 

You are offered the investment of a lifetime! If only enough people each chip in $10000, then they'll all get their money back! Otherwise they'll all lose it!

There's a monster at the top of that mountain. It doesn't hunt people, but it will eat anyone who walks into its mouth. Unless enough people walk in at once. Come on, we've got to walk into its mouth to save the people who are going to walk into its mouth!

Every year, a handful of small children, and visiting tourists, and people chasing stray pets, and people who get lost in the dark, accidentally wander up into the monster's cave and into its mouth. 

How big does this number have to be before it's worth whipping up the village to kill the monster by all walking in together? 

What if the monster has only recently settled in the mountain, so no wayward children have been eaten yet. The town holds a vote: either we can commit to drilling it into our children, and our stumbling drunks, and our visiting tourists to never, ever, ever, ever go up the mountain, or we can go up as one and get rid of the monster now. How do you think most people will vote? 

The answer is public education, rescue missions to bring people back before they get inside, and planning to kill the monster.

I expect that other voters correlate with my choice, and so I am not just deciding 1 vote, but actually a significant fraction of votes.

If the number of uncorrelated blue voters plus the number of people who vote identically to me exceeds 50%, then I can save the uncorrelated blue voters.

More formally: let R, B, C denote the fractions of uncorrelated red voters, uncorrelated blue voters, and voters correlated with you (who will vote the same as you do). Let S be how large a fraction of people you'd let die in order to save yourself (i.e. some measure of selfishness).

Then choosing blue over red gives you extra utility/lives saved depending on what R,B,C,S are.

If B>0.5 then the utility difference is 0.

If B<0.5 and B+C>0.5 then the difference is +B.

If B+C<0.5 then the difference is -(C+S).

By taking the expectation over your uncertainties about what B,R,C might be, for example by averaging across some randomly chosen scenarios that seem like they properly cover your uncertainty, you get the difference in expected utility between voting blue and red.

Estimating C, R, B can be done by guessing which algorithms other voters use to decide their votes, and how closely those algorithms match your own. Getting good precision on the latter part probably involves also guessing the epistemic state of other voters, i.e. their guesses for C, R, B, and doing some more complicated game theory and solving for equilibria.
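For concreteness, here is a minimal sketch (my own, not the commenter's code) of the expected-utility comparison described above; the scenario numbers are made-up placeholders for your uncertainty over B and C:

```python
import random

def blue_minus_red(B, C, S):
    """Utility difference between voting blue and red, in 'fraction of the
    population saved' units, per the case analysis above. R = 1 - B - C is
    implied and never needed explicitly."""
    if B > 0.5:
        return 0.0        # blue wins without you; your choice changes nothing
    if B + C > 0.5:
        return B          # your correlated bloc tips the result; uncorrelated blues are saved
    return -(C + S)       # blue loses anyway; your bloc (and you) die for nothing

def expected_difference(scenarios):
    """Average the difference over scenarios sampled from your uncertainty."""
    return sum(blue_minus_red(B, C, S) for B, C, S in scenarios) / len(scenarios)

# Made-up example: uncertain B and C, fixed selfishness S = 0.1.
random.seed(0)
scenarios = [(random.uniform(0.1, 0.5), random.uniform(0.1, 0.4), 0.1) for _ in range(10_000)]
print(expected_difference(scenarios))  # positive => vote blue, negative => vote red
```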

If people vote as if their individual vote determines the vote of a non-negligible fraction of the voter pool, then you only need $c \gtrsim 1/N$ (averaged over the whole population, so the value of the entire population is $\sim 1$ instead of $\sim N$ in units of your own life), which seems much more realistic.

So voting blue can make sense for a sufficiently large coalition of "ordinary altruists" with $c \sim 1/N$ who are able to pre-commit to their vote and think people outside the coalition might vote blue by mistake etc. rather than the "extraordinary altruists" we need in the original situation with $c \sim 1$. Ditto if you're using a decision theory where it makes sense to suppose such a commitment already exists when making your decision.
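A rough sketch of where this scaling comes from (my own framing, not spelled out in the comment): suppose your choice effectively fixes the votes of a coalition of about $kN$ people with $k < 1/2$. Then picking blue beats picking red when

$$c \cdot \underbrace{\Theta(N)}_{\text{outside blue voters saved}} \cdot \Pr(\text{coalition pivotal}) \;>\; \underbrace{(1 + c\,kN)}_{\text{your life plus the coalition's}} \cdot \Pr(\text{blue loses anyway}),$$

and when both probabilities are of order $1$ and the expected number of outsiders saved exceeds the expected coalition losses, this already holds for $c$ of order $1/N$.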

Kickstarter version:

Everyone who takes a red pill lives.

If enough people take a blue pill, they live. If fewer than that, they regurgitate the pill and live.

Tangential, but my response to the dilemma differs massively depending on the mechanism I imagine.

It shouldn't.

I mean, this is standard ordinary wisdom, it doesn't need VNM-type mathematics or exotic decision theory to arrive at. "Keep your eye on the ball." "Follow the money." "Ignore the hype." "Cui bono?"

I disagree, and conversely it's also standard ordinary wisdom that you can't just boil down real life decisions to a table of a handful of numbers.

"Keep your eye on the ball." "Follow the money." "Ignore the hype." "Cui bono?"

If you keep your eye on the ball, you'll realize that the way everyone else responds also massively differs depending on the mechanism, and so if you want to coordinate to e.g. save as many people as possible, you ought to respond differently depending on the mechanism too.

In the problem as given, I don't much care what happens to people bent on committing suicide out of being too dull-witted to realise what they're doing. Dissuading them from doing it is as far as I'll go, then I'll leave them to it if they persevere.

Think of it as respecting their autonomy if you like.

My understanding is that 70% of Twitter respondents chose "blue", and I'd expect the Twitter poll was both seen by, and responded to, at higher rates by people with an interest in game theory and related topics, i.e. the people more likely to understand the principles necessary to arrive at "red" as an answer.

Obviously a Twitter poll isn't the real life situation, but a) it is far from clear that "blue"s are committing suicide and b) if you find yourself arguing that a supermajority of humanity is below your intellectual threshold of concern, I think that's a good sign in general to reflect on how much you really mean that.  

To claim that a supermajority of humanity is stupider than me is only to claim to be above the median. Half of all people are, and I expect that half includes most of the readers here. In fact, I am sure we are mostly well into the upper percentiles. It is not arrogant to say so.

But what is to be done about people who buy into Ponzi schemes, invest in perpetual motion, or walk into a blender and must be bailed out by others at the risk of their lives? These are choices, bad choices, unforced errors. No-one ever had to choose to do those things, although many have done.

Personally, I’ll pay my taxes, respond as I may feel moved to by individual cases that I come in contact with…and that’s all. This world is full of bottomless pits of suffering. My fortune is to not be in them, and my choice is to stay away from them. That’s just my choice, I’m not pushing it on anyone else, only saying that the choice is available.

The instant case, of the red-pill-blue-pill puzzle, is egregiously futile, because there is no pit but the one created by people trying to save people from the pit which was only created by the people trying to save people from the pit which was only…

And those in the crowd who see the scam are derided as psychopathic.

Seems kind of psychopathic to me, but you do you I guess.

I bet an individual's red pill or blue pill selection correlates extremely well with red tribe or blue tribe affiliation.