The source is here. I'll restate the problem in simpler terms:

You are one of a group of 10 people who care about saving African kids. You will all be put in separate rooms, then I will flip a coin. If the coin comes up heads, a random one of you will be designated as the "decider". If it comes up tails, nine of you will be designated as "deciders". Next, I will tell everyone their status, without telling the status of others. Each decider will be asked to say "yea" or "nay". If the coin came up tails and all nine deciders say "yea", I donate $1000 to VillageReach. If the coin came up heads and the sole decider says "yea", I donate only $100. If all deciders say "nay", I donate $700 regardless of the result of the coin toss. If the deciders disagree, I don't donate anything.

First let's work out what joint strategy you should coordinate on beforehand. If everyone pledges to answer "yea" in case they end up as deciders, you get 0.5*1000 + 0.5*100 = 550 expected donation. Pledging to say "nay" gives 700 for sure, so it's the better strategy.

But consider what happens when you're already in your room, and I tell you that you're a decider, and you don't know how many other deciders there are. This gives you new information you didn't know before - no anthropic funny business, just your regular kind of information - so you should do a Bayesian update: the coin is 90% likely to have come up tails. So saying "yea" gives 0.9*1000 + 0.1*100 = 910 expected donation. This looks more attractive than the 700 for "nay", so you decide to go with "yea" after all.
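For concreteness, here is a minimal simulation sketch of the setup (Python; it assumes every participant follows the same pre-agreed policy, and tracks one fixed participant's conditional view of the coin):

```python
import random

def run(policy_yea, trials=200_000):
    """Average donation when every decider says "yea" (True) or "nay" (False)."""
    total = 0.0
    decider_count = 0   # how often participant #0 is a decider
    tails_count = 0     # ...and how often the coin was tails in those cases
    for _ in range(trials):
        tails = random.random() < 0.5
        deciders = random.sample(range(10), 9 if tails else 1)
        if 0 in deciders:
            decider_count += 1
            tails_count += tails
        if policy_yea:
            total += 1000 if tails else 100   # unanimous "yea"
        else:
            total += 700                      # unanimous "nay"
    return total / trials, tails_count / decider_count

print(run(True))    # ≈ (550, 0.9): "yea" pays 550 on average, yet P(tails | decider) ≈ 0.9
print(run(False))   # ≈ (700, 0.9)
```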

Only one answer can be correct. Which is it and why?

(No points for saying that UDT or reflective consistency forces the first solution. If that's your answer, you must also find the error in the second one.)

116 comments

After being told whether they are deciders or not, 9 people will correctly infer the outcome of the coin flip, and 1 person will have been misled and will guess incorrectly. So far so good. The problem is that there is a 50% chance that the one person who is wrong is going to be put in charge of the decision. So even though I have a 90% chance of guessing the state of the coin, the structure of the game prevents me from ever having more than a 50% chance of the better payoff.

eta: Since I know my attempt to choose the better payoff will be thwarted 50% of the time, the statement "saying 'yea' gives 0.9*1000 + 0.1*100 = 910 expected donation" isn't true.

2Vulture10y
This seems to be the correct answer, but I'm still not sure how to modify my intuitions so I don't get confused by this kind of thing in the future. The key insight is that a group of fully coordinated/identical people (in this case the 9 people who guess the coin outcome) can be effectively treated as one once their situation is permanently identical, right?

It is an anthropic problem. Agents who don't get to make decisions by definition don't really exist in the ontology of decision theory. As a decision-theoretic agent, being told you are not a decider is equivalent to dying.

5wedrifid13y
Or more precisely it is equivalent to falling into a crack in spacetime without Amelia Pond having a crush on you. ;)
2Emile13y
Depends on what you mean by "anthropic problem". The first google result for that term right now is this post, so the term doesn't seem to have a widely-agreed upon meaning, though there is some interesting discussion on Wikipedia. Maybe we could distinguish:

* "Anthropic reasoning", where your reasoning needs to take into account not only the facts you observed (i.e. standard bayesian reasoning) but also the fact that you are there to take the decision, period.
* "Anthropic scenarios" (ugh), where the existence of agents comes into account (like the sleeping beauty problem, our universe, etc.).

Anthropic scenarios feature outlandish situations (teleporters, the sleeping beauty) or are somewhat hard to reproduce (the existence of our universe). So making scenarios that aren't outlandish anthropic scenarios but still require anthropic reasoning is nice for intuition (especially in an area like this where everybody's intuition starts breaking down), even if it doesn't change anything from a pure decision theory point of view.

I'm not very happy with this decomposition; it seems to me "is this an anthropic problem?" can be answered by "Well, it does require anthropic reasoning but doesn't require outlandish scenarios like most similar problems", but there may be a better way of putting it.

It is a nice feature of Psy-kosh's problem that it pumps the confusing intuitions we see in scenarios like the Sleeping Beauty problem without recourse to memory-erasing drugs or teleporters -- I think it tells us something important about this class of problem. But mathematically the problem is equivalent to one where the coin-flip doesn't make nine people deciders but copies you nine times -- I don't think there is a good justification for labeling these problems differently.

The interesting question is what this example tells us about the nature of this class of problem -- and I'm having trouble putting my finger on just what that is.

7cousin_it13y
Right. That's the question I wanted people to answer, not just solve the object-level problem (UDT solves it just fine).
0[anonymous]13y
So I have an idea- which is either going to make perfect sense to people right away or it's going to have to wait for a post from me (on stuff I've said I'd write a post on forever). The short and sweet of it is: There is only one decision theoretic agent in this problem (never nine) and that agent gets no new information with which to update on. I need to sleep but I'll start writing in the morning.

I can't formalize my response, so here's an intuition dump:

It seemed to me that a crucial aspect of the 1/3 solution to the sleeping beauty problem was that for a given credence, any payoffs based on hypothetical decisions involving said credence scaled linearly with the number of instances making the decision. In terms of utility, the "correct" probability for sleeping beauty would be 1/3 if her decisions were rewarded independently, 1/2 if her (presumably deterministic) decisions were rewarded in aggregate.

The 1/2 situation is mirrored here: Th... (read more)

1jsalvatier13y
I think "implicit assumption that your decision process is meaningfully distinct from the others', but this assumption violates the constraints of the problem." is a good insight.

I tell you that you're a decider [... so] the coin is 90% likely to have come up tails.

Yes, but

So saying "yea" gives 0.9 1000 + 0.1 100 = 910 expected donation.

... is wrong: you only get 1000 if everybody else chose "yea". The calculation of expected utility when tails come up has to be more complex than that.

Let's take a detour through a simpler coordination problem: There are 10 deciders with no means of communication. I announce I will give $1000 to a Good and Worthy Charity for each decider that chose "yea", except... (read more)

Here the optimal strategy is to choose "yea" with a certain probability p, which I don't have time to calculate right now

The expected value is $1000 * (10p - 10p^10). Maximums and minimums of functions may occur when the derivative is zero, or at boundaries.

The derivative is $1000 * (10 - 100p^9). This is zero when p = 0.1^(1/9) ~= 0.774. The boundaries of 0 and 1 are minima, and this is a maximum.
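A quick numeric sanity check of that optimum (a sketch; the payoff function is the one just derived):

```python
import numpy as np

E = lambda p: 1000 * (10 * p - 10 * p**10)   # expected payout derived above
p = np.linspace(0, 1, 1_000_001)
best = p[np.argmax(E(p))]
print(best, E(best))   # ≈ 0.774 (= 0.1**(1/9)) and ≈ $6968
```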

EDIT: Huh. This simple calculation that mildly adds to the parent is worth more karma than the parent? I thought the parent really got to the heart of things with: "(because there's no reliable way to account for the decisions of others if they depend on yours)" Of course, TDT and UDT are attempts to do just that in some circumstances.

2James_Ernest11y
Shouldn't the expected value be $1000 (10p)*(1-p^10) or $1000 (10p - 10p^11) ? (p now maximised at 0.7868... giving EV $7.15K)
0wnoise11y
That does look right.
0christopherj10y
Seems to me that you'd want to add up the probabilities of each of the possible outcomes, 0*p^10*(10!/(10!*0!)) + 9000*p^9*(1-p)*(10!/(9!*1!)) + 8000*p^8*(1-p)^2*(10!/(8!*2!)) + 7000*p^7*(1-p)^3*(10!/(7!*3!))... This also has a maximum at p ~= 0.774, with expected value of $6968. This verifies that your shortcut was correct. James' equation gives a bigger value, because he doesn't account for the fact that the lost payoff is always the maximum $10,000. His equation would be the correct one to use if the problem were with 20 people, 10 of which determine the payoff and the other 10 whether the payoff is paid, and they all have to use the same probability.
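A sketch of that term-by-term check (summing the binomial outcomes spelled out above and comparing against the shortcut formula):

```python
from math import comb

def ev_binomial(p):
    # $1000 per "yea" decider, except the all-"yea" outcome (k = 10) pays nothing
    return sum(1000 * k * comb(10, k) * p**k * (1 - p)**(10 - k) for k in range(10))

def ev_shortcut(p):
    return 1000 * (10 * p - 10 * p**10)

print(ev_binomial(0.774), ev_shortcut(0.774))   # both ≈ 6968, confirming the shortcut
```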

The first is correct. If you expect all 10 participants to act the same you should not distinguish between the cases when you yourself are the sole decider and when one of the others is the sole decider. Your being you should have no special relevance. Since you are a pre-existing human with a defined identity this is highly counterintuitive, but this problem really is no different from this one: An urn with 10 balls in different colors, someone tosses a coin and draws 1 ball if it comes up heads and 9 balls if it comes up tails, and in either case calls o... (read more)

I claim that the first is correct.

Reasoning: the Bayesian update is correct, but the computation of expected benefit is incomplete. Among all universes, deciders are "group" deciders nine times as often as they are "individual" deciders. Thus, while being a decider indicates you are more likely to be in a tails-universe, the decision of a group decider is 1/9th as important as the decision of an individual decider.

That is to say, your update should shift probability weight toward you being a group decider, but you should recognize that ... (read more)

1cousin_it13y
If the decision of a group decider is "1/9th as important", then what's the correct way to calculate the expected benefit of saying "yea" in the second case? Do you have in mind something like 0.9*1000/9 + 0.1*100/1 = 110? This doesn't look right :-(

Do you have in mind something like 0.9*1000/9 + 0.1*100/1 = 110? This doesn't look right

This can be justified by change of rules: deciders get their part of total sum (to donate it of course). Then expected personal gain before:

for "yea": 0.5*(0.9*1000/9+0.1*0)+0.5*(0.9*0+0.1*100/1)=55  
for "nay": 0.5*(0.9*700/9+0.1*0)+0.5*(0.9*0+0.1*700/1)=70

Expected personal gain for decider:

for "yea": 0.9*1000/9+0.1*100/1=110
for "nay": 0.9*700/9+0.1*700/1=140

Edit: corrected error in value of first expected benefit.
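A minimal sketch of those per-decider numbers (treating the donation as split among whoever the deciders turn out to be):

```python
def share_before(amount_tails, amount_heads):
    # Tails (p = 0.5): I'm one of nine deciders w.p. 0.9, my share is amount/9.
    # Heads (p = 0.5): I'm the sole decider w.p. 0.1, my share is the full amount.
    return 0.5 * 0.9 * amount_tails / 9 + 0.5 * 0.1 * amount_heads

def share_as_decider(amount_tails, amount_heads):
    # After the update, P(tails | I'm a decider) = 0.9.
    return 0.9 * amount_tails / 9 + 0.1 * amount_heads

print(share_before(1000, 100), share_before(700, 700))          # 55.0 70.0   -> "nay" wins
print(share_as_decider(1000, 100), share_as_decider(700, 700))  # 110.0 140.0 -> "nay" still wins
```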

Edit: Hm, it is possible to reformulate Newcomb's problem in a similar fashion. One of the subjects (A) is asked whether ze chooses one box or two boxes; another subject (B) is presented with two boxes with content per A's choice. If they make identical decisions, then they have what they chose; otherwise they get nothing.

And here's a reformulation of Counterfactual Mugging in the same vein. Find two subjects who don't care about each other's welfare at all. Flip a coin to choose one of them who will be asked to give up $100. If ze agrees, the other one receives $10000.

This is very similar to a rephrasing of the Prisoner's Dilemma known as the Chocolate Dilemma. Jimmy has the option of taking one piece of chocolate for himself, or taking three pieces and giving them to Jenny. Jenny faces the same choice: take one piece for herself or three pieces for Jimmy. This formulation makes it very clear that two myopically-rational people will do worse than two irrational people, and that mutual precommitment at the start is a good idea.

This stuff is still unclear to me, but there may be a post in here once we work it out. Would you like to cooperate on a joint one, or something?

0red7513y
I'm still unsure if it is something more than intuition pump. Anyway, I'll share any interesting thoughts.
2cousin_it13y
This is awesome! Especially the edit. Thanks.
0[anonymous]13y
It's pure coordination game.
5jsalvatier13y
This kind of answer seems to be on the right track, but I do not know of a good decision theory for when you are not 100% "important". I have an intuitive sense of what this means, but I don't have a technical understanding of what it means to be merely part of a decision and not the full decision maker.
2GuySrinivasan13y
Can the Shapley value and its generalizations help us here? They deal with "how important was this part of the coalition to the final result?".
0[anonymous]13y
Have an upvote for noticing your own confusion. I posted the problem because I really want a technical understanding of the issues involved. Many commenters are offering intuitions that look hard to formalize and generalize.
3datadataeverywhere13y
I think my answer is actually equivalent to Nornagest's. The obvious answer is that the factors you divide by are (0.9 / 0.5) and (0.1 / 0.5), which results in the same expected value as the pre-arranged calculation.
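(Spelled out: 0.9*1000/(0.9/0.5) + 0.1*100/(0.1/0.5) = 0.5*1000 + 0.5*100 = 550, matching the pre-commitment calculation for "yea".)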
0[anonymous]13y
I don't quite understand the formal structure behind your informal argument. If the decision of a group decider is 1/9th as important, does this also invalidate the reasoning that "yea" -> 550 in the first case? If no, why?

Your second option still implicitly assumes that you're the only decider. In fact each of the possible deciders in each branch of the simulation would be making an evaluation of expected payoff -- and there are nine times as many in the "tails" branches.

There are twenty branches of the simulation, ten with nine deciders and ten with one decider. In the one-decider branches, the result of saying "yea" is a guaranteed $100; in the nine-decider branches, it's $1000 in the single case where everyone agrees, $700 in the single case where e... (read more)

If we're assuming that all of the deciders are perfectly correlated, or (equivalently?) that for any good argument for whatever decision you end up making, all the other deciders will think of the same argument, then I'm just going to pretend we're talking about copies of the same person, which, as I've argued, seems to require the same kind of reasoning anyway, and makes it a little bit simpler to talk about than if we have to speak as though everyone is a different person but will reliably make the same decision.

Anyway:

Something is being double-coun... (read more)

5Paul Crowley13y
It looks like the double-count is that you treat yourself as an autonomous agent when you update on the evidence of being a decider, but as an agent of a perfectly coordinated movement when measuring the payoffs. The fact that you get the right answer when dividing the payoffs in the 9-decider case by 9 points in this direction.

(No points for saying that UDT or reflective consistency forces the first solution. If that's your answer, you must also find the error in the second one.)

Under the same rules, does it make sense to ask what is the error in refusing to pay in a Counterfactual Mugging? It seems like you are asking for an error in applying a decision theory, when really the decision theory fails on the problem.

7cousin_it13y
Ah - I was waiting for the first commenter to draw the analogy with Counterfactual Mugging. The problem is, Psy-Kosh's scenario does not contain any predictors, amnesia, copying, simulations or other weird stuff that we usually use to break decision theories. So it's unclear why standard decision theory fails here.
5JGWeissman13y
This problem contains correlated decision making, which is what makes copies anthropically confusing.
3Scott Alexander13y
Would it be the same problem if we said that there were nine people told they were potential deciders in the first branch, one person told ey was a potential decider in the second branch, and then we chose the decision of one potential decider at random (so that your decision had a 1/9 chance of being chosen in the first branch, but a 100% chance of being chosen in the second)? That goes some of the way to eliminating correlated decision making weirdness.
0JGWeissman13y
If you change it so that in the tails case, rather than taking the consensus decision, and giving nothing if there is not consensus, the experimenter randomly selects one of the nine decision makers as the true decision maker (restating to make sure I understand), then this analysis is obviously correct. It is not clear to me which decision theories other than UDT recognize that this modified problem should have the same answer as the original.
0Sniffnoy13y
Meanwhile, that formulation is equivalent to just picking one decider at random and then flipping heads or tails to determine what a "yea" is worth! So in that case of course you choose "nay".
0[anonymous]13y
The equivalence is not obvious to me. Learning that you're one of the "potential deciders" still makes it more likely that the coin came up tails.
0[anonymous]13y
To the first approximation, clearly, because you destructively update and thus stop caring about the counterfactuals. Shouldn't do that. The remaining questions are all about how the standard updating works at all, and in what situations that can be used, and so by extension why it can't be used here.
2Vladimir_Nesov13y
Why does it fail on this problem, and less so on others? What feature of this problem makes it fail?

you should do a Bayesian update: the coin is 90% likely to have come up tails. So saying "yea" gives 0.9*1000 + 0.1*100 = 910 expected donation.

I'm not sure if this is relevant to the overall nature of the problem, but in this instance, the term 0.9*1000 is incorrect because you don't know if every other decider is going to be reasoning the same way. If you decide on "yea" on that basis, and the coin came up tails, and one of the other deciders says "nay", then the donation is $0.

Is it possible to insert the assumption that... (read more)

2cousin_it13y
I'm not sure if this is relevant either, but I'm also not sure that such an assumption is needed. Note that failing to coordinate is the worst possible outcome - worse than successfully coordinating on any answer. Imagine that you inhabit case 2: you see a good argument for "yea", but no equally good argument for "nay", and there's no possible benefit to saying "nay" unless everyone else sees something that you're not seeing. Framed like this, choosing "yea" sounds reasonable, no?
2Nornagest13y
There's no particular way I see to coordinate on a "yea" answer. You don't have any ability to coordinate with others while you're answering questions, and "nay" appears to be the better bet before the problem starts. It's not uncommon to assume that everyone in a problem like this thinks in the same way you do, but I think making that assumption in this case would reduce it to an entirely different and less interesting problem -- mainly because it renders the zero in the payoff matrix irrelevant if you choose a deterministic solution.
2datadataeverywhere13y
Because of the context of the original idea (an anthropic question), I think the idea is that all ten of you are equivalent for decision making purposes, and you can be confident that whatever you do is what all the others will do in the same situation.

Okay. If that is indeed the intention, then I declare this an anthropic problem, even if it describes itself as non-anthropic. It seems to me that anthropic reasoning was never fundamentally about fuzzy concepts like "updating on consciousness" or "updating on the fact that you exist" in the first place; indeed, I've always suspected that whatever it is that makes anthropic problems interesting and confusing has nothing to do with consciousness. Currently, I think that in essence it's about a decision algorithm locating other decision algorithms correlated with it within the space of possibilities implied by its state of knowledge. In this problem, if we assume that all deciders are perfectly correlated, then (I predict) the solution won't be any easier than just answering it for the case where all the deciders are copies of the same person.

(Though I'm still going to try to solve it.)

5Vladimir_Nesov13y
Sounds right, if you unpack "implied by its state of knowledge" to not mean "only consider possible worlds consistent with observations". Basically, anthropic reasoning is about logical (agent-provable even) uncertainty, and for the same reason very sensitive to the problem statement and hard to get right, given that we have no theory that is anywhere adequate for understanding decision-making given logical uncertainty. (This is also a way of explaining away the whole anthropic reasoning question, by pointing out that nothing will be left to understand once you can make the logically correlated decisions correctly.)

I would decide "nay". Very crudely speaking a fundamental change in viewpoints is involved. If I update on the new information regarding heads vs tales I must also adjust my view of what I care about. It is hard to describe in detail without writing an essay describing what probability means (which is something Wei has done and I would have to extend to allow for the way to describe the decision correctly if updating is, in fact, allowed.).

So let’s modify the problem somewhat. Instead of each person being given the “decider” or “non-decider” hat, we give the "deciders" rocks. You (an outside observer) make the decision.

Version 1: You get to open a door and see whether the person behind the door has a rock or not. Winning strategy: After you open a door (say, door A) make a decision. If A has a rock then say "yes". Expected payoff 0.9*1000 + 0.1*100 = 910 > 700. If A has no rock, say "no". Expected payoff: 700 > 0.9*100 + 0.1*1000 = 190.
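A quick check of the Version 1 numbers over the whole game (a sketch, with the outside observer's single answer standing in for the deciders' unanimous one):

```python
import random

def outside_observer(trials=200_000):
    total = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        rock_doors = random.sample(range(10), 9 if tails else 1)
        if 0 in rock_doors:                  # door A has a rock: bet on tails, say "yes"
            total += 1000 if tails else 100
        else:                                # door A is empty: bet on heads, say "no"
            total += 700
    return total / trials

print(outside_observer())   # ≈ 805 > 700: the lone outside observer really does gain by updating
```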

Version 2: The host (we’ll call him ... (read more)

The devil is in the assumption that everyone else will do the same thing as you for presumably the same reasons. "Nay" is basically a better strategy in general, though it's not always right. The 90% odds of tails are correctly calculated.

  • If you are highly confident that everyone else will say "yea", it is indeed correct to say "yea"
  • If you're highly confident that everyone will say the same thing but you're not sure what it is (an unlikely but interesting case), then make a guess (if it's 50-50 then "yea" is bette
... (read more)

Each decider will be asked to say "yea" or "nay". If the coin came up tails and all nine deciders say "yea", I donate $1000 to VillageReach. If the coin came up heads and the sole decider says "yea", I donate only $100. If all deciders say "nay", I donate $700 regardless of the result of the coin toss. If the deciders disagree, I don't donate anything.

Suppose that instead of donating directly (presumably for tax reasons), you instead divide the contribution up among the deciders, and then let them pass i... (read more)

Un-thought-out-idea: We've seen in the Dutch-book thread that probability and decision theory are inter-reliant so maybe

Classical Bayes is to Naive Decision Theory

as

Bayes-we-need-to-do-anthropics is to UDT

Actually now that I've said that it doesn't seem too ground-breaking. Meh.

This gives you new information you didn't know before - no anthropic funny business, just your regular kind of information - so you should do a Bayesian update: the coin is 90% likely to have come up tails.

Why 90% here? The coin is still fair, and anthropic reasoning should still remain, since you have to take into account the probability of receiving the observation when updating on it. Otherwise you become vulnerable to filtered evidence.

Edit: I take back the sentence on filtered evidence.

Edit 2: So it looks like the 90% probability estimate is actual... (read more)

5cousin_it13y
I don't understand your comment. There is no anthropic reasoning and no filtered evidence involved. Everyone gets told their status, deciders and non-deciders alike. Imagine I have two sacks of marbles, one containing 9 black and 1 white, the other containing 1 black and 9 white. I flip a fair coin to choose one of the sacks, and offer you to draw a marble from it. Now, if you draw a black marble, you must update to 90% credence that I picked the first sack. This is a very standard problem of probability theory that is completely analogous to the situation in the post, or am I missing something?
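(Explicitly: P(first sack | black) = 0.5*0.9 / (0.5*0.9 + 0.5*0.1) = 0.45/0.5 = 0.9, which is the same computation as P(tails | you are a decider).)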
0Vladimir_Nesov13y
The marbles problem has a clear structure where you have 20 possible worlds of equal probability. There is no corresponding structure with 20 possible worlds in our problem.
2cousin_it13y
There is. First there's a fair coinflip, then either 1 or 9 deciders are chosen randomly. This means you receive either a black or a white marble, with exactly the same probabilities as in my analogy :-)
0Vladimir_Nesov13y
Very interesting. So probability estimate is correct, and acausal control exerted by the possible decisions somehow manages to exactly compensate the difference in the probability estimates. Still need to figure out what exactly is being controlled and not counted by CDT.
0[anonymous]13y
No, you need 20 possible worlds for the analogy to hold. When you choose 9 deciders, they all live in the same possible world.
0Vladimir_Nesov13y
Sorry, I'm just being stupid, always need more time to remember the obvious (and should learn to more reliably take it). Edit: On further reflection, the intuition holds, although the lesson is still true, since I should try to remember what the intuitions stand for before relying on them. Edit 2: Nope, still wrong.

The error in the reasoning is that it is not you who makes the decision, but the COD (collective of the deciders), which might be composed of different individuals in each round and might be one or nine depending on the coin toss.

In every round the COD will get told that they are deciders but they don't get any new information because this was already known beforehand.

P(Tails| you are told that you are a decider) = 0.9

P(Tails| COD is told that COD is the decider) = P(Tails) = 0.5

To make it easier to understand why the "yes" strategy is wrong, i... (read more)

I'm retracting this one in favor of my other answer:

http://lesswrong.com/lw/3dy/solve_psykoshs_nonanthropic_problem/d9r4

So saying "yea" gives 0.9 1000 + 0.1 100 = 910 expected donation.

This is simply wrong.

If you are a decider then the coin is 90% likely to have come up tails. Correct.

But it simply doesn't follow from this that the expected donation if you say yes is 0.9*1000 + 0.1*100 = 910.

To the contrary, the original formula is still true: 0.5*1000 + 0.5*100 = 550

So you should still say "nay" and of course hope that everyone el... (read more)

[This comment is no longer endorsed by its author]
3gjm8y
Some of your asterisks are being interpreted as signifying italics when you wanted them to be displayed to signify multiplication. You might want to edit your comment and put backslashes in front of them or something. With backslashes: 1\*2\*3 appears as 1*2*3. Without backslashes: 1*2*3 appears as 123.

If we are considering it from an individual perspective, then we need to hold the other individuals fixed, that is, we assume everyone else sticks to the tails plan:

In this case:
10% chance of becoming a decider with heads and causing a $1000 donation
90% chance of becoming a decider with tails and causing a $100 donation

That is 0.1*1000 + 0.9*100 = $190, which is a pretty bad deal.

If we want to allow everyone to switch, the difficulty is that the other people haven't chosen their action yet (or even a set of actions with fixed probability), so we can't ... (read more)

1Chris_Leong6y
Damn, another one of my old comments and this one has a mistake. If we hold all of the other individuals fixed on the tails plan, then there's a 100% chance that if you choose heads that no money is donated. But also, UDT can just point out that Bayesian updates only work within the scope of problems solvable by CDT. When agents' decisions are linked, you need something like UDT and UDT doesn't do any updates. (Timeless decision theory may or may not do updates, but can't handle the fact that choosing Yay means that your clones choose Yay when they are the sole decider. If you could make all agents choose Yay when you were a decider, but all choose Nay when you weren't, you'd score higher on average, but of course the linkage doesn't work this way as their decision is based on what they see, not on what you see. This is the same issue that it has with Counterfactual Mugging).
2Chris_Leong5y
Further update: Do you want to cause good to be done or do you want to be in a world where good is done? That's basically what this question comes down to.

Precommitting to "Yea" is the correct decision.

The error: the expected donation for an individual agent deciding to precommit to "nay" is not 700 dollars. It's pr(selected as decider) * 700 dollars. Which is 350 dollars.

Why is this the case? Right here:

Next, I will tell everyone their status, without telling the status of others ... Each decider will be asked to say "yea" or "nay".

In all the worlds where you get told you are not a decider (50% of them - equal probability of 9:1 chance or a 1:9 chance) your precom... (read more)

1Vaniver13y
How can that be, when other people don't know whether or not you're a decider? Imagine the ten sitting in a room, and two people stand up and say "If I am selected as a decider, I will respond with 'yea'." This now forces everyone else to vote 'yea' always, since in only 5% of all outcomes (and thus 10% of the outcomes they directly control) does voting 'nay' increase the total donation (by 600*.1=60) whereas in the other 45% / 90% of cases it decreases the donation (by 1000*.9=900). The two people who stood up should then suddenly realize that the expected donation is now $550 instead of $700, and they have made everyone worse off by their declaration. (One person making the declaration also lowers the expected donation by a lower amount, but the mechanism is clearer with two people.)
2shokwave13y
It doesn't make much sense to say "I precommit to answering 'nay' iff I am not selected as a decider." But then ... hmm, yeah. Maybe I have this the wrong way around. Give me half an hour or so to work on it again. edit: So far I can only reproduce the conundrum. Damn.
3Vaniver13y
I think the basic error with the "vote Y" approach is that it throws away half of the outcome space. If you make trustworthy precommitments the other people are aware of, it should be clear that once two people have committed to Y the best move for everyone else is to vote Y. Likewise, once two people have committed to N the best move for everyone else is to vote N. But, since the idea of updating on evidence is so seductive, let's take it another step. We see that before you know whether or not you're a decider, E(N)>E(Y). Once you know you're a decider, you naively calculate E(Y)>E(N). But now you can ask another question- what are P(N->Y|1) and P(N->Y|9)? That is, the probability you change your answer from N to Y given that you are the only decider and given that you are one of the nine deciders. It should be clear there is no asymmetry there- both P(N->Y|1) and P(N->Y|9)=1. But without an asymmetry, we have obtained no actionable information. This test's false positive and false negative rates are aligned exactly so as to do nothing for us. Even though it looks like we're preferentially changing our answer in the favorable circumstance, it's clear from the probabilities that there's no preference, and we're behaving exactly as if we precommitted to vote Y, which we know has a lower EV.

Below is a very unpolished chain of thought, based on a vague analogy with the symmetrical state of two indistinguishable quantum particles.

When a participant is told ze is a decider, ze can reason: let's suppose that before the coin was flipped I changed places with someone else; will it make a difference? If the coin came up heads, then I'm the sole decider and there are 9 swaps which make a difference in my observations. If the coin came up tails then there's one swap that makes a difference. But if a swap doesn't make a difference it is effectively one world, so there's 20 worlds I... (read more)

-2cousin_it13y
Um, the probability-updating part is correct, don't spend your time attacking it.

Once I have been told I am a decider, the expected payouts are:

For saying Yea: $10 + P*$900
For saying Nay: $70 + Q*$700

P is the probability that the other 8 deciders if they exist all say Yea conditioned on my saying Yay, Q is the probability that the other 8 deciders if they exist all say Nay conditioned on my saying Nay.

For Yay to be the right answer, to maximize money for African Kids, we manipulate the inequality to find P > 89% - 78%*Q. The lowest value for P consistent with 0 <= Q <= 1 is P > 11%, which occurs when Q = 100%.

What are P & Q... (read more)

0Vaniver13y
This suggests to me that you actually can coordinate with everyone else, but the problem isn't clear on whether or not that's the case.

Thank you for that comment. I think I understand the question now. Let me restate it somewhat differently to make it, I think, clearer.

All 10 of us are sitting around trying to pre-decide a strategy for optimizing contributions.

The first situation we consider is labeled "First" and quoted in your comment. If deciders always say yay, we get $1000 for tails and $100 for heads, which gives us an expected $550 payout. If we all say nay, we get $700 for either heads or tails. So we should predecide to say "nay."

But then Charlie says "I think we should predecide to say "yay." 90% of the time I am informed I am a decider, there are 8 other deciders besides me, and if we all say "yay" we get $1000. 10% of the time I am informed I am a decider, we will only get $100. But when I am informed I am a decider, the expected payout is $910 if we all say yay, and only $700 if we all say nay."

Now Charlie is wrong, but it's no good just asserting it. I have to explain why.

It is because Charlie has failed to consider the cases when he is NOT chosen as a decider. He is mistakenly thinking that since in those cases he is not chosen as a... (read more)
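(Numerically: conditioned on Charlie being a decider, the all-"yay" payout is 0.9*1000 + 0.1*100 = 910; conditioned on his not being a decider it is 0.9*100 + 0.1*1000 = 190; and since each case has probability 1/2, the overall expectation is 0.5*910 + 0.5*190 = 550, exactly the group's pre-commitment number.)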

1Vaniver13y
Very good explanation; voted up. I would even go so far as to call it a solution.
0[anonymous]13y

Is this some kind of inverse Monty Hall problem? Counterintuitively, the second solution is incorrect.

If everyone pledges to answer "yea" in case they end up as deciders, you get 0.5*1000 + 0.5*100 = 550 expected donation.

This is correct. There are 10 cases in which we have a single decider and win 100 and there are 10 cases in which we have a single non-decider and win 1000, and these 20 cases are all equally likely.

you should do a Bayesian update: the coin is 90% likely to have come up tails.

By calculation (or by drawing a decision tree... (read more)

1cousin_it13y
Are you claiming that it's rational to precommit to saying "nay", but upon observing D1 (assuming no precommitment happened) it becomes rational to say "yea"?
0[anonymous]13y
Actually I find this problem quite perplexing, just like an optical illusion that makes you see different things. Yes, what I am claiming is that if they observe D1, they should say "yea". The point is that only player 1 knows whether D1 holds, and no other player can observe D1. Sure, there will be some player i for which Di holds, but you cannot calculate the conditional expectation as above since i is a random variable. The correct calculation in that case is as follows: Run the game and let i be a player that was selected as a decider. In that case the expected donation conditioned on the fact that Di holds is equal to the expected donation of 550, since i is a decider by definition and thus Di always holds (so the condition is trivial).

EDIT: Nevermind, I massively overlooked a link.

0[anonymous]13y
Yeah, my "source" link points into the comment thread of that post.

A note on the problem statement: You should probably make it more explicit that mixed strategies are a bad idea here (due to the 0-if-disagreement clause). I spent a bit of time wondering why you restricted to pure strategies and didn't notice it until I actually attempted to calculate an optimal strategy under both assumptions. (FWIW, graphing things, it seems that indeed, if you assume both scenarios are equally likely, pure "nay" is best, and if you assume 9 deciders is 90% likely, pure "yea" is best.)
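One way to check that parenthetical (a sketch, assuming every decider independently says "yea" with the same probability p, and weighting the coin either 50/50 or 10/90 as described):

```python
import numpy as np

def ev(p, p_tails):
    # Tails: $1000 if all nine say "yea", $700 if all nine say "nay", else $0.
    # Heads: $100 if the sole decider says "yea", $700 otherwise.
    tails = 1000 * p**9 + 700 * (1 - p)**9
    heads = 100 * p + 700 * (1 - p)
    return p_tails * tails + (1 - p_tails) * heads

p = np.linspace(0, 1, 10_001)
for p_tails in (0.5, 0.9):
    best = p[np.argmax(ev(p, p_tails))]
    print(p_tails, best, ev(best, p_tails))
# ≈ (0.5, 0.0, 700) and (0.9, 1.0, 910): pure "nay" and pure "yea" respectively.
```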

I'm assuming all deciders are coming to the same best decision so no worries about deciders disagreeing if you change your mind.

I'm going to be the odd-one-out here and say that both answers are correct at the time they are made… if you care far more (which I don't think you should) about African kids in your own Everett branch (or live in a hypothetical crazy universe where many worlds is false).

(Chapter 1 of Permutation City spoiler, please click here first if not read it yet, you'll be glad you did...): Jura lbh punatr lbhe zvaq nsgre orvat gbyq, lbh jv... (read more)

6Manfred13y
Why drag quantum mechanics into this? Taking the expected value gives you exactly the same thing as it does classically, and the answer is still the same. Nay is right and yea is wrong. You seem to be invoking "everett branch" as a mysterious, not-so-useful answer.
2MichaelHoward13y
I'm not trying to be mysterious. As far as I can see, there is a distinction. The expected value of switching to Yea from your point of view is affected by whether or not you care about the kids in the branches you are not yourself in. After being told your status, you're split:

* 1/20 are U=Decider, U=Heads. Yea is very bad here.
* 9/20 are U=Decider, U=Tails. Yea is good here.
* 9/20 are U=Passenger, U=Heads. Yea is very bad here.
* 1/20 are U=Passenger, U=Tails. Yea is good here.

After being told your status, the new information changes the expected values across the set of branches you could now be in, because that set has changed. It is now only the first 2 lines, above, and is heavily weighted towards Yea = good, so for the kids in your own branches, Yea wins.

But the other branches still exist. If all deciders must come to the same decision (see above), then the expected value of Yea is lower than Nay as long as you care about the kids in branches you're not in yourself - Nay wins. In fact, this expected value is exactly what it was before you had the new information about which branches you can now be in yourself.
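(Spelled out: across all four groups, the "Yea" policy's expected donation is (1/20)*100 + (9/20)*1000 + (9/20)*100 + (1/20)*1000 = 550, versus 700 for "Nay" -- the same comparison as before the update. Restricted to the first two lines and renormalized, it is 0.1*100 + 0.9*1000 = 910, which is where the pull towards "Yea" comes from.)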
0Manfred13y
Okay. You're bringing up quantum mechanics needlessly, though. This is exactly the same reasoning as cousin it went through in the post, and leads to exactly the same problem, since everyone can be expected to reason like you. If yea is only said because it generates better results, and you always switch to yea, then QED always saying yea should have better results. But it doesn't!
0MichaelHoward13y
But my whole point has been that yea can yield better results, iff you don't care about kids in other branches, which would make branches relevant. To show that branches are not relevant, tell me why that argument (that Yeah wins in this case) is wrong, don't just assert that it's wrong.
0Manfred13y
Since, as I've been saying, it's identical to the original problem, if I knew how to resolve it I'd already have posted the resolution. :) What can be shown is that it's contradictory. If yea is better for your "branch" when you vote yea, and everyone always follows this reasoning and votes yea, and the whole point of "branches" is that they're no longer causally linked, all the branches should do better. More simply, if yea is the right choice for every decider, it's because "always yea" actually does better than "always nay." But always yea is not better than always nay. If you would like to argue that there is no contradiction, you could try and find a way to resolve it by showing how a vote can be better every single time without being better all the time.
0MichaelHoward13y
It is the best choice for every decider who only cares about the kids in their Everett branches. It's not the best choice for deciders (or non-deciders, though they don't get a say) who care equally about kids across all the branches. Their preferences are as before. It's a really lousy choice for any non-deciders who only care about the kids in their Everett branches. Their expected outcome for "yea" just got worse by the same amount that first lot of deciders who only care about their kids got better. Unfortunately for them, their sole decider thinks he's probably in the Tails group, and that his kids will gain by saying "yea", as he is perfectly rational to think given the information he has at that time. There is no contradiction.
3GuySrinivasan13y
What does an entity that only cares about the kids in its Everett branches even look like? I am confused. Usually things have preferences about lotteries over outcomes, and an outcome is an entire multiverse, and these things are physically realized and their preferences change when the coinflip happens? How does that even work? I guess if you want you can implement an entity that works like that, but I'm not certain why we'd even call it the same entity at any two times. This sort of entity would do very well to cut out its eyes and ears so it never learns it's a decider and begin chanting "nay, nay, nay!" wouldn't it?
0MichaelHoward13y
Example 1: Someone that doesn't know about or believe in many worlds. They don't care about kids in alternate Everett branches, because to their mind those branches don't exist, so have zero value. In his mind, all value is in this single universe, with a coin that he is 90% sure landed Tails. By his beliefs, "yea" wins. Most people just don't think about entire multiverses.

Example 2: Someone who gets many worlds, but is inclined to be overwhelmingly more charitable to those that feel Near rather than Far, and to those that feel like Their Responsibility rather than Someone Else's Problem. I hear this isn't too uncommon :-)

Actual cutting aside, this is an excellent strategy. Upvoted :)
-2Manfred13y
I suppose I'll avoid repeating myself and try to say new things. You seem to be saying that when you vote yea, it's right, but when other people vote yea, it's wrong. Hmm, I guess you could resolve it by allowing the validity of logic to vary depending on who used it. But that would be bad. (Edited for clarity)
4MichaelHoward13y
I think we may be misunderstanding each-other, and possibly even arguing about different things. I'm finding it increasingly hard to think how your comments could possibly be a logical response to those you're responding to, and I suspect you're feeling the same. Serves me right, of course. When I do what? What are you even talking about?
0Manfred13y
Ah, sorry, that does look odd. I meant "when you vote 'yea,' it's okay, but when they vote 'yea' for exactly the same reasons, it's bad."
0MichaelHoward13y
Not sure why the ones voting "yea" would be me. I said I disagreed with those deciders who didn't care about kids in other branches. Anyway, they vote differently despite being in the same situation because their preferences are different.
0Manfred13y
Well, I give up. This makes so little sense to me that I have lost all hope of this going somewhere useful. It was interesting, though, and it gave me a clearer picture of the problem, so I regret nothing :D
0MichaelHoward13y
We're not perfect Bayesians (and certainly don't have common knowledge of each-other's beliefs!) so we can agree to disagree. Besides, I'm running away for a few days. Merry Xmas :)
-4mwengler13y
I think your reasoning here is correct and that it is as good an argument against the many worlds interpretation as any that I have seen.
3wedrifid13y
The best argument against the many worlds interpretation that you have seen is somewhat muddled thinking about ethical considerations with respect to normal coin tosses?
0mwengler13y
Yup, that's the best. I'd be happy to hear about the best you've seen, especially if you've seen better.
0wedrifid13y
Why do you assume I would be inclined to one up the argument? The more natural interpretation of my implied inference is in approximately the reverse direction. If the best argument against MWI that a self professed physicist and MWI critic has ever seen has absolutely zero persuasive power then that is rather strong evidence in favor.
7mwengler13y
I am new to this board and come in with a "prior" of rejecting MWI beyond the tiniest amount on the basis of, among other things, conservation of energy and mass. (Where do these constantly forming new worlds come from?) MWI seems more like a mapmaker's mistake than a description of the territory, which manifestly has only one universe in it every time I look. I was inviting you to show me with links or description whatever you find most compelling, if you could be bothered to. I am reading main sequence stuff and this is one of the more interesting puzzles among Less Wrong's idiosyncratic consensi.
2XiXiDu13y
Here is a subsequent discussion about some experimental test(s) of MWI. Also here is a video discussion between Scott Aaronson and Yudkowsky (starting at 38:11). More links on topic can be found here. ETA Sorry, I wanted to reply to another of your comments, wrong tab. Anyway.
0GuySrinivasan13y
Wikipedia points to a site that says conservation of energy is not violated. Do you know if it's factually wrong or what's going on here? (if so can you update wikipedia? :D)

Q22 Does many-worlds violate conservation of energy?

First, the law of conservation of energy is based on observations within each world. All observations within each world are consistent with conservation of energy, therefore energy is conserved. Second, and more precisely, conservation of energy, in QM, is formulated in terms of weighted averages or expectation values. Conservation of energy is expressed by saying that the time derivative of the expected energy of a closed system vanishes. This statement can be scaled up to include the whole universe. Each world has an approximate energy, but the energy of the total wavefunction, or any subset of, involves summing over each world, weighted with its probability measure. This weighted sum is a constant. So energy is conserved within each world and also across the totality of worlds. One way of viewing this result - that observed conserved quantities are conserved across the totality of worlds - is to note that new worlds are not created by the action of the wave equation, rather existing worlds are split into successively "thinner" and "thinner" slices, if we view the probability densities as "thickness".
0MichaelHoward13y
I don't understand. How is my argument an argument against the many worlds interpretation? (Without falling into the logical fallacy of Appeal to Consequences).
0mwengler13y
It would seem to suggest that if I want to be rich I should buy a bunch of lottery tickets and then kill myself when I don't win. I have not seen the local discussion of MWI and everett branches, but my "conclusion" in the past has been that MWI is a defect of the map maker and not a feature of the territory. I'd be happy to be pointed to something that would change my mind or at least rock it a bit, but for now it looks like angels dancing on the heads of pins. Has somebody provided an experiment that would rule MWI in or out? If so, what was the result? If not, then how is a consideration of MWI anything other than confusing the map with the territory? If I have fallen into Appeal to Consequences with my original post, then my bad.
0MichaelHoward13y
I don't think that's the case, but even if it were, using that to argue against the likelihood of MWI would be Appeal to Consequences. That's what I used to think :) If you're prepared for a long but rewarding read, Eliezer's Quantum Physics Sequence is a non-mysterious introduction to quantum mechanics, intended to be accessible to anyone who can grok algebra and complex numbers. Cleaning up the old confusion about QM is used to introduce basic issues in rationality (such as the technical version of Occam's Razor), epistemology, reductionism, naturalism, and philosophy of science. For a shorter sequence that concentrates on why MWI wins, see And the Winner is... Many-Worlds! The idea is that MWI is the simplest explanation that fits the data, by the definition of simplest that has proven to be most useful when predicting which of different theories that match the same data is actually correct.

We could say that the existence of pre-agreed joint strategies invalidates standard decision theory.

It's easy to come up with scenarios where coordination is so valuable that you have to choose not to act on privileged information. For example, you're meeting a friend at a pizzeria, and you spot a better-looking pizzeria two blocks away, but you go to the worse one because you'd rather eat together than apart.

Psy-Kosh's problem may not seem like a coordination game, but possibilities for coordination can be subtle and counter-intuitive. See, for example, ... (read more)

Stream of consciousness style answer. Not looking at other comments so I can see afterwards if my thinking is the same as anyone else's.

The argument for saying yea once one is in the room seems to assume that everyone else will make the same decision as me, whatever my decision is. I'm still unsure whether this kind of thinking is allowed in general, but in this case it seems to be the source of the problem.

If we take the opposite assumption, that the other decisions are fixed, then the problem depends on those decisions. If we assume that all the others (i... (read more)

0cousin_it13y
I like how you apply game theory to the problem, but I don't understand why it supports the answer "nay". The calculations at the beginning of your comment seem to indicate that the "yea" equilibrium gives a higher expected payoff than the "nay" equilibrium, no?
1benelliott13y
If all ten individuals were discussing the problem in advance they would conclude that nay was better, so, by the rule I set up, when faced with the problem you should say nay. The problem comes from mixing individual thinking, where you ask what is the best thing for you to do, with group thinking (no relation to groupthink), where you ask what is the best thing for the group to do. The rule I suggested can be expressed as "when individual thinking leaves you with more than one possible solution, use group thinking to decide between them". Updating on the fact that you are a decider is compulsory in individual thinking but forbidden in group thinking, and problems arise when you get confused about this distinction.
0[anonymous]13y

I'm torn whether I should move this post to LW proper. What do people think?

-2[anonymous]11y

Initially either 9 or 1 of the 10 people will have been chosen with equal likelihood, meaning I had a 50% chance of being chosen. If being chosen means I should find 90% likelihood that the coin came up tails, then not being chosen should mean I find 90% likelihood that the coin came up heads (which it does). If that were the case, I'd want nay to be what the others choose (0.9*100 + 0.1*1000 = 190 < 700). Since both branches are equally likely (initially), and my decision of what to choose in the branch in which I choose corresponds (presumably) to t... (read more)

This struck an emotional nerve with me, so I'm going to answer as if this were an actual real-life situation, rather than an interesting hypothetical math problem about maximizing expected utility.

IMHO, if this was a situation that occurred in real life, neither of the solutions is correct. This is basically another version of Sophie's Choice. The correct solution would be to punch you in the face for using the lives of children as pawns in your sick game and trying to shift the feelings of guilt onto me, and staying silent. Give the money or not as you se... (read more)