
No Anthropic Evidence

by Vladimir_Nesov · 1 min read · 23rd Sep 2012 · 34 comments



Closely related to: How Many LHC Failures Is Too Many?

Consider the following thought experiment. At the start, an "original" coin is tossed, but not shown. If it came up "tails", a gun is loaded; otherwise it is not. After that, you are offered a large number of decision rounds. In each round you can either quit the game or toss a coin of your own. If your coin falls "tails", the gun is triggered: depending on how the original coin fell (that is, on whether the gun was loaded), you are either shot or free to go (the gun doesn't fire if the original coin was "heads"). If your coin falls "heads", you survive the round and play on. If you quit the game, you get shot at the exit with probability 75%, independently of everything that happened during the game (and of the original coin). The question is: should you keep playing or quit if you observe, say, 1000 "heads" in a row?
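To make the decision concrete, here is a minimal Monte Carlo sketch of the game, assuming fair coins throughout (the function name `play` and the strategy parameter `keep_rounds` are illustrative, not part of the thought experiment). It compares strategies of the form "toss your own coin for up to N rounds, then quit":

```python
import random

def play(keep_rounds):
    """One game: toss your own coin for up to keep_rounds rounds, then quit.
    Returns True if the player survives."""
    loaded = random.random() < 0.5            # original coin: "tails" loads the gun
    for _ in range(keep_rounds):
        if random.random() < 0.5:             # your coin falls "tails": gun is triggered
            return not loaded                 # shot if loaded, free to go otherwise
    return random.random() >= 0.75            # quit: shot at the exit with probability 75%

trials = 100_000
for rounds in (0, 1, 10, 100):
    survival = sum(play(rounds) for _ in range(trials)) / trials
    print(f"quit after {rounds:>3} rounds: survival ~ {survival:.3f}")
```

Playing longer pushes survival toward 50% (you almost surely toss "tails" at some point, which either kills you or sets you free), while quitting immediately leaves it at 25%.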

Intuitively, it seems as if 1000 "heads" in a row is "anthropic evidence" for the original coin being "tails": such a long sequence of "heads", the argument goes, can only be explained by the fact that tossing "tails" would have killed you. If you knew that the original coin was "tails", then to keep playing would be to face the certainty of eventually tossing "tails" and getting shot, which is worse than quitting with its 75% chance of death. Thus, it seems preferable to quit.

On the other hand, each "heads" you observe doesn't distinguish the hypothetical where the original coin was "heads" from one where it was "tails". The first round can be modeled by a four-element probability space {HH, HT, TH, TT}, where the first letter is the original coin and the second is the coin-for-the-round, so HH and HT correspond to the original coin being "heads", and HH and TH to the coin-for-the-round being "heads". Observing "heads" is the event {HH, TH}, which leaves the posterior probabilities of "heads" and "tails" for the original coin at 50% each. Thus, a round that ends in "heads" doesn't change your knowledge about the original coin, even after 1000 rounds of this kind. And since you only get shot if the original coin was "tails", your probability of dying only approaches 50% as the game continues, which is better than the 75% from quitting the game.
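This symmetry can be checked numerically (a sketch; the function name and trial count are arbitrary): among simulated players whose first k tosses all come up "heads", about half have an original coin of "tails", for any k. Note that the conditioning already excludes every run containing a "tails", so it makes no difference that some of those excluded players would have died while others walked free:

```python
import random

def posterior_tails_given_heads(k, trials=1_000_000):
    """Among simulated games whose first k own-coin tosses all come up heads,
    estimate the fraction in which the original coin was tails."""
    runs_of_heads = 0
    tails_among_them = 0
    for _ in range(trials):
        original_tails = random.random() < 0.5
        if all(random.random() < 0.5 for _ in range(k)):   # k "heads" in a row
            runs_of_heads += 1
            tails_among_them += original_tails
    return tails_among_them / runs_of_heads

for k in (1, 5, 10):
    print(f"P(original 'tails' | {k} heads observed) ~ {posterior_tails_given_heads(k):.3f}")
```

Each estimate comes out near 0.5, matching the exact calculation on the {HH, HT, TH, TT} space.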

(See also the comments by simon2 and Benja Fallenstein on the LHC post, and this thought experiment by Benja Fallenstein.)

The result of this exercise generalizes: the counterfactual possibility of dying doesn't in itself influence the conclusions that can be drawn from observations made within the hypotheticals where one didn't die. Only if the possibility of dying changes the probability of the observations that did take place can that possibility be detected. For example, if in the above exercise a loaded gun caused the coin to become biased in a known way, only then would it be possible to detect the state of the gun (1000 "heads" would then imply either that the gun is likely loaded or that it's likely not, depending on the direction of the bias).
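In that biased-coin variant, the update is ordinary Bayes, with no anthropic correction needed. Here is a sketch, where the assumption that a loaded gun biases each toss to 60% "heads" is an arbitrary choice for illustration:

```python
def posterior_loaded(n_heads, p_if_loaded=0.6, p_if_unloaded=0.5, prior=0.5):
    """Posterior probability that the gun is loaded after observing n_heads
    heads in a row, when a loaded gun biases each toss in a known way."""
    odds = (prior / (1 - prior)) * (p_if_loaded / p_if_unloaded) ** n_heads
    return odds / (1 + odds)

for n in (0, 10, 100, 1000):
    print(f"after {n:>4} heads: P(loaded) ~ {posterior_loaded(n):.6f}")
```

With any known bias toward "heads", a long run of "heads" drives the posterior toward certainty that the gun is loaded; with a bias toward "tails", the same run would drive it toward certainty that it is not.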

Comments

I think you're wrong. Suppose 1,000,000 people play this game, and each of them flips their own coin 1000 times. We would expect about 500,000 to survive, and essentially all of the survivors would be players whose original coin came up "heads". Therefore, P(the original coin was "heads" | I haven't died after 1000 flips) ≈ 1.

This is actually quite similar to the Sleeping Beauty problem. You have a higher chance of surviving (analogous to waking up more times) if the original coin was "heads". So, just as the fact that you woke up is evidence that you were scheduled to wake up more times in the Sleeping Beauty problem, the fact that you survived is evidence that you were "scheduled to survive" more in this problem.

Quoting the post: "each 'heads' you observe doesn't distinguish the hypothetical where the original coin was 'heads' from one where it was 'tails'." This is the same incorrect logic that leads people to say that you "don't learn anything" between falling asleep and waking up in the Sleeping Beauty problem.

I believe the only coherent definition of Bayesian probability in anthropic problems is that P(H | O) is the proportion of observers for whom H is true, among all observers who have observed O, in a very large universe where the experiment is repeated many times. This definition naturally leads both to the 2/3 probability in the Sleeping Beauty problem and to "anthropic evidence" in this problem. It is also implied by the many-worlds interpretation in the case of quantum coins, since then all those observers really do exist.

It's often pointless to argue about probabilities, and sometimes no assignment of probability makes sense, so I was careful to phrase the thought experiment as a decision problem. Which decision (strategy) is the right one?