For the second question:

Imagine there are many planets with a civilization on each planet. On half of all planets, for various ecological reasons, plagues are more deadly and have a 2/3 chance of wiping out the civilization in its first 10000 years. On the other planets, plagues only have a 1/3 chance of wiping out the civilization. The people don't know if they're on a safe planet or an unsafe planet.

After 10000 years, 2/3 of the civilizations on unsafe planets have been wiped out and 1/3 of those on safe planets have been wiped out. Of the remaining civilizations, 2/3 are on safe planets, so the fact that your civilization survived for 10000 years is evidence that your planet is safe from plagues. You can just apply Bayes' rule:

P(safe planet | survive) = P(safe planet) P(survive | safe planet) / P(survive) = (0.5 * 2/3) / (0.5 * 2/3 + 0.5 * 1/3) = (1/3) / (1/2) = 2/3
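
Here is the same update as a minimal Python sketch, using only the numbers from the thought experiment:

```python
# Bayes' rule for the planet thought experiment (numbers from the setup above).
p_safe = 0.5                    # prior: half of all planets are safe
p_survive_given_safe = 2 / 3    # plagues wipe out 1/3 of civilizations on safe planets
p_survive_given_unsafe = 1 / 3  # plagues wipe out 2/3 of civilizations on unsafe planets

# Total probability of a civilization surviving its first 10000 years.
p_survive = p_safe * p_survive_given_safe + (1 - p_safe) * p_survive_given_unsafe

# Posterior probability of being on a safe planet, given survival.
p_safe_given_survive = p_safe * p_survive_given_safe / p_survive
print(p_safe_given_survive)     # 0.666... = 2/3
```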

EDIT: on the other hand, if logical uncertainty is involved, it's a lot less clear. Suppose either all planets are safe or none of them are, based on the truth-value of a logical proposition (say, the trillionth digit of pi being odd) that is estimated to be 50% likely a priori. Should the fact that your civilization survived be used as evidence about the logical coin flip? SSA suggests no; SIA suggests yes, because more civilizations survive when the coin flip makes all planets safe. On the other hand, if we changed the thought experiment so that no civilization survives when the logical proposition is false, then the fact that we survived is proof that the proposition is true.
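
Here is a sketch of the logical-coin variant, on the reading used above: "SIA" weights each possible world by the number of surviving civilizations, while "SSA" does not update on survival at all. The code is an illustration of that reading, not a full treatment of either assumption:

```python
# Logical-coin variant: either all planets are safe or none are,
# depending on a proposition with 50% prior probability.
p_true = 0.5              # prior that the proposition is true (all planets safe)
survive_if_true = 2 / 3   # fraction of civilizations surviving if all planets are safe
survive_if_false = 1 / 3  # fraction surviving if no planet is safe

# SIA-style update: worlds with more survivors get proportionally more weight.
sia_posterior = (p_true * survive_if_true) / (
    p_true * survive_if_true + (1 - p_true) * survive_if_false
)
print(sia_posterior)      # 2/3: survival counts as evidence that the proposition is true

# SSA-style answer, on this reading: survival tells us nothing about the coin.
ssa_posterior = p_true    # stays at the prior, 0.5
print(ssa_posterior)
```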

Yes! I thought of this too. So, anthropic bias does not give us a reason to ignore evidence; it merely changes the structure of specific inferences. We find that we are in an interestingly bad position to estimate those probabilities (the extinction probability will appear to be 0% if we look only at our own history). Yet our survival does seem to provide some evidence of higher survival probabilities; we just need to do the math carefully...

2 Anthropic Questions

by abramdemski · 2 min read · 26th May 2012 · 31 comments



I have just finished reading the section on anthropic bias in Nassim Taleb's book, The Black Swan. In general, the book is interesting to compare to the sort of things I read on Less Wrong; its message is largely very similar, except less Bayesian (and therefore less formal; at times slightly anti-formal, arguing against misleading math).

Two points concerning anthropic weirdness.

First:

If we win the lottery, should we really conclude that we live in a holodeck (or some such)? From real-life anthropic weirdness:

Pity those poor folk who actually win the lottery!  If the hypothesis "this world is a holodeck" is normatively assigned a calibrated confidence well above 10^-8, the lottery winner now has incommunicable good reason to believe they are in a holodeck.  (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)

It seems to me that the right way of approaching the question is: before buying the lottery ticket, what belief-forming strategy would we prefer ourselves to have? (Ignore the issue of why we buy the ticket, of course.) Or, slightly different: what advice would you give to other people (for example, if you're writing a book on rationality that might be widely read)?

"Common sense" says that it would be quite silly to start believing some strange theory, just because I win the lottery. However, Bayes says that if we assign greater than 10-8 prior probability to "strange" explanations of getting a winning lottery ticket, then we should prefer them. In fact, we may want to buy a lottery ticket to test those theories! (This would be a very sensible test, which would strongly tend to give the right result.)

However, as a society, we would not want lottery-winners to go crazy. Therefore, we would not want to give the advice "if you win, you should massively update your probabilities".

(This is similar to the idea that we might be persuaded to defect in the Prisoner's Dilemma if we are maximizing our personal utility, but if we are giving advice about rationality to other people, we should advise them that cooperating is the optimal strategy. In a somewhat unjustified leap, I suppose we should take the advice we would give to others in such matters. But that position seems to be widely accepted here already.)

On the other hand, if we were in a position to give advice to people who might really be living in a simulation, it would suddenly be good advice!

 

Second:

Taleb discusses an interesting example of anthropic bias:

Apply this reasoning to the following question: Why didn't the bubonic plague kill more people? People will supply quantities of cosmetic explanations involving theories about the intensity of the plague and "scientific models" of epidemics. Now, try the weakened causality argument that I have just emphasized in this chapter: had the bubonic plague killed more people, the observers (us) would not be here to observe. So it may not necessarily be the property of diseases to spare us humans.

You'll have to read the chapter if you want to know exactly what "argument" is being discussed, but the general point is (hopefully) clear from this passage. If an event was a necessary prerequisite for our existence, then we should not take our survival of that event as evidence of a high probability of surviving such events. If we remember surviving a car crash, we should not take that to increase our estimate of the probability of surviving a car crash. (Instead, we should look at other car crashes.)
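
A toy simulation makes the bias concrete. The 0.7 survival rate is an arbitrary placeholder; the point is only that estimating from crashes you remember (and therefore survived) always yields 1.0, while looking at other people's crashes recovers the true rate:

```python
import random

random.seed(0)
true_survival_rate = 0.7  # arbitrary placeholder, not a real statistic
crashes = [random.random() < true_survival_rate for _ in range(100_000)]

# Estimate from other people's crashes: roughly unbiased.
population_estimate = sum(crashes) / len(crashes)

# Estimate from crashes you personally remember: you only remember the ones
# you survived, so every data point is a survival.
remembered = [survived for survived in crashes if survived]
anthropic_estimate = sum(remembered) / len(remembered)

print(population_estimate)  # ~0.7
print(anthropic_estimate)   # 1.0
```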

This conclusion is somewhat troubling (as Taleb admits). It means that the past is fundamentally different from the future! The past will be a relatively "safe" place, where every event has led to our survival. The future is alien and unforgiving. As is said in the story The Hero With A Thousand Chances:

"The Counter-Force isn't going to help you this time.  No hero's luck.  Nothing but creativity and any scraps of real luck - and true random chance is as liable to hurt you as the Dust.  Even if you do survive this time, the Counter-Force won't help you next time either.  Or the time after that.  What you remember happening before - will not happen for you ever again."

Now, Taleb is saying that we are that hero. Scary, right?

On the other hand, it seems reasonable to be skeptical of a view which presents difficulties generalizing from the past to the future. So. Any opinions?
