In the most recent episode of Rationally Speaking ("Humanity on the precipice", 10 Dec. 2021), Julia Galef and Toby Ord discuss the risk of a nuclear exchange during the Cold War:

J.G. [55:28]: "If you had asked me to put a probability on each case, like taking nuclear near misses—for example, like the Cuban Missile Crisis or the time the Russians could have retaliated against us if not for Arkhipov—I would have put a probability of maybe 25% on a lot of those. But it starts to add up. If you take all of the near misses that I would've assigned 25% to, they add up enough that it starts to look really surprising that we didn't end up with a nuclear war, which kind of throws into question my ability to assign good probabilities to all of these near misses."

T.O. [55:02]: "You're right that if your attempts to do this are producing very large numbers, then that does suggest that the attempts might be going wrong, and I think it's very easy to have them go wrong; I think it's an extremely difficult thing, and I think there's been very little written, really, about how to do this kind of retrospective prediction or trying to assign probabilities to past events where we actually know what happened."

If anthropic effects were distorting the historical record, this is exactly what you'd expect to see: Julia's probabilities on nuclear exchange would seem reasonable individually but too high in aggregate. Why does neither of them mention the anthropics explanation? Is it somehow common knowledge that this explanation doesn't hold water?
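For concreteness, here is a minimal sketch of both halves of the puzzle, using made-up numbers (ten near misses and a toy two-hypothesis prior, none of which come from the episode): the naive aggregation that makes a long run of 25% risks look surprising, and a fully anthropic update under which the quiet historical record would carry no information at all.

```python
# Illustrative arithmetic only; the event count, per-event probabilities,
# and priors below are assumptions, not figures from the podcast.
n_events = 10               # hypothetical number of independent near misses
hypotheses = {0.25: 0.5,    # prior 0.5 that each near miss had a 25% chance of escalating
              0.025: 0.5}   # prior 0.5 that each had only a 2.5% chance

# Naive update: treat the quiet historical record as ordinary evidence.
naive = {p: prior * (1 - p) ** n_events for p, prior in hypotheses.items()}
total = sum(naive.values())
naive = {p: v / total for p, v in naive.items()}
print("naive posterior:", naive)        # strongly favours the 2.5% hypothesis

# Fully anthropic update: if observers like us exist only in the no-war
# branches, the likelihood of "we observe no war" is 1 under both hypotheses,
# so the posterior equals the prior and the quiet record tells us nothing.
print("anthropic posterior:", hypotheses)
```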

The Precipice has a brief section on anthropics (part of Chapter 7), in which Toby says, "From what we know, it doesn't look like [anthropic] selection effects have distorted the historical record much, but there are only a handful of papers on the topic and some of the methodological issues have yet to be resolved." The only nearby citation is this paper by Cirkovic, Sandberg & Bostrom, who seem to conclude that selection effects from anthropogenic hazards are difficult to determine; they don't make any argument that those effects are small.

Also related: https://www.lesswrong.com/posts/EkmeEAB646Yf2DNzW/anthropics-doesn-t-explain-why-the-cold-war-stayed-cold 
 

Anthropic selection is not magic. More precisely, it is the least amount of magic necessary to ensure our existence.

If a miracle is necessary for you to survive, but either a smaller miracle or a (much) greater miracle can do the job, then if you survive, you should expect that it was because of the smaller miracle.

The probabilities of multiple independent improbable events multiply, so similarly, if you can be saved either by one miracle alone or by ten independent miracles of comparable size, then if you survive, you should expect to have been saved by the one miracle.

(It all adds up to the nearest possible approximation of normality.)
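A toy version of the one-miracle-versus-ten comparison above, with an arbitrary 1% "miracle" chosen only for illustration:

```python
# Toy numbers: a "miracle" here is any event with probability 0.01.
p = 0.01
one_miracle = p            # survival route 1: a single miracle
ten_miracles = p ** 10     # survival route 2: ten independent comparable miracles

# Conditional on having survived by one of these two routes, the single
# miracle accounts for essentially all of the probability mass.
print(one_miracle / (one_miracle + ten_miracles))   # ~ 1 - 1e-18
```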

So my guess is that the argument implicitly made by Julia and/or Toby (I haven't read the book or listened to the episode) is that although it is technically possible that we avoided nuclear extinction through a sequence of lucky outcomes, when you multiply the probabilities the result is so small that something else is probably responsible for the outcome (perhaps also some kind of luck, but smaller in magnitude than this entire sequence combined).

In other words, you should not accept the answer "you were saved by a miracle of a probability P" before you have sufficiently explored possible miracles with probabilities greater than P. (Even if you believe that you were saved by anthropic magic.)

When I try to think anthropically about nuclear extinction specifically, it seems to me that there are also possible outcomes other than "no nuclear war" and "humanity extinct". Like situations where (most of) the USA and USSR (and a few other countries) were nuked, but many places were not, people survived there, and even the nuclear winter or whatever didn't literally kill all humans. If there were many nuclear "near misses", then there should also be many alternative histories like these. After enough such branches, even with anthropic reasoning, the probability of finding ourselves as survivors in a post-apocalyptic world is greater than the probability of finding ourselves in a world without a nuclear war. Not sure about exact numbers, though. The opposing argument is that the world without a nuclear war contains many more humans than any individual post-apocalyptic world.
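A toy observer-weighted version of that comparison; every number is an assumption chosen only to show the shape of the argument, and branches ending in outright extinction are ignored for simplicity.

```python
# All numbers are illustrative assumptions.
n_events = 10            # hypothetical number of near misses
p_war = 0.25             # hypothetical chance each one escalates to war
surviving_frac = 0.3     # hypothetical fraction of people alive in a post-war branch

p_no_war = (1 - p_war) ** n_events    # the single branch with no war
p_some_war = 1 - p_no_war             # all branches with at least one war

# Weight each kind of branch by its observer count (no-war population = 1).
w_no_war = p_no_war * 1.0
w_post_war = p_some_war * surviving_frac

print(w_no_war / (w_no_war + w_post_war))   # ~ 0.17 with these numbers
# With these assumptions most surviving observers are in post-war branches,
# matching the point above; the pushback (the no-war world has many more
# humans than any individual post-war world) corresponds to pushing
# surviving_frac down, which shifts the weight back toward the no-war branch.
```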

Thanks. By "smaller miracle," are you referring to the case that Julia's estimates are wrong? Or something else?

Agreed that expecting a high fraction of observers to survive a nuclear war makes anthropic selection a less-appealing explanation. Would be interesting to see the numbers.

Viliam:
I interpret Julia's "I would have put a probability of maybe 25% on a lot of those. But it starts to add up. [...] which kind of throws into question my ability to assign good probabilities to all of these near misses." to mean that the 25% estimates are either wrong or not independent, because there were too many such events, and the multiplied probability of being lucky at all of them is just too small. So there is probably some other explanation... but Julia in the quoted text does not propose a specific alternative. She just says that if there had been only one such event in history, then the explanation "25% extinction, 75% we got lucky" would be a good explanation of our current state; but now that she knows there were actually many such events, it does not seem like a good explanation anymore.

Nuclear war doesn't need to produce human extinction to cause anthropic effects. In a world with such a war there would (presumably) be fewer universities and fewer people interested in anthropics, as most such centres would be destroyed during the nuclear exchange and people would be more occupied with survival.

A global internet is also less likely to exist after a large-scale nuclear war, which means less exchange of ideas and a lower chance for things like LessWrong to exist, or fewer people participating in them.

I guesstimate that in a world after a nuclear exchange with a billion deaths, there would be ten times fewer people interested in anthropics.

Thus it is not very surprising to find oneself in a world where a large-scale nuclear war never happened, but that is not evidence that nuclear war causes extinction.
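A sketch of the update this answer implies, treating the "ten times fewer" guess as a likelihood ratio; the 0.5 prior is a placeholder of mine, not a number from the answer.

```python
# The 0.5 prior is a placeholder; the 0.1 factor is the "ten times fewer" guess above.
prior_war = 0.5   # prior probability that a large-scale nuclear war happened by now
obs_ratio = 0.1   # relative number of people discussing anthropics in a post-war world

# Likelihoods of the observation "I find myself somewhere like this, discussing
# anthropics", under each hypothesis (no-war world normalised to 1).
like_no_war = 1.0
like_war = obs_ratio

posterior_war = (prior_war * like_war) / (
    prior_war * like_war + (1 - prior_war) * like_no_war
)
print(posterior_war)   # ~ 0.09: our situation favours "no war", yet war need not mean extinction
```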

I agree with you that observational selection effects are important here.

Our causal model of reality tells us that if there were a nuclear exchange between the superpowers, there is a good chance we or our ancestors would've died, and once dead, would've been unable to engage in this conversation (or to produce offspring to engage in this conversation). This information needs to be taken into account in certain situations, e.g., when estimating the likelihood of a nuclear war in the future.

In particular, if nuclear weapons suddenly appeared for the first time today, we would have some guess, some probability of their being used in anger (in the future). Our knowledge that they've existed since 1946 without having been used lowers that probability. I.e., it is evidence against their being used in the future, but it is weaker evidence than it would have been if we were immune to the effects of the weapons, i.e., if we were equally able to observe their past use as we are their past non-use.
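One way to sketch that weakening; the prior, the two hypotheses, and the survival chance below are all illustrative assumptions, not figures from this answer.

```python
# All numbers are illustrative assumptions.
hyps = {"safe": 0.1, "dangerous": 0.9}   # P(a war would have happened by now | hypothesis)
prior = {"safe": 0.5, "dangerous": 0.5}
s = 0.2   # chance that, given a war, an observer like us would still exist to look back

def posterior(selected):
    """Posterior over the hypotheses after observing a quiet historical record."""
    post = {}
    for h, p_war in hyps.items():
        if selected:
            # Observer who might not have survived a war: condition on existing at all.
            like = (1 - p_war) / ((1 - p_war) + p_war * s)
        else:
            # Observer immune to the weapons: the quiet record is ordinary evidence.
            like = 1 - p_war
        post[h] = prior[h] * like
    z = sum(post.values())
    return {h: round(v / z, 3) for h, v in post.items()}

print(posterior(selected=False))  # ~ {'safe': 0.90, 'dangerous': 0.10}: strong update
print(posterior(selected=True))   # ~ {'safe': 0.73, 'dangerous': 0.27}: weaker update
```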

Moreover, suppose you, your parents, and your grandparents depended for your income on ownership of a sporting-goods store in Washington, D.C. D.C. is of course likely to be targeted with many bombs in any nuclear war between the superpowers, and any owner of a sporting-goods store is likely to stay in or near the store whenever looting is likely (at least if their family's income depends on the store). Then the fact that there was no nuclear war between 1946 and today is less informative to you than it is to someone whose ancestors lived somewhere unlikely to be targeted (e.g., New Zealand). The person in New Zealand can of course communicate their observations to you, but if you're being sufficiently rational, you have to consider their communications biased in the same way that you have to consider the communications of, e.g., a drug company biased if you suspect that the company is likely to withhold information showing its drug to be harmful.

I think it's only worth considering anthropic selection as a meaningful hypothesis for survival probabilities that are less than about 10^-9, and where we are very confident about that upper bound. Nuclear war doesn't come anywhere near meeting either of those criteria.

I do think we're in one of the few worlds that made it through the Cold War intact. I consider our existence weak evidence for Many Worlds and quantum immortality.

This gives a reason to be subjectively optimistic about the future. But, I hope, not complacent. I find it hard to shake off the notion that the number of worlds in which civilization survives matters. But I also find it hard to explain why.