I agree we risk making errors in reasoning about the future based on the past due to selection effects, but for my taste this post skims the surface of anthropic arguments without really engaging them and thus makes confusing claims. To pick on just one to unravel the threads, you say:
Our anthropic plot armor is gone. We'd better figure out how to survive without it.
But on what basis is this assertion made? If we live in a universe where quantum immortality explains why we are still alive, for example, then our "plot armor" never runs out, or runs out only if causality forces our measure to zero. Maybe you want to claim that this is what's happening, but that would require an argument.
If you claim the anthropic principle applies in some other way, then similarly you'd need to give an argument for why the plot armor has run out.
I think this post would have been better if it had just stuck to being an explainer about anthropic selection effects, or if it had engaged more completely with anthropic arguments to better support some of your claims.
Toy models show that we're wearing alive-tinted glasses.
In discussions of existential risk or potential apocalypses, a common refrain is something along the lines of "We've been fine before, so we'll be fine again." "Sure," some argue, "we've had some close calls in the past, but we've always been fine in the end." Maybe they argue that, though humans are complacent enough to let things get close to the brink, once disaster is near, human spirit and ingenuity always meet the moment. Maybe they argue that things that looked like potential disasters at the time were destined to work out for reasons that were difficult or impossible to see at the time.
As an assessment of past events, this is hardly unreasonable on its face. Nuclear war[1], for instance, was indeed averted, probably by a single human's courage, wisdom, and willingness to defy orders. Malthusian predictions of mass famine were averted by the green revolution and demographic transition[2]. And sure, there are groups of people who've faced annihilation or near-annihilation. Native Americans had their population cut by roughly an order of magnitude by disease when Europeans arrived. Most German Jews who didn't flee were murdered during WWII. Indeed, there are species of humans that have gone extinct: Neanderthals didn't evolve into modern humans; they were driven into oblivion by competition from Homo sapiens. But it's happened to other people, they concede, never to us, and never to humanity as a whole. Disasters don't happen here, and they don't wipe us out. They certainly don't wipe out humanity.
But that line of reasoning contains a subtle fallacy: it assumes the observed rate of disasters in the past equals the expected rate in the future. Of course we don't live in a world where an existential disaster happened. Humanity might've been destined to make it this far, or might be a one-in-a-million intelligent species insanely lucky to have made it even to today. Regardless, we're going to observe that we made it. In short, we're wearing alive-tinted glasses, which can be warped enough to make arbitrarily low odds of survival look like certainty.
But this doesn't just apply to crises that result in extinction. By observing the past, we're likely to underestimate the risk of future crises, and our estimates are likely to be worse the more severe the crisis is. (This line of thinking is called anthropic reasoning: asking what we can figure out by virtue of our status as (likely-typical) observers. The suppression effect casts a sort of anthropic shadow[3]. Also see the Doomsday Argument, an anthropic argument against a galactic human future.)
Let's see this in action by making some toy models. We'll simulate a universe with 100,000 worlds. Each world starts with 10 people and dies out if it ever has fewer than 1 person. When nothing goes wrong, each world grows logistically[4]. But disasters can happen[5], ranging from minor to eliminating most or all of the population.
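Under the assumptions in the footnotes (logistic growth that roughly doubles the population each year, a carrying capacity drawn uniformly from 0 to 10M, and fifty disaster severities each striking with 1% probability per year), a single world's simulation might look roughly like this sketch. The names and structure here are my reconstruction, not the post's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

SEVERITIES = np.arange(0.02, 1.01, 0.02)  # disasters kill 2%, 4%, ..., 100%
P_DISASTER = 0.01                         # each severity: 1% chance per year

def simulate_world(years=200):
    """Simulate one world; return its yearly populations and disaster log."""
    capacity = rng.uniform(0, 10_000_000)  # carrying capacity (footnote 4)
    growth = 1.0                           # ~doubling when far below capacity
    pop = 10.0                             # each world starts with 10 people
    history, disasters = [], []
    for year in range(years):
        # logistic growth step
        pop += growth * pop * (1 - pop / capacity)
        # each disaster type independently strikes with probability P_DISASTER
        for s in SEVERITIES[rng.random(len(SEVERITIES)) < P_DISASTER]:
            pop *= 1 - s
            disasters.append((year, s))
        if pop < 1:                        # the world dies out below 1 person
            return history, disasters
        history.append(pop)
    return history, disasters
```

Running `simulate_world` 100,000 times then gives the ensemble of worlds the post's plots are drawn from.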
Now imagine that we're an average observer, with perfect knowledge of the past crises in our world. If we try to estimate the chances of each type of disaster by examining history, what do we find? We'll plot that on the right, with the actual (uniform plus noise) distribution on the left:
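One way to compute the "average observer" estimate (again, my reconstruction rather than the post's code) is to take each world's observed per-year frequency of every disaster severity and weight it by how many observers that world contains, i.e. its total person-years:

```python
import numpy as np

SEVERITIES = np.arange(0.02, 1.01, 0.02)  # same 50 severity bins as the model

def observer_weighted_rates(worlds):
    """worlds: list of (history, disasters) pairs, where history is yearly
    populations and disasters is a list of (year, severity) events.
    Returns each severity's per-year rate as estimated by a typical
    observer, weighting each world by its total person-years."""
    counts = np.zeros(len(SEVERITIES))
    weight = 0.0
    for history, disasters in worlds:
        years = len(history)
        if years == 0:
            continue                        # dead-at-birth worlds have no observers
        observers = sum(history)            # total person-years in this world
        seen = np.zeros(len(SEVERITIES))
        for _, s in disasters:
            seen[np.argmin(np.abs(SEVERITIES - s))] += 1
        counts += observers * seen / years  # this world's estimate, weighted
        weight += observers
    return counts / weight
```

Extinct worlds contribute few observers, so their (disaster-heavy) histories are underweighted; that asymmetry is exactly the suppression effect.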
There's a suppression effect, but it's surprisingly small (other than for existential crises). Why? Well, a lot of worlds tend to look like this:
Generally speaking, a non-existential crisis doesn't have much effect on the world's long-term trajectory (or therefore, the total number of observers in the world over its entire existence). The population at day 200 is about the same as it would have been if none of the disasters between days 0 and 170 had happened.
This is probably not realistic. Let's add the ability for crises to change the carrying capacity and growth rate[6]. What happens now?
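Per footnote 6, the severity-to-growth-rate mapping might be sketched as follows; the function name, default parameters, and exact rounding are my guesses:

```python
def apply_severe_crisis(severity, growth, base_growth=1.0, base_capacity=1e7):
    """After a high-severity crisis (82%+), lower the growth rate and rescale
    the carrying capacity with it. Returns the new (growth, capacity)."""
    if severity >= 0.82:
        # 82% -> -0.1, 84% -> -0.2, ..., 100% -> -1.0, in 2-point steps
        growth -= 0.1 * round((severity - 0.80) / 0.02)
    # capacity tracks the growth rate; a non-positive rate collapses it to zero
    capacity = max(base_capacity * growth / base_growth, 0.0)
    return growth, capacity
```

The matching non-crisis events would simply add +0.1 to +1.0 to `growth` under the same capacity-rescaling rule.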
The suppression effect is still not massive, but those additions amplified it[7]. Average observers estimate the chance of the most severe non-existential disasters as about half of what it actually is. And these effects are fairly sensitive to a variety of parameters. With different conditions, the suppression effects may be much greater.
To clarify, these toy models are intended to demonstrate that suppression effects can arise from simple models, and that we should expect them to be at work in the real world unless we have a good reason to believe otherwise. I don't mean to suggest that these toy models are giving us accurate measurements of the strength of the suppression effect, nor that they're excellent models of planetary populations. In fact, I suspect that the actual suppression effects are much stronger than what we saw here, but my basis for this doesn't immediately have much to do with things my models omit[8]. I do believe that, with enough effort, one probably could make reasonable estimates for the strength of the suppression effect, and that would be very valuable, but that's far beyond the scope of this article.
We have thus far gotten lucky, perhaps only slightly, perhaps wildly. But our luck, and the small-to-massive benefit it's thus far provided us, is unlikely to continue. Our anthropic plot armor is gone. We'd better figure out how to survive without it.
This was crossposted from The Pennsylvania Heretic. The code used to run these simulations is here.
An all-out nuclear war with current arsenals would be apocalyptic, but probably not existential. Estimates hover around 70% of the world population dying, including from indirect effects. Even one during the Cold War (when there were many more nukes) probably wouldn't have been existential.
Population decline is a much more serious threat in the developed world than population growth nowadays.
In research on the subject after having mostly written this article, I found the term Anthropic Shadow used to refer to the suppression effect. There is a small amount of literature on the subject, including this paper: https://onlinelibrary.wiley.com/doi/10.1111/j.1539-6924.2010.01460.x
Initially doubling each year, with a cap set uniformly randomly somewhere between 0 and 10M. I also tried exponential growth, and there's a toggle for that in my code. Due to the ratio of the growth rate including the effects of non-existential crises to the existential crisis chance, it tended to lead to a single world having the vast majority of all observers.
Disasters eliminate 2%, 4%,...98%, or 100% of the population. Each type of disaster has a 1% chance of happening each year.
Each high-severity crisis (82% to 100%) lowers the logistic growth rate by 0.1 to 1.0 per event, in 2-point severity steps (82 -> -0.1, …, 100 -> -1.0). The carrying capacity is rescaled with growth rate (effective_capacity = base_capacity * growth_rate / base_rate), so crisis-driven growth-rate drops also shrink the capacity limit (and can collapse it to zero if growth rate goes non-positive). There are also non-crisis events that do the opposite: they increase growth rate by +0.1 to +1.0, which balance the effects of the crises.
From not-that-exhaustive testing, some conditions that contribute to significant suppression effects:
- Crises have long-lasting effects.
- (At least for certain setups) the universe is not dominated by a single world.
The Fermi paradox is the largest reason, followed by the bizarrely high number of seemingly-close calls humanity has had.