Not one of these would have ended humanity, or even particularly reduced its population in the long term, so the evidence for survivorship bias from this is effectively zero.
Claiming that literally no nuclear incident or biological risk could have "particularly reduced its population" seems like a very strong claim to make, especially given that your argument only holds if you're correct. (E.g., if one of these had ended humanity, we wouldn't be having this conversation.)
I mean, I think it's true. Nuclear winter was the only plausible story for even an all-out nuclear war causing something close to human extinction, and I think extreme nuclear winter is very unlikely.
Similarly, it is very hard to make a pathogen that could kill literally everyone. You just have too many isolated populations, and the human immune system is too good. It might become feasible soon, but it was not very feasible historically!
I feel my point still stands, but I've been struggling to articulate why. I'll make my case; please let me know if my logic is flawed. I'll admit that the post was a little hot-headed. That's my fault. But having thought about it for a few days, I still believe there's something important here.
In the post I'm arguing that survivorship bias due to existential risks gives us a biased view of how likely existential risks are, and that we should take this into account when thinking about them.
Your position (please correct me if I'm wrong) is that the examples I give are extremely unlikely to lead to human extinction, and that these examples therefore don't support my argument.
To counter, I think that (1) given that it's never happened, it's difficult to say with confidence what the outcome of a nuclear war or a global pathogen would be, but (2) even if complete extinction is very unlikely, the argument I posed still applies to 90% extinction, 50% extinction, 10% extinction, and so on. If there are X% fewer people in the world that undergoes a global catastrophe, that's still X% fewer people who observe that world, which leads to a survivorship bias as argued in the post.
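To make this concrete, here's a toy sketch of the observer-selection effect I'm describing. The numbers are purely illustrative assumptions, not estimates of real probabilities:

```python
import random

# Toy model of observer selection: assumed, illustrative numbers only.
P_CATASTROPHE = 0.5      # hypothetical prior chance that a given world is hit
SURVIVAL_FRACTION = 0.5  # "X% fewer people": here half the population survives
N_WORLDS = 100_000

observer_weight_safe = 0.0
observer_weight_hit = 0.0
for _ in range(N_WORLDS):
    if random.random() < P_CATASTROPHE:
        observer_weight_hit += SURVIVAL_FRACTION  # fewer people left to observe
    else:
        observer_weight_safe += 1.0

# A randomly chosen observer is more likely to find themselves in a world
# that was never hit, even though half of all worlds were.
total = observer_weight_safe + observer_weight_hit
print(f"Share of observers in untouched worlds: {observer_weight_safe / total:.2f}")
# ~0.67 rather than 0.50: observers systematically over-sample the lucky worlds.
```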
This is similar to the argument that we should not be surprised to find ourselves alive on a hospitable planet where we can breathe the air and eat the things around us. There's a survivorship bias that selects for worlds on which we can live, and we're not around to observe the worlds on which we can't survive.
My claim is that no nuclear incident would have killed more than 25% of the world's population: roughly 500 million people in 1950, or one billion in 1970.
The reasoning is simple: a single nuclear bomb can only kill a maximum of a few hundred thousand people at a time. At the height of the Cold War there were a few thousand bombs on each side, most of which weren't aimed at people but at second-strike capabilities in rural areas. Knock-on effects like famines could kill more, but I doubt they would be worse than WW2, since the number of direct deaths would be smaller. It would likely lead to war, but again, WW2 is your ballpark here for the number of deaths from an all-out global war.
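As a back-of-envelope check, using the rough figures above plus an assumed share of warheads aimed at cities (so treat this as a sketch rather than an estimate):

```python
# Rough upper-bound arithmetic; all inputs are approximate or assumed.
bombs_per_side = 3_000            # "a few thousand bombs on each side"
share_aimed_at_cities = 0.3       # assumed: most targeted second-strike sites
deaths_per_city_strike = 300_000  # "a few hundred thousand people at a time"
world_population_1970 = 3.7e9     # approximate world population in 1970

direct_deaths = 2 * bombs_per_side * share_aimed_at_cities * deaths_per_city_strike
print(f"Direct deaths: ~{direct_deaths / 1e6:.0f} million "
      f"(~{direct_deaths / world_population_1970:.0%} of the 1970 population)")
# ~540 million, roughly 15% of the 1970 population, before famine and further war.
```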
Making an anthropic update from something that at worst would have reduced the world's population by 25 percent is basically identical to reading tea leaves, especially if you don't update the other way from WW1, WW2, and other assorted disasters that substantially reduced the world's population.
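To put a rough number on how small that update would be, here's a sketch under a simple model where a world's evidential weight scales with its surviving population (illustrative only, and the result depends on which anthropic framework you accept):

```python
# Under this model, the evidence "I exist" favours the world where the
# catastrophe never happened by a factor of 1 / (1 - population_reduction).
def bayes_factor_for_no_catastrophe(population_reduction: float) -> float:
    return 1.0 / (1.0 - population_reduction)

for reduction in (0.25, 0.50, 0.90, 0.99):
    factor = bayes_factor_for_no_catastrophe(reduction)
    print(f"{reduction:.0%} reduction -> factor {factor:.2f} towards 'it never happened'")
# 25% -> 1.33 (negligible); 99% -> 100 (large). The anthropic update only
# matters when the counterfactual catastrophe would have removed nearly everyone.
```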
Maybe we are the luckiest timeline. But the evidence for that is nowhere near strong enough to justify an update that meaningfully changes your plans.
Note: I'm writing every day in November; see my blog for disclaimers.
When considering existential risk, there's a particular instance of survivorship bias that seems ever-present and that (in my opinion) shapes how x-risk debates tend to go.
We do not exist in the world that got obliterated during the Cold War. We do not exist in the world that got wiped out by COVID. We can draw basically zero insight into the probability of existential risks, because we'll only ever be alive in the universes where we survived a risk.
This has some significant effects: we can’t really say how effective our governments are at handling existential-level disasters. To some degree, it’s inevitable that we survived the Cuban Missile Crisis, that the Nazis didn’t build & launch a nuclear bomb, that Stanislav Petrov waited for more evidence. I’m going to paste the items from Wikipedia’s list of nuclear close calls, just to stress how many possibly-existential threats we've managed to get through:
That’s… a lot of luck.
And sure, very few of them would likely have been truly humanity-ending threats. But the list of laboratory biosecurity incidents is hardly short either:
I’m making you scroll through all these things on purpose. Saying “57 lab leaks and 42 nuclear close calls” just leads to scope insensitivity about the dangers involved here. Go back and read at least two random points from the lists above. There’s some “fun” ones, like “UK lab sent live anthrax samples by mistake”.
Not every one of these is a humanity-ending event. But there is a survivorship bias at play here, and this should impact our assessment of the risks involved. It’s very easy to point towards nuclear disarmament treaties and our current precautions around bio-risks as models for how to think about AI x-risk. And I think these are great. Or at least, they’re the best we’ve got. They definitely provide some non-zero amount of risk mitigation.
But we are fundamentally unable to gauge the probability of existential risk, because the world looks the same whether humanity got 1-in-a-hundred lucky or 1-in-a-trillion lucky.
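As a sketch of what I mean, for the all-or-nothing extinction case, conditioning on the fact that we're here to make the observation at all:

```python
# Conditional on us existing, the observation "humanity survived" has
# probability 1 under every hypothesis about how risky things actually were.
for p_survival in (1e-2, 1e-12):  # 1-in-a-hundred lucky vs 1-in-a-trillion lucky
    likelihood = 1.0  # P(we observe "humanity survived" | we exist) is 1 either way
    print(f"P(survive) = {p_survival:.0e}: likelihood of our observation = {likelihood}")
# The likelihood is identical under both hypotheses, so observing our own
# survival alone gives a likelihood ratio of 1 and cannot shift our estimate.
```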
None of this should really be an update. Existential risks are absolute and forever, so basically any action that reduces them is worth taking. But in case there's anyone reading this who thinks x-risk maybe isn't all that bad, this one's for you.