I indeed meant only "worst so far", in the sense that it would probably kill more people than any previous disaster.
I'm typing this from New Zealand.
Important clarification: Neither here nor in the Twitter post did I advocate appeasement or giving in to blackmail. In the Venn diagram of possible actions, there's certainly a non-empty intersection of "de-escalation" and "appeasement", but they're not the same set, and there are de-escalation strategies that don't involve appeasement but might nonetheless reduce nuclear war risk. I'm curious: do you agree that halting (and condemning) the following strategies can reduce escalation and help cool things down without giving in to blackmail?
I think it would reduce nuclear war risk if the international community strongly condemned 1-7 regardless of which side did it, and I'd like to see this type of de-escalation immediately.
The more items on the list of nuclear near-misses, the more convinced you should be that de-escalation works, no matter how close we get to nuclear war.
That's an interesting argument, but it ignores the selection effect of survivorship bias. If you play Russian roulette many times and survive, that doesn't mean the risk you took was small. Similarly, if you go with the Xia et al. estimate that nuclear winter kills 99% of Americans and Europeans, the fact that we find ourselves in that demographic in 2022 doesn't mean that the past risks we took were small: if you do the Bayesian calculation, you'll find that the most likely world for a surviving American or European in 2022 is one where no nuclear winter occurred, even if the ab initio risk was quite large.
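To make the Bayesian point concrete, here's a toy calculation with illustrative numbers of my own choosing (only the 99% fatality figure comes from Xia et al.; the 50% prior is an assumption for the sake of the example):

```python
# Toy Bayesian update: what should a surviving American/European conclude?
p_war = 0.5             # assumed ab initio probability of nuclear winter (illustrative)
p_survive_war = 0.01    # Xia et al.: ~99% of Americans and Europeans die
p_survive_peace = 1.0   # survival is near-certain if no nuclear winter occurs

# Law of total probability: overall chance of being a survivor
p_survive = p_war * p_survive_war + (1 - p_war) * p_survive_peace

# Posterior probability that no nuclear winter occurred, given survival
p_no_war_given_survive = (1 - p_war) * p_survive_peace / p_survive
print(round(p_no_war_given_survive, 2))  # 0.99
```

Even with a 50% prior risk, a surviving observer should assign about 99% posterior probability to being in the no-war world, which is why our survival tells us little about how large the risk actually was.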
You can also make direct risk estimates. For example, JFK estimated that the risk of nuclear war during the Cuban Missile Crisis was about 33%, and he made that estimate without knowing about the Arkhipov incident. If Orlov's account is accurate, then there was a 75% chance of a nuclear attack on the US that day, since there was only a 25% probability that Arkhipov would be aboard that particular one of the four nuclear-armed subs.
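The 75% figure follows from simple arithmetic, under the assumption (implicit in Orlov's account) that launch would have gone ahead on any of the four subs that Arkhipov wasn't aboard to veto:

```python
# Arkhipov arithmetic: four nuclear-armed subs, Arkhipov on exactly one,
# and (by assumption) his veto was the only thing preventing a launch.
n_subs = 4
p_arkhipov_aboard = 1 / n_subs      # 0.25: chance he was on the depth-charged sub
p_attack = 1 - p_arkhipov_aboard    # 0.75: chance of a nuclear attack that day
print(p_attack)  # 0.75
```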
Algon, please provide references to peer-reviewed journals supporting your claims that smoke predictions are overblown, etc. Since there's a steady stream of peer-reviewed papers quantifying nuclear winter in serious science journals, I find myself unconvinced by criticism that appears only on blogs and without the detailed data, GitHub code, etc. that tends to accompany peer-reviewed research. Thanks!
Ege, if you find the framework helpful, I'd love to hear your estimates for the factor probabilities 30%, 70%, 80%. I'd also be very interested in seeing alternative endpoint classifications and alternative frameworks. I sense that we both agree that it's valuable to estimate the nuclear war risk, and that it's better to base the estimate on a model that decomposes into pieces that can be debated separately than to just gaze into our belly-buttons and toss out a single probability that feels right.
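Assuming the three factor probabilities are conditional on one another and therefore multiply (my reading of the framework; the decomposition itself is in the original post), the combined estimate is just their product:

```python
# Sketch of the multiplicative decomposition, assuming the three factors
# (30%, 70%, 80%) are sequential conditional probabilities.
p_factors = [0.30, 0.70, 0.80]

p_combined = 1.0
for p in p_factors:
    p_combined *= p

print(round(p_combined, 3))  # 0.168, i.e. roughly a 17% combined estimate
```

The point of the decomposition is that a critic can attack (and revise) any one factor separately, and the combined estimate updates mechanically.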
Russia also wanted the withdrawal of US troops from the Baltic states, which is also a non-starter.
Yeah, that was clearly a non-starter, and perhaps a deliberate one they could drop later to save face and claim they'd won a compromise. My point was simply that since the West didn't even offer a promise not to let Ukraine into NATO, I don't think they'd ever agree to a "Kosovo".
Thanks David and Ege for these excellent points! You're giving me too much credit by calling it a "thesis"; it was simply part of my reasoning behind the 30% number. Yeah, I did consider the Gulf War as an important counterexample. I'll definitely consider revising my 30% number downward in my next update, but there are also interesting examples on the other side:
Thanks Wei for these interesting comments. Whether humans can "solve" ontological crises clearly depends on one's definition of "solve". Although there's arguably a clear best solution for de Blanc's corridor example, it's far from clear that there is any behavior that deserves being called a "solution" if the ontological update causes the entire worldview of the rational agent to crumble, revealing the goal to have been fundamentally confused and undefined beyond repair. That's what I was getting at with my souls example.
As to what Nick's views are, I plan to ask him about this when I see him tomorrow.
Thanks Eliezer for your encouraging words and for all these interesting comments!
I agree with your points, and we clearly agree on the bottom line as well:
1) Building FAI is hard and we’re far from there yet. Sorting out “final goal” issues is part of the challenge.
2) It’s therefore important to further research these questions now, before it’s too late.