MaxTegmark

I indeed meant only "worst so far", in the sense that it would probably kill more people than any previous disaster.

Important clarification: Neither here nor in the Twitter post did I advocate appeasement or giving in to blackmail. In the Venn diagram of possible actions, there's certainly a non-empty intersection of "de-escalation" and "appeasement", but they're not the same set, and there are de-escalation strategies that don't involve appeasement but might nonetheless reduce nuclear war risk. I'm curious: do you agree that halting (and condemning) the following strategies can reduce escalation and help cool things down without giving in to blackmail?

  1. nuclear threats
  2. atrocities
  3. misleading atrocity propaganda
  4. assassinations lacking military value
  5. infrastructure attacks lacking military value (e.g. Nordstream sabotage)
  6. shelling the Zaporizhzhya nuclear plant
  7. disparaging de-escalation supporters as unpatriotic

I think it would reduce nuclear war risk if the international community strongly condemned 1-7 regardless of which side did it, and I'd like to see this type of de-escalation immediately. 

The more items on the list of nuclear near-misses, the more convinced you should be that de-escalation works, no matter how close we get to nuclear war.

That's an interesting argument, but it ignores the selection effect of survivor bias. If you play Russian roulette many times and survive, that doesn't mean that the risk you took was small. Similarly, if you go with the Xia et al. estimate that nuclear winter kills 99% of Americans and Europeans, the fact that we find ourselves in that demographic in 2022 doesn't mean that the past risks we took were small: if you do the Bayesian calculation, you'll find that the most likely world for a surviving American or European in 2022 is one where no nuclear winter occurred, even if the ab initio risk was quite large.
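As a toy version of that Bayesian calculation (a sketch: the 99% fatality rate is the Xia et al. figure quoted above, while the 50% prior is purely an illustrative assumption):

```python
# Toy survivor-bias calculation. The 1% survival rate comes from the
# Xia et al. estimate above; the 50% prior war risk is an illustrative
# assumption, not an estimate.
def p_no_war_given_survival(prior_war, p_survive_given_war=0.01):
    """P(no nuclear winter | a given American/European survived to 2022)."""
    p_survive = (1 - prior_war) + prior_war * p_survive_given_war
    return (1 - prior_war) / p_survive

# Even with a 50% prior risk of nuclear winter, a survivor should
# conclude the no-war world is ~99% likely:
print(round(p_no_war_given_survival(0.5), 3))  # -> 0.99
```

The point is that the posterior is dominated by the no-war world for almost any prior, so surviving tells you little about how large the prior risk actually was.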

You can also make direct risk estimates. For example, JFK estimated that the risk of nuclear war during the Cuban Missile Crisis was about 33%, and he said that without knowing about the Arkhipov incident. If Orlov's account is accurate, then there was a 75% chance of a nuclear attack on the US that day, since there was only a 25% probability that Arkhipov would have been on that particular one of the four nuclear-armed subs.
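Under Orlov's account the arithmetic is simple; a one-line sketch, with the four subs and Arkhipov's presence on one of them as the only inputs:

```python
# Orlov's account: four nuclear-armed subs, and only the one carrying
# Arkhipov declined to launch.
n_subs = 4
p_arkhipov_aboard = 1 / n_subs    # 25% chance he was on the sub that mattered
p_attack = 1 - p_arkhipov_aboard  # 75% chance of a nuclear attack that day
print(p_attack)  # -> 0.75
```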

Algon, please provide references to peer-reviewed journals supporting your claims that smoke predictions are overblown, etc. Since there's a steady stream of peer-reviewed papers quantifying nuclear winter in serious science journals, I find myself unconvinced by criticism that appears only on blogs and without the detailed data, GitHub code, etc. that tends to accompany peer-reviewed research. Thanks!

Ege, if you find the framework helpful, I'd love to hear your estimates for the factor probabilities 30%, 70%, 80%. I'd also be very interested in seeing alternative endpoint classifications and alternative frameworks. I sense that we both agree that it's valuable to estimate the nuclear war risk, and that it's better to base the estimate on a model that decomposes into pieces that can be debated separately than to just gaze into our belly-buttons and toss out a single probability that feels right.
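For concreteness, here is how such a decomposition multiplies out, assuming the factors chain together as conditional probabilities (only the 30%/70%/80% values come from the estimate under discussion; the chaining itself is my sketch of the framework):

```python
# Sketch of a multiplicative factor decomposition: each factor is a
# conditional probability given the previous steps. Only the values
# 0.30/0.70/0.80 are from the discussion; the structure is assumed.
factors = [0.30, 0.70, 0.80]

risk = 1.0
for p in factors:
    risk *= p

print(round(risk, 3))  # -> 0.168
```

The virtue of this structure is exactly what the comment argues: each factor can be debated and revised independently, and the overall estimate updates mechanically.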

Russia also wanted the withdrawal of US troops from the Baltic states, which is also a non-starter.

Yeah, that was clearly a non-starter, and perhaps a deliberate one they could drop later to save face and claim they'd won a compromise. My point was simply that since the West didn't even offer a promise not to let Ukraine into NATO, I don't think they'd ever agree to a "Kosovo". 

Thanks David and Ege for these excellent points! You're giving me too much credit by calling it a "thesis"; it was simply part of my reasoning behind the 30% number. Yeah, I did consider the Gulf War as an important counterexample. I'll definitely consider revising my 30% number downward in my next update, but there are also interesting examples on the other side:

  • The Falklands War: The Argentinian military junta's 1982 invasion of the British Falkland Islands was humiliatingly defeated. This defeat became the final nail in the coffin for a dictatorship already facing a collapsing economy and growing domestic resistance, and the junta collapsed shortly thereafter. Most of its members are currently in prison for crimes against humanity and genocide.
  • The Yom Kippur War: The 1973 invasion of Israeli-held territory by an Arab coalition was unsuccessful. Although the Arab national leaders were able to remain in power, some military leaders fared less well. Syrian Colonel Rafik Halawi, whose infantry brigade allowed an Israeli breakthrough, was executed before the war even ended.
  • Survival of nation versus leader: Although mainstream Western media often portrays Putin as the main driving force behind the invasion, there's also broad and well-documented local sentiment that the West has been seeking to weaken, fragment and dominate Russia for decades, with Ukraine being a red line. Whether such sentiment is valid or not is irrelevant for my argument. In other words, the "escalate-or-die" dynamic may be playing out not only in Putin's head, but also at a national level. Ukraine itself is a shining example of how powerful such national self-preservation instincts can be.


Thanks Wei for these interesting comments. Whether humans can "solve" ontological crises clearly depends on one's definition of "solve". Although there's arguably a clear best solution for de Blanc's corridor example, it's far from clear that there is any behavior that deserves being called a "solution" if the ontological update causes the entire worldview of the rational agent to crumble, revealing the goal to have been fundamentally confused and undefined beyond repair. That's what I was getting at with my souls example.

As to what Nick's views are, I plan to ask him about this when I see him tomorrow.

Thanks Eliezer for your encouraging words and for all these interesting comments! I agree with your points, and we clearly agree on the bottom line as well: 1) Building FAI is hard and we’re far from there yet. Sorting out “final goal” issues is part of the challenge. 2) It’s therefore important to further research these questions now, before it’s too late. :-)
