Born 1969; degrees in business administration and, more recently, in psychology; 20 years of work in hospital management; currently teaching/tutoring statistics
Another thing that could be interesting with spies is what they can do before a US/NATO-Russia war. If the Russians had one or more top-level spies in the US security establishment or in NATO (as they, or their East German satellite, had during the Cold War), this could increase or decrease the risk of Russia using nuclear weapons.
If Russia got signals from inside US/NATO that the West was really willing to retaliate militarily in the case of a Russian nuclear strike on Ukraine, then this information could decrease Russia's willingness to escalate. If, however, Russia got signals from inside US/NATO that the West was not willing to use military force as an answer to a Russian nuclear strike on Ukraine, then this information could increase Russia's will to escalate.
From the current numbers (-3 and -4) your post does not seem to be heavily downvoted. I believe there may be some users here who see any arguments for a smaller threat as dangerous. As long as there are not many upvotes, even a very small number of users with this attitude could lead to those numbers. We have seen a similar dynamic with the public health authorities during the Covid crisis (prioritizing message control over epistemic rationality).
And as a good rationalist he explicitly asked:
I'd love to hear your thoughts both on this risk modeling framework and on the factor probabilities (30%, 80%, 70%) listed in the figure!
My estimate is quite high (80%) that NATO's response will be forceful enough to include a non-nuclear military strike against Russia, because key NATO leaders have already made strongly worded statements to this effect.
Here, my estimate is much lower (about 25%). Talk is cheap, so strongly worded statements are in themselves only weak evidence of future intentions when carrying out those threats carries risks at this level (and I do think it likely that the relevant players in NATO are aware of the level of risk).
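To make the disagreement concrete, a minimal sketch of how such chained factor probabilities combine along one escalation path. The function name `chained_risk` is my own; the 30%/80%/70% figures are the ones quoted from the post's figure, and 25% is the revised estimate for the NATO-response factor argued for here. This assumes the factors are meant to be multiplied as conditional probabilities, which the post does not state explicitly.

```python
def chained_risk(factors):
    """Multiply conditional probabilities along one escalation path."""
    p = 1.0
    for f in factors:
        p *= f
    return p

# Post's figures: 30% x 80% x 70%
original = chained_risk([0.30, 0.80, 0.70])
# Same path with the 80% NATO-response factor replaced by 25%
revised = chained_risk([0.30, 0.25, 0.70])

print(f"original path probability: {original:.3f}")
print(f"revised path probability:  {revised:.3f}")
```

The point of the sketch is how sensitive the end-to-end probability is to a single factor: lowering one link from 80% to 25% cuts the whole path's probability by more than two thirds.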
...or it might escalate with a Russian nuclear strike against Ukraine followed by NATO counterattacks against Russia...
That is possible. But I think it is important not to treat this as one scenario in your list of possible escalations but as two:
I believe that P(scenario 1) >> P(scenario 2). NATO knows about Russia's nuclear capabilities. Therefore NATO attacks on Russia seem to me extremely unlikely, given the history of the Cold War.
Unless, of course, NATO (or the US unilaterally) extends its deterrence to Ukraine (e.g., by admitting Ukraine into NATO; however, there won't be the necessary unanimity within NATO for that). Because attacking Russia in response to a Russian nuclear strike on a third country (Ukraine) without explicitly threatening to do so in advance would approach the level of craziness of the movie "Dr. Strangelove" (Russia's secret doomsday machine).
For that reason I don't spend much time thinking about preparing for a nuclear war (I live in Germany). But what I do think is important is to prepare for acts of state terrorism, e.g.:
a) This time it was an attack on the Russian pipelines (whoever may be responsible for that). The next time there could be attacks on Western gas pipelines or LNG-terminals.
b) Hacker attacks could bring down crucial elements of Western societies (e.g., electricity grid, banking system).
I think given this increased risk it makes sense to prepare for situations where the normal systems in a country are not working for a couple of weeks (having enough food, water, banknotes, etc.).
Given that 335 users with 300+ karma were active on the site on Petrov Day, and the site didn't go down until we got beneath that, you could argue this is the most successful Petrov Day yet on LessWrong (in past years, at most 250 people were given codes, and it's not even clear they all visited LessWrong). Plus, as above, this year the 300+ users didn't press the button despite the offer of anonymity.
I think that reasoning applies only to the subset of users in the Americas. For users in Europe, the point at which 300+ karma was enough to launch came deep in the night, and for parts of Asia very early in the morning. Someone from that group would have had to set an alarm and get out of bed to nuke the site, which would have required considerably more energy than failing to withstand the temptation and pressing the launch button during a daytime visit to LessWrong.
Still, I think it was a successful Petrov Day.
1. Make sure to have a good sleeping bag that is suitable for colder temperatures.
2. Prepare for power outages. I believe in most countries there are official agencies providing advice for such a situation (e.g., in the case of Germany: Bundesamt für Bevölkerungsschutz und Katastrophenhilfe).
3. Maybe get a dehumidifier (for situations with reduced heating but still with electricity) to prevent mould at lower indoor temperatures.
News corporations provably have an imperative to reduce panic during a recession, although I don't know the specifics of where the momentum for this comes from.
Do they, provably? Or, to put it differently: Is having an imperative to reduce panic the only plausible explanation for headlines like this?
My primary model of news organizations' goals is that they are trying to maximize attention, ideally (but not always) without being factually wrong. I think the WSJ headline is compatible with those goals.
For me, the reporting on daily fluctuations in the financial press is mostly a source of amusement, not to be taken seriously (on 99% of days, of course).
The most helpful thing my physiotherapist did when he treated me for chronic back trouble: making the observation that there was a considerable amount of avoidance behavior on my part. Stopping that and becoming more active has greatly reduced my back problems.
CFAR's focus on AI research (as opposed to raising the rationality water line in general) leads me to two questions:
Based on that: Shouldn't it be an important goal to test and popularize rationality techniques outside of subcultures in AI research if one wants to solve the alignment problem in practice? (Whether that is a job for CFAR or someone else is a different question, of course).