There is a Russian saying to precisely that effect (it's frequently ascribed to Saltykov-Shchedrin, and sounds to me like something he'd write, but I haven't been able to source it properly)... but not in the way you imagine. Roughly translated, it goes like this: "In Russia, the severity of the laws is compensated by the non-necessity of obeying them".
(I don't have much time for this, so apologies for the shoddy sourcing in the answer below. Please let me know if you'd like more proper sources; I'll be sure to come back to that later.)
My personal model of Russian corruption maps very well onto Banfield's amoral familism. In a nutshell, the model is that every individual is morally incentivized to increase the utility of his small social ingroup (usually his family, but possibly also a group of otherwise associated friends - part of Vladimir Putin's inner circle can be traced back to a housing co-op in the 90s), to the detriment of both his own and society's total utility.

One addition from my own anecdotal experience, which I think Banfield never observed or made, is that an amorally familistic society needs a tyrannical/absolutist ruler to oversee it, lest it collapse into a Hobbesian war of all against all - or at least that is the usual Russian perspective on it, IMO. The tyrant sets the boundaries that keep the ingroups from slaughtering each other completely in the struggle for utility, and enforces them with terrible, ruthless power. He is, in that sense, elevated above the earthly struggle for utility by that duty - which is why it's not really possible to blame him for any social woes; the blame instead falls on those who carry out his decisions (and who are "earthly" and subject to familism like everyone else). There is another, IMO popular, saying to that effect: "the tsar is good, the boyars are bad".

(Ostensibly, this contradicts the saying from the very first paragraph - wasn't it possible to get around draconian laws by corruption? Yes, it was. The idea is that while the tsar has his own, directly controlled enforcers - the oprichnina or the Rosgvardiya - and God help you if you cross the law and they notice, he also has a vastly more expansive hierarchy of appointed administrators and law enforcers (whom the average member of society is far more likely to encounter in his affairs), who are much more human, hence amorally familistic, hence corruptible.)
Regarding how this maps onto progress, it would probably be useful to consider Acemoglu and Robinson's "Why Nations Fail", keeping two things in mind. First, Russia is empirically not a progressive country by any means. Second, the idea that economic growth equals broadly understood progress is IMO defensible, but not trivial. Still, since Acemoglu and Robinson are institutionalist growth economists who see technological/scientific progress as the driver of economic growth, the comparison is not useless.
The WNF take on progress is that people are incentivized to create/adopt disruptive technologies that drive progress only if they have faith that social/political institutions will ensure they can extract utility from it (for themselves, or for whomever they care about). If they instead know that institutions care about protecting the pre-existing mono/oligopolies, any disruption will be quashed, and progress will be sporadic and unsustainable in the long run. The Russian case appears to me to be almost that - the ruler is more concerned with maintaining social order than with maintaining the economic status quo (although the two may be connected), but since any large economic entity (like a large corporation running on a technology that is about to get disrupted) is by default more powerful than its disruptive challenger, it works out the same. Amoral familism begets a ruler concerned with enforcing boundaries and the status quo, and disruption goes against the grain of both.
To conclude: yes, corruption does get used as a way around overregulation, but not necessarily to the benefit of long-term progress.
(Acemoglu and Robinson also make a more direct connection by claiming that a liberal democracy is necessary to establish the institutions conducive to progress and disruption. I'm hesitant to accept that head-on, but if you choose to agree with it, the case is even easier - amoral familism sees democracy as simply ridiculous, and liberalism as weak. If I'm amorally familistic, I only care about satisfying the desires of my ingroup - why would I ever want to live in a society that weighs my desires equally against those of outgroups? And if I choose to have a ruler who enforces social order and prevents an all-out collapse, how would he be able to enforce it with terrible punishments if he's limited by some "freedoms"?)
P.S. I have to note that Acemoglu and Robinson are concerned only with long-term sustainable growth. In the short term, however, it would probably be a lot easier for the ruler, or for currently endowed players (like oligopolies/oligarchs), to get things done by getting around red tape.
One thing that seems important to note: nuclear warfare need not occur in a vacuum. If countries possessing nuclear weapons are trading all-out strikes, as in your model, they probably are in a state of (World?) war already, and either have fought with other weapons prior to the nuclear exchange, or plan to continue to do so after it. This may include use of non-nuclear weapons with high collateral damage, like chemical or biological agents, or saturation bombardment targeting high-population areas. I wonder if that skews the assessment of damage in any meaningful way.
This is not really erisology in any way, but I think specific topics of discussion/interactions with specific people may very well become Ugh Fielded if you have an initially bad experience.
Since you address "how likely meeting a certain politically charged event would be", I assume your question is focussed on what I've called "Polling 2", which concerns itself with predicting future events.
Yes, you're right, and I should have been more clear - thanks for pointing that out.
The best way to put the matter into quantitative terms may be to ask the interviewee what odds he would give in a bet on the event occurring
I don't know if I'm convinced that would work. I think most people fall into two camps regarding betting odds. Camp A is not familiar with probability theory and doesn't think in probabilistic terms in daily life - they know bets and odds only as a rhetorical device ("the odds were stacked against him", "against all odds", etc.). Camp B are people who bet/gamble frequently as a pastime, and are actually familiar with betting odds as an operable concept. If you ask an A for their odds on an event related to a cause they feel strongly about, they will default to their understanding of odds as a rhetorical device and signal allegiance instead of giving you a usable estimate. If you ask a B for their odds on the same event, they will start thinking about it in monetary terms, since habitual gamblers usually bet money. But, as you point out, putting a price on faith/belief/allegiance is seen as an immoral/dishonorable act, and would create even more incentive to signal allegiance instead of genuinely estimating probabilities.

In this way, the approach only works either for surveying people with good rationality/probability skills, or for surveying people about events they don't have strong feelings about.

However, there are two caveats to this argument, which is why I'm not stating it with complete certainty. First, this is still speculation - I have no solid data on how familiar an aggregate person (I'm not sure "average" is a good term here, given that mathematicians probably understand the concept very well while being relatively scarce) is with the concept of betting odds. Actually, do tell if you know of any survey data that would allow verifying this - a gauge of how familiar people are with a given concept seems like data useful enough to exist. Second, this may be cultural.
I'm not American and have never stayed in America long-term - and based on you mentioning baseball and the time of your reply I assume that you are - so potentially the concept of betting is somehow more ingrained into the culture there, and I'm just defaulting to my prior knowledge.
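For what it's worth, the arithmetic behind the proposal is trivial - the hard part is entirely in the elicitation. A minimal sketch of the conversion from stated fractional odds to an implied probability (the function name and interface are my own, just for illustration):

```python
def implied_probability(odds_against: float, odds_for: float = 1.0) -> float:
    """Convert fractional betting odds into an implied probability.

    Odds of `odds_against`-to-`odds_for` *against* an event imply
    P(event) = odds_for / (odds_against + odds_for).
    """
    return odds_for / (odds_against + odds_for)


# "3-to-1 against" implies a probability of 1/4
print(implied_probability(3))  # 0.25
# "Even odds" (1-to-1) implies 1/2
print(implied_probability(1))  # 0.5
```

So the survey problem isn't computing the number from the answer; it's whether the interviewee's stated odds reflect a genuine estimate at all, rather than allegiance-signaling.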
The rather convoluted question I came up with to assess an interviewee's resistance to change of intent has the disadvantage of generating a discrete (non-continuous) answer, and I worry it might also confuse some interviewees
Yup. I didn't see the point in highlighting that, since you mention yourself that the measure is imperfect, but this echoes my concern about betting. At the risk of sounding like an intellectualist snob, I think that, to the general public, even the probabilistic concepts most LessWrongers would consider basic are somewhat hard to imagine and operate with, except as rhetorical devices.
I don't have good data to back this up, but I have a feeling that people think in more binary terms than you expect. More specifically, I conjecture that if you were to ask someone how likely a certain politically charged event would be, they would parse your question as a binary one and answer either "almost certainly" or "very unlikely" - and when pressed for a number, would give you either 90-100% or 0-10%, respectively.
I don't have a full answer, but here's what seems important to consider - in my experience, the baseline for the level of confidence in speech that is associated with competence and authority is a lot lower in intellectual circles like LessWrong, compared to the general public.
This is because exposure to rationality and science usually impresses into someone that making mistakes is "fine" and an unavoidable component of learning, and that while science has made very impressive progress there is still a lot to learn and understand about the world. On the other hand, the real world and social opinion usually very closely associate mistakes with failure and the ensuing moral penalties and lowered status.
Based on that, if you rely a lot on qualifiers while speaking to someone who's not as exposed to or interested in intellectual thought, they may write you off as confused or unsure - they will expect "smarter" people to give them definite verdicts. So if you're trying to socially maneuver someone into agreeing with your reasoning based on competence, forgoing qualifiers is probably a good idea.
How do people read LessWrong? I subscribe to the RSS feed of the front page, but that tends to be suboptimal, as some posts aren't that well-aligned with my interests or are questions/discussion starters as opposed to being mid/longform reads that I'd mostly want to read LW for.
Illustration by Michael Haddad for Wired. It was originally commissioned for an article about biohackers, but I find that it captures a spirit of agency and self-improvement that aligns well with some rationalist values.