Since this answer got upvoted, I collected some of Dubna's courses taught in English for which recordings are available (look for "Доступны 4 видеозаписи курса.", i.e. "4 video recordings of the course are available.").
The Metaculus 2020 U.S. Election Risks Survey doesn't explicitly give >1% for >5,000 deaths, but I think it is justified to infer something like that from it:
While large-scale violence and military intervention to quell civil unrest seem unlikely, experts still judged these possibilities to be far from remote. Experts predicted a median of 60 deaths occurring due to election-related violence, with an 80% confidence interval of 0 to 912 fatalities that reflects a high degree of uncertainty. Still, the real possibility of violence is a notable departure from the peaceful transitions that have been the hallmark of past U.S. elections. Results indicate an 8% probability of over 1,000 election-related deaths — suggesting that while widespread sustained clashes are unlikely, this possibility warrants real concern. Experts assigned a 10% median prediction that President Trump will invoke the Insurrection Act to mobilize troops during the transition period.
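As a quick sanity check on that inference, here is a back-of-the-envelope sketch of my own (not part of the survey): fit a lognormal distribution to the reported median (60) and 90th percentile (912) and read off the implied tail probabilities. The lognormal assumption and the choice to ignore the lower bound of the 80% CI are mine.

```python
# Rough sketch (my assumption, not the survey's method): fit a lognormal to the
# reported quantiles and see what tail probability it implies for large death tolls.
from math import log
from scipy.stats import norm

median, p90 = 60, 912              # survey: median and upper end of the 80% CI
mu = log(median)                   # for a lognormal, median = exp(mu)
sigma = (log(p90) - mu) / norm.ppf(0.90)

def p_exceed(x):
    """P(deaths > x) under the fitted lognormal."""
    return 1 - norm.cdf((log(x) - mu) / sigma)

print(f"P(>1000) ~ {p_exceed(1000):.1%}")   # ~9%, close to the survey's 8%
print(f"P(>5000) ~ {p_exceed(5000):.1%}")   # ~2%, i.e. above the 1% threshold
```

Under this (admittedly crude) assumption, the fit roughly reproduces the survey's 8% for >1,000 deaths and puts about 2% on >5,000, which is why I think ">1% for >5,000 deaths" is a defensible reading.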
A better example: one might criticize the CDC for a lack of advice aimed at vulnerable demographics. But that absence might result not from a lack of judgment but from political constraints. E.g., jimrandomh writes:
Addendum: A whistleblower claims that CDC wanted to advise elderly and fragile people to not fly on commercial airlines, but removed this advice at the White House's direction.
Update: this might be indicative of other negative characteristics of the CDC (which might contribute to unreliability), but I don't know enough about the US government to assess it.
For me it is, indeed, a reason to put less weight on their analysis and to expect less useful work/analysis from them in the short-to-medium term.
But I think this consideration also weakens certain types of arguments about the CDC's lack of judgment or untrustworthiness. For example, an argument like "they did this, but should have done better" loses part of its Bayesian weight, because the organization likely made a lot of decisions under time pressure and other constraints. And things are more likely to go wrong if you're under-staffed and hence have to prioritize more aggressively.
I don't expect to have good judgment here, but it seems to me that "the testing kits the CDC sent to local labs were unreliable" might fall into this category. It might have been the right call for them to distribute the tests quickly and ~skip ensuring that the tests didn't have a false-positive problem.
Unless there are large enough demographics for which this post looks credible while FB conspiracy theories do not.
If the only issue is tone, you could write something like: 'Initially, I was confused/surprised by the core claim you made, but reading this, this, and that [or thinking for 15 minutes / further research] made me believe that your position is basically correct.' This looks quite
[...] "Yes, you are correct about that" comes across as quite arrogant [...]
I attended the Epistea Summer Experiment and greatly enjoyed it. (At the same time, I am quite skeptical about the value of any rationality workshop for EA-inspired work.)
I think Nuno's time-capped analysis is good.
Hey! Could you say more about the causal link between the Sequences and writing these papers, please:
I think my confusion comes from (a) having enough of a math background (I read some chapters of The Probabilistic Method years ago); and (b) while reading the Sequences, and even more so the AF discussions, added to my understanding of formal epistemology, I am surprised by your emphasis on how the Sequences affected your muscle memory and ability to do calculations.