Your example would be pretty telling and damning if we assume that you're correct, but my guess is that most readers here will assume you're wrong about it. Someone in your position could still be right, of course; I'm just saying that this wouldn't yet be apparent to readers.
Fair enough! :) The parallel I had in mind was "[almost] no object-level pushback", or at least almost no object-level pushback that I can tell is based on an accurate understanding of my arguments.
So far, I genuinely have not gotten much object-level pushback on the most load-bearing points of my sequence, so I'm not that worried.
I think this probably underestimates the severity of founder effects in this community. A salient example to me is precise Bayesianism (and the cluster of epistemologies that try to "approximate" Bayesianism): As far as I can tell, the rationalist and EA communities went over a decade without anyone internal to them pushing back on the true weak points of this epistemology, even though it is extremely load-bearing for cause prioritization. In hindsight, I think we should have been worried about missing this sort of thing.
Examples of awareness growth vs. logical updates
(Thanks to Lukas Finnveden for discussion that prompted these examples, and for authoring examples #3-#6 verbatim.)
A key concept in the theory of open-minded updatelessness (OMU) is "awareness growth", i.e., conceiving of hypotheses you hadn't considered before. It's helpful to gesture at "discovering crucial considerations" (CCs) as examples of awareness growth. But not all CC discoveries are awareness growth. And we might think we don't need the OMU idea at all if awareness growth is just logical updating, i.e., you already had nonzero credence in some hypothesis, but you changed this credence purely by thinking more. What's the difference? Here's a quick formal sketch, followed by some examples.
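Minimal sketch (my own gloss, assuming a finite hypothesis space for simplicity): in a logical update, the hypothesis space $\{H_1, \ldots, H_n\}$ stays fixed, and thinking more merely moves credence among hypotheses you already entertain, i.e., some $P(H_i)$ changes while $\sum_j P(H_j) = 1$ is preserved. In awareness growth, the space itself expands to $\{H_1, \ldots, H_n, H_{n+1}\}$: you must now assign $P'(H_{n+1}) > 0$ and take that mass from the old hypotheses, and standard conditionalization is silent on how to do so. This is why OMU needs machinery beyond ordinary Bayesian updating.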
tendency to "bite bullets", i.e., accept implications that are highly counterintuitive to others or even to themselves, instead of adopting more uncertainty
I find this contrast between "biting bullets" and "adopting more uncertainty" strange. The two seem orthogonal to me: I've roughly as frequently (if not more often) observed people overconfidently endorse their pretheoretic philosophical intuitions, which is the opposite failure mode to bullet-biting.
What other, perhaps slightly more complex or less obvious, crucial considerations are we still missing?
I agree this is very important. I've argued that if we appropriately price in missing crucial considerations,[1] we should consider ourselves clueless about AI risk interventions (here and here).
Also, relatively prosaic causal pathways we haven't thought of in detail, not just high-level "considerations" per se.
A salient example to me: This post essentially consists of Paul briefly remarking on some mildly interesting distinctions between different kinds of x-risks, and listing his precise credences without any justification for them. It's well-written for what it aims to be (a quick take on personal views), but I don't understand why it was so strongly celebrated.
I'm curious if you think you could have basically written this exact post a year ago. Or if not, what's the relevant difference? (I admit this is partly a rhetorical question, but it's mostly not.)
Oops, right. I think what's going on is:
Sorry, I don't understand the argument yet. Why is it clear that I should bet at odds P if, e.g., P is the distribution by which the CCT says I should be represented?
I found this post valuable for stating an (apparently popular on LW) coherentist view pretty succinctly. But its dismissal of foundationalism seems way too quick:
Relevant quote from Nye (2015): “The methodological approach I defend maintains that the direct plausibility or implausibility of principles about the ethical relevance of various factors is foundational in normative and practical ethics. This does not mean that appearances of direct plausibility are infallible. Principles often seem plausible only because we are making confusions and do not fully appreciate what they are really saying. On the approach I defend, much of the business of ethical reasoning consists in correcting erroneous appearances of plausibility by clarifying the content of principles, making crucial distinctions, and discovering alternatives with greater direct plausibility. Nor does the claim that the direct plausibility of principles is foundational mean that we should begin our ethical reasoning by considering only which principles seem plausible. The principles that turn out to be most plausible on reflection might be suggested to us only by first considering our intuitions about a variety of cases and then seeing which of them can be justified by principles that, once formulated and clarified, are directly plausible.”