There are many critical posts on LW about If Anyone Builds It, Everyone Dies.
There are detailed disagreements with particular arguments, object-level claims, and, to a lesser extent, technical assertions. But I think much of this criticism conflates three distinct propositions that deserve separate evaluation:

1. The specific arguments and object-level claims the book uses to make its case.
2. The core claim itself: if anyone builds it, everyone dies.
3. The policy proposal: shut it all down.

These three claims have different truth conditions and call for different standards of evidence. Yet I see many critics treating them as a package deal, rejecting (2) and (3) primarily because they disagree with (1).
Personally, I find the arguments in IABIED straightforward and valid. I'm genuinely surprised by the degree of pushback from LessWrong, though this probably reflects my own bubble among rationalists and AI safety people. But this post isn't about relitigating those object-level arguments.
That's because I believe the authors have made a compelling case that even if >95% of their specific arguments were incorrect, the core claim "if anyone builds it, everyone dies" would still hold.
The case for x-risk from AGI doesn't rest on any single argument being ironclad. It rests on the conjunctive structure of the problem: we would need to solve all of these problems simultaneously, under severe time pressure, and the problems are diverse, hard, and largely independent, and humanity does not usually solve their equivalents to the standard that a good outcome with ASI would require.
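As a toy illustration of that conjunctive structure (the numbers are mine, purely illustrative, not from the book): if a good outcome requires independently getting each of $n$ hard problems right, the per-problem probabilities multiply, and even generous odds on each one leave a poor overall chance:

$$P(\text{all solved}) \;=\; \prod_{i=1}^{n} p_i, \qquad \text{e.g. } p_i = 0.9,\ n = 10 \;\Rightarrow\; 0.9^{10} \approx 0.35.$$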
What puzzles me even more is the resistance to (3) given acceptance of some substantial probability of (2).
The logical structure here should be clear: just as "if anyone builds it, everyone dies" (2) doesn't require full endorsement of every argument in the book (1), the proposal to "shut it all down" (3) doesn't require certainty about (2).
To put it plainly: we don't need P(doom) = 0.99 to justify extraordinary precautions. We just need it to be non-negligible, and we need the stakes to be astronomical.
Which they are.
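A minimal expected-value sketch, with placeholder symbols of my own rather than anything from the book: taking $p$ as the probability of catastrophe, $V$ as the value destroyed if it happens, and $C$ as the cost of the precaution, the precaution is worth it whenever

$$p \cdot V > C \quad\Longleftrightarrow\quad p > \frac{C}{V},$$

so if the value at stake is even $10^4$ times the cost of the precaution, any $p > 10^{-4}$ already tips the balance. The point is the shape of the inequality, not the particular numbers.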
So here's my ask for critics of IABIED: please be much more explicit about why, in your particular case, rejecting (1) justifies rejecting (2) or (3).
What's the specific logical connection you're drawing? Are you claiming that: