The statement "IABIED" is true even if the book IABIED is mostly false

by Ihor Kendiukhov
10th Oct 2025

There are many critical posts on LW about If Anyone Builds It, Everyone Dies. 

There are detailed disagreements with particular arguments, object-level claims, and - to a lesser extent - technical assertions. But I think much of this criticism conflates three distinct propositions that deserve separate evaluation:

  1. The arguments in the book are sound. All, or at least most, of the specific arguments, examples, and reasoning chains presented in IABIED are valid and persuasive.
  2. The title claim is true. The statement "if anyone builds it, everyone dies" accurately describes our situation with AGI development.
  3. The policy recommendation is correct. We should "shut it all down" via the specific interventions Yudkowsky and Soares propose.

These three claims have different truth conditions and require different standards of evidence. Yet I observe many critics treating them as a package deal - rejecting (2) and (3) primarily on the basis of disagreeing with (1).

Personally, I find the arguments in IABIED straightforward and valid. I'm genuinely surprised by the degree of pushback from LessWrong, though this probably reflects my own bubble among rationalists and AI safety people. But this post isn't about relitigating those object-level arguments.

That is because I believe the authors have made a compelling case that even if >95% of their specific arguments were incorrect, the core claim "if anyone builds it, everyone dies" would still hold true.

The case for x-risk from AGI doesn't rest on any single argument being ironclad. It rests on the conjunctive structure of the problem: we would need to solve all of the problems the book identifies simultaneously, under severe time pressure, and those problems are diverse, hard, largely independent, and of a kind that humanity does not usually solve to the standard a good ASI outcome would require.
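As a rough sketch of that conjunctive structure (the numbers here are illustrative placeholders of mine, not figures from the book): if a good outcome requires independently solving $n$ hard problems, each with success probability $p_i$, then

$$P(\text{good outcome}) = \prod_{i=1}^{n} p_i,$$

so ten problems each solved with a generous 80% probability leave $0.8^{10} \approx 0.11$ overall, i.e. roughly an 89% chance of failure. Refuting most of the book's individual arguments does little to this structure so long as several hard, independent problems remain.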

What puzzles me even more is the resistance to (3) given acceptance of some substantial probability of (2).

The logical structure here should be clear: just as "if anyone builds it, everyone dies" (2) doesn't require full endorsement of every argument in the book (1), the proposal to "shut it all down" (3) doesn't require certainty about (2).

To put it very simply: we don't need P(doom) = 0.99 to justify extraordinary precautions. We just need it to be non-negligible, and we need the stakes to be astronomical.

Which they are. 
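Stated as a bare expected-value inequality (again with placeholder symbols of my own, not the authors'): a precaution with cost $C$ is warranted whenever

$$p_{\text{doom}} \cdot V_{\text{stakes}} > C,$$

and if $V_{\text{stakes}}$ is astronomical, even a modest $p_{\text{doom}}$ of, say, 0.05 satisfies the inequality for any plausible $C$. Nothing in this argument requires $p_{\text{doom}} \approx 1$.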

So here's my ask for critics of IABIED: Please make it much more explicit why rejecting (1) justifies rejecting (2) or (3) in your particular case.

What's the specific logical connection you're drawing? Are you claiming that:

  • All the arguments are so flawed that the probability of (2) drops below some threshold?
  • The correlation between the quality of specific arguments and claim validity is so tight that (1) being false for some particular arguments makes (2) unlikely?
  • The policy implications of (3) are so costly that only absolute certainty about (2) would justify them? Many people say that "shutting it all down" is "unrealistic". But the feasibility of an intervention is not the same as its desirability, no?
  • You do not reject (2) or (3), but only (1)? If so, please make that clearer!
  • Something else entirely?