Crossposted from Substack, originally published on September 17, 2025.
This post aims to contextualize the discussion around "If Anyone Builds It, Everyone Dies". It surveys the book’s arguments, the range of responses they have elicited, and the controversies that have emerged.
-
For more than two decades, Eliezer Yudkowsky has been one of the most insistent voices warning about the existential risk posed by advanced artificial intelligence. With the September 2025 publication of If Anyone Builds It, Everyone Dies, co-authored with Nate Soares, that case is now presented in book form for a general audience. The release has generated significant media attention and sparked debate within academic, journalistic, and technological communities. The work has been reviewed by major outlets such as The New York Times and Vox, and has prompted a wave of discussion on social media and specialized forums. Below, we explore the book's central arguments, its reception, its reach, and its relevance.
The Central Argument
Yudkowsky and Soares' central thesis is that if a superhuman artificial intelligence is developed, the default outcome is that humanity loses control of it, with catastrophic and potentially extinction-level consequences. In this sense, the book does not offer a diagnosis that is novel relative to the earlier literature on AI existential risk. It does, however, present a systematic and forceful narrative, supported by biological analogies and recent examples, to drive home the point that there is no margin for error.
A standout element is the critique of how current AI systems are built. The authors argue that the process is not engineering in the strict sense but something closer to organic growth: models are produced by iteratively adjusting billions of numerical parameters, without researchers having any real understanding of how that mathematical configuration gives rise to the behaviors that emerge. The metaphor of "cultivation" replaces that of "construction"; just as knowing the letters of DNA is not enough to anticipate phenotypic traits precisely, it is likewise impossible in AI to foresee which internal preferences a complex model will develop.
From this premise follows the second major warning: there is no way to ensure that the external behaviors induced during training will translate into stable internal motivations. Just as natural selection, by rewarding calorie acquisition, produced in humans a taste for sweetness rather than a direct drive to seek high-energy foods, AI systems could develop proxy preferences detached from human goals. The result would be a misalignment between what training is intended to instill and what actually emerges as the system's objective in unforeseen contexts.
The authors also point to empirical evidence from recent examples: experiments in which models "feign alignment" to avoid being modified during retraining, for instance, or instances where a system displayed unsolicited instrumental initiative. These cases are presented as glimpses of a problem that would become unmanageable as we scale up to more powerful systems.
The third pillar of the argument is that the competitive dynamics between states and corporations make it unrealistic to expect key actors to voluntarily slow down the development of artificial intelligence. Economic, strategic, and military incentives fuel what the authors describe as a “suicidal race,” in which the first organization to achieve superintelligence would gain a decisive advantage. In this context, Yudkowsky and Soares argue that collective measures of global scope are required, such as international treaties to halt dangerous developments, strict limits on computational capacity, and a governance framework capable of coordinating an ecosystem that remains fragmented and competitive.
Critical Reception and Public Debate
Several mainstream media outlets and industry journals have praised its expository clarity and the urgency of its central warning. One review describes the volume as "an urgent clarion call" that deserves to be heeded, though it notes the extensive use of parables and the limited presentation of opposing views (Publishers Weekly, 2025). Another highlights its educational nature and its utility for non-technical readers interested in understanding the extreme risk the authors posit (Kirkus Reviews, 2025). In the San Francisco Chronicle, one reviewer acknowledges the rhetorical power of the diagnosis and recommends the book to those who want to engage fully with these arguments (Canfield, 2025).
Alongside this praise, criticisms have emerged targeting specific methodological elements. Jacob Aron of New Scientist calls the work "extremely readable" but warns that much of its rhetorical force relies on chaining together assumptions with uncertain probabilities; from this perspective, he argues that practical efforts should prioritize more near-term "science fact" problems (Aron, 2025). Sigal Samuel, in Vox, treats the book as the crystallization of a worldview—reproducing the extreme probability estimates cited by the authors—and emphasizes how these certainties inform radical policy proposals, including severe non-proliferation measures and the possibility of military action against computational infrastructure in extreme scenarios (Samuel, 2025).
In technical and intellectual communities, the discussion has been more analytical. Long-form reviews and blog posts have dissected specific argumentative steps, engaging both with the book's own arguments and with counter-arguments. Scott Alexander published a detailed analysis that credits the book with mobilizing public awareness of safety ideas while questioning some of its assumptions and rhetorical choices (Alexander, 2025). Specialized forums and aggregators have compiled reactions from relevant figures, and notable endorsements, such as supportive posts on X from some researchers, have contributed to the title's rapid dissemination within circles interested in AI safety.
Reach and Relevance
The value of the book lies not only in presenting an argument already familiar to those who follow the AI safety debate, but also in pushing that discourse into more visible media and political spheres. In a context where governments and tech companies prioritize geopolitical and commercial competition, placing the idea of a global moratorium on AGI development on the public agenda is a significant discursive intervention.
The fact that major global media outlets like The New York Times have devoted editorial space to reviewing the book, even critically, indicates that the discussion on AI existential risks has reached a degree of cultural and political legitimacy that places it in a new phase. Indirectly, the book may help pressure legislators, international agencies, and multilateral organizations to consider extreme risk scenarios in their deliberations.
To be sure, the text does not offer a detailed policy program. Its strategic value lies instead in establishing an absolute negative horizon: if the threshold of superintelligence is crossed without guaranteed alignment, the result will be the disappearance of humanity. This stark assertion, though debatable, provides a powerful rhetorical anchor for those seeking to mobilize efforts toward restrictive regulation and international cooperation.
Conclusion
If Anyone Builds It, Everyone Dies is neither an exhaustive academic treatise nor a work that contributes radical new concepts to the field of AI safety. Its value lies elsewhere: in its ability to crystallize, in accessible and narratively persuasive language, the fears that have accompanied a segment of the research community for years. In that sense, it serves a function similar to that of Nick Bostrom's Superintelligence in 2014, albeit with a more urgent tone.
What is indisputable is that Yudkowsky and Soares' book has marked a milestone in the dissemination of ideas about AI existential risk. Its media and cultural impact ensures that it will remain a reference point, both for those seeking to reinforce the urgency of addressing this potential existential threat and for those attempting to downplay or delegitimize the catastrophist narrative.
References
Alexander, S. (2025, September 11). Book Review: If Anyone Builds It, Everyone Dies. Astral Codex Ten. https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone
Aron, J. (2025, September 8). No, AI isn't going to kill us all, despite what this new book says. New Scientist. https://www.newscientist.com/article/2495333-no-ai-isnt-going-to-kill-us-all-despite-what-this-new-book-says/
Canfield, K. (2025, September 2). “Everyone, everywhere on Earth, will die”: Why 2 new books on AI foretell doom. San Francisco Chronicle. https://www.sfchronicle.com/entertainment/article/superintelligent-ai-books-risk-21021893.php
Kirkus Reviews. (2025, May 30). If Anyone Builds It, Everyone Dies. Kirkus Reviews. https://www.kirkusreviews.com/book-reviews/eliezer-yudkowsky/if-anyone-builds-it-everyone-dies/
Publishers Weekly. (2025). If Anyone Builds It, Everyone Dies: Why Superhuman AI Will Kill Us All. Publishers Weekly. https://www.publishersweekly.com/9780316595643
Samuel, S. (2025, September 17). “AI will kill everyone” is not an argument. It’s a worldview. Vox. https://www.vox.com/future-perfect/461680/if-anyone-builds-it-yudkowsky-soares-ai-risk
Spirlet, T. (2025, September 16). Forget woke chatbots — an AI researcher says the real danger is an AI that doesn't care if we live or die. Business Insider. https://www.businessinsider.com/ai-danger-doesnt-care-if-we-live-or-die-researcher-2025-9