IABIED Review - An Unfortunate Miss

by Darren McKee
18th Sep 2025
11 min read
1

17

AI

17

IABIED Review - An Unfortunate Miss
5WilliamKiely
New Comment
1 comment, sorted by
top scoring
Click to highlight new comments since: Today at 12:17 AM
[-]WilliamKiely1h50


TL;DR: Overall, this is a decent book because it highlights an important issue, but it is not an excellent book because it fails to sufficiently substantiate its main arguments, to explain the viability of its solutions, and to make itself accessible to the larger audience it is trying to reach.
As such, it isn’t the best introduction for a layperson curious about AI risk.


(Meta?)Context and Expectations

  1. Writing a book is hard. Making all those decisions about what to write and how to write it is hard. Doing interviews/podcasts where it’s important to say the right thing, in the right way, is hard. So, separate from anything else, kudos to Eliezer and Nate for being men in the arena. 
  2. When I was writing my book on AI risk, someone said that it could be highly impactful, they just weren’t sure if that impact would be positive or negative. I think the same is true of IABIED. (Of course, it could also be anywhere in-between.) 
    It is an empirical question whether IABIED, as it ripples through space and time and minds and actions, will lead to a safer world. One hopes it at least broadens the Overton window (instead of a backlash making it worse). No one currently knows and only time will tell. Everyone should acknowledge the possible risks though. 
  3. Book reviews often say more about the reviewer than the book. Non-exhaustively, Scott Alexander didn’t like the middle third (the scenario), Buck didn’t like the final third (facing the challenge), Zvi liked it overall, others thought the beginning was weak, and Shakeel disliked the writing. Many recommended IABIED all the same, and some loved it with few quibbles. Obviously, mileage will vary and different strokes for different folks.
  4. What should be acknowledged is that most of these reviewers (including me) already care about AI risk and are convinced by the main arguments being made, even though they disagree about the likelihood of harm and what should be done. As such, it’s hard to read the book for ‘yourself’ rather than for the broader impact we think the book might have, guessing how other people will receive it. This is doubly true for public reviews of the book. There is a lot of mental modeling and strategic consideration going on, with diverse and uncertain degrees of success. Time will tell.  
  5. Personally, I’ll share that going into this I REALLY wanted to love this book and have it clearly supplant mine as THE book on AI risk/safety to give to the interested layperson. 
    I knew I would likely disagree with some of the framing or prescriptions, but I hoped it would cover the key issues with style and depth and rigor, using excellent structure and up-to-date examples from recent events. Unfortunately, it did not deliver on that hope (more on why below). 
    As one of the handful(?) of humans on earth who have written a book for the layperson dedicated to AI risk, I couldn’t help but see missed opportunities all over the place. I’ve reflected on whether that perception is largely subjective preference or a honed sense of how others come to understand things, but I can’t know (time will tell). 
    I will also sincerely admit that I believe my book “Uncontrollable” is a superior introduction for a curious layperson. I mention this for transparency but also because some seem to be rallying around IABIED, even with its shortcomings, because they don’t think there is another option (Buck implied as much in his review, which led me to frame a couple things differently in this review). 
  6. I acknowledge some tension/possible CoI here, but I definitely think the cause is more important than me. More to the point, the stakes are high! We need more people to understand the very real threat that advancing AI poses. Many of the people we’re trying to reach only read a couple of non-fiction books a year, and it is unlikely that they are going to read more than one about AI risk/safety. Should IABIED be that one? I don’t think so. 
  7. Finally, more overtly, I state my assumption that IABIED isn’t just for the LessWrong and adjacent audiences, but for the people we hope to convince who have had little exposure to the issues of AI safety and risk. 
    It is through their eyes I’m trying to see, and that is how the book is judged. 

     

Main Points/Reflections

Important messages

IABIED usefully highlights key things everyone should know about AI risk: A superintelligent AI could cause humanity lots of harm, we don’t know how to fully ensure it won’t, we don’t know how long we have to figure it out, and everyone should be more concerned than they are. 

I liked the use of evolution as a way to glimpse how initial goals/purposes (the evolved desire for sex or sugar) can be thwarted by intelligence down the line (the hacks of birth control and artificial sweeteners), and how predicting an entity’s actions as it changes over time is very difficult, perhaps impossible.

I also liked the refrain that AI systems are grown not built (to highlight our lesser understanding/control).

It was important to end the book on a message of hope (though I’m not sure how hopeful it will actually make readers feel). 

(The rest of the review focuses more on the shortcomings because I’ve seen fewer of those mentioned and I keep seeing positive blurbs without much analysis.) 
 

Lack of detail/argumentation/too short

The book repeatedly suffers from insufficient exploration of a concept/issue. This harms a deeper understanding/encoding of that concept/issue and how it fits into the overall picture of AI safety and risk. 
For many of the chapters in Parts 1 and 3, as they came to their end, I wondered “That’s it? Where is the rest? You were just getting going!” 
This is all the more significant because of the bold claim in the title and the text, and because the reader is explicitly told the main claim is not hyperbole - everyone will die! If so, there had better be reams of argumentation to support this thesis. That support should be both intellectual and intuitive, using intuition pumps to break down cognitive/emotional walls and enable greater acceptance of AI risk arguments (perhaps this is what the parables were supposed to do, but if so, it wasn’t sufficient). 

For comparison, instead of IABIED’s parables with aliens, Uncontrollable met people where they are, using common/everyday experiences to illustrate AI risk/safety issues. For example, listening to music and baking cookies (the power of intelligence), ordering pizza and cleaning your room (alignment problem), your smartphone (control), and food poisoning (risk). The logic was: what is the activity/thing that almost everyone has experience with? The less common or accessible the introduction to a chapter, the fewer people will be reached, or the depth of that reach is lesser. Conceptual learning/integration suffers. 

I’m aware that each chapter ends with a QR code, but I believe most people will not use these, and those listening to the audio version are even less likely to do so. It’s hard to compare lengths without word counts, but the audio of IABIED is over 6 hours and the audio of my Uncontrollable is over 10 hours. This means Uncontrollable is ~40-60% longer, and I didn’t even have a scenario. So IABIED spends much less time talking about the power of intelligence, how machine learning works, the alignment problem, control, risk, etc. 

I think IABIED should have been at least 20% longer. This would still allow it to be under 300 pages (as I assume a shorter book was the goal) while providing a lot more detail about the key aspects of AI safety/risk. If someone is hearing about the alignment problem for the first time, they really need to be walked through it. 

A minor but somewhat representative example: on page 203, the book says “It might help if more people understood just how spooked experts and engineers are about artificial intelligence” and “It might also help if more people understood how fast this field is moving” (p. 204). 

Yes, agreed, it sure would! …But each of those lines is followed only by a very short overview explanation. Why not many paragraphs or pages? With real-world examples and quotes? Make the case, please make the case!

Further, IABIED doesn’t address, in the main body of the book, many of the main criticisms that AI safety/risk positions receive, and more space would have allowed for that. There should be an abundance of responses of the “but what about/why doesn’t…” variety, but there were only some. I don’t know why the decision was made to truncate to this degree and put more material online, but I think it weakened the book quite a bit. 
 

Style too sciencey/sci-fi? 

Overall, the book is easy to read in the sense that it isn’t too long, the spacing/margins are large, and the text is not visually dense. Many sentences are straightforward and easy to read, with the occasional flourish. That said, I think the style was too science/sci-fi oriented. This is a drawback because I believe IABIED is also trying to reach those without a science background, and many of the examples/explanations/parables would not be that accessible to such people (an empirical question, though). 

To elaborate further in two parts…
First, the issue of the parables. Some will like these, some will not. I didn’t like most of them, so many chapters got off to a rough start for me. I did enjoy the story of leaded gasoline, though, and wanted more like that. In short, I think real-world examples are best, not aliens/bird aliens making observations. Further, in many of these parables, characters spoke in an odd way that felt forced (to make the point), which makes them less effective as a communication device. 

Second, science stuff is great for people who know/like science, but it doesn’t work as well for those who don’t. Most people don’t really know how evolution works, so if that is going to be the analogy used to help understand the limitations of AI alignment/control/prediction, it’s best to unpack it a bit more. Even more so when discussing sexual selection - that part was just too brief to stick if you’re hearing it for the first time (people will only re-read the same paragraph so many times). 
There were also the technical details about LLMs, nuclear reactors, and numerous other sprinklings of phrases used by people who know science (or want to show that they know science)… but I don’t see that as a strength. You gain points with one subpopulation but lose them with another, larger subpopulation (the one I think IABIED is trying harder to reach). The risk of such language isn’t worth the reward when another style can work for all audiences. 
 

Scenario (Part II of IABIED)

I won’t give a summary as Scott and others did that, but I will say that it was disappointing for me. 
Scenarios are tricky. Without them, people often say, “Yes, but HOW would it happen?” Yet with them, people poke holes, say “that specific thing is unlikely,” and then disregard the larger point. A quirky thing about human brains, eh? The more detailed a narrative, the more psychologically plausible it feels… while at the same time being less likely to occur, precisely because of that added specificity. 

As I didn’t have a scenario in Uncontrollable, I had hoped that the one in IABIED would be something I could point towards. Alas, I don’t think I can. This is because I wanted a scenario that was less sci-fi narrative and more evidence-based good non-fiction extrapolation. Less a story, more a foresight scenario. 
For example, one that makes extremely clear which real-world events have already happened, uses those as a launching pad to near-future events that are likely to happen, and then presents a range of possible outcomes flowing from those. IABIED’s scenario felt more like a story, which I think hurts the main goal of the book: too many people already think AI risk is a sci-fi story, so a description of how things might come about should take pains to read less like a story and more like a series of reasonable extrapolations. People will disagree about this (time will tell). 
 

Promoting Safe AI innovation or Shut it all down?

The main proposal of IABIED is to shut it all down. This means ceasing all AI progress, having an oversight/control regime to ensure that that happens, and bombing data centers if needed to enforce said regime. 
These proposals are so outlandish on their face that there should have been LOTS of detail about how exactly they could plausibly come to fruition and how the numerous complications and various wicked problems would be addressed. 
There was not. And it’s baffling to me. Why such an own goal?

8 GPUs?! 
Presently, the most powerful AI models are trained on thousands/tens of thousands of GPUs. There are plans for 100,000-GPU data centers. IABIED proposes that it should be illegal for any corporation/person to have more than 8 GPUs, unless they are part of a monitoring/treaty oversight regime. 
Honestly, this seems laughable, but it’s important to have an open mind, so I was ready to take the position seriously. Take us through the steps of how this could actually work in practice: how different players - governments, corporations, billionaires - would react, and how the proposal would be politically feasible, legally appropriate, etc. 
Nothing like that appears. The same goes for the claim that there should be no more AI research published. There is no detailed consideration of the complexities, or of how some of these proposals might tank the economy (so any politician proposing them would be replaced by one who wouldn’t, etc.). 
This is a recurring pattern, and I don’t know why such material wasn’t included in the book (itself). 
 
While I can appreciate the sentiment behind the belief “X will kill us if we don’t stop doing A, B, C”, it’s not enough to say that those things are necessary and sufficient for safety. 
You have to make the case. You have to spell it out. You have to convince skeptics. 
But they don’t. 
They say it won’t be easy or cheap, but it’s required. 
That is not enough to take such proposals seriously. 

If the next response is something like, ‘However implausible you think these proposals are, they are our only hope, so there is no room for flexibility’ - this isn’t obviously correct. 
They hint at this when IABIED argues for a single focus on curbing AI progress instead of a broader tent, because unrelated issues (like deepfakes) may detract from that focus, or because people are sensitive to regulatory overreach. 
Maybe? 
Make the stronger case so people aren’t left wondering: why not have a larger tent, find allies where you can, put forward 5-10 different, more palatable AI-safety-related proposals, and make some progress in smaller steps? …instead of a sudden, much larger ask that appears completely and utterly unrealistic? 

Strategically, IABIED has chosen to argue “Not X” (i.e., no more AI progress), while in Uncontrollable I argue “For Y” (i.e., safe AI innovation). I think a range of policy proposals that lead to safe AI innovation is more likely to succeed than what IABIED proposes (even if both are in the ‘unlikely’ category). 
 

[Meta: I can see I’m running out of steam. I wish I had more time/energy to give to this review but I don't, so some of my positions may appear less substantiated than if I had more time to quote/excerpt lots of the text. Such is life]
 

Final Thoughts

It was a great effort to create and launch IABIED. I sincerely hope it has a wonderfully positive impact and makes us all safer. But it may not, given its numerous missed opportunities and various shortcomings. Time will tell.

I would love for someone other than me to compare and contrast IABIED and Uncontrollable. If the cost of my book is the limiting factor, let me know and I’ll get you a free one. 

I would normally consider this poor form in various ways, but given the critical importance of increasing acceptance of AI risk, and that some are unaware of its existence, it becomes more reasonable to say that Uncontrollable does a better job than IABIED at introducing AI risk to a layperson audience without a science background.