I enjoyed most of IABIED

by Buck
17th Sep 2025
9 min read
25 comments, sorted by top scoring

Aaron_Scher · 1d

I feel a bit surprised by how much you dislike Section 3. I agree that it does not address 'the strongest counterarguments and automated-alignment plans that haven't been written down publicly'; that is a weakness, but expecting the authors to address plans that haven't been made public seems too demanding.

I particularly like the analogy to alchemy presented in Chapter 11. I think it is basically correct (or as correct as analogies get) that the state of AI alignment research is incredibly poor and the field is in its early stages where we have no principled understanding of anything (my belief here is based on reading or skimming basically every AI safety paper in 2024). The next part of the argument is like "we're not going to be able to get from the present state of alchemy to a 'mature scientific field that doesn't screw up certain crucial problems on the first try' in time". That is, 1: the field is currently in very early stages without principled understanding; 2: we're not going to be able to get from where we are now to a sufficient level by the time we need to.

My understanding is that your disagreement is with 2? You think that earlier AIs are going to be able to dramatically speed up alignment research (and by using control methods we can get more alignment research out of better AIs, for some intermediate capability levels), getting us to the principled, doesn't-mess-up-the-first-try-on-any-critical-problem place before ASI. 

Leaning into the analogy, I would describe what I view as your position as "with AI assistance, we're going to go from alchemy to first-shot-moon-landing in ~3 years of wall clock time". I think it's correct for people to think this position is very crazy at first glance. I've thought about it some and think it's only moderately crazy. I am glad that Ryan is working on better plans here (and excited to potentially update my beliefs, as I did when you all put out various pieces about AI Control), but I think the correct approach for people hearing about this plan is to be very worried about this plan. 

I really liked Section 3, especially Ch 11, because it makes this (IMO) true and important point about the state of the AI alignment field. I think this argument stands on its own as a reason to have an AI moratorium, even absent the particular arguments about alignment difficulty in Section 1. Meanwhile, it sounds like you don't like this section because, to put it disingenuously, "they don't engage with my favorite automating-alignment plan that tries to get us from alchemy to first-shot-moon-landing in ~3 years of wall clock time and that hasn't been written down anywhere". 

Also, if you happen to disagree strongly with the analogy to alchemy or 1 above (e.g., think it's an incorrect frame), that would be interesting to hear! Perhaps the disagreement is in how hard alignment problems will be in the development of ASI; for example, if the alchemists merely had to fly a blimp first try, rather than land a rocket on the moon? Perhaps you don't expect there to be any significant discontinuities and this whole "first try" claim is wrong and we'll never need a principled understanding?

I found this post and your review to be quite thoughtful overall! 

Vaniver · 10h*

> Some people will be turned off by the book, especially the most unreasonable parts of it, and we will have missed the opportunity to have someone more reasonable (according to me) than Eliezer and Nate write a similar book and then do a tour etc. I'm less worried about this after reading the book, because the book was good enough that it's hard for me to imagine someone else writing a much better one.

I want to register some "perfect is the enemy of the good" complaint here? Like--if you want to say "oh person X should totally write a book about AI risk", sure, let's all get together and encourage person X to write a book about AI risk. (I would probably be willing to fund more of these, and I'm probably not alone in that.) But I don't actually think there's anyone who seems more reasonable than them according to you who's willing to write a book. (Are you?)

 

Separately--I think this is the wrong model of book demand / opportunities to talk in public. I think when Superintelligence was published, there was more appetite for books like this as a result, instead of less. Similarly, I think IABIED is going to increase the appetite for books and articles and interviews by people who disagree with Eliezer and Nate, and so your hypothetical more reasonable author is more likely to get a book deal or speaking engagements as a result of IABIED increasing the temperature and salience of this issue.

Nina Panickssery · 2d

Just finished the book and agree that I’d recommend it to laypeople and predict it would improve the average layperson’s understanding of AI risk arguments.

Buck · 2d

Why did I like the book so much more than I expected? I think it's a mix of:

  • I like the authors' writing on basic AI risk stuff but I don't like their writing on more in-the-weeds questions, and I run across their in-the-weeds writing much more in my day-to-day life, so it's surprisingly pleasant to read them writing intro materials.
  • Their presentation of the arguments was cleaner here than I've previously seen.

sjadler · 2d*

I agree re cleaner presentation & thought the parables here were much easier to follow than some of Eliezer’s past two-people-having-a-conversation pieces

I also thought that chapters generally opened with interesting ledes and that their endings flowed well into the chapter that followed. I was impressed by the momentum / throughline of the book in that sense

RobertM · 2d

> • They very briefly discuss automated AI alignment research as a proposal for mitigating AI risk, but their arguments against that plan do not respond to the most thoughtful versions of these plans. (In their defense, the most thoughtful versions of these plans basically haven't been published, though Ryan Greenblatt is going to publish a detailed version of this plan soon. And I think that there are several people who have pretty thoughtful versions of these plans, haven't written them up (at least publicly), but do discuss them in person.)

Am a bit confused by this section - did you think that part 3 was awful because it didn't respond to (as yet unpublished) plans, or for some other reason?

Joe Rogero · 1d

> I felt like most of the counterarguments that I see in the wild (e.g. from people on Twitter, who are mostly much more informed about AI than the audience of this book) were left unaddressed. I have no idea whether the authors’ prioritization of counterarguments was right for that audience, and I do think it would be handy to have a version of this book somewhat more appropriate for AI twitter people.

PSA: The online resources do indeed contain quite a few counter-counterarguments that didn't fit into the book. (Buck probably knows this already; some readers might not.)

Darren McKee · 1d

Buck, did you read my book "Uncontrollable"?

Given your review, it's possible my book is the response to what you're alluding to here: "I don't know of a resource for laypeople that’s half as good at explaining what AI is, describing superintelligence, and making the basic case for misalignment risk."

I'm only 40 pages into the new book, and inherently conflicted of course, so it is better to have the thoughts of someone who has read both and isn't me, but people have said it is the best introduction to AI risk for laypeople.
I had hoped EY's book would clearly supplant mine, but the more reviews I read, the less clear that seems.

(happy to get you a copy, physical or audio, if desired). 

Buck · 1d

I've seen physical copies around, but I actually haven't read it. It's possible that you're totally right, in which case I apologize and should have finished my review with "I'm an idiot for not realizing it was worth my time to read Uncontrollable so that I could recommend it to people".

I would appreciate a digital copy and audio copy if you wanted to email them to me! I'm not sure I'll consume it because I don't know if it's that decision-relevant to me. 

WilliamKiely · 3h

FWIW Darren's book Uncontrollable is my current top recommended book on AI.

While I expected (75% chance) IABIED to overtake it, after listening to the audiobook Tuesday I don't think IABIED is better (though I'll wait until I receive and reread my hardcopy to declare that definitively).

As I wrote on Facebook 10 months ago:

The world is not yet as concerned as it should be about the impending development of smarter-than-human AI. Most people are not paying enough attention.

What one book should most people read to become informed and start to remedy this situation?

"Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World" by Darren McKee is now my top recommendation, ahead of:

- "Superintelligence" by Nick Bostrom,

- "Human Compatible" by Stuart Russell, and

- "The Alignment Problem" by Brian Christian

It's a short, easy read (6 hours at ~120wpm / 2x speed on Audible) covering all of the most important topics related to AI, from what's happening in the world of AI, to what risks from AI humanity faces in the near future, to what each and every one of us can do to help with the most important problem of our time.

Darren McKee · 28m

Just posted my review: IABIED Review - An Unfortunate Miss — LessWrong

Raemon · 1d

(fyi, I almost replied yesterday with "my shoulder Darren McKee is kinda sad about the 'no one else tried writing a book like this' line", and didn't get around to it because I was busy. I did get a copy of your book recently to see how it compared. Haven't finished reading it yet) 

Karl von Wendt · 2d

> Tricky hypothesis 2: But the differences between the world of today and the world where ASI will be developed don't matter for the prognosis.

I don't think that the authors implied this. Right in the first chapter, they write:

> If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

(emphasis by me). Even if it is not always clearly stated, I think they don't believe that ASI should never be developed, or that it is impossible in principle to solve alignment. Their major statement is that we are much farther from solving alignment than from building a potentially uncontrollable AI, so we need to stop trying to build it. 

Their suggested measures in part III (whether helpful/feasible or not) are meant to prevent ASI under the current paradigms, with the current approaches to alignment. Given the time gap, I don't think this matters very much, though - if we can't prevent ASI from being built as soon as it is technically possible, we won't be in a world that differs enough from today's to render the book title wrong.

Vaniver · 10h*

> I’m saying that if such a parallel scaling technique was developed, it would probably be tested out with incrementally increasing amounts of parallelism, if nothing else just for practical engineering reasons.

This isn't my impression of how things have gone in the past, so I'm not sure where your 'probably' is coming from. Like, this reminds me of the OpenAI Five story, where (if I remember correctly) an exploratory model ran for a week without much supervision and then people discovered it had performed surprisingly well.

But even if this is true, I don't think it changes the story much? If they run Sable with two hundred GPUs, then two thousand, then twenty thousand, then two hundred thousand, what is different about the story? It seems like extrapolating Sable's performance on 200k GPUs from 20k GPUs is fraught, and if the worrisome behavior starts appearing at 40k GPUs, the step size of 10x means they skip over the zone where the bad behavior is happening and maybe the model is unable to hide it to the zone where the bad behavior is happening and the model is able to hide it. "That's why real AI labs use a step size of 2x", one might reply, but--how confident are we that 2x is enough?

I do think there's some tension in the story between elements that make it likely all of the GPUs are put onto one project (something like "it's the CEO's pet project to throw all the compute at a project") and "no one is watching"--if nothing else, probably someone will get fired if the model stopped thinking one hour into a long run--but I think it's not hard to add additional details that make that sensible, or to postulate that the watching is just a devops engineer making sure that the GPUs are being utilized. ("Oh neat, it reformulated its internal language to be more efficient!" the observers might think, as Sable slips out of their control.)

Gunnar_Zarncke · 1d

(caveat: I'm still reading the book)

The book takes a risk by ignoring some of the more nuanced arguments (esp. your Tricky hypothesis 2), and I assume that is intentional. I think they are trying to shock the Overton window into including the very real risk of death by alignment failure if society continues with business as usual. The risk management seems to be:

A) Yet another carefully hedged warning call (like this one). Result:

  • 95%: a few people update, but the majority continues business as usual.
  • 5%: it brings the topic over the tipping point.

B) If Anyone Builds It, Everyone Dies. Result:

  • 50%: the topic becomes a large discussion point, and the Overton window comes to include the risk.
  • 50%: critical voices point out technical weaknesses of part 3 and the effort fizzles out.

If these numbers are halfway right, B seems advisable? And you can still do A if it fails!

Raemon · 1d

I think Buck and Eliezer both agree you should only say shocking things if they are true. If Eliezer believed what Buck believes, I think he would have found a title that was still aimed at the Overton-smashing strategy but honest.

Gunnar_Zarncke · 1d

I don't think you are arguing only about the title. Titles naturally have to simplify, but the book content has to support it. The "with techniques like those available today" in "If anyone builds it (with techniques like those available today), everyone dies" sure is an important caveat, but arguably it is the default. And, as Buck agrees, the authors do qualify it that way in the book. You don't have to repeat the qualification each time you mention it. 

The core disagreement doesn't seem to be about that but about leaving out Tricky hypothesis 2. I'm less sure that is an intentional omission by the authors. Yudkowsky sure has argued many times that alignment is tricky and hard and may feel that the burden of proof is on the other side now. 

khang200923 · 1d

> Even given all the flaws, I don't know of a resource for laypeople that’s half as good at explaining what AI is, describing superintelligence, and making the basic case for misalignment risk.

You might not have read aisafety.dance. Although it doesn't explain in detail what AI and superintelligence are, it does a really good job of describing the specifics of AI safety, possibly on par with the book (I haven't read the book yet, so this is an educated guess).

Max H · 1h

> • Tricky hypothesis 1: ASI will in fact be developed in a world that looks very similar to today's (e.g. because sub-ASI AIs will have negligible effect on the world; this could also be because ASI will be developed very soon).
>
> • Tricky hypothesis 2: But the differences between the world of today and the world where ASI will be developed don't matter for the prognosis.

Both of these hypotheses look relatively more plausible than they did 4y ago, don't they? Looking back at this section from the 2021 takeoff speed conversation gives a sense of how people were thinking about this kind of thing at the time.

AI-related investment and market caps are exploding, but not really due to actual revenue being "in the trillions" - it's mostly speculation and investment in compute and research.

Deployed AI systems can already provide a noticeable speed-up to software engineering and other white-collar work broadly, but it's not clear that this is having much of an impact on AI research (and especially a differential impact on alignment research) specifically.

Maybe we will still get widely deployed / transformative robotics, biotech, research tools etc. due to AI that could make a difference in some way prior to ASI, but SoTA AIs of today are routinely blowing through tougher and tougher benchmarks before they have widespread economic effects due to actual deployment.

I think most people in 2021 would have been pretty surprised by the fact we have widely available LLMs in 2025 with gold medal-level performance on the IMO, but which aren't yet having much larger economic effects. But in relative terms it seems like you and Christiano should be more surprised than Yudkowsky and Soares.

WilliamKiely · 6h*

> I'm less worried about this after reading the book, because the book was good enough that it's hard for me to imagine someone else writing a much better one.

I was really hoping you'd say "after reading the book, I updated toward thinking that I could probably help a better book get written."

My view is still that a much better intro to AI risk can be written.

I currently lean toward Darren McKee's Uncontrollable still being a better intro than IABIED, though I'm going to reread IABIED once my hardcopy arrives before making a confident judgment.

WilliamKiely · 6h

"If anyone builds it (with techniques like those available today), everyone dies"

One could argue that the parenthetical caveat is redundant if the "it" means something like "superintelligent AI built with techniques like those available today".

I also listened to the book and don't have the written text available yet, so I'll need to revisit it when my hardcopy arrives to see if I agree that there are problematic uncaveated versions of the title throughout the text.

(At first I disliked the title because it seemed uncaveated, but again, the "it" in the title is ambiguous and can be interpreted as including the caveats, so now I'm more neutral about the title.)

ryan_greenblatt · 1d

> I'm quite sympathetic to something like the caveated version of the title

Presumably, another problem with your caveated version of the title is that you don't expect literally everyone to die (at least not with high confidence) even if AIs take over.

Buck · 1d

oh thanks, fixed. I just internally substitute "AI takeover" anytime anyone says "AI kills everyone" because this comes up constantly, and I'd forgotten that I'd done so here :P

WilliamKiely · 6h

I independently had this same thought when listening to the book on Tuesday, and think it's worth emphasizing:

> I again think they’re inappropriately reasoning about what happens for arbitrarily intelligent models instead of reasoning about what happens with AIs that are just barely capable enough to count as ASI. Their arguments (that AIs will learn goals that are egregiously misaligned with human goals and then conspire against us) are much stronger for wildly galaxy-brained AIs than for AIs that are barely smart enough to count as superhuman.

Aaron Turner · 9h

Please see "TTQ: An Implementation-Neutral Solution to the Outer AGI Superalignment Problem", preprint: https://doi.org/10.5281/zenodo.16876832. (Full disclosure: I am the author.)

I enjoyed most of IABIED

I listened to "If Anyone Builds It, Everyone Dies" today.

I think the first two parts of the book are the best available explanation of the basic case for AI misalignment risk for a general audience. I thought the last part was pretty bad, and I would probably recommend skipping it. The authors fail to address counterarguments that I think are crucial, so I am not persuaded of the book’s thesis, and I think the book neglects crucial aspects of the situation and makes poor recommendations. Even so, I would happily recommend the book to a lay audience, and I hope that more people read it.

I can't give an overall assessment of how well this book will achieve its goals. The point of the book is to be well-received by people who don't know much about AI, and I’m not very good at predicting how laypeople will respond to it; the results so far seem mixed, leaning positive. So I’ll just talk about whether I think the arguments in the book are reasonable enough that I want them to be persuasive to the target audience, rather than whether I think they’ll actually succeed.

Thanks to several people for helpful and quick comments and discussion, especially Oli Habryka and Malo Bourgon!

Synopsis

Here's a synopsis and some brief thoughts, part-by-part:

  • In part 1, they explain what neural nets are and why you might expect powerful AIs to be misaligned. I thought it was very good. I think it's a reasonable explanation of basic ML and an IMO excellent exploration of what the evolution analogy suggests about AI goals (though I think that there are a bunch of disanalogies that the authors don’t discuss, and I imagine I’d dislike their discussion of that if they did write it). I agreed with most of this section.
    • I thought the exploration of the evolution analogy was great – very clearly stated and thoughtful. I don’t remember if I’ve previously read other versions of this argument that also made all the points here (though there are many important subtleties to the argument that it doesn’t discuss; for example, it almost totally ignores instrumental alignment).
    • Overall, I thought this part does a great job of articulating arguments. The text does respond to a bunch of counterarguments, but they mostly felt like really naive and rudimentary counterarguments to me, and I felt like most of the counterarguments that I see in the wild (e.g. from people on Twitter, who are mostly much more informed about AI than the audience of this book) were left unaddressed. I have no idea whether the authors’ prioritization of counterarguments was right for that audience, and I do think it would be handy to have a version of this book somewhat more appropriate for AI twitter people.
  • Part 2, where they tell a story of AI takeover, is solid; I only have one footnoted quibble[1].

    • In general, they try to tell the story as if the AI company involved is very responsible, but IMO they fail to discuss some countermeasures the AI company should take (countermeasures I would take if I were in charge of a ten-person team there, assuming the rest of the company were reasonably cooperative with my team). This doesn't hurt the argument very much, because it's easy to instead read it as a story about a real, not-impressively-responsible AI company.

  • Part 3, where they try to talk about the state of countermeasures and how the risk should be responded to, varied between okay and awful, and overall felt pretty useless to me. If I were recommending the book to someone, I would plausibly recommend that they skip it.
    • They give a general discussion of how engineering problems are often hard when you don't have good feedback loops or good understanding of the underlying science, and when the technology involves fast-moving components (e.g. nuclear reactors); this content was fine.
    • They very briefly discuss automated AI alignment research as a proposal for mitigating AI risk, but their arguments against that plan do not respond to the most thoughtful versions of these plans. (In their defense, the most thoughtful versions of these plans basically haven't been published, though Ryan Greenblatt is going to publish a detailed version of this plan soon. And I think that there are several people who have pretty thoughtful versions of these plans, haven't written them up (at least publicly), but do discuss them in person.)
    • (They also criticize some other naive plans for mitigating AI risk, like “train it to be curious, so that it preserves humanity”; I think their objections to those are fine.)
    • They propose that GPU clusters (perhaps as small as 8 GPUs!) be banned or restricted somehow and suggest some other calls to action; I don’t think the ideas here are very good. (I’m told that the online resources go into more detail on their proposals; my concern isn’t that the proposals aren’t detailed enough, but that they aren’t very good interventions to push for.)

I personally really liked the writing throughout (unlike e.g. Shakeel I didn't find the sentences torturous at all). I'm a huge fan of Eliezer's fiction and most of his non-fiction that doesn't talk about AI, so maybe this is unsurprising. I often find it annoying to read things Eliezer and Nate write about AI, but I genuinely enjoyed the experience of listening to the book. (Also, the narrator for the audiobook does a hilarious job of rendering the dialogues and parables.)

My big disagreement

In the text, the authors often state a caveated version of the title, something like "If anyone builds it (with techniques like those available today), everyone dies". But they also frequently state or imply the uncaveated title. I'm quite sympathetic to something like the caveated version of the title[2]. But I have a huge problem with equivocating between the caveated and uncaveated versions.

There are two possible argument structures that I think you can use to go from the caveated thesis to the uncaveated one, and both rely on steps that are IMO dubious:

Argument structure one:

  • If anyone built ASI with current techniques in a world that looked like today's, everyone would die.
  • Tricky hypothesis 1: ASI will in fact be developed in a world that looks very similar to today's (e.g. because sub-ASI AIs will have negligible effect on the world; this could also be because ASI will be developed very soon).
  • Therefore, everyone will die.

This is the argument that I (perhaps foolishly and incorrectly) understood Eliezer and Nate to be making when I worked with them, and the argument I made when I discussed AI x-risk five years ago, right before I started changing my mind on takeoff speeds.

I think Eliezer and Nate aren’t trying to make this argument—they are agnostic on timelines and they don’t want to argue that sub-ASI AI will be very unimportant for the world. I think they are using what I’ll call “argument structure two”:

  • If anyone built ASI with current techniques in a world that looked like today's, everyone would die.
  • The big complication: However, ASI might be built in a world that looks very different from today's: it might be several decades in the future, pretty powerful AI might be available for a while before ASI is developed, researchers might be way more experienced getting AIs to do stuff than they currently are.
  • Tricky hypothesis 2: But the differences between the world of today and the world where ASI will be developed don't matter for the prognosis.
  • Therefore, everyone will die.

The authors are (unlike me) confident in tricky hypothesis 2. The book says almost nothing about either the big complication or tricky hypothesis 2, and I think that’s a big hole in their argument that a better book would have addressed.[3] (I find Eliezer’s arguments extremely uncompelling.)

I think that explicitly mentioning the big complication is pretty important for giving your audience an accurate picture of what you're expecting. Whenever I try to picture the development of ASI, it's really salient in my picture that that world already has much more powerful AI than today’s, and the AI researchers will be much more used to seeing their AIs take unintended actions that have noticeably bad consequences. Even aside from the question of whether it changes the bottom line, it’s a salient-enough part of the picture that it feels weird to neglect discussing it.

And of course, the core disagreement that leads me to disagree so much with Eliezer and Nate on both P(AI takeover) and on what we should do to reduce it: I don't agree with tricky hypothesis 2. I think that the trajectory between here and ASI gives a bunch of opportunities for mitigating risk, and most of our effort should be focused on exploiting those opportunities. If you want to read about this, you could check out the back-and-forth my coworkers and I had with some MIRI people here, or the back-and-forth Scott Alexander and Eliezer had here.

(This is less relevant given the authors’ goal for this book, but from my perspective, another downside of not discussing tricky hypothesis 2 is that, aside from being relevant to estimating P(AI takeover), understanding the details of these arguments is crucial if you want to make progress on mitigating these risks.)

If they wanted to argue a weaker claim, I'd be entirely on board. For example, I’d totally get behind:

  • It is pretty unclear whether we will survive or not. There are various reasons to think we might be able to prevent AI takeover. But none of those reasons are airtight, and many of them require that all AI companies with dangerous models implement safety measures competently, and it's very unclear that that will happen.
  • We should demand an extremely low probability of extinction from AI developers, because extinction would be really bad. And we are not on track to getting to justified confidence in the safety of powerful AI.

But instead, they propose a much stronger thesis that they IMO fail to justify.

This disagreement leads to my disagreement with their recommendations—relatively incremental interventions seem much more promising to me.

(There’s supplementary content online. I only read some of this content, but it seemed somewhat lower quality than the book itself. I'm not sure how much of that is because the supplementary content is actually worse, and how much of it is because the supplementary content gets more into the details of things—I think that the authors and MIRI staff are very good at making simple conceptual arguments clearly, and are weaker when arguments require attention to detail.)

(I will also parenthetically remark that superintelligence is less central in my picture than theirs. I think that there is substantial risk posed by AIs that are not wildly superintelligent, and it's plausible that humans purposefully or involuntarily cede control to AIs that are less powerful than the wildly superintelligent ones the authors describe in this book. This causes me to disagree in a bunch of places.)

I tentatively support this book

I would like it if more people read this book, I think. The main downsides are:

  • Some people will be persuaded by the parts of the book that I think are wrong, which will have slightly bad consequences but is not a huge problem, and seems overall better than them never engaging with a serious argument about AI x-risk.
  • Some people will be turned off by the book, especially the most unreasonable parts of it, and we will have missed the opportunity to have someone more reasonable (according to me) than Eliezer and Nate write a similar book and then do a tour etc. I'm less worried about this after reading the book, because the book was good enough that it's hard for me to imagine someone else writing a much better one.
    • Relatedly, success of this book will lead Eliezer and Nate to be more prominent public intellectuals on the topic of AI. I don't know whether this is good or bad. It really depends on who they're displacing.
      • I don't love them as representatives of AI safety to the public for a few reasons. Despite the book being impressively cleaned up compared to Eliezer’s usual writing style, I expect them to be somewhat worse at being likable and persuasive to mass audiences in unscripted settings. I think their arguments are often unpersuasive to informed audiences (partially because of the flaws in the arguments that I complained about above, and partially because they don’t know much about empirical ML or empirical evidence about alignment and sometimes come across as blowhards to ML researchers). And I disagree with their recommended actions.
      • I think it would be suboptimal if important stakeholders tried to get advice from them (though again, it depends who they’re displacing), because I don't think that they have good recommendations for what people should do.

Despite my complaints, I’m happy to recommend the book, especially with the caveat that I think it's wrong about a bunch of stuff. Even given all the flaws, I don't know of a resource for laypeople that’s half as good at explaining what AI is, describing superintelligence, and making the basic case for misalignment risk. After reading the book, it feels like a shocking oversight that no one wrote it earlier.

  [1]

     In their story, the company figures out a way to scale the AI in parallel, and then the company suddenly massively increases the parallel scale and the AI starts plotting against them. This seems somewhat implausible—probably the parallel scale would be increased gradually, just for practical reasons. But if that scaling had happened more gradually, the situation probably still wouldn't have gone that well for humanity if the AI company was as incautious as I expect, so whatever. (My objection here is different from what Scott complained about and Eliezer responded to here—I’m not saying it’s hugely unrealistic for parallel scaling to pretty suddenly lead to capabilities improving as rapidly as depicted in the book, I’m saying that if such a parallel scaling technique was developed, it would probably be tested out with incrementally increasing amounts of parallelism, if nothing else just for practical engineering reasons.)

  [2]

     My main problems with the caveated version of the title:

    • I again think they’re inappropriately reasoning about what happens for arbitrarily intelligent models instead of reasoning about what happens with AIs that are just barely capable enough to count as ASI. Their arguments (that AIs will learn goals that are egregiously misaligned with human goals and then conspire against us) are much stronger for wildly galaxy-brained AIs than for AIs that are barely smart enough to count as superhuman.
    • I don't think it's clear that misaligned superintelligent AI would kill everyone as part of taking over; see discussion here. Note that the expected fatalities from getting taken over by wildly superintelligent AI are probably lower than the fatalities from getting taken over by an AI that is barely able to take over, because in the latter case the AI might have to kill us in order to take our stuff despite not wanting to do so.
  [3]

     I don't think Eliezer and Nate are capable of writing this better book, because I think their opinions on this topic are pretty poorly thought through.

Mentioned in
IABIED Review - An Unfortunate Miss
[Event] AISafety.com Reading Group session 327