Eliezer and I wrote a book. It’s titled If Anyone Builds It, Everyone Dies. Unlike a lot of other writing either of us has done, it’s being professionally published. It’s hitting shelves on September 16th.

It’s a concise (~60k word) book aimed at a broad audience. It’s been well received by early readers of advance copies, with endorsements including:

The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. Their brilliant gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I’ve ever read, and I’ve waded through scores of them. We really must rub our eyes and wake the **** up!

- Stephen Fry, actor, broadcaster, and writer

If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can.

- Tim Urban, co-founder, Wait But Why

This is the best no-nonsense, simple explanation of the AI risk problem I've ever read.

- Yishan Wong, former CEO of Reddit

Lots of people are alarmed about AI, and many of them are worried about sounding alarmist. With our book, we’re trying to break that logjam, and bring this conversation into the mainstream.

This is our big push to get the world onto a different track. We’ve been working on it for over a year. The time feels ripe to me. I don’t know how many more chances we’ll get. MIRI is dedicating a lot of resources to making this push go well. If you share any of my hope, I’d be honored if you’d do whatever you can to help the book make a huge splash once it hits shelves.

One thing that our publishers tell us would help is preorders. Preorders count towards first-week sales, which determine a book’s ranking on the best-seller list, which has a big effect on how many people read it. And, inconveniently, early preorders affect the number of copies that get printed, which affects how much stock publishers and retailers wind up having on hand, which affects how much they promote the book and display it prominently. So preorders are valuable,[1][2] and they’re especially valuable before the first print run (mid-June) and the second print run (mid-July). We’re told that 10,000 preorders gives a good chance of making the best-seller list (depending on the competition), and that 20,000 would be a big deal. Those numbers seem to me to be within the range of possibility, and they’re small enough that each individual preorder makes a difference.

If you’ve been putting off sharing your views on AI with your friends and family, this summer might be a good time for it. Especially if your friends and family are the sort of people who’d preorder a book in June even if it won’t hit shelves until September.

Another thing that I expect to help is discussing the book once it comes out, to generate buzz and increase its impact. Especially if you have a social media platform. If you’ve got a big or interesting platform, I’d be happy to coordinate on which timings are most impactful (according to the publicists) and perhaps even provide an advance copy (if you want to have content queued up), though we can't offer that to everyone.

Some of you have famous friends who might provide endorsements to match or exceed the ones above. Extra endorsements would be especially valuable if they come in before May 30, in which case they could be printed in or on the book; but they’re still valuable later for use on the website and in promotional material. If you have an idea, I invite you to DM me, and we might be able to share an advance copy with your contact.

(And, of course, maybe you don’t share my hope that this book can bring the conversation to the mainstream, or are reserving judgement until you’ve read the dang thing. To state the obvious, that’d make sense too.)

I’ve been positively surprised by the reception the book has gotten thus far. If you're a LessWrong regular, you might wonder whether the book contains anything new for you personally. The content won’t come as a shock to folks who have read or listened to a lot of what Eliezer and I have to say, but it nevertheless contains some new articulations of our arguments that I think are better than any we’ve managed before. For example, Rob Bensinger (of MIRI) read a draft and said:

I've spent years trying to make these same arguments and I was frequently floored while reading the book at how much better you did at arguing for things I've been struggling to communicate this whole time. XD YOU SOLVED THE PROBLEM. This is how we should be making these arguments!

Other MIRI staff report that the book helped them fit the whole argument in their head better, or that it made sharp some intuitions they had that were previously vague. So you might get something out of it even if you’ve been around a while. And between these sorts of reactions among MIRI employees and the reactions from others quoted at the top of this post, you might consider that this book really does have a chance of blowing the Overton window wide open.

As Rob said in the MIRI newsletter recently:

the two proximate reasons humanity is currently racing to destroy itself with AI are that (1) not enough people are aware of the danger, and (2) some of the people aware of the danger are worried about looking silly by speaking out about it.

These are completely insane reasons for the human experiment to end.

[...] If public figures woke up tomorrow and just started talking about this issue, that would make it stop feeling like an issue that has to be discussed in hushed tones.

That’s what we’re going for. And seeing the reception of early drafts, I have a glimmer of hope. Perhaps humanity can yet jolt into action and change our course before it’s too late. If you, too, see that glimmer of hope, I’d be honored by your aid.

Also, we have a stellar website made by LessWrong’s very own Oliver Habryka, where you can preorder the book today: IfAnyoneBuildsIt.com.


  1. Bulk preorders don’t count. The people who compile bestseller lists distinguish and discount bulk preorders. ↩︎

  2. We're told that hardcover preorders count for a little more than e-book preorders, if it's all the same to you. I mostly recommend just buying whichever versions you actually want. ↩︎

109 comments (some truncated due to high volume)
[-]jeremy10482

I have preordered, and am looking forward to reading my copy when it arrives. Seems like a way to buy lightcone-control-in-expectation-points very cheaply.

 

I admit I'm worried about the cover design. It looks a bit... slap-dash; at first I thought the book was self-published. I'm not sure how much control you and Eliezer have over this, but I think improving it would go a long way toward convincing people to spread it & its ideas as mainstream, reasonable, inside-the-Overton-window.

[-]MondSemmel118114

+1 on the cover looking outright terrible. To make this feedback more specific and actionable:

  • If you care about the book bestseller lists, why doesn't this book cover look like previous bestsellers? To get a sense of what those look like, here is an "interactive map of over 5,000 book covers" from the NYT "Best Selling" and "Also Selling" lists between 2008 and 2019.
  • In particular, making all words the same font size seems very bad, and making title and author names the same size and color is a baffling choice.
    • Why is the subtitle in the same font size as the title?
    • And why are your author names so large, anyway? Is this book called "If Anyone Builds It, Everyone Dies", or is it called "Eliezer Yudkowsky & Nate Soares"?
    • Plus someone with a 17-character name like "Eliezer Yudkowsky" simply can't have such a large author font. You're spending three lines of text on the author names!
    • Plus I would understand making the author names so large if you had a humungous pre-existing readership (when you're Stephen King or J. K. Rowling, the title of your book is irrelevant). But even Yudkowsky doesn't have that, and Nate certainly doesn't. So why not make the author names smaller, and let the ti
... (read more)

The "lightcone-eating" effect on the website is quite cool. The immediate obvious idea is to have that as a background and write the title inside the black area.

If one wanted to be cute you could even make the expansion vaguely skull-shaped; perhaps like so?

7Odd anon
Most of those are fiction or biographies/memoirs (which often have a picture of the subject/author on the cover), which seem to have a different cover style than other books. Skimming through some lists of NYT bestsellers, some books with the most comparable "Really Big Thing!" topics are "Fascism: A Warning" (Madeleine Albright, cover has large red-on-black lettering, no imagery), "How to Avoid a Climate Disaster" (Bill Gates, cover has large gradiented blue-to-red text on white background, author above, subtitle below, no imagery), "Germs" (title in centered large black lettering, subtitle "Biological Weapons and America's Secret War" in smaller text above, authors beneath; background is a white surface with a diagonally-oriented glass slide on it), and "A Warning - Anonymous" (plain black text on white background, subtitle "A Senior Trump Administration Official" in small red lettering below, no imagery). Neither cover version of IABIED looks that different from that pattern, I think.
5MikkW
Given that the book is being published by a major publisher, it can safely be assumed that the cover design was made by a professional cover designer, who knew what they were doing.

Contrary to what you wrote, the title has a bigger font size than both the subtitle and the authors' names (this is true of both the American and UK covers; I am primarily talking about the American cover, which I presume is the one you are referencing). Even if the author names were the same size as the title, it is immediately obvious which one is the title, and which one isn't. Putting the subtitle in a dark grey, which is much closer to the background color (black) than the color of the title (white) is, also does a lot to move emphasis towards the title of the book (away from the subtitle).

Most importantly, the title is plenty big. If it was small, then I would feel there is something to what you are saying; but the title is quite large and readable from a distance, and clearly delineated from the rest of the text on the cover.

In this case, part of the point of publishing a book (including writing it in the first place) is presumably to promote the identity of the authors, to make them a known name / Schelling point for discussion about AI safety. That would indicate making the names quite prominent on the cover.
6MikkW
I see that the numbers indicate people disagree with this post. Since there are several clauses, it's hard to know which specifically (or all of them) are being disagreed with.

The second paragraph (beginning "Contrary to what you wrote...") is a list of factual statements, which as far as I can tell are all correct.

The third paragraph ("Most importantly, the title is plenty big...") is more subjective, but I'm currently not imagining that anyone is disagreeing with that paragraph (that is, that anyone thinks "actually, the title is too small").

The fourth paragraph ("In this case, part of the point...") is more speculative, and I could easily imagine someone reading it and thinking "that's not the point of publishing / writing a book". There's certainly a reason I put a "presumably" in there. I do still feel that there's something to what I'm saying in that paragraph. My surprise would be of a limited extent if Soares and Yudkowsky said "that was not a consideration in our decision to do this" - but I would be somewhat surprised.

I can see someone disagreeing with the first paragraph ("Given that the book..."), but my current state of mind is that such people would be simply wrong. The book is not being self-published, but is being published by Little, Brown and Company. Some excerpts from Wikipedia's article on Little, Brown and Company: [...] The point being, the company that is publishing Soares and Yudkowsky's book is an established company that has sold important and/or bestselling works for two centuries. The people there know what they are doing, and that includes the people who design covers, as well as the bosses of the people who design the covers.

I imagine most disagreement comes from the first paragraph.

The problem with assuming that since the publisher is famous their design is necessarily good is that even huge companies make much worse baffling design decisions all the time, and in this case one can directly see the design and know that it's not great – the weak outside-view evidence that prestigious companies usually do good work doesn't move this very much.

Yes, my disagreement was mostly with the first paragraph, which read to me like "who are you going to believe, the expert or your own lying eyes". I'm not an expert, but I do have a sense of aesthetics, that sense of aesthetics says the cover looks bad, and many others agree. I don't care if the cover was designed by a professional; to shift my opinion as a layperson, I would need evidence that the cover is well-received by many more people than dislike it, plus A/B tests of alternative covers that show it can't be easily improved upon.

That said, I also disagreed somewhat with the fourth paragraph, because when it comes to AI Safety, MIRI really needs no introduction or promotion of their authors. They're well-known, the labs just ignore their claim that "if anyone builds it, everyone dies".

I used to do graphic design professionally, and I definitely agree the cover needs some work.

I put together a few quick concepts, just to explore some possible alternate directions they could take it:
https://i.imgur.com/zhnVELh.png
https://i.imgur.com/OqouN9V.png
https://i.imgur.com/Shyezh1.png

These aren't really finished quality either, but the authors should feel free to borrow and expand on any ideas they like if they decide to do a redesign.

It's important that the cover not make the book look like fiction, which I think these do. The difference in style is good to keep in mind.

7Said Achmiz
Those are definitely all improvements on the current cover!
5Vaniver
I only like the first one more than the current cover, and even then not by all that much. I do think this is the sort of thing that's relatively easy to focus group / get data on, and the right strategy is probably something that appeals to airport book buyers instead of LessWrongers.
7baibhavbista
Finally created a LW account (after years of lurking) to upvote and agree on the cover design issue. This is a topic where I read whatever I can get my hands on, and if I saw this book in a store (and did not know EY or Nate), even I would be a bit put off from giving it a read. Given the stated goal of trying to make this a bestseller, I feel like the cover is a pretty big impediment.

Other MIRI staff report that the book helped them fit the whole argument in their head better, or that it made sharp some intuitions they had that were previously vague. So you might get something out of it even if you’ve been around a while.

Can confirm! I've followed this stuff for forever, but always felt at the edge of my technical depth when it came to alignment. It wasn’t until I read an early draft of this book a year ago that I felt like I could trace a continuous, solid line from “superintelligence grown by a blind process...” to “...develops weird internal drives we could not have anticipated”. Before, I was like, "We don't have justifiable confidence that we can make something that reflects our values, especially over the long haul," and now I'm like, "Oh, you can't get there from here. Clear as day."

As for why this spells disaster if anyone builds it, I didn't need any new lessons, but they are here, and they are chilling--even for someone who was already convinced we were in trouble.

Having played some small part in helping this book come together, I would like to attest to the sheer amount of iteration it has gone through over the last year. Nate and co. have been relen... (read more)

[-]Algon187

You convinced me to pre-order it. In particular, these lines: 
> It wasn’t until I read an early draft of this book a year ago that I felt like I could trace a continuous, solid line from “superintelligence grown by a blind process...” to “...develops weird internal drives we could not have anticipated”. Before, I was like, "We don't have justifiable confidence that we can make something that reflects our values, especially over the long haul," and now I'm like, "Oh, you can't get there from here. Clear as day."

I read an advance copy of the book; I liked it a lot. I think it's worth reading even if you're well familiar with the overall argument.

I think there's often been a problem, in discussing something for ~20 years, that the material is all 'out there somewhere', but unless you've been reading thru all of it, it's hard to have it in one spot. I think this book is good at presenting a unified story, and at not getting bogged down in handling so many objections that it stops reading smoothly or quickly. (Hopefully, the linked online discussions will manage to cover the remaining space in a more appropriately non-sequential fashion.)

Help me figure out how to recommend to normie friends.

In my experience, "normal" folks are often surprisingly open to these arguments, and I think the book is remarkably normal-person-friendly given its topic. I'd mainly recommend telling your friends what you actually think, and using practice to get better at it.

Context: One of the biggest bottlenecks on the world surviving, IMO, is the amount (and quality!) of society-wide discourse about ASI. As a consequence, I already thought one of the most useful things most people can do nowadays is to just raise the alarm with more people, and raise the bar on the quality of discourse about this topic. I'm treating the book as an important lever in that regard (and an important lever for other big bottlenecks, like informing the national security community in particular). Whether you have a large audience or just a network of friends you're talking to, this is how snowballs get started.

 If you're just looking for text you can quote to get people interested, I've been using:

As the AI industry scrambles to build increasingly capable and general AI, two researchers speak out about a disaster on the horizon.

In 2023, hundreds of AI scientists and leaders in the field, including the three mos

... (read more)
[-]Algon134

You could also try sending your friends an online AI risk explainer, e.g., MIRI's The Problem or Ian Hogarth's We Must Slow Down the Race to God-Like AI (requires Financial Times access) or Gabriel Alfour's Preventing Extinction from Superintelligence.

There's also AIsafety.info's shorter-form and long-form explainers. 

One notable difficulty with talking to ordinary people about this stuff is that often, you lay out the basic case and people go "That's neat. Hey, how about that weather?" There's a missing mood, a sense that the person listening didn't grok the implications of what they're hearing. Now, of course, maybe they just don't believe you or think you're spouting nonsense. But in those cases, I'd expect more resistance to the claims, more objections or even claims like "that's crazy". Not bland acceptance. 

One notable difficulty with talking to ordinary people about this stuff is that often, you lay out the basic case and people go "That's neat. Hey, how about that weather?" There's a missing mood, a sense that the person listening didn't grok the implications of what they're hearing.

I kinda think that people are correct to do this, given the normal epistemic environment. My model is this: Everyone is pretty frequently bombarded with wild arguments and beliefs that have crazy implications. Like conspiracy theories, political claims, spiritual claims, get-rich-quick schemes, scientific discoveries, news headlines, mental health and wellness claims, alternative medicine, claims about which lifestyles are better. We don't often have the time (nor expertise or skill or sometimes intelligence) to evaluate them properly. So we usually keep track of a bunch of these beliefs and arguments, and talk about them, but usually require nearby social proof in order to attach the arguments/beliefs to actions and emotions. Rationalists (and the more culty religions and many activist groups, etc.) are extreme in how much they change their everyday lives based on their beliefs.

I think it's probably oka... (read more)

5Morpheus
I had a similar experience where my dad seemed unbothered by me saying AI might take over the world, but some other day I mentioned in passing that I don't know in how many years we'll have AI that is a better software engineer than humans, and that 5-10 years doesn't sound strictly impossible. My father, being a software engineer, found that claim more interesting (he was visibly upset about his job security). I noticed I've kinda downplayed the retirement thing to my parents, because implicitly I noticed at that point they might call me insane, but explicitly thinking about it, it might be more effective to communicate what is at stake.
3k64
I don't know if you'll find it helpful, but you inspired me to write up and share a post I plan to make on Facebook.

I'm relatively OOTL on AI since GPT-3. My friend is terrified and thinks we need to halt it urgently; I couldn't understand his point of view, and he mentioned this book to me. I see a number of pre-readers saying the version they read is well-suited exactly for convincing people like me. At which point: if you believe the threat is imminent, why delay the book four months? I'll read a digital copy today if you point me to it.

I think they are delaying so people can preorder early, which affects how many books the publisher prints and distributes, which affects how many people ultimately read it and how much it breaks into the Overton window. Getting this conversation mainstream is an important instrumental goal.

If you are looking for info in the mean time you could look at PauseAI:

https://pauseai.info/

Or if you want less facts and quotes and more discussion, I recall that Yudkowsky’s Coming of Age is what changed my view from "orthogonality kinda makes sense" to "orthogonality is almost certainly correct and the implication is alignment needs more care than humanity is currently giving it".

You may also be better discussing more with your friend or the various online communities.

You can also preorder. I'm hopeful that none of the AI labs will destroy the world before the book's release : )

Yeah, I think the book is going to be (by a very large margin) the best resource in the world for this sort of use case. (Though I'm potentially biased as a MIRI employee.) We're not delaying; this is basically as fast as the publishing industry goes, and we expected the audience to be a lot smaller if we self-published. (A more typical timeline would have put the book another 3-20 months out.)

If Eliezer and Nate could release it sooner than September while still gaining the benefits of working with a top publishing house, doing a conventional media tour, etc., then we'd definitely be releasing it immediately. As is, our publisher has done a ton of great work already and has been extremely enthusiastic about this project, in a way that makes me feel way better about this approach. "We have to wait till September" is a real cost of this option, but I think it's a pretty unavoidable cost given that we need this book to reach a lot of people, not just the sort of people who would hear about it from a friend on LessWrong.

I do think there are a lot of good resources already online, like MIRI's recently released intro resource, "The Problem". It's a very different beast from If Anyone Bu... (read more)

Note that IFP (a DC-based think tank) recently had someone deliver 535 copies of their new book to every US Congressional office.

Note also that my impression is that DC people (even staffers) are much less "online" than tech audiences. Whether or not you copy IFP, I would suggest thinking about in-person distribution opportunities for DC.

I would note that this is, indeed, a very common move done in DC.  I would also note that many of these copies end up in, e.g., Little Free Libraries and at the Goodwill.  (For example, I currently have downstairs a copy of the President of Microsoft's Board's book, with the letter literally still inside saying "Dear Congressman XYZ, I hope you enjoy my book...") 

I am not opposed to MIRI doing this, but just want to flag that this is a regular move in DC.  (Which might mean you should absolutely do it since it has survivorship bias as a good lindy idea!  Just saying it ain't, like, a brand new strat.)
 

[-]Linch108

Would be nice if you can get a warm intro for the book to someone high up in the Vatican too, as well as other potentially influential groups.

I preordered my copy.

Something about the tone of this announcement feels very wrong, though. You cite Rob Bensinger and other MIRI staff being impressed. But obviously, those people are highly selected for already agreeing with you! How much did you engage with skeptical and informed prereaders? (I'm imagining people in the x-risk-reduction social network who are knowledgeable about AI, acknowledge the obvious bare-bones case for extinction risk, but aren't sold on the literal stated-with-certainty headline claim, "If anyone builds it, everyone dies.")

If you haven't already done so, is there still time to solicit feedback from such people and revise the text? (Sorry if the question sounds condescending, but the tone of the announcement really worries me. It would be insane not to commission red team prereaders, but if you did, then the announcement should be talking about the red team's reaction, not Rob's!)

[-]So8res6816

We're targeting a broad audience, and so our focus groups have been more like completely uninformed folks than like informed skeptics. (We've spent plenty of time honing arguments with informed skeptics, but that sort of content will appear in the accompanying online resources, rather than in the book itself.) I think that the quotes the post leads with speak to our ability to engage with our intended audience.

I put in the quote from Rob solely for the purpose of answering the question of whether regular LW readers would have anything to gain personally from the book -- and I think that they probably would, given that even MIRI employees expressed surprise at how much they got out of it :-)

(I have now edited the post to make my intent more clear.)

4robo
I'm very glad you've used focus groups!  Based solely on the title the results are excellent.  I'm idly curious how you assembled the participants. Do you have a way to get feedback from Chinese nationalists?  ("America Hawks" in China?).
[-]yams126

This strikes me as straightforwardly not the purpose of the book. This is a general-audience book that makes the case, as Nate and Eliezer see it, for both the claim in the title and the need for a halt. This isn’t inside baseball on the exact probability of doom, whether the risks are acceptable given the benefits, whether someone should work at a lab, or any of the other favorite in-group arguments. This is For The Out Group.

Many people (like >100 is my guess), with many different viewpoints, have read the book and offered comments. Some of those comments can be shared publicly and some can’t, as is normal in the publishing industry. Some of those comments shaped the end result, some didn’t.

7peterbarnett
Fwiw the post also includes Stephen Fry, Tim Urban and Yishan Wong (although I agree these people don’t have deep AI x-risk experience)
9MikkW
Tim Urban has written about AI X-risk before, in a way that indicates that he's spent a good bit of time thinking about the problem. But, the point of the book seems to be to speak to people who don't have a deep knowledge of AI risk.

Not sure what he's done on AI since, but Tim Urban's 2015 AI blog post series mentions how he was new to AI or AI risk and spent a little under a month studying and writing those posts. I re-read them a few months ago and immediately recommended them to some other people with no prior AI knowledge, because they have held up remarkably well.

3denkenberger
Unfortunately, the graphic below does not have the simple case of stating something, but I'm interested in people's interpretation of the confidence level. I think a reasonable starting point is interpreting it as 90% confidence. I couldn't quickly find what percent of AI safety researchers have 90% confidence in extinction (not just catastrophe or disempowerment), but it's less than 1% in the AI Impacts survey including safety and capabilities researchers. I couldn't find it for the public. Still, I think almost everyone will just bounce off this title. But I understand that's what the authors believe, and perhaps it could have influence on the relatively few existing extreme doomers in the public?

Edited to add: After writing this, I asked Perplexity what P(doom) someone should have to be called an extreme doomer, and it said 90%+ and mentioned Yud. Of course extreme doesn't necessarily mean wrong. And since there only need to be about 10,000 copies sold in a week to be a NYT bestseller, that very well could happen even if 99% of people bounce off the title.
7Steven Byrnes
You think people don’t read books if they confidently disagree with the title? (Not rhetorical; I read books I confidently disagree with but I’m not an average book reader.) What about people who aren’t coming in with a strong opinion either way? Isn’t that most potential readers, and the main target audience? E.g. “The Myth of the Rational Voter” book title implies a strong claim that voters are not rational. If I had walked by that book on a bookshelf 15 years ago (before I knew anything about the topic or author), I imagine that I would have been intrigued and maybe bought it, not because I already confidently believed that voters are not rational but because, I dunno, it might have seemed interesting and fun to read, on a topic I didn’t already know much about, so maybe I’d learn something.
1denkenberger
Yes, I was thinking of adding that it could appeal to contrarians who may be attracted to a book with a title they disagreed with. As for people who don't have a strong opinion coming in, I can see some people being attracted to an extreme title. And I get that titles need to be simple. I think a title like "If anyone builds it, we lose control" would be more defensible. But I think the probability distributions from Paul Christiano are more reasonable.
[-]lc3023

Love the title

8MondSemmel
Agree. There's no way to present the alarming idea of AI doom without sounding alarmist. So it seems to me the next-best thing is to communicate it clearly in plain English without complicated words, and which can't be misunderstood (in contrast to co-opted terms like "AI Safety"). That's what this title does, so I like it.
6Warty
it's the title-based impact optimization for me
[-]Dan H284

It's a great book: it's simple, memorable, and unusually convincing.

Are there any plans for Russian translation? If not, I'm interested in creating it (or even in organizing a truly professional translation, if someone gives me money for it).

7Rob Bensinger
There's a professional Russian translator lined up for the book already, though we may need volunteer help with translating the online supplements. I'll keep you (and others who have offered) in mind for that -- thanks, Tapatakt. :)

anyone

How’s the Simplified Chinese translation coming along?

[-]So8res343

We're still in the final proofreading stages for the English version, so the translators haven't started translating yet. But they're queued up.

[-]robo*282

Given the potentially massive importance of a Chinese version, it may be worth burning $8,000 to start the translation before proofreading is done, particularly if your translators come back with questions that are better clarified in the English text.  I'd pay money to help speed this up if that's the bottleneck[1].  When I was in China I didn't have a good way of explaining what I was doing and why.

  1. ^

    I'm working mostly off savings and wouldn't especially want to, but I would, to make it happen.

Something I've done in the past is to send text that I intended to be translated through machine translation, and then back, with low latency, and gain confidence in the semantic stability of the process.

Rewrite english, click, click.
Rewrite english, click, click. 
Rewrite english... click, click... oh! Now it round trips with high fidelity. Excellent. Ship that!
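
A minimal sketch of that round-trip check in Python. The machine_translate() helper is a hypothetical placeholder (wire it to whatever translation service you have access to, e.g. a library or API of your choice); the similarity threshold is likewise just illustrative, not something from the comment above:

```python
from difflib import SequenceMatcher


def machine_translate(text: str, source: str, target: str) -> str:
    """Placeholder: call your translation service of choice here."""
    raise NotImplementedError("wire this up to a real translation API")


def round_trip_similarity(text: str, target_lang: str = "zh-CN") -> float:
    """Translate English -> target -> English and compare against the original."""
    there = machine_translate(text, source="en", target=target_lang)
    back = machine_translate(there, source=target_lang, target="en")
    # Crude similarity score in [0, 1]; 1.0 means the round trip was lossless.
    return SequenceMatcher(None, text.lower(), back.lower()).ratio()


if __name__ == "__main__":
    # Rewrite the English and re-check until the round trip is stable enough.
    draft = "If anyone builds a superintelligence with current techniques, everyone dies."
    score = round_trip_similarity(draft)
    print(f"round-trip similarity: {score:.2f}")
    if score < 0.85:  # arbitrary threshold; tune to taste
        print("consider rewording the English and trying again")
```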

5Three-Monkey Mind
Excellent. I cannot convey how pleased I am that I did not have to explain myself.

Hi, I've pre-ordered it on the UK Amazon, I hope that works for you. Let me know if I should do something different.

I have a number of reasonably well-respected friends in the University of Cambridge and its associated tech-sphere, I can try to get some of them to give endorsements if you think that will help and can send me a pdf.

I preordered it.

I just pre-ordered.

I agree that the cover art seems notably bad. The white text on black background in that font looks like some sort of autogenerated placeholder. I understand you feel over-constrained -- this is just another nudge to think creatively about how to overcome your constraints, e.g. route around your publisher and hire various artists on your own, then poll your friends on the best design.

I would encourage you to send free review copies to prominent nontechnical people who are publicly complaining about AI, if you're not already doing so. Here are some examples I saw in the past few days; I'm sure a dedicated search could turn up lots more (and I encourage people to reply to this comment with more examples):

Edit: Come to think of it, perhaps there is no reason to preferentially send copies to those who are more inclined to agree? Engaging skeptics of AI risk like e.g. Tyler Cowen might be a good opportunity to show them a better argument and leave them a... (read more)

5Ebenezer Dukakis
Also, as long as we're talking about public engagement on AI, I'm going to plug this comment I wrote a few days ago, which deserves more attention IMO. Maybe the launch of this book could serve as a platform for a high-profile bet or adversarial collaboration with a prominent AI doom skeptic?
1JohnGreer
I think this is an interesting idea and should be done!
[-]k64140

Any info on what counts as "bulk"?  I share an Amazon Prime account with my family, so if we each want to buy copies, does it need to be separate orders, separate shipping/billing addresses, separate accounts, or separate websites to not count as "bulk"?

This will be a huge help when talking to political representatives. When reaching out to politicians as an AI safety volunteer over the past 6 months, I got a range of reactions:

  • They're aware of this issue but can't get traction with fellow politicians, it needs visible public support first
  • They're aware but this issue is too complex for the public to understand
  • They're unaware but also the public is focused on immediate issues like housing and cost of living
  • They're unaware, sounds important, but they lack the resources to look into it

Having a professionally published book will help with all those responses. I am preordering!

Is an audiobook version also planned, perchance? Could preordering that one also help?

Judging from Stephen Fry's endorsement and, as I've seen, his interest in the topic for some time in general, perhaps a delightful and maybe even eager deal could be made where he narrates? Unless some other choice might be better for either party, of course. And I also understand if negotiations or existing agreements prevent anyone from confirming anything on this aspect; I'd be happy just to hear whether an audio version is planned/intended to begin with, and when, if that can be known.

[-]dirk112

There is indeed an audiobook version; the site links to https://www.audible.com/pd/If-Anyone-Builds-It-Everyone-Dies-Audiobook/B0F2B8J9H5 (where it says it'll be available September 30) and https://libro.fm/audiobooks/9781668652657-if-anyone-builds-it-everyone-dies (available September 16).

Any updates on the cover? It seems to matter quite a bit; this market has a trading volume of 11k mana and 57 different traders:

https://manifold.markets/ms/yudkowsky-soares-change-the-book-co?r=YWRlbGU

-4sunwillrise
This market uses play money.  The standard arguments for why traders are incentivized to put their money where their mouth is, and therefore push the market value closer to the "correct" probabilities, do not apply.

Writing a book is an excellent idea! I found other AI books like Superintelligence much more convenient and thorough than navigating blog posts. I've pre-ordered the book and I'm looking forward to reading it when it comes out.

I just pre-ordered 10 copies. Seems like the most cost effective way to help that I've seen in a long time. (Though yes I'm also going to try to distribute my copies.)

I think that's what they meant you should not do when they said [edit to add: directly quoting a now-modified part of the footnote] "Bulk preorders don’t count, and in fact hurt."

6Towards_Keeperhood
Oops, ok then I guess I will cancel my order
[-]So8res*3417

My guess is that "I'm excited and want a few for my friends and family!" is fine if it's happening naturally, and that "I'll buy a large number to pump up the sales" just gets filtered out. But it's hard to say; the people who compile best-seller lists are presumably intentionally opaque about this. I wouldn't sweat it too much as long as you're not trying to game it.

2Alice Blair
Edit: this comment seems to be incorrect; see comments. They did not say that, on my reading? The footnote about bulk preorders says that bulk preorders don't count, which I read as semantically equivalent to "bulk preorders neither help nor hurt", with an implicit "so don't do a bulk preorder solely for the sake of making the numbers better, but other reasons are acceptable."
7Davidmanheim
They edited the text. It was an exact quote from the earlier text.

I have pre-ordered it! Hopefully a German pre-order from a local bookstore will make a difference. :-)
For those who can't wait (and as most people here probably already know), here is Eliezer's latest interview on that topic: https://www.youtube.com/watch?v=0QmDcQIvSDc. I'm halfway through it and I really like how clearly he thinks and makes his argument; it's still deeply disturbing though.

If you want to hear a younger, more optimistic Eliezer, here's the recording of his Hard AI Future Salon from many years ago, way back in 2006. :-)
https://archive.org/details/FutureSalon_02_2006

He starts his talk at minute 12. There are excellent questions from the audience as well.

I don't know anyone who has thought about and tried to steer us in the right direction on this problem more deeply or for longer than him.

2Rob Bensinger
Yep, this counts! :)
[-]robo120

Online advertising can be used to promote books.  Unlike many books, you are not trying to make a profit and can pay for advertising beyond the point where the publisher's marginal cost equals marginal revenue.  Do you:

  • Have online advertising campaigns set up by your publisher and can absorb donations to spend on more advertising (LLM doubts Little, Brown and Company lets authors spend more money)
  • Have $$$ to spend on an advertising campaign but don't have the managerial bandwidth to set one up.  You'd need logistics support to set up an effective advertising campaign.
  • Need both money and logistics for an advertising campaign.
    • Alphabet and Meta employees get several hundred dollars per month to spend on advertising (as an incentive to dogfood their product).  If LessWrong employees at those companies set up many $300 / month advertising campaigns, that sounds like a worthwhile investment
  • Need neither help setting up an advertising campaign nor funds for more advertising (though donations to MIRI are of course always welcome)
[-]So8res160

We have an advertising campaign planned, and we'll be working with professional publicists. We have a healthy budget for it already :-)

Are there planned translations in general, or is that something that is discussed only after actual success?

7So8res
A variety of translations are lined up.
[-]Bucky*8-1

Quick note that I can't open the webpage via my institution (same issue on multiple browsers). Their restrictions can be quite annoying and get triggered a lot. I can view it myself easily enough on my phone, but if you want this to get out, beware trivial inconveniences and all that...

Firefox message is below.

Secure Connection Failed

An error occurred during a connection to ifanyonebuildsit.com. Cannot communicate securely with peer: no common encryption algorithm(s).

Error code: SSL_ERROR_NO_CYPHER_OVERLAP

  • The page you are trying to view cannot be shown because t
... (read more)
6habryka
Huh, that sure is weird. Looking into it, it seems that this would only happen if the institution network is forcing outdated SSL protocols, which really isn't great since SSL exploits seem reasonably common and very bad.  https://vercel.com/guides/resolve-err-ssl-protocol-error-with-vercel  Not much I can do for now. I hope not many networks do this. If they do, I might think about doing something complicated to work around it.
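
For anyone else hitting this, here is a small diagnostic sketch in Python (standard library only; the host name is just an example, substitute whatever site your network is blocking). It checks which TLS versions a server will negotiate from your network, which is usually enough to tell whether an intercepting proxy is forcing an outdated protocol:

```python
import socket
import ssl

HOST = "ifanyonebuildsit.com"  # example host; replace with the site you're testing
PORT = 443


def probe(version: ssl.TLSVersion) -> str:
    """Attempt a handshake restricted to one TLS version; report the result."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                return tls.version() or "unknown"
    except (ssl.SSLError, OSError) as err:
        return f"failed: {err}"


for tls_version in (ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
    print(tls_version.name, "->", probe(tls_version))
```

If both probes fail from the institution network but succeed elsewhere, that points at the network rather than the site.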

Insider trading by anyone who can help on the Yes side is welcome :)

[-]jefftk*70

Very exciting; thanks for writing!

I know this is minor, but the image on the bottom of the website looks distractingly wrong to me -- the lighting doesn't match where real population centers are. It would be a lot better with something either clearly adapted from the real world or something clearly created, but this is pretty uncanny valley

[-]eapi70

Here in Australia I can only buy the paperback/hardcover versions. Any chance you can convince your publisher/publishers to release the e-book here too?

7So8res
I'm told that Australians will be able to purchase the UK e-book, and that it'll be ready to go in a week or so.

I initially read the title of the post as

Eliezer and I wrote a book: If Anyone Reads It, Everyone Dies

Quite intimidating! 

Why do I see two different versions on amazon.co.uk? Both hardcover, same title but different subtitles, different publication dates. The second one has the same cover and title as the one on amazon.com, so presumably that is the one to go for.

5So8res
The US and UK versions will have different covers and subtitles. I'm not sure why the US version shows up on the .co.uk website. We've asked the publishers to take a look.
2Mikhail Samin
The US version is currently #396 in Books on .co.uk
2Richard_Kennaway
Why the different subtitles? The US subtitle seems much more direct, while the UK subtitle is a breath of stale air. What is the "esc" key intended to convey, when the point is that there would be no escape?
[-]So8res260

There's not a short answer; subtitles and cover art are over-constrained and the choices have many stakeholders (and authors rarely have final say over artwork). The differences reflect different input from different publishing-houses in different territories, who hopefully have decent intuitions about their markets.

Are you American? Because as a British person I would say that the first version looks a lot better to me, and certainly fits the standards for British non-fiction books better.

Though I do agree that the subtitle isn't quite optimal.

I am British. I'm not much impressed by either graphic design, but I'm not a graphic designer and can't articulate why.

8keltan
I’m also not a graphic designer. But I agree that both designs give me the ick. I think it’s something about how lazy they both look. They give early 2000s self help book. To be clear, I’m quite excited for this book, and have preordered! I am just surprised by the covers.
3lc
I think the main problem is that the second cover looks really rushed.
9philh
Huh, I'm also British and I thought the first version looked like a placeholder, as in "no one's uploaded an actual cover yet so the system auto generates one". The only thing making me think not-that was that the esc key is mildly relevant. I bought the second one partly because I was a lot more confident I was actually buying a real book. I guess part of what's going on here is it's the same grey as the background (or very close?), so looks transparent. But even without that I think I'd have had a similar reaction.
4Yoav Ravid
I had the same reaction to the first version

Have preordered on Amazon DE, as both hardcover and Kindle.

Excited to hear this! Preordered.

We’re told that 10,000 pre-orders constitutes a good chance of being on the best-seller list (depending on the competition), and 20,000 would be a big deal

Does hardcover vs. ebook matter here?

9tanagrabeast
Bestseller algorithms are secret and shifty, but hardcover is generally believed to count a little more. And as for overall impact, if either format is good for you, a hardcover preorder helps more because it encourages the publisher to print a bigger initial run of physical copies, which can get pumped into stores and onto shelves where people will see them.
6MikkW
I would assume e-book orders will also play a role in encouraging the publisher to print more physical copies, because it indicates that more people are interested in reading the book.
3Rob Bensinger
Yep! This is the first time I'm hearing the claim that hardcover matters more for bestseller lists; but I do believe hardcover preorders matter a bit more than audiobook preorders (which matters a bit more than ebook preorders). I was assuming the mechanism for this is that they provide different amounts of evidence about print demand, and thereby influence the print run a bit differently. AFAIK all the options are solidly great, though; mostly I'd pick the one(s) that you actually want the most.
2Thane Ruthenis
I'd assume so, but the post didn't mention that as a consideration. @So8res, edit the post to point that out?
[-]k6450

Online, I'm seeing several sources say that pre-orders actually hurt on Amazon, because the Amazon algorithm cares about sales and reviews after launch and doesn't count pre-orders.  Anyone know about this?  If I am buying on Amazon, should I wait till launch, or conversely, if I'm pre-ordering, should I buy elsewhere?

4Rob Bensinger
It's a bit complicated, but after looking into this and weighing this against other factors, MIRI and our publisher both think that the best option is for people to just buy it when they think to buy it -- the sooner, the better. Whether you're buying on Amazon or elsewhere, on net I think it's a fair bit better to buy now than to wait.

Preordered ebook version on Amazon. I am also interested in doing Korean translation.

If I want to pre-order but don't use Internet marketplaces and don't have a credit card, are there options for that (e.g. going to a physical store and asking them to pre-order)?

I made a Manifold market for how many pre-orders there will be!
 

Amazon's best-seller standings. I wouldn't make too much of this; their categorization is wonky. (I also have no clue what the lookback window is, what they make of preorders, etc.)
#5 in "Technology"

#4,537 in all books

#11 in engineering

#14 in semantics and AI (how is this so much lower than "technology"?)

In short: showing up! It could be grabbing someone's eye right now. Still drowned out by Yuval Noah Harari, Ethan Mollick, Ray Kurzweil, et al.

4TsviBT
It was briefly in the 300s overall, and 1 or 2 in a few subcategory thingies.

Sad to see that the ebook version is DRM-protected. On your website you list far fewer retailers than Hachette does.

From the MIRI announcement:

Our big ask for you is: If you have any way to help this book do shockingly, absurdly well— in ways that prompt a serious and sober response from the world — then now is the time.

sober response from the world

sober response

Uh... this is debatably a lot to ask of the world right now.

A German bookseller claims there is a softcover (Taschenbuch) version available for preorder: https://www.thalia.de/shop/home/artikeldetails/A1075128502

Is that correct? It does not seem to be available on any US website.
