(That was indeed my first thought when Bernanke said he liked the book; no dice, though.)
Yep. And equally, the blurbs would be a lot less effective if the title were more timid and less stark.
Hearing that a wide range of respected figures endorse a book called If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All is a potential "holy shit" moment. If the same figures were endorsing a book with a vaguely inoffensive title like Smarter Than Us or The AI Crucible, it would spark a lot less interest (and concern).
Yeah, I think people usually ignore blurbs, but sometimes they're helpful. Strong blurbs are unusually likely to be helpful when your book has a title like If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.
Aside from the usual suspects (people like Tegmark), we mostly sent the book to people following the heuristic "would an endorsement from this person be helpful?", much more so than "do we know that this person would like the book?". If you'd asked me individually about Church, Schneier, Bernanke, Shanahan, or Spaulding in advance, I'd have put most of my probability on "this person won't be persuaded by the book (if they read it at all) and will come away strongly disagreeing and not wanting to endorse". They seemed worth sharing the book with anyway, and then they ended up liking it (at least enough to blurb it), and some very excited MIRI Slack messages ensued.
(I'd have expected Eddy to agree with the book, though I wouldn't have expected him to give a blurb; and I didn't know Wolfsthal well enough to have an opinion.)
Nate has a blog post coming out in the next few days that will say a bit more about "How filtered is this evidence?" (along with other topics), but my short answer is that we haven't sent the book to that many people, we've mostly sent it to people whose AI opinions we didn't know much about (and who we'd guess on priors would be skeptical to some degree), and we haven't gotten many negative reactions at all. (Though we've gotten people who just didn't answer our inquiries, and some of those might have read the book and disliked it enough to not reply.)
Now, how much is that evidence about the correctness of the book? Extremely little!
It might not be much evidence for LWers, who are already steeped in arguments and evidence about AI risk. It should be a lot of evidence for people newer to this topic who start with a skeptical prior. Most books making extreme-sounding (conditional) claims about the future don't have endorsements from Nobel-winning economists, former White House officials, retired generals, computer security experts, etc. on the back cover.
We're still working out some details on the preorder events; we'll have an announcement with more info on LessWrong, the MIRI Newsletter, and our Twitter in the next few weeks.
You don't have to do anything special to get invited to preorder-only events. :) In the case of Nate's LessOnline Q&A, it was a relatively small in-person event for LessOnline attendees who had preordered the book; the main events we have planned for the future will be larger and online, so more people can participate without needing to be in the Bay Area.
(Though we're considering hosting one or more in-person events at some point in the future; if so, those would be advertised more widely as well.)
"Inventor" is correct!
Hopefully a German pre-order from a local bookstore will make a difference.
Yep, this counts! :)
It's a bit complicated, but after looking into this and weighing it against other factors, MIRI and our publisher both think that the best option is for people to just buy the book when they think to buy it -- the sooner, the better.
Whether you're buying on Amazon or elsewhere, on net I think it's a fair bit better to buy now than to wait.
From my perspective, FWIW, the endorsements we got would have been surprising even if they had been maximally cherry-picked. You usually just can't find cherries like those.