Rob Bensinger

Communications @ MIRI. Unless otherwise indicated, my posts and comments here reflect my own views, and not necessarily my employer's. (Though we agree about an awful lot.)

Comments
A case for courage, when speaking of AI danger
Rob Bensinger · 1d

> (Considering how little cherry-picking they did.)

From my perspective, FWIW, the endorsements we got would have been surprising even if they had been maximally cherry-picked. You usually just can't find cherries like those.

New Endorsements for “If Anyone Builds It, Everyone Dies”
Rob Bensinger · 9d

(That was indeed my first thought when Bernanke said he liked the book; no dice, though.)

New Endorsements for “If Anyone Builds It, Everyone Dies”
Rob Bensinger · 10d

Yep. And equally, the blurbs would be a lot less effective if the title were more timid and less stark.

Hearing that a wide range of respected figures endorse a book called If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All is a potential "holy shit" moment. If the same figures were endorsing a book with a vaguely inoffensive title like Smarter Than Us or The AI Crucible, it would spark a lot less interest (and concern).

New Endorsements for “If Anyone Builds It, Everyone Dies”
Rob Bensinger · 11d

Yeah, I think people usually ignore blurbs, but sometimes blurbs are helpful. I think strong blurbs are unusually likely to be helpful when your book has a title like If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.

New Endorsements for “If Anyone Builds It, Everyone Dies”
Rob Bensinger · 11d

Aside from the usual suspects (people like Tegmark), we mostly sent the book to people following the heuristic "would an endorsement from this person be helpful?", much more so than "do we know that this person would like the book?". If you'd asked me individually about Church, Schneier, Bernanke, Shanahan, or Spaulding in advance, I'd have put most of my probability on "this person won't be persuaded by the book (if they read it at all) and will come away strongly disagreeing and not wanting to endorse". They seemed worth sharing the book with anyway; they ended up liking it (at least enough to blurb it), and some very excited MIRI Slack messages ensued.

(I'd have expected Eddy to agree with the book, though I wouldn't have expected him to give a blurb; and I didn't know Wolfsthal well enough to have an opinion.)

Nate has a blog post coming out in the next few days that will say a bit more about "How filtered is this evidence?" (along with other topics), but my short answer is that we haven't sent the book to that many people, we've mostly sent it to people whose AI opinions we didn't know much about (and who we'd guess on priors would be skeptical to some degree), and we haven't gotten many negative reactions at all. (Though we've gotten people who just didn't answer our inquiries, and some of those might have read the book and disliked it enough to not reply.)

New Endorsements for “If Anyone Builds It, Everyone Dies”
Rob Bensinger · 11d

> Now, how much is that evidence about the correctness of the book? Extremely little!

It might not be much evidence for LWers, who are already steeped in arguments and evidence about AI risk. It should be a lot of evidence for people newer to this topic who start with a skeptical prior. Most books making extreme-sounding (conditional) claims about the future don't have endorsements from Nobel-winning economists, former White House officials, retired generals, computer security experts, etc. on the back cover.

New Endorsements for “If Anyone Builds It, Everyone Dies”
Rob Bensinger · 12d

We're still working out some details on the preorder events; we'll have an announcement with more info on LessWrong, the MIRI Newsletter, and our Twitter in the next few weeks.

You don't have to do anything special to get invited to preorder-only events. :) In the case of Nate's LessOnline Q&A, it was a relatively small in-person event for LessOnline attendees who had preordered the book; the main events we have planned for the future will be larger and online, so more people can participate without needing to be in the Bay Area.

(Though we're considering hosting one or more in-person events at some point in the future; if so, those would be advertised more widely as well.)

New Endorsements for “If Anyone Builds It, Everyone Dies”
Rob Bensinger · 12d

"Inventor" is correct!

Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies
Rob Bensinger · 1mo

> Hopefully a German pre-order from a local bookstore will make a difference.

Yep, this counts! :)

Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies
Rob Bensinger · 1mo

It's a bit complicated, but after looking into this and weighing it against other factors, MIRI and our publisher both think the best option is for people to just buy the book when they think to buy it -- the sooner, the better.

Whether you're buying on Amazon or elsewhere, on net I think it's a fair bit better to buy now than to wait.

Sequences

2022 MIRI Alignment Discussion
2021 MIRI Conversations
Naturalized Induction

Posts

Rob B's Shortform Feed (21 points, 6y, 79 comments)
MIRI’s 2024 End-of-Year Update (98 points, 7mo, 2 comments)
Response to Aschenbrenner's "Situational Awareness" (195 points, 1y, 27 comments)
When is a mind me? (143 points, 1y, 131 comments)
AI Views Snapshots (142 points, 2y, 61 comments)
An artificially structured argument for expecting AGI ruin (91 points, 2y, 26 comments)
AGI ruin mostly rests on strong claims about alignment and deployment, not about society (70 points, 2y, 8 comments)
The basic reasons I expect AGI ruin (189 points, 2y, 73 comments)
Four mindset disagreements behind existential risk disagreements in ML (137 points, 2y, 12 comments)
Yudkowsky on AGI risk on the Bankless podcast (83 points, 2y, 5 comments)
Elements of Rationalist Discourse (224 points, 2y, 49 comments)

Wikitag Contributions

Crux (2y, +336)
Great Filter (3y, +498/-274)
Roko's Basilisk (3y, +255/-221)
Orthogonality Thesis (3y)
Mesa-Optimization (3y, +620/-372)
Humility (4y, +47/-113)
Pivotal act (4y, +2464/-1467)
Humility (4y, +5773/-1290)
Functional Decision Theory (4y, +17/-17)