LESSWRONG

Malo

CEO at Machine Intelligence Research Institute (MIRI)

Comments
“If Anyone Builds It, Everyone Dies” release day!
Malo · 24d

LFG!

#7 Combined Print & E-Book Nonfiction

#8 Hardcover Nonfiction

Thane Ruthenis's Shortform
Malo · 1mo

¯\_(ツ)_/¯

jdp's Shortform
Malo · 1mo

Obviously just one example, but Schneier has generally been quite skeptical, and he blurbed the book.

jdp's Shortform
Malo · 1mo (edited)

> but subconsciously I notice that MIRI was cleaning house before the book launch (e.g. taking down EY's light novel because it might look bad)

Do you have any other concrete example here besides the novel?

Thane Ruthenis's Shortform
Malo · 1mo (edited)

> Maybe that's wrong; maybe the issue was lack of reach rather than exhausting the persuadees' supply, and the book-packaging + timing will succeed massively. We'll see.

This is certainly the hope. Most people in the world have never read anything that anyone here has ever written on this subject.

Thane Ruthenis's Shortform
Malo · 1mo

FWIW, and obviously this is just one anecdote, but a member of Congress who read an early copy really enjoyed it and said that Chapter 2 was his favorite chapter.

IMO challenge bet with Eliezer
Malo · 3mo

The market has now resolved to yes, with Paul confirming.

New Endorsements for “If Anyone Builds It, Everyone Dies”
Malo · 4mo

Huh, I thought I fixed this. Thanks for flagging; I'll make sure it's fixed now.

New Endorsements for “If Anyone Builds It, Everyone Dies”
Malo · 4mo

Also, oddly, the US version is on many of Amazon's international stores, including the German store. ¯\_(ツ)_/¯

New Endorsements for “If Anyone Builds It, Everyone Dies”
Malo · 4mo

Schneier is also quite skeptical of the risk of extinction from AI. Here's a table o3 generated just now when I asked it for some examples.

| Date | Where he said it | What he said | Take-away |
| --- | --- | --- | --- |
| 1 June 2023 | Blog post “On the Catastrophic Risk of AI” (written two days after he signed the CAIS one-sentence “extinction risk” statement) | “I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war — a risk worth taking seriously, but not something to panic over.” (schneier.com) | Explicitly rejects the “extinction” scenario, placing AI in the same (still-serious) bucket as pandemics or nukes. |
| 1 June 2023 | Same post, quoting his 2018 book Click Here to Kill Everybody | “I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future.” (schneier.com) | Long-standing view: most dangers come from how humans use technology we already have. |
| 9 Oct 2023 | Essay “AI Risks” (New York Times, reposted on his blog) | Warns against “doomsayers” who promote “Hollywood nightmare scenarios” and urges that we “not let apocalyptic prognostications overwhelm us.” (schneier.com) | Skeptical of the extinction narrative; argues policy attention should stay on present-day harms and power imbalances. |
Posts
488 · New Endorsements for “If Anyone Builds It, Everyone Dies” · 4mo · 55 comments
223 · MIRI 2024 Mission and Strategy Update · 2y · 44 comments
55 · MIRI’s 2019 Fundraiser · 6y · 0 comments
60 · MIRI’s 2018 Fundraiser · 7y · 1 comment
27 · MIRI's 2017 Fundraiser · 8y · 5 comments
19 · MIRI's 2017 Fundraiser · 8y · 4 comments
8 · SI is coming to Oxford, looking for hosts, trying to keep costs down · 13y · 2 comments
12 · SI is looking to hire someone to finish a Decision Theory FAQ · 13y · 9 comments
6 · SI/CFAR Are Looking for Contract Web Developers · 13y · 10 comments
12 · [Applications Closed] The Singularity Institute is hiring remote LaTeX editors · 13y · 24 comments
Wikitag Contributions
Waterfall diagram · 10 years ago · (-4)
Researchers in value alignment theory · 10 years ago · (+3/-22)
List of Blogs · 12 years ago · (+6/-6)
List of Blogs · 12 years ago · (+33/-33)
List of Blogs · 12 years ago · (+68)
The Hanson-Yudkowsky AI-Foom Debate · 13 years ago · (+15/-1)
The Hanson-Yudkowsky AI-Foom Debate · 13 years ago · (+1)