Malo

CEO at Machine Intelligence Research Institute (MIRI)

Comments
IMO challenge bet with Eliezer
Malo · 2mo · 120

The market has now resolved to yes, with Paul confirming.

New Endorsements for “If Anyone Builds It, Everyone Dies”
Malo · 3mo · 20

Huh, I thought I fixed this. Thanks for flagging; I'll make sure it gets fixed now.

New Endorsements for “If Anyone Builds It, Everyone Dies”
Malo · 3mo · 62

Also oddly, the US version is on many of Amazon's international stores including the German store ¯\_(ツ)_/¯ 

New Endorsements for “If Anyone Builds It, Everyone Dies”
Malo · 3mo · 2611

Schneier is also quite skeptical of the risk of extinction from AI. Here's a table o3 generated just now when I asked it for some examples.

Date: 1 June 2023
Where he said it: Blog post “On the Catastrophic Risk of AI” (written two days after he signed the CAIS one-sentence “extinction risk” statement)
What he said: “I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war — a risk worth taking seriously, but not something to panic over.” (schneier.com)
Take-away: Explicitly rejects the “extinction” scenario, placing AI in the same (still-serious) bucket as pandemics or nukes.

Date: 1 June 2023
Where he said it: Same post, quoting his 2018 book Click Here to Kill Everybody
What he said: “I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future.” (schneier.com)
Take-away: Long-standing view: most dangers come from how humans use technology we already have.

Date: 9 Oct 2023
Where he said it: Essay “AI Risks” (New York Times, reposted on his blog)
What he said: Warns against “doomsayers” who promote “Hollywood nightmare scenarios” and urges that we “not let apocalyptic prognostications overwhelm us.” (schneier.com)
Take-away: Skeptical of the extinction narrative; argues policy attention should stay on present-day harms and power imbalances.
New Endorsements for “If Anyone Builds It, Everyone Dies”
Malo · 3mo · 2719

FWIW, I think Jack Shanahan definitely counts as a skeptic.

New Endorsements for “If Anyone Builds It, Everyone Dies”
Malo · 3mo · 530

My favorite reaction to the Bernanke blurb. From a friend who works on AI policy in DC:

Claude 3.5 Sonnet
Malo · 1y · 6-12

Agree. I think Google DeepMind might actually be the most forthcoming about this kind of thing, e.g., see their Evaluating Frontier Models for Dangerous Capabilities report.

LessWrong's (first) album: I Have Been A Good Bing
Malo · 1y · 20

Apple Music?

MIRI 2024 Mission and Strategy Update
Malo · 2y · 72

I’d certainly be interested in hearing about them, though it currently seems pretty unlikely to me that it would make sense for MIRI to pivot to working on such things directly, as opposed to encouraging others to do so (to the extent they agree with Nate/EY’s view here).

MIRI 2024 Mission and Strategy Update
Malo · 2y · 72

I think this is a great comment, and FWIW I agree with, or am at least sympathetic to, most of it.

Posts
486 · New Endorsements for “If Anyone Builds It, Everyone Dies” · 3mo · 55 comments
223 · MIRI 2024 Mission and Strategy Update · 2y · 44 comments
55 · MIRI’s 2019 Fundraiser · 6y · 0 comments
60 · MIRI’s 2018 Fundraiser · 7y · 1 comment
27 · MIRI's 2017 Fundraiser · 8y · 5 comments
19 · MIRI's 2017 Fundraiser · 8y · 4 comments
8 · SI is coming to Oxford, looking for hosts, trying to keep costs down · 13y · 2 comments
12 · SI is looking to hire someone to finish a Decision Theory FAQ · 13y · 9 comments
6 · SI/CFAR Are Looking for Contract Web Developers · 13y · 10 comments
12 · [Applications Closed] The Singularity Institute is hiring remote LaTeX editors · 13y · 24 comments
Wikitag Contributions
Waterfall diagram · 10 years ago · (-4)
Researchers in value alignment theory · 10 years ago · (+3/-22)
List of Blogs · 12 years ago · (+6/-6)
List of Blogs · 12 years ago · (+33/-33)
List of Blogs · 12 years ago · (+68)
The Hanson-Yudkowsky AI-Foom Debate · 13 years ago · (+15/-1)
The Hanson-Yudkowsky AI-Foom Debate · 13 years ago · (+1)