User Profile

Karma: 52 · Posts: 49 · Comments: 305

Recent Posts

Curated Posts
Curated: Recent, high-quality posts selected by the LessWrong moderation team.
Frontpage Posts
Posts meeting our frontpage guidelines:
• interesting, insightful, useful
• aim to explain, not to persuade
• avoid meta discussion
• relevant to people whether or not they are involved with the LessWrong community
(includes curated content and frontpage posts)
All Posts
Includes personal and meta blogposts (as well as curated and frontpage).

MIRI's 2016 Fundraiser

2y · 5 min read · 13

Safety engineering, target selection, and alignment theory

2y · 8 min read · 12

MIRI's 2015 Winter Fundraiser!

2y · 6 min read · 24

MIRI's 2015 Summer Fundraiser!

3y · 4 min read · 45

MIRI's Approach

3y · 15 min read · 59

MIRI Fundraiser: Why now matters

3y · 2 min read · 4

Taking the reins at MIRI

3y · 2 min read · 11

The Stamp Collector

3y · 5 min read · 13

The path of the rationalist

3y · 4 min read · 30

Ephemeral correspondence

3y · 5 min read · 22

Recent Comments

Huh, thanks for the heads up. If you use an ad-blocker, try pausing that and refreshing. Meanwhile, I'll have someone look into it.

FYI, this is not what the word "corrigibility" means in this context. (Or, at least, it's not how we at MIRI have been using it, and it's not how Stuart Russell has been using it, and it's not a usage that I, as one of the people who originally brought that word into the AI alignment space, endorse....(read more)

> By your analogy, one of the main criticisms of doing MIRI-style AGI safety research now is that it's like 10th-century Chinese philosophers doing Saturn V safety research based on what they knew about fire arrows.

This is a fairly common criticism, yeah. The point of the post is that MIRI-style AI...(read more)

Yes, precisely. (Transparency illusion strikes again! I had considered it obvious that the default outcome was "a few people are nudged slightly more towards becoming AI alignment researchers someday", and that the outcome of "actually cause at least one very talented person to become AI alignment r...(read more)

I don't claim that it developed skill and talent in all participants, nor even in the median participant. I do stand by my claim that it appears to have had drastic good effects on a few people, though, and that it led directly to MIRI hires, at least one of which would not have happened otherwise :-...(read more)

Thanks! :-p It's convenient to have the 2015 fundraisers end before 2015 ends, but we may well change the way fundraisers work next year.