User Profile

99 karma · 77 posts · 673 comments

Recent Posts

"Flinching away from truth” is often about *protecting* the epistemology

1y
6 min read
Show Highlightsubdirectory_arrow_left
55

[Link] CFAR's new mission statement (on our website)
1y · 14 points

CFAR’s new focus, and AI Safety
1y · 89 points

On the importance of Less Wrong, or another single conversational locus
1y · 364 points

Several free CFAR summer programs on rationality and AI safety
2y · 14 points

Consider having sparse insides
2y · 25 points

The correct response to uncertainty is *not* half-speed
2y · 41 points

Why CFAR's Mission?
2y · 57 points

Why startup founders have mood swings (and why they may have uses)
2y · 20 points

Recent Comments

I continue to think CFAR is among the best places to donate re: turning money into existential risk reduction (including this year -- basically because the good we do seems almost linear in the number of free-to-participant programs we can run (because those can target high-impact AI stuff), and bec...

RyanCarey writes: "If you are someone of median intelligence who just wants to carry out a usual trade like making shoes or something, you can largely get by with received wisdom." AFAICT, this only holds if you're in a stable sociopolitical/economic context -- and, more specifically still, the kind of...

This is fair; I had in mind basic high school / Newtonian physics of everyday objects. (E.g., "If I drop this penny off this building, how long will it take to hit the ground?", or, more messily, "If I drive twice as fast, what impact would that have on the kinetic energy with which I would crash i...

Yes. Or will seriously attempt this, at least. It seems required for cooperation and good epistemic hygiene.

In case there are folks following Discussion but not Main: this mission statement was released along with:

* [CFAR’s new focus, and AI Safety](http://lesswrong.com/lw/o7o/cfars_new_focus_and_ai_safety/)
* [Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “caus...

Apologies; the link is broken and I'm not sure how to edit or delete it; real link is: http://rationality.org/about/mission

Thanks for the thoughts; I appreciate it.

I agree with you that framing is important; I just deleted the old ETA. (For anyone interested, it used to read:

> ETA: Having talked just now to people at our open house, I would like to clarify:
> Even though our aim is explicitly AI Safety...
> CFAR do...