I continue to think CFAR is among the best places to donate re: turning money into existential risk reduction (including this year) -- basically because the good we do seems almost linear in the number of free-to-participant programs we can run (because those can target high-impact AI stuff), and bec…
If you are someone of median intelligence who just wants to carry out a usual trade like making shoes or something, you can largely get by with received wisdom.
AFAICT, this only holds if you're in a stable sociopolitical/economic context -- and, more specifically still, the kind of…
This is fair; I had in mind basic high school / Newtonian physics of everyday objects. (E.g., "If I drop this penny off this building, how long will it take to hit the ground?", or, more messily, "If I drive twice as fast, what impact would that have on the kinetic energy with which I would crash i…
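For concreteness, here's a minimal sketch of that kind of back-of-the-envelope calculation (the building height, car mass, and speeds are made-up example values, not anything from the original comment):

```python
import math

G = 9.8  # gravitational acceleration near Earth's surface, m/s^2

def fall_time(height_m: float) -> float:
    """Time for an object dropped from rest to fall height_m meters,
    ignoring air resistance: h = (1/2) * g * t^2, so t = sqrt(2h/g)."""
    return math.sqrt(2 * height_m / G)

def kinetic_energy(mass_kg: float, speed_m_s: float) -> float:
    """KE = (1/2) * m * v^2 -- quadratic in speed."""
    return 0.5 * mass_kg * speed_m_s ** 2

# Penny dropped from an assumed ~50 m building:
print(f"fall time: {fall_time(50):.1f} s")  # ~3.2 s

# Crashing at twice the speed (assumed 1500 kg car, 15 vs 30 m/s):
ratio = kinetic_energy(1500, 30) / kinetic_energy(1500, 15)
print(f"energy ratio: {ratio:.0f}x")  # 4x
```

The quadratic scaling is the point of the second question: driving twice as fast means crashing with four times the kinetic energy.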
We would indeed love to help those people train.
Yes. Or will seriously attempt this, at least. It seems required for cooperation and good epistemic hygiene.
Thanks; good point; will add links.
In case there are folks following Discussion but not Main: this mission statement was released along with:
* [CFAR’s new focus, and AI Safety](http://lesswrong.com/lw/o7o/cfars_new_focus_and_ai_safety/)
* [Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”](…)
Oh, sorry, the two new docs are posted and were in the new ETA:
Apologies; the link is broken and I'm not sure how to edit or delete it; the real link is: http://rationality.org/about/mission
Thanks for the thoughts; I appreciate it.
I agree with you that framing is important; I just deleted the old ETA. (For anyone interested, it used to read:
> ETA: Having talked just now to people at our open house, I would like to clarify:
> Even though our aim is explicitly AI Safety...
> CFAR do…