CFAR and MIRI are running a free AI Summer Fellows Program (AISFP) in the San Francisco Bay Area from June 27 to July 14. The program is designed to increase participants' ability to do technical research on the AI alignment problem; it includes CFAR's applied rationality content as well as practice doing technical research toward AI safety alongside MIRI researchers and 20-24 other participants.
The intent of the program is to boost participants, as far as possible, in four skills:
1. The CFAR applied rationality skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops.
2. Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems — i.e., the skillset taught in the core LW Sequences (e.g. reductionism and how to reason in contexts as confusing as anthropics without getting lost in words).
3. Technical forecasting about AI and about AI alignment interventions (e.g., the content discussed in Nick Bostrom's book Superintelligence).
4. The ability to do AI alignment-relevant technical research, while reflecting on the cognitive habits involved. We will give crash courses in reflection, logical uncertainty, and decision theory.
Finalists will be contacted by a MIRI staff member for an interview.
[6/8: Applications to AISFP18 now closed]