I'm writing a series of discussion posts on how to purchase AI risk reduction (through donations to the Singularity Institute, anyway; other x-risk organizations will have to speak for themselves about their own plans).
Each post outlines a concrete proposal, with cost estimates:
- A scholarly AI risk wiki
- Funding good research
- Short primers on crucial topics
- "Open Problems in Friendly AI"
- Building the AI risk research community
- Reaching young math/compsci talent
- Raising safety-consciousness among AGI researchers
- Strategic research on AI risk
- Building toward a Friendly AI team
(For a quick primer on AI risk, see Facing the Singularity.)