Slowing AI
This sequence is on slowing AI from an x-risk perspective.
The first two posts are pretty good; the rest (including possible future posts) are rough, short works in progress. I'm mostly posting them in the hope that they help advance my conversations with researchers.
I think my "Foundations" post is by far the best source on the considerations relevant to slowing AI. I hope it informs people's analysis of possible ways to slow AI (improving interventions) and advances discussions on relevant considerations (improving foundations). If you've read "Foundations," I'd be excited to chat with you about how to improve (or maybe extend) it.
Slowing AI is not monolithic. We should expect some possible interventions to be bad and some to be great. And "slowing AI" is often a poor conceptual handle for the true goal, which is closer to: slowing AI, plus extending crunch time, plus slowing risky work in particular, plus promoting various side goals, plus many more nuances.
Thanks to Lukas Gloor, Rose Hadshar, Lionel Levine, and others for comments on drafts. Thanks to Charlotte Siegmann, Olivia Jimenez, Alex Gray, Katja Grace, Tom Davidson, Alex Lintz, Jeffrey Ladish, Ashwin Acharya, Rick Korzekwa, Siméon Campos, and many others for discussion.
This work in progress is part of a project supported by AI Impacts.