Slowing AI

Apr 17, 2023 by Zach Stein-Perlman

This sequence is on slowing AI from an x-risk perspective.

The posts are in order of priority/quality. The first three are important; the next three are good; the last three are bad. "How to think about slowing AI" is the best short introduction.

I think my "Foundations" post is by far the best source on the considerations relevant to slowing AI. I hope it informs people's analysis of possible ways to slow AI (improving interventions) and advances discussions on relevant considerations (improving foundations).

Slowing AI is not monolithic. We should expect some possible interventions to be bad and some to be great. And "slowing AI" is often a bad conceptual handle for the true goal, which is something more like: slowing AI, plus extending crunch time, plus slowing risky development in particular, plus promoting various side goals, plus many more nuances.

Thanks to Lukas Gloor, Rose Hadshar, Lionel Levine, and others for comments on drafts. Thanks to Alex Gray, Katja Grace, Tom Davidson, Alex Lintz, Jeffrey Ladish, Ashwin Acharya, Rick Korzekwa, Siméon Campos, and many others for discussion.

This work in progress is part of a project supported by AI Impacts.

1. Slowing AI: Reading list
2. Slowing AI: Foundations
3. How to think about slowing AI
4. Cruxes for overhang
5. Slowing AI: Crunch time
6. Cruxes on US lead for some domestic AI regulation
7. Slowing AI: Interventions
8. Stopping dangerous AI: Ideal lab behavior
9. Stopping dangerous AI: Ideal US behavior