About this document
There has been a recent flurry of letters, articles, statements, and videos endorsing a slowdown or halt of colossal AI experiments via (e.g.) regulation or coordination.
This document collects all the examples I know of into a single centralised list. I'm undecided on how best to order and subdivide them, but I'm open to suggestions.
As a disclaimer, this list is...
- Living — I'll try to update the list over time.
- Non-exhaustive — There are almost certainly examples I've missed.
- Non-representative — The list is biased, at least initially, towards things that I have been shown personally.
Please mention in the comments any examples I've missed so I can add them!
List of slowdown/halt AI requests
Last updated: April 14th 2023.
(Note that I'm also including surveys.)
- Pause Giant AI Experiments: An Open Letter, by Future of Life Institute
- Pausing AI Developments Isn't Enough. We Need to Shut it All Down, by Eliezer Yudkowsky
- We must slow down the race to God-like AI, by Ian Hogarth
- The A.I. Dilemma, by the Center for Humane Technology
- The case for slowing down AI [1], by Sigal Samuel
- The Case for Halting AI Development, by Max Tegmark, Lex Fridman
- Lennart Heim on Compute Governance, by Lennart Heim, Future of Life Institute
- Let’s think about slowing down AI, by KatjaGrace
- The 0.2 OOMs/year target, by Cleo Nardo
- AI Summer Harvest, by Cleo Nardo
- Instead of technical research, more people should focus on buying time, by Akash, Olivia Jimenez, Thomas Larsen
- Slowing down AI progress is an underexplored alignment strategy, by Michael Huang
- Slowing Down AI: Rationales, Proposals, and Difficulties [1], by Simeon Campos, Henry Papadatos, Charles M
- What an actually pessimistic containment strategy looks like, by lc
- In the Matter of OpenAI (FTC 2023) [1], by Center for AI and Digital Policy
- We need a Butlerian Jihad against AI [2], by Erik Hoel
- Dangers of AI and the End of Human Civilization, by Eliezer Yudkowsky, Lex Fridman
- We’re All Gonna Die with Eliezer Yudkowsky, by Eliezer Yudkowsky, Bankless
- The public supports regulating AI for safety, by Zach Stein-Perlman
- New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development, by Akash
1. ^ Credit to Zach Stein-Perlman.
2. ^ Credit to MM Maas.
Nice.
Thanks, Zach!
Nitpick: I believe you meant to say last updated Apr 14, not Mar 14.
Well spotted 😳
Nice, thanks for collating these!
Also perhaps relevant: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological
and somewhat older:
lc. ‘What an Actually Pessimistic Containment Strategy Looks Like’. LessWrong, 5 April 2022. https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like.
Hoel, Erik. ‘We Need a Butlerian Jihad against AI’. The Intrinsic Perspective (blog), 30 June 2021. https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against.
Thanks! I've included Erik Hoel's and lc's essays.
Your article doesn't actually call for AI slowdown/pause/restraint, as far as I can tell, and explicitly guards against that interpretation.
But if you've written anything which explicitly endorses AI restraint then I'll include that in the list.