About this document

There has been a recent flurry of letters, articles, statements, and videos endorsing a slowdown or halt of colossal AI experiments, via (e.g.) regulation or coordination.

This document aspires to collect all examples into a single centralised list. I'm undecided on how best to order and subdivide the examples, but I'm open to suggestions.

As a disclaimer, this list is...

  • Living — I'll try to update the list over time.
  • Non-exhaustive — There are almost certainly examples I've missed.
  • Non-representative — The list is biased, at least initially, towards things that I have been shown personally.

Please mention in the comments any examples I've missed so I can add them!

List of slowdown/halt AI requests

Last updated: April 14th 2023.

(Note that I'm also including surveys.)

Credit to MM Maas.


6 comments

Nitpick: I believe you meant to say last updated Apr 14, not Mar 14.


Nice, thanks for collating these!

Also perhaps relevant: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological 

and somewhat older: 
lc. ‘What an Actually Pessimistic Containment Strategy Looks Like’. LessWrong, 5 April 2022. https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like.
Hoel, Erik. ‘We Need a Butlerian Jihad against AI’. The Intrinsic Perspective (blog), 30 June 2021. https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against.


Thanks! I've included Erik Hoel's and lc's essays.

Your article doesn't actually call for AI slowdown/pause/restraint, as far as I can tell, and explicitly guards against that interpretation:

"This analysis does not show that restraint for AGI is currently desirable; that it would be easy; that it would be a wise strategy (given its consequences); or that it is an optimal or competitive approach relative to other available AI governance strategies."

But if you've written anything that explicitly endorses AI restraint, I'll include it in the list.