I was wondering if anyone has compiled a list of posts that give a concrete description (with scenario-specific details) of a hypothetical future in which humanity suffers an existential (X-risk) or suffering (S-risk) scale disaster due to AGI takeover. If such a list does not already exist (or exists but needs updating), please link to specific posts of this kind in the comments!


Joseph Miller

Apr 07, 2022


Gwern recently had a popular post that was exactly that kind of thing: https://www.gwern.net/Clippy

Dagon

Apr 07, 2022


Amusingly, I was writing https://www.lesswrong.com/posts/BkHRpF2cafyaoWxaT/believable-near-term-ai-disaster at the same time as you were posting the question, based on an earlier brainstorming exploration at https://www.lesswrong.com/posts/KTbGuLTnycA6wKBza/ .

Funny, I think we're both coming from similar sources of inspiration :)

1 comment

All the stories I've read, even Gwern's recent one, feel surprisingly abstract. To me, the obvious, very concrete story for an intelligence explosion looks like this:

  1. Run a program that loops forever over the following steps:
     1. Run Codex on its own source with the prompt: "Improve the performance and efficiency of this coding model."
     2. Train a new version of Codex using the modified source code.
     3. Run tests and benchmarks to check that it is actually better. If so, update your local version of Codex.
  2. Wait until it is amazing / you are dead.
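The loop above can be sketched as a short program. This is a toy simulation, not a working recipe: `query_model`, `train`, and `benchmark` are hypothetical stand-ins (here stubbed with trivial logic so the sketch runs); in the scenario described, they would be an API call to a code model, a real training run, and a real evaluation suite.

```python
def query_model(source: str) -> str:
    """Stub for prompting a code model with its own source and the prompt
    'Improve the performance and efficiency of this coding model'."""
    return source + "\n# (model-proposed tweak)"  # toy stand-in for a real edit


def train(source: str) -> dict:
    """Stub for training a new model from the modified source code."""
    return {"source": source, "score": len(source)}  # toy proxy for capability


def benchmark(model: dict) -> float:
    """Stub for running tests and benchmarks against the candidate model."""
    return model["score"]


def self_improve(initial_source: str, steps: int) -> dict:
    """Keep a current model; each iteration proposes, trains, and benchmarks
    a candidate, adopting it only if it scores strictly better."""
    current = train(initial_source)
    best = benchmark(current)
    for _ in range(steps):
        candidate_source = query_model(current["source"])  # step 1: self-edit
        candidate = train(candidate_source)                # step 2: retrain
        if benchmark(candidate) > best:                    # step 3: verify
            current, best = candidate, benchmark(candidate)
    return current


improved = self_improve("def f(): pass", steps=3)
```

The `if benchmark(...) > best` guard is the load-bearing part of the scheme: without it, a regression in one iteration could compound across all later ones.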

Obviously, Codex isn't nearly good enough to do this, and you would need the benchmarks to include very difficult tasks so that, as it starts to take off, it still has room for improvement. But I don't see why it would require a different kind of model.