
Description? Also, none of your scenarios seem to involve a big intelligence or multitasking advantage - it's certainly harder to imagine humans getting outwitted in many different ways in rapid sequence, culminating in an extremely efficient gain of power for the AI, but that actually seems more realistic to me for a fast takeoff (the other option being something like Paul's "gradual loss of control" slow takeoff).

Good points. I would imagine that all of these scenarios are made possible by an intelligence advantage, but I did not make that explicit here.

Your point about multitasking (if I understood it correctly) is important too. We can imagine an unfriendly AI pursuing all three paths to existential catastrophe simultaneously. The question then becomes: are there prevention strategies for combinations of existential-risk paths that work better than simply trying to prevent each path individually? I'll have to think on that more.