I agree that preventing s-risks is important, but I will try to consider some possible counterarguments:

  1. A benevolent AI would be able to fight an acausal war against an evil AI in another branch of the multiverse, by creating more happy copies of me, or more paths from a suffering observer-moment to a happy observer-moment. So creating a benevolent superintelligence would help against suffering everywhere in the multiverse.

  2. Non-existence is the worst form of suffering if we define suffering as action against our most important value; thus, x-risks are s-risks. Pain is not always …


I think all of these are quite unconvincing and the argument stays intact, but thanks for coming up with them.
