I think all of these are quite unconvincing and the argument stays intact, but thanks for coming up with them.

  1. I think a longer explanation is needed to show how a benevolent AI would save observers from an evil AI. It is not just compensation for suffering; it is based on the idea of the indexical uncertainty of identical observers. If two identical observer-moments exist, neither can know which of the two he is. So a benevolent AI creates 1000 copies of an observer-moment which is in the jail of an evil AI, and constructs a pleasant next moment for each copy. From the point of view of the jailed observer-moment, there will be 1001 expected future moments, and only 1 of them will continue the suffering, so his chance of remaining in the jail drops to 1/1001 (see the arithmetic sketch below).
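A minimal worked version of that probability claim (the 1000-copy figure is from the comment above; the uniform self-location prior over identical observer-moments is an assumption of the argument, not something derived here):

```latex
% Self-location under indexical uncertainty.
% Assumption: a uniform prior over the N + 1 indistinguishable
% successor observer-moments; N = number of rescue copies
% created by the benevolent AI.
\[
  P(\text{suffering continues}) = \frac{1}{N + 1},
  \qquad N = 1000 \;\Rightarrow\; P = \frac{1}{1001} \approx 0.001 .
\]
```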

