If we try to answer the question now, it seems very likely we'll get the answer wrong, given my state of uncertainty about the inputs that go into the question. I want to keep civilization going until we know better how to answer these types of questions. For example, if we succeed in building a correctly designed and implemented Singleton FAI, it ought to be able to consider this question at leisure, and if it becomes clear that the existence of mature suffering-hating civilizations actually causes more suffering to be created, then it can decide not to make ...

If you are concerned exclusively with suffering, then increasing the number of mature civilizations is obviously bad, and you'd prefer that the average civilization not exist. You might think that our descendants are particularly good to keep around, since we hate suffering so much. But in fact almost all s-risks arise precisely because of civilizations that hate suffering, so it's not at all clear that creating "the civilization that we will become on reflection" is better than creating "a random civilization" (which is bad).

To be clear...

Lukas_Gloor: Perhaps this, in case it turns out to be highly important but difficult to get certain ingredients – e.g. priors or decision theory – exactly right. (But I have no idea; it's also plausible that suboptimal designs could patch themselves well, get rescued somehow, or just have their goals changed without much fuss.)

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala · 1 min read · 20th Jun 2017 · 107 comments