Or are you more worried that the question won't be answered correctly by whatever ends up controlling our civilization?

Perhaps this, in case it turns out that getting certain ingredients – e.g. priors or decision theory – exactly right is both highly important and difficult. (But I have no idea; it's also plausible that suboptimal designs could patch themselves well, get rescued somehow, or simply have their goals changed without much fuss.)

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala, 20th Jun 2017
