I don't buy the "million times worse," at least not if we talk about the relevant E(s-risk moral value) / E(x-risk moral value) rather than the irrelevant E(s-risk moral value / x-risk moral value). See this post by Carl and this post by Brian. I think that responsible use of moral uncertainty will tend to push you away from this kind of fanatical view.
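To make the distinction concrete, here's a toy Monte Carlo sketch (the lognormal stakes and every number below are made up purely for illustration, not anyone's actual estimates): even when the two quantities are drawn from the same distribution, the expectation of the ratio gets blown up by draws where the denominator happens to be tiny, while the ratio of the expectations stays near 1.

```python
import random

# Toy Monte Carlo contrasting E[A]/E[B] with E[A/B].
# A stands in for the moral value at stake from s-risk, B for x-risk;
# the lognormal distributions are illustrative placeholders only.
random.seed(0)

N = 100_000
samples = [(random.lognormvariate(0, 2), random.lognormvariate(0, 2))
           for _ in range(N)]

mean_a = sum(a for a, _ in samples) / N
mean_b = sum(b for _, b in samples) / N
mean_ratio = sum(a / b for a, b in samples) / N

# With identical distributions, the ratio of expectations hovers near 1,
# while the expectation of the ratio is inflated by draws where B is tiny.
print("E[A] / E[B] =", mean_a / mean_b)
print("E[A / B]    =", mean_ratio)
```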

I agree that if the ratio really were a million to one, then you should be predominantly concerned with s-risk. I think s-risks are somewhat improbable/intractable, but not that improbable and intractable. I'd guess the proba...


An obvious first question is whether the existence of suffering-hating civilizations on balance increases s-risk (mostly by introducing game-theoretic incentives) or decreases s-risk (by exerting their influence to prevent suffering, esp. via acausal trade). If the former, then x-risk and s-risk reduction may end up being aligned.

Did you mean to say, "if the latter" (such that x-risk and s-risk reduction are aligned when suffering-hating civilizations decrease s-risk), rather than "if the former"?

RomeoStevens · 2y: In support of this, my system 1 reports that if it sees more intelligent people taking s-risk seriously, it is less likely to nuke the planet if it gets the chance. (I'm not sure I endorse nuking the planet, just reporting an emotional reaction.)
Kaj_Sotala · 2y: Can you elaborate on what you mean by this? People like Brian or others at FRI don't seem particularly averse to philosophical deliberation to me... I support this compromise and agree not to destroy the world. :-)

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala · 20th Jun 2017 · 1 min read · 107 comments
