I guess I didn't think about it carefully before. I assumed that s-risks were much less likely than x-risks (true) so it's okay not to worry about them (false). The mistake was that logical leap.

In terms of utility, the landscape of possible human-built superintelligences might look like a big flat plain (paperclippers and other things that kill everyone without fuss), with a tall sharp peak (FAI) surrounded by a pit that's astronomically deeper (many almost-FAIs and other designs that sound natural to humans). The pit needs to be compared to the peak, not...

Didn't you realize this yourself back in 2012?

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala · 20th Jun 2017 · 1 min read · 107 comments
