[anonymous]

Due to the complexity and fragility of human values, any superintelligence that fulfills them will probably be adjacent in design space to many other superintelligences that cause lots of suffering (and suffering is also much cheaper to produce), so a wrong superintelligence might take over due to human error, malice, or arms races. That's where most s-risk comes from, I think. The one-in-a-million number actually seems optimistic.

[This comment is no longer endorsed by its author]

In reply to: "S-risks: Why they are the worst existential risks, and how to prevent them" by Kaj_Sotala, 20 Jun 2017