The only counterarguments I can think of would be:

  • The claim that s-risks are roughly as likely as x-risks seems under-argued to me. In particular, conflict seems the most plausible s-risk scenario (and one with a high prior, since much of today's suffering is caused by conflict), but it becomes less and less likely once you factor in superintelligence, as multipolar scenarios seem either very short-lived or unlikely to arise at all.

  • We should be wary of applying anthropomorphic tr…


I think the most general response to your first three points would look something like this: any superintelligence that achieves human values will be adjacent in design space to many superintelligences that cause massive suffering, so it's quite likely that the wrong superintelligence will win, due to human error, malice, or arms races.

As to your last point, it looks more like a research problem than a counterargument, and I'd be very interested in any progress on that front :-)

Kaj_Sotala (3y): This seems plausible but not obvious to me. Humans are superintelligent as compared to chimpanzees (let alone, say, Venus flytraps), but humans have still formed a multipolar civilization.
[anonymous] (3y): Due to the complexity and fragility of human values, any superintelligence that fulfills them will probably be adjacent in design space to many other superintelligences that cause lots of suffering (which is also much cheaper), so a wrong superintelligence might take over due to human error or malice or arms races. That's where most s-risk is coming from, I think. The one in a million number seems optimistic, actually.

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala · 1 min read · 20th Jun 2017 · 107 comments