I feel a weird disconnect on reading comments like this. I thought s-risks were a part of conventional wisdom on here all along. (We even had an infamous scandal that concerned one class of such risks!) Scott didn't "see it before the rest of us" -- he was drawing on an existing, and by now classical, memeplex.

It's like when some people spoke as if nobody had ever thought of AI risk until Bostrom wrote Superintelligence -- even though that book just summarized what people (not least Bostrom himself) had already been saying for years.

Huh, I feel very differently. For AI risk specifically, I thought the conventional wisdom was always "if AI goes wrong, the most likely outcome is that we'll all just die, and the next most likely outcome is that we get a future which somehow goes against our values even if it makes us very happy." And besides AI risk, other x-risks haven't really been discussed at all on LW. I don't recall seeing any argument for s-risks being a particularly plausible category of risks, let alone one of the most important ones.

It's true that there was That One S...

7 · cousin_it · 3y

I guess I didn't think about it carefully before. I assumed that s-risks were much less likely than x-risks (true), so it's okay not to worry about them (false). The mistake was that logical leap. In terms of utility, the landscape of possible human-built superintelligences might look like a big flat plain (paperclippers and other things that kill everyone without fuss), with a tall sharp peak (FAI) surrounded by a pit that's astronomically deeper (many almost-FAIs and other designs that sound natural to humans). The pit needs to be compared to the peak, not the plain. If the pit is more likely, I'd rather have the plain. Was it obvious to you all along?
2 · lifelonglearner · 3y

Thanks for voicing this sentiment I had upon reading the original comment. My impression was that negative utilitarian viewpoints / things of this sort had been trending for far longer than cousin_it's comment might suggest.

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala · 1 min read · 20th Jun 2017 · 107 comments
