Huh, I feel very differently. For AI risk specifically, I thought the conventional wisdom was always "if AI goes wrong, the most likely outcome is that we'll all just die, and the next most likely outcome is that we get a future which somehow goes against our values even if it makes us very happy." And besides AI risk, other x-risks haven't really been discussed at all on LW. I don't recall seeing any argument for s-risks being a particularly plausible category of risks, let alone one of the most important ones.

It's true that there was That One Scandal, but the reaction to it was quite literally Let's Never Talk About This Again - or alternatively Let's Keep Bringing This Up To Complain About How It Was Handled, depending on the person - and even then, people only seemed to be talking about that specific incident and argument. I never saw anyone draw the conclusion that "hey, this looks like an important subcategory of x-risks that warrants separate investigation and dedicated work to avoid".

That sort of possibility is implicit in the kind of decision-theoretic questions that MIRI-style AI research involves. More generally, when one is thinking about astronomical-scale questions, aggregating utilities, and so on, it is a matter of course that cosmically bad outcomes are as much of a theoretical possibility as cosmically good ones.
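To make the symmetry point concrete, here is a toy sketch (my own illustration, not from the comment above, with a made-up population figure) of simple additive utility aggregation: if per-person utility can be as negative as it can be positive, the worst aggregate outcome has the same astronomical magnitude as the best one.

```python
# Toy illustration of additive utility aggregation over an astronomically
# large future population. The specific number is hypothetical.

FUTURE_POPULATION = 10**30  # assumed number of future sentient beings


def total_utility(per_person_utility: float, population: int = FUTURE_POPULATION) -> float:
    """Aggregate utility by simple summation over the population."""
    return per_person_utility * population


best_case = total_utility(+1.0)   # every life goes very well
worst_case = total_utility(-1.0)  # every life goes very badly
print(best_case, worst_case)      # symmetric: +1e30 vs -1e30
```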

Now, the idea that one might need to specifically think about the bad outcomes, in the sense that preventing them might require strategies separate from those required for achieving good outcomes, may depend on additional assumptions that haven't been conventional wisdom here.

Wei_Dai: There was some discussion back in 2012 [http://lesswrong.com/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/5ylx] and sporadically since [http://lesswrong.com/lw/hzs/three_approaches_to_friendliness/9e9g] then [http://lesswrong.com/lw/e97/stupid_questions_open_thread_round_4/7ba4]. (ETA: You can also do a search for "hell simulations" and get a bunch more results.) I've always thought that in order to prevent astronomical suffering, we will probably want to eventually (i.e., after a lot of careful thought) build an FAI that will colonize the universe and stop any potential astronomical suffering arising from alien origins, and/or try to reduce suffering in other universes via acausal trade etc., so the work isn't very different from other x-risk work. But now that the x-risk community is larger, maybe it does make sense to split out some of the more s-risk-specific work?

S-risks: Why they are the worst existential risks, and how to prevent them
by Kaj_Sotala, 20th Jun 2017