Wow!

Many thanks for posting that link. It's clearly the most important thing I've read on LW in a long time; I'd upvote it ten times if I could.

It seems like an s-risk outcome (even one that keeps some people happy) could be more than a million times worse than an x-risk outcome, while not being a million times less probable, so on expected value, focusing on s-risks is correct. The argument wasn't as clear to me before. Does anyone have good counterarguments? Why shouldn't we all focus on s-risk from now on?
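To make the expected-value comparison explicit (a minimal sketch; $U_s$, $U_x$, $p_s$, $p_x$ are just my labels for the disvalues and probabilities of the two outcomes, and the factor $10^6$ is only the illustrative number from above):

$$\frac{|U_s|\, p_s}{|U_x|\, p_x} \;>\; 10^6 \cdot \frac{p_s}{p_x} \;>\; 1 \qquad \text{whenever } |U_s| > 10^6\,|U_x| \text{ and } \frac{p_s}{p_x} > 10^{-6}.$$

So unless the s-risk outcome is correspondingly more improbable, its expected disvalue dominates.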

(Unsong had a plot point where Peter Singer declared that the most important task for effective altruists was to destroy Hell. Big props to Scott for seeing it before the rest of us.)


I think the reason cousin_it's comment is upvoted so much is that a lot of people (including me) weren't really aware of s-risks or how bad they could be. It's one thing to drop a throwaway line saying that s-risks could be worse; it's another thing entirely to put together a convincing argument.

Similar ideas have appeared in other articles, but those framed the point in terms of energy-efficiency while relying on jargon such as "computronium" and concepts like the two-envelopes problem, which made it much less clear. I don't think I saw the links for either of those articles...

RomeoStevens: X-risk is still plausibly worse in that we need to survive to reach as much of the universe as possible and eliminate suffering in other places. Edit: Brian talks about this here: https://foundational-research.org/risks-of-astronomical-future-suffering/#Spread_of_wild_animals-2
Jiro: That sounds like a recipe for Pascal's Mugging.

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala · 1 min read · 20th Jun 2017 · 107 comments
