I think the reason why cousin_it's comment is upvoted so much is that a lot of people (including me) weren't really aware of S-risks or how bad they could be. It's one thing to just make a throwaway line that S-risks could be worse, but it's another thing entirely to put together a convincing argument.

Similar ideas have appeared in other articles, but those framed the argument in terms of energy efficiency while introducing unfamiliar terms such as computronium or the two-envelopes problem, which made the point much less clear. I don't think I saw the links for either of those artic…


Further, while it is quite a good article, you can read the summary, introduction, and conclusion without encountering the idea that the author believes s-risks are much greater than x-risks, rather than just yet another risk to worry about.

I'm only confident about endorsing this conclusion conditional on having values where reducing suffering matters a great deal more than promoting happiness. So we wrote the "Reducing risks of astronomical suffering" article in a deliberately 'balanced' way, pointing out the differe…

[anonymous]: Yeah. Also, my comment was consciously written to jumpstart engagement, after our recent discussions about helping the community etc. It seems like choosing words that hit a nerve is a skill that's distinct from LW rationality; you have to understand on a deep level who the readers are. It's tricky and I can only do this on good days :-) On the plus side, it doesn't seem to require natural ability: anyone can learn it by working hard and paying attention. Maybe more LWers should try. The karma mechanism seems almost perfectly designed for teaching this skill, though it would work better with downvotes, because then people wouldn't optimize for controversy.

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala · 1 min read · 20th Jun 2017 · 107 comments