Paul, thank you for the substantive comment!

Carl's post sounded weird to me, because large amounts of human utility (more than just pleasure) seem harder to achieve than large amounts of human disutility (for which pain is enough). You could say that some possible minds are easier to please, but human utility doesn't necessarily value such minds enough to counterbalance s-risk.

Brian's post focuses more on the possible suffering of insects or quarks. I don't feel quite as morally uncertain about large amounts of human suffering, do you?

As to possible interventions, you have clearly thought about this for longer than me, so I'll need time to sort things out. This is quite a shock.

> large amounts of human utility (more than just pleasure) seem harder to achieve than large amounts of human disutility (for which pain is enough).

Carl gave a reason to think that future creatures, including potentially very human-like minds, might diverge from current humans in a way that makes hedonium much more efficient. If you assigned significant probability to that kind of scenario, it would quickly undermine your million-to-one ratio. Brian's post briefly explains why you shouldn't argue "If there is a 50% chance that x-risks are 2 million times worse…

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala · 20th Jun 2017