It seems to me that a key question for making choices is: what does it do to the probability of the astronomically bad, and what does it do to the probability of the astronomically good? These are symmetrical concepts, so "s-risks" isn't a good framing. Is there something like "P-probability" (astronomically positive) and "N-probability" (astronomically negative)?

Is it more important to decrease N-probability or increase P-probability? A negative utilitarian may say it's more important to decrease N-probability, but why the asymmetry? One possibility is that the badness of N is worse than the goodness of P. Is there a fundamental reason why this should be so? 

Do more people feel N is more important or P? If N feels more important, is it that brains built by evolution need dangers to be more salient since they are more irrevocable? 


 


Highly positive outcomes are assumed to be more particular and complex than highly bad outcomes. Another assumption I think is common is that the magnitude of the utility of a maximally good life is lower than the magnitude of the utility of a maximally bad life. Is there a life good enough that you would take a bet with a 50% chance of that life and a 50% chance of the worst life of torture?

Given human brains as they are now, I agree that highly positive outcomes are more complex, that the utility of a maximally good life is lower in magnitude than that of a maximally bad life, and that there is no life good enough that I'd take a 50% chance of torture.

But would this apply to minds in general (say, a random mind or one not too different from human)?

A negative utilitarian may say it's more important to decrease N-probability, but why the asymmetry?

The Case for Suffering-Focused Ethics discusses various intuitions that point in this direction.

Thanks for the link. Reading through it, I feel all the intuitions it describes. At the same time, I feel there may be some kind of divergence between my narrowly focused preferences and my wider preferences. I may prefer to have a preference for creating 1000 happy people rather than preventing the suffering of 100 sad people, because that would mean I have more appreciation of life itself. The direct intuition is based on my current brain, but the wider preference is based on what I'd prefer (with my current brain) my preference to be.

Should I use my c...

Kaj_Sotala:
I generally think that if one part of your brain prefers X and another part of your brain prefers that you would not prefer X, then the right move is probably not to try to declare one of them correct and the other wrong. Rather, both parts are probably correct in some sense, but they're attending to different aspects of reality and coming to different conclusions because of that. If you can find out how exactly they are both correct, it might be possible for them to come to agreement. E.g. Internal Double Crux is one technique for doing something like this.

Appreciation in general seems to feel good, so I would probably prefer to appreciate most things more than I do currently.

Seems unclear. I could imagine it going that way but also it not going that way. E.g. if someone appreciates their romantic partner a lot, that doesn't necessarily imply that they would like to have more romantic partners (though it might!). In a similar way, I could easily see myself appreciating currently-existing life more, without that necessarily leading to a desire to increase the total amount of life in the universe.

My first response to this is: What exactly is an astronomically good outcome? For one, no matter what utopia you come up with, most people will hate it, due to freedom being restricted either too much or not enough. For two, any realistic scenario that is astronomically good for someone (say, Earth's current inhabitants and their descendants) is astronomically bad for someone else. Do you really think that if we had a compromise utopia, with all the major groups of humans represented in the deal, a ridiculous number of sentient beings wouldn't be mistreated as a direct result?

The current hegemonic values are: "cosmopolitanism" extending only to human beings, individual freedom as long as you don't hurt others (read: human beings), and bioconservatism. Hell, a large chunk of current people don't even extend their "cosmopolitanism" to all humans, choosing to exclude whoever is in their outgroup. Most people would love to see the natural world, red in tooth and claw as it is, spread across every alien world we find. Most people wouldn't care much if the psychopaths among us decided to use their great transhumanist freedom to simulate someone sufficiently "nonhuman" to play with; after all, we don't even care about animals, let alone whatever simulated life or consciousness we will come up with in some glorious transhumanist future.

This is hardly symmetrical to s-risk: if many beings are suffering, that doesn't require many beings to live good, free lives. But if many humans are living good, free lives, with access to high technology, in practice this means that many beings are suffering, unless the locked-in values are better for sentient beings than most people's values today, to a frankly miraculous degree.

Is it more important to decrease N-probability or increase P-probability? A negative utilitarian may say it's more important to decrease N-probability, but why the asymmetry? One possibility is that the badness of N is worse than the goodness of P. Is there a fundamental reason why this should be so? 

Would you take a deal where you get to experience the best this world has to offer for an hour, and then the worst this world has to offer for an hour? I would never take such a deal, and I don't think anybody with sufficient imagination to understand what it would really entail would either. This difference in magnitude is fundamental to the human experience, and certainly seems to be fundamental to evolved minds in general: I think if you made sure that every entity in the future actually considered pleasure to be more important than pain avoidance in the extreme case, these entities would be further from human than any animal. Since this asymmetry exists in all evolved minds, making sure all future minds are instead designed to weigh pleasure that heavily is what would be necessary for a truly "astronomically positive" scenario without the drawbacks I mentioned before.

Do more people feel N is more important or P? If N feels more important, is it that brains built by evolution need dangers to be more salient since they are more irrevocable? 

N is more important than P, for the reason listed above. You can say I think this because of evolution. No shit, I exist because of evolution, so everything I think, I think because of evolution. That doesn't change the relevant values. Nor does the fact that you can invent a mind that would disagree with my values, because this is also the case for all of my beliefs and values.

Most people would love to see the natural world, red in tooth and claw as it is, spread across every alien world we find.

This is totally different from my impression.

andrew sauer:
Okay, that's fair in the sense that most people haven't considered it. How about this: most people don't care, haven't thought about it, and wouldn't object. Most people who have thought about the possibility of spreading life to other planets have not even so much as considered and rejected the idea that the natural state of life is bad; if they oppose spreading life to other planets, it's usually to protect potential alien life. If a world is barren, they wouldn't see any objection to terraforming it and seeding it with life. I don't know exactly how representative these articles are, but despite being about the ethical implications of such a thing, they don't mention my ethical objection even once, not even to reject it. That's how fringe such concerns are.

https://phys.org/news/2022-12-life-milky-comets.html
https://medium.com/design-and-tech-co/spreading-life-beyond-earth-9cf76e09af90
https://bgr.com/science/spreading-life-solar-system-nasa/

My first response to this is: What exactly is an astronomically good outcome? For one, no matter what utopia you come up with, most people will hate it, due to freedom being restricted either too much or not enough.

I actually agree with this unironically, but I'd argue that the same issue faces dystopias and negative outcomes.

Really, this is probably more general: one person's good thing is another person's bad thing.

Care to elaborate?

Answering my own question: https://www.lesswrong.com/posts/3WMscsscLEavkTJXv/s-risks-why-they-are-the-worst-existential-risks-and-how-to?commentId=QwfbLdvmqYqeDPGbo and other comments in that post answered quite a bit of it.

Talking about s-risk reduction makes some sense, but the invocation of "risk" and fear might bias people's perspectives.