My first response to this is: What exactly is an astronomically good outcome? For one, no matter what utopia you come up with, most people will hate it, because it restricts freedom either too much or not enough. For two, any realistic scenario that is astronomically good for someone (say, Earth's current inhabitants and their descendants) is astronomically bad for someone else. Do you really think that if we had a compromise utopia, with all the major groups of humans represented in the deal, a ridiculous number of sentient beings wouldn't be mistreated as a direct result?

The current hegemonic values are: "cosmopolitanism" extending only to human beings, individual freedom as long as you don't hurt others (read: human beings), and bioconservatism. Hell, a large chunk of people today don't even extend their "cosmopolitanism" to all humans, choosing to exclude whoever is in their outgroup. Most people would love to see the natural world, red in tooth and claw as it is, spread across every alien world we find. Most people wouldn't care much if the psychopaths among us decided to use their great transhumanist freedom to simulate someone sufficiently "nonhuman" to play with; after all, we don't even care about animals, let alone whatever simulated life or consciousness we will come up with in some glorious transhumanist future.
This is hardly symmetrical to s-risk: if many beings are suffering, that doesn't require many beings to be living good, free lives. But if many humans are living good, free lives with access to high technology, in practice that means many beings are suffering, unless the values locked in are better for sentient beings than most people's values today, to a frankly miraculous degree.
Is it more important to decrease N-probability or increase P-probability? A negative utilitarian may say it's more important to decrease N-probability, but why the asymmetry? One possibility is that the badness of N outweighs the goodness of P. Is there a fundamental reason why this should be so?
Would you take a deal where you get to experience the best this world has to offer for an hour, and then the worst this world has to offer for an hour? I would never take such a deal, and I don't think anybody with sufficient imagination to understand what it would really entail would either. This asymmetry in magnitude between the worst and the best is fundamental to the human experience, and it certainly seems fundamental to evolved minds in general: I think that if you made sure every entity in the future actually considered pleasure to be more important than pain avoidance in the extreme case, those entities would be further from human than any animal. Since the asymmetry exists in all evolved minds, a truly "astronomically positive" scenario without the drawbacks I mentioned before would require making sure every mind is deliberately designed without it.
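To put that asymmetry semi-formally (a sketch only, with U an assumed utility function over experiences; nothing here is defined in the original argument): refusing the deal at any stakes amounts to the claim that, for the best and worst experiences this world actually offers,

\[
U(\text{best hour}) + U(\text{worst hour}) < 0,
\qquad\text{i.e.}\qquad
\lvert U(\text{worst hour})\rvert > U(\text{best hour}).
\]

The further claim about evolved minds is then that this inequality holds for every mind shaped by natural selection, not just for humans.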
Do more people feel that N is more important, or P? If N feels more important, is it because brains built by evolution need dangers to be more salient, since dangers are more irrevocable?
N is more important than P, for the reason given above. You can say I think this because of evolution. No shit, I exist because of evolution, so everything I think, I think because of evolution. That doesn't change the relevant values. Nor does the fact that you can invent a mind that would disagree with my values, because you can do that for any of my beliefs and values.
Given human brains as they are now, I agree that highly positive outcomes are more complex, that the utility of a maximally good life is smaller in magnitude than the disutility of a maximally bad life, and that there is no life good enough that I'd take a 50% chance of torture for it.
But would this apply to minds in general (say, a random mind, or one not too different from a human)?