Yesterday's post gathered a relatively large number of comments. Nice! And downvotes! Engagement!! Sadly, it was not ragebait, only my honest-yet-unsure take on a complex and hard-to-understand topic.
One of the comments, by Vladimir Nesov, did change how I was modelling the whole thing:
How is suffering centrally relevant to anything? If a person strives for shaping the world in some ways, it's about the shape of the world, the possibilities and choice among them, not particularly about someone's emotions, and not specifically suffering. Perhaps people, emotions, or suffering are among the considerations for how the world is being shaped, but that's hardly the first thing that should define the whole affair.
(Imagine a post-ASI world where all citizens are mandated to retain a capacity to suffer, on pain of termination.)
My understanding is that Nesov is arguing for preference utilitarianism here. It turns out I have mostly been associating the term "welfare" with suffering-minimization. Suffering meaning, in the most general sense, the feelings that come from lacking something in Maslow's hierarchy of needs, or something like that. It's not that different a framing, though, but it does focus on suffering more than necessary.
From that point of view, it indeed seems like I've misunderstood the whole caring-about-others thing. It's instead about value-fulfillment, letting others shape the world as they see fit? And reducing suffering is just the primary example of how biological agents wish the world to change.
That's a way more elegant model than focusing on suffering, at least. Sadly, this seems to make the question "why should I care?" even harder to answer. At least suffering is aesthetically ugly, and there's a built-in impulse to avoid it. The only way to derive any intrinsic values is to start from our evolutionary instincts, as moral relativism and Hume's guillotine would suggest. On the other hand, compassion emerged as a coordination mechanism, so maybe keeping it is viable.
That said, preference utilitarianism still has two major issues. The first is, of course, the same issue as with every other type of utilitarianism: why should we care about everybody equally instead of only ourselves? Or about anyone at all? And second, how is this compatible with the problems around "revealed preferences"?
Answering the first question is the harder part. Perhaps it doesn't have an answer. Rawls's veil of ignorance? Definitely not enough for me. I went through the Wikipedia page, and asked ChatGPT to list the strongest arguments too. None of them seem to have any real basis in anything. They're just assertions like "every person has equal moral worth". That's completely arbitrary! And it doesn't tell us what counts as a person, but I guess we can just start by treating everything that attempts to communicate as one, or whatever.
The second question at least seems to have some kind of standard answers. Some of them even seem relevant! There's definitely a difference between what people want to do and what they actually do. Coherent extrapolated volition seems to be one such magic phrase. That said, I model agentness itself as a scale. Stones are not agentic. Some people are more agentic than I am. Agentness is required for having values. I'm not sure whether we should weight wellbeing, or whatever the optimization target is, by agentness. But it's certainly clear that not everything has equal moral value, even though in practice treating every human equally seems like a good default.
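To make that agentness-weighting idea concrete, here is a toy formalisation of my own (nothing in Nesov's comment suggests this exact form):

$$W = \sum_i a_i \, u_i, \qquad a_i \in [0, 1]$$

where $u_i$ measures how well the world matches being $i$'s preferences and $a_i$ is its agentness. A stone gets $a_i = 0$ and contributes nothing, and the treat-every-human-equally default is just setting $a_i = 1$ for all people.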
I don't want to be a nihilist. I'm trying to build some foundation. But the foundation must be built on what I care about. At least that little bit of grounding and symmetry-breaking is required.