Are you exploring your own goals and preferences, or hoping to understand/enforce "common" goals on others (including animals)?

I applaud research (including time spent at a Buddhist monastery, though you'll need to acknowledge that you'll perceive different emotions if you're exploring it for happiness than if it's your only option in life) and reporting on it. I've mostly accepted that there's no such thing as a coherent terminal goal for humans - everything is relative to each of our imagined possible futures.

I have a strong contrarian hunch that human terminal goals converge as long as you go far enough up the goal chain. What you see in the wild is people with vastly different tastes in how to live life. One likes freedom, the next likes community, and the next is just trying to gain as much power as possible. But I call those subterminal goals, and I think what generated them is the same algorithm with different inputs (different perceived possibilities?). That algorithm, which I think optimizes for proxies of genetic survival like sameness and self-preservation, is the terminal goal. And no, I'm not trying to enforce any values. This isn't about things-in-the-world that ought to make us happy. This is about inner game.

We need a better theory of happiness and suffering

by toonalfrink · 1 min read · 4th Jul 2017 · 39 comments

We rationalists know a lot about winning, but we don't know what our terminal goals really are. Such things are handwaved away: we just mumble something like "QALYs" and make a few guesses about what a five-year-old would like.

I'd like to dispel the myth that a five-year-old knows what they like. Have you ever seen a kid with a sack of candy? I don't think they really wanted to get nauseous.

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, but most cases are actually pretty straightforward, like blindness and death and captivity."

Well, you may have a point with death, but what if blind people and inmates are actually as happy as the next guy? What's the point of curing blindness, then?

A special case where we need to check our assumptions is animal welfare. What if the substrate of suffering is something in higher-order cognition, something that all animals but mammals lack?

One could hold that it is impossible to make inferences about another being's qualia, but we can get quite far with introspection plus the assumption that similar brains yield similar qualia. We can even correlate happiness with brain scans.
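
As a toy illustration of what "correlating happiness with brain scans" could mean in practice, here is a minimal sketch under assumed, entirely synthetic data - the subjects, the scan-derived feature, and the happiness scores are all made up:

```python
# Toy sketch: correlate self-reported happiness with a brain-scan feature.
# Everything here is synthetic; "scan_feature" stands in for whatever measure
# (e.g. activity in some region of interest) a real study would extract.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects = 50

# Hypothetical scan-derived feature per subject (arbitrary units).
scan_feature = rng.normal(size=n_subjects)

# Self-reported happiness (1-10), weakly driven by the feature plus noise.
happiness = np.clip(5 + 2 * scan_feature + rng.normal(scale=1.5, size=n_subjects), 1, 10)

r, p = pearsonr(scan_feature, happiness)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```

A real study would of course need a principled choice of neural measure and a validated happiness scale; the point is only that the "similar brains, similar qualia" assumption is empirically checkable in this rough way.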

The former (introspection) is why I've moved to a Buddhist monastery. If (whatever really causes) happiness is your goal, it seems to me that the claim that one can permanently attain a state of bliss is worth investigating.

So, to sum up: if we want to fix suffering, let's find out its proximal cause first. Spoiler: it's not pain.

(To be continued)