I think that all of us share the same subgoal for the next 100 years: preventing x-risks and personal short-term mortality from aging and accidental death.

Elon Musk, with his Neuralink, is looking in a similar direction. He also underlines the importance of "meaning" as something which connects you with others.

I don't know about any suitable sub-groups.

share the same subgoal

That's a very weak claim. Humans have lots and lots of (sub)goals. What matters is how high that goal sits in the hierarchy or ranking of all the goals.

Although a disproportionate number of us share those goals, I think you'd be surprised at the diversity of opinion here. I've encountered EA people focused on reducing suffering over personal longevity, fundamentalist environmentalists who value eco diversity over human life, and those who work on AI 'safety' with the dream of making an overpowering AI overlord that knows best (a dystopian outcome IMHO).

We need a better theory of happiness and suffering

by toonalfrink · 1 min read · 4th Jul 2017 · 39 comments


We rationalists know a lot about winning, but we don't know what our terminal goals really are. Such things are handwaved away: we just mumble something like "QALYs" and make a few guesses about what a five-year-old would like.

I'd like to dispel the myth that a five-year-old knows what they like. Have you ever seen a kid with a sack of candy? I don't think they really wanted to get nauseous.

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, but most cases are actually pretty straightforward, like blindness and death and captivity."

Well, you may have a point with death, but what if blind people and inmates are actually as happy as the next guy? What's the point of curing blindness, then?

A special case where we need to check our assumptions is animal welfare. What if the substrate of suffering is something in higher-order cognition, something that all but mammals lack?

One could hold that it is impossible to make inferences about another being's qualia, but we can come quite far with introspection plus assuming that similar brains yield similar qualia. We can even correlate happiness with brain scans.

Introspection is why I've moved to a Buddhist monastery. If (whatever really causes) happiness is your goal, it seems to me that the claim that one can permanently attain a state of bliss is worth investigating.

So, to sum up: if we want to fix suffering, let's find out its proximal cause first. Spoiler: it's not pain.

(To be continued)