Hey, all! An interesting discussion in this thread. Regarding terminal/end goals...

I've come up with a goal framework consisting of three parts:

1) TRUTH. Let's get to know as much as we can, basing our decisions on the best available knowledge, never closing our eyes to the truth.
2) KINDNESS. Let's be good to each other, for this is the only kind of life worth living.
3) BLISS. Let's enjoy all of this, every moment of it.

(A prerequisite to them all is existence, survival. For me, the idea of infinite or near-infinite survival of me/humankind certainly has appeal, but I'd choose a somewhat shorter existence with more of the three things above over a somewhat longer existence with less of them. But that's another, longer discussion; let's just say that IF existence already exists, for a shorter or longer time, then that's what it should be like.)

These three goals/values are axiomatic; they are what I consciously choose to want, what I want to want. Be they humans, transhumans, AI, whatever: a world that consists of more of these things is a better direction to head towards, and a world with less of them, a worse one. Yet another longer discussion is what the trade-offs between these would be, but for now let's just say that the goal is to find harmonious outcomes that have all three. (This way, wireheading-style happiness and harming-others-as-happiness can easily be excluded.)
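One minimal way to sketch what "harmonious" could mean formally (assuming, purely for illustration, that each value can be scored on a $[0,1]$ scale; the scale is my assumption, not part of the framework): aggregate the three values with a minimum rather than a sum, so that no value can compensate for the absence of another.

$$
U_{\text{sum}} = T + K + B
\qquad \text{vs.} \qquad
U_{\text{harm}} = \min(T,\, K,\, B)
$$

Under $U_{\text{sum}}$, a wirehead world with $B = 1$ and $T = K = 0$ still scores $1$; under $U_{\text{harm}}$ it scores $0$, which matches the exclusion above.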

If anyone wants to discuss anything further from here, I'd be glad to.

We need a better theory of happiness and suffering

by toonalfrink, 4th Jul 2017

We rationalists know a lot about winning, but we don't know what our terminal goals really are. Such things are hand-waved away: we just mumble something like "QALYs" and make a few guesses about what a five-year-old would like.

I'd like to dispel the myth that a five-year-old knows what they like. Have you ever seen a kid with a sack of candy? I don't think they really wanted to get nauseous.

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, but most cases are actually pretty straightforward, like blindness and death and captivity."

Well, you may have a point with death, but what if blind people and inmates are actually as happy as the next guy? What's the point of curing blindness, then?

A special case where we need to check our assumptions is animal welfare. What if the substrate of suffering is something in higher-order cognition, something that all but mammals lack?

One could hold that it is impossible to make inferences about another being's qualia, but we can get quite far by combining introspection with the assumption that similar brains yield similar qualia. We can even correlate happiness with brain scans.

The former (introspection) is why I've moved to a Buddhist monastery. If (whatever really causes) happiness is your goal, it seems to me that the claim that one can permanently attain a state of bliss is worth investigating.

So, to sum up: if we want to fix suffering, let's find out its proximal cause first. Spoiler: it's not pain.

(To be continued)