To begin with, I should note that any goal which does not include immortality is stupid, as infinite existence will include the realisation of almost all other goals. So immortality seems to be a good proxy for the best goal. It is a better goal than pleasure: pleasure is always temporary, and somewhat uninteresting.

However, there is something bigger than immortality. I call it "becoming a God". But I can't just jump there, or become enlightened, or whatever; it would not be me. I want to go all the way from here to an infinitely complex, eternal, superintelligent and benevolent being. I think it is the most interesting way to live.

But it is not just fun. It is the meaning of life. And "meaning" is what makes you work even if there is no fun ahead. For example, if you care about the survival of your family, it gives you meaning. Or, put better, the meaning takes hold of you.

The idea of infinite evolution also provides meaning, for the following reasons. There is a basic drive to evolve in every living being. When you choose a beautiful mate, you want to put your genes in the best possible place and create the best possible children, and this is a drive that moves evolution. (This is not a very scientific claim, as sexual selection is not as well accepted as natural selection, so it is more a poetic expression of my feeling about a natural drive toward evolution.) When one educates oneself, reads, travels and so on, it is all part of this desire for evolution. Even the first AI will immediately find this drive and start to self-improve.

The desire to evolve is something like the Nietzschean "will to power", but this will is oriented toward the infinite space of future possible mind states.

I would add that I spent years working on a theory of happiness. I abandoned it, and I feel much better. I don't need to be happy; I just need to be in working condition to pursue my mission: to evolve infinitely (this also includes saving humanity from x-risks and providing life extension for all, so my goal is altruistic).

It may look as though this goal has a smaller prior chance of success, but that is not so, for two reasons: one is connected with the appearance of superintelligence in the near term, and the other is a form of observation selection that will prevent me from seeing my failure. If I merge with a superintelligent AI, I could continue my evolution (as could other people).

There is another point of view, which I have often heard from LessWrongers: that we should not dare to think about our final goals, as superintelligence will provide us with better goals via CEV. However, there is some circularity here, since superintelligence has to extract our values from us, and if we do not invest in attempts to articulate them, it could assume that the most popular TV series are the best presentation of the world we want to live in. That would be "Game of Thrones" and "The Walking Dead".

Like username2, I'm also happy to hear of others with views along this direction. A couple of years ago I made a brief attempt at starting a modern religion called noendism, with the sole moral of survival. Not necessarily individual survival; on that we may differ.

However, since then my core beliefs have evolved a bit and it's not so simple anymore. For one, after extensive research I've convinced myself that personal immortality is practically guaranteed. For another, one of my biggest worries is surviving but being imprisoned in a powerless situation.

Anyway, those details aren't practically relevant for my day-to-day life; these similar goals all head in the same direction.

username2: I just want to say you are not alone, as my own goals very closely align with yours (and Jennifer's, as she expressed them in this thread as well). It's nice to know that there are other people working towards "infinite evolution" and viewing mental qualia like pain, suffering, and happiness as merely the signals that they are. Ad astra, turchin. (Also, if you know of a more focused sub-group to discuss actually implementing and proactively accomplishing such goals, I'd love to join.)

We need a better theory of happiness and suffering

by toonalfrink, 4th Jul 2017

We rationalists know a lot about winning, but we don't know what our terminal goals really are. Such things are handwaved away; we just mumble something like "QALYs" and make a few guesses about what a five-year-old would like.

I'd like to dispel the myth that a five-year-old knows what they like. Have you ever seen a kid with a sack of candy? I don't think they really wanted to get nauseous.

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, but most cases are actually pretty straightforward, like blindness and death and captivity."

Well, you may have a point with death, but what if blind people and inmates are actually as happy as the next guy? What's the point of curing blindness, then?

A special case where we need to check our assumptions is animal welfare. What if the substrate of suffering is something in higher-order cognition, something that all animals except mammals lack?

One could hold that it is impossible to make inferences about another being's qualia, but we can get quite far by combining introspection with the assumption that similar brains yield similar qualia. We can even correlate happiness with brain scans.

The former, introspection, is why I've moved to a Buddhist monastery. If (whatever really causes) happiness is your goal, it seems to me that the claim that one can permanently attain a state of bliss is worth investigating.

So, to sum up, if we want to fix suffering, let's find out its proximal cause first. Spoiler: it's not pain.

(To be continued)