risedive's Shortform

by risedive · 11th Oct 2021 · 9 comments

Which would be better for my level of happiness: living as long as possible, or making the world a better place?

I expect the answer to this question to determine my career. If living as long as possible matters more, then it seems like I should try to make as much money as possible so that I can afford life-extension technology. If making the world a better place matters more, then I should probably aim to work in AI alignment, where I might have a small but significant impact but (I think) wouldn't make as much of a difference to my personal lifespan. The answers I receive will probably be biased toward the latter option, since the people answering would be part of the group affected by my making the world a better place, but I might as well ask anyway.

It's likely (unless you're quite unusual as a human) that this is a false dichotomy. You'll likely want to find multi-dimensional optima rather than picking one simple thing and ignoring other aspects. The more important question is how long-term your thinking is when it comes to gratification.

Look back over the last year.  Do you wish you'd done things that made you have a few much happier moments, or do you wish you'd done things that made you a little happier much of the time?

“Look back over the last year. Do you wish you'd done things that made you have a few much happier moments, or do you wish you'd done things that made you a little happier much of the time?”

The latter. Which is interesting for me, because when I was younger, I was obsessed with feeling ecstatic all the time, whereas now I just want to be content.

It seems quite likely that living as long as possible will require the world to be a better place.

That doesn't mean that it has to be you who helps make the world a better place, but that's more of a coordination problem than a happiness question. There is also the question of your happiness (or other utility measures) in possible outcomes where you fail to achieve your goal in each case.

If expensive life-extension technology isn't available, or you never succeed in amassing enough wealth to buy it, would you look back and decide that you would have been happier having tried to make the world a better place? Likewise, if the world never gets any better than it is now (and possibly gets worse) despite your part in trying to improve it, would you have preferred to have tried to amass wealth instead?

This doesn't address the likelihood of these outcomes. It seems much more likely that you'll amass enough wealth to buy expensive life-extension technology than that you'll make a global difference in the state of the world, but I suspect you could make a large difference in the state of the world for quite a number of people, depending upon what you do.

“If expensive life-extension technology isn't available, or you never succeed in amassing enough wealth to buy it, would you look back and decide that you would have been happier having tried to make the world a better place? Likewise, if the world never gets any better than it is now (and possibly gets worse) despite your part in trying to improve it, would you have preferred to have tried to amass wealth instead?”

Well, I don’t know. That’s what I was trying to figure out by asking this question. For the first question, quite likely yes, as my wealth wouldn’t have gotten me much in the end (except possibly a higher standard of living).

As for the second one, it depends on whether life extension is in fact available (and available only) to the wealthy. If it’s not, then it doesn’t make much difference anyway. If it is, I might deeply regret not taking the opportunity to become wealthy.

I was going to comment that the idea of living to be only 70 while the wealthy get life extension seems scarier to me than getting bored with ultra-realistic VR games after a year and having nothing else to do (which, to be fair, might happen even if I did make the world a better place, though in that case I still might feel marginally more satisfied knowing that I had made it better). But I thought about it a little more, and now I’m not sure.

It’s entirely possible that Less Wrong (and Friendship is Optimal - https://www.fimfiction.net/story/62074/friendship-is-optimal) has been a bad influence on my thinking, as it’s trained me to focus on amazing VR hedonistic utopias while neglecting the things that actually make human existence worthwhile. (You know, the kind of stuff Brave New World and WALL-E are about.) That’s only a possibility, though. Maybe VR hedonistic utopias are the key to happiness after all.

Anyway, I should probably note that I think I’ve found an answer to my original dilemma by reflecting on the response Dagon gave me. I may be able to do a little bit of both if I, for example, become reasonably wealthy as a lawyer and donate 10% of my income to AI safety research.

Happiness is like health or money: a life-support indicator. People are not inherently life-support technicians; other things matter, such as the actual things that might happen to make you happy, as opposed to the happiness itself. There is even the possibility of pursuing things you are indifferent about.

Well, the main thing I care about right now is happiness.

If you want to be happy, find a career that you enjoy! (But spend more time on personal relationships and a fulfilling social scene.)

Making the world a better place can indeed be fulfilling and contribute to personal happiness, but I would not recommend AI safety work on that basis.

I think the possibility of living for a googol years vastly outweighs the amount of happiness I’d get directly from any job. And everyone I’ve seen comment on the topic (including Eliezer Yudkowsky - https://www.lesswrong.com/posts/vwnSPgwtmLjvTK2Wa/amputation-of-destiny) agrees that making the world a better place is an essential part of happiness, and the window of opportunity for that might well close in a hundred years or so, once AI is able to do everything for us.