Would the Cthulhu Mythos be at least somewhat close to what you are imagining?

Aliens running our universe as a simulation would probably not be recognized as “spiritual” beings by religious people, so I guess the intelligence would have to be of the “just-magically-appearing” kind.

OK, perhaps I should have been more precise. Suppose Omega tells you that there is something out there in the Universe (whether it’s karma, spirits, gods, or God) which we humans would recognize as agent-like and which inspired some human religions, but is not the result of evolution. What would your model of the Universe now look like?

“If expensive life-extension technology isn't available, or you never succeeded in amassing enough wealth to buy it, would you look back and decide that you would have been happier having tried to make the world a better place? Likewise if the world never gets any better than it is now (and possibly worse) despite your parts in trying to improve it, would you have preferred to have tried to amass wealth instead?”

Well, I don’t know. That’s what I was trying to figure out by asking this question. For the first question, it’s quite likely that I would, since my wealth wouldn’t have gotten me much in the end (except possibly a higher standard of living).

As for the second one, it depends on whether life extension is in fact available (and available only) to the wealthy. If it’s not, then it doesn’t make much difference anyway. If it is, I might deeply regret not taking the opportunity to become wealthy.

I was going to comment that the idea of living to only 70 while the wealthy get life extension seems scarier to me than getting bored with ultra-realistic VR games after a year and having nothing else to do. (To be fair, the latter might happen even if I did make the world a better place, though in that case I might still feel marginally more satisfied knowing I had done so.) But I thought about it a little more, and now I’m not sure.

It’s entirely possible that Less Wrong (and Friendship is Optimal - https://www.fimfiction.net/story/62074/friendship-is-optimal) has been a bad influence on my thinking, training me to focus on amazing VR hedonistic utopias while neglecting the things that actually make human existence worthwhile. (You know, the kind of thing Brave New World and WALL-E are about.) That’s only a possibility, though. Maybe VR hedonistic utopias are the key to happiness after all.

Anyway, I should probably note that I think I’ve found an answer to my original dilemma by reflecting on the response Dagon gave me. I may be able to do a little bit of both if I, for example, become reasonably wealthy as a lawyer and donate 10% of my income to AI safety research.

I think the possibility of living for a googol years vastly outweighs the amount of happiness I’d get directly from any job. And everyone I’ve seen comment on the topic (including Eliezer Yudkowsky - https://www.lesswrong.com/posts/vwnSPgwtmLjvTK2Wa/amputation-of-destiny) agrees that making the world a better place is an essential part of happiness, and the window of opportunity for doing so might well close in a hundred years or so, once AI can do everything for us.

“Look back over the last year. Do you wish you'd done things that made you have a few much happier moments, or do you wish you'd done things that made you a little happier much of the time?”

The latter. Which is interesting for me, because when I was younger, I was obsessed with feeling ecstatic all the time, whereas now I just want to be content.

Well, the main thing I care about right now is happiness.

Which would be better for my level of happiness: living as long as possible, or making the world a better place?

I expect the answer to this question to determine my career. If living as long as possible is more important, then it seems like I should try to make as much money as possible so that I can afford life-extension technology. If making the world a better place is more important, then I should probably aim to work in AI alignment, where I might have a small but significant impact on the world but (I think) little effect on my personal lifespan. The answers I receive will probably be biased toward the latter option, since the people answering would themselves be affected by my making the world a better place, but I might as well ask anyway.