As a preference utilitarian, I dislike happiness studies. They're much too easy to use as justification for social engineering schemes.
You're solving the wrong problem. Did you really just call a body of experimental knowledge a political inconvenience?
Fun seems to require not-fun, in my experience with this particular body. Nevertheless, sign me up for the orgasmium (which, appropriately, came right after 'twice as hard')?
I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.
And the twenty lines are from the "spam" sketch. :)
I agree with the basic thing you're saying here, although, personally, I would want to start right away on some amount of mental improvement: a bit of debugging here, an improvement there. Maybe even taking it a bit slow, but definitely not waiting to at least start. But that's just me. :)
I certainly agree that we don't all need to instantly, well, I believe the phrase you once used was "burn for the light," but I think I'd prefer for the option to at least be available.
Other than that, I could always spend a year contemplating the number 1, then a year contemplating the number 2... (sorry, it was the obvious reference that HAD to be made here. :D)
Yeah, I did. Only because I see political machinations as far more dangerous than the problems happiness studies solve.
'Fun' is just a word. Your use of it probably doesn't coincide with the standard meaning. The standard version of fun could likely be boxed quite easily into an orgasmium-type state. You've chosen a reasonable-sounding word to encapsulate the mechanisms of your own preference. Nietzsche would call that mechanism instinct, Crowley love. What it boils down to is that we all have a will, and that will is often counter to prior moralities and biological imperatives.
My own arbitrary word for preferable future states is 'interesting'. You'd have to be me for that to really mean anything though.
You're solving the wrong problem. Did you really just call a body of experimental knowledge a political inconvenience?
Oh, snap.
Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral ... to talk about human-level minds still running around the day after the Singularity.
We're offended by the inequity - why does that big hunk of meat get to use 2,000 W plus 2,000 square feet that it doesn't know what to do with, while the poor, hardworking, higher-social-value em gets 5 W and one square inch? And by the failure to maximize social utility.
Fun is a cognitive phenomenon. Whatever your theory of fun is, I predict that more fun will be better than less fun, and the moral thing to do seems to be to pack in as much fun as you can before the heat death of the universe. Following that line of thought could lead to universe-tiling.
Suppose you develop a theory of fun/good/morality. What are arguments for not tiling the universe in a way that maximizes it? Are there any such arguments that don't rely on either diversity as an inherent good, or on the possibility that your theory is wrong?
Your post seems to say that fun and morality are the same. But we use the term "moral" only in cases when the moral thing to do isn't fun. I think morality = fun only if it's a collective fun. If that collective fun is also summed over hypothetical agents you could create, then we come back to moral outrage at humans.
The problem brings to mind the colonization of America. Would it have been the moral thing to do to turn around and leave the Indians alone, instead of taking their land and using it to build an advancing civilization that can support a population of about 100 times as many people, who think they are living more pleasurable and interesting lives, and hardly ever cut out their neighbors' hearts on the tops of temples to the sun god? Intellectuals today unanimously say "yes". But I don't think they've allowed themselves to actually consider the question.
What is the moral argument for not colonizing America?
Would it have been the moral thing to do to turn around and leave the Indians alone, instead of taking their land and using it to build an advancing civilization that can support a population of about 100 times as many people, who think they are living more pleasurable and interesting lives, and hardly ever cut out their neighbors' hearts on the tops of temples to the sun god?
Dude, false dichotomy. What if the colonists had just colonized America without being such total dicks about it?
I bet there are plausible scenarios, starting from such a policy, that would've led to about the same level of awesomeness on the American continent that we see today, or possibly more.
Edit: I see this has already been addressed below. These pre-threading conversations are disorienting.
Does that mean I could play a better version of World of Warcraft all day after the singularity? Even though it's a "waste of time"?
Yep, you just have to give yourself permission first.
Also, this is the least interesting post-singularity world I've ever heard of. ;-) Well, unless your "better version of WoW" is ramped up to be at least as good a source of novelty as a Star Trek holodeck.
The transhumanist philosopher David Pearce is an advocate of what he calls the Hedonistic Imperative: The eudaimonic life is the one that is as pleasurable as possible. So even happiness attained through drugs is good? Yes, in fact: Pearce's motto is "Better Living Through Chemistry".
Well, it's definitely better than the alternative. We don't necessarily want to build Jupiter-sized blobs of orgasmium, but getting rid of misery would be a big step in the right direction. Pleasure and happiness aren't always good, but misery and pain are almost always bad. Getting rid of most misery seems like a necessary, but not sufficient, condition for Paradise.
I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.
You know, I wouldn't be surprised, considering that you can fit most of physics on a T-shirt. (Isn't God written in Lisp, though?)
Would it have been the moral thing to do to turn around and leave the Indians alone, instead of taking their land and using it to build an advancing civilization...?
False dichotomy.
Slightly tangential, but I think this needs addressing:
What is the moral argument for not colonizing America?
Literally interpreted, that's a meaningless question. We can't change history by moral argument. What we can do is point to past deeds and say, "let's not do things like that anymore".
If European civilization circa 1500 had been morally advanced enough to say, "let's not trample upon the rights of other peoples", chances are they would already have been significantly more advanced in other ways too. Moral progress takes work, just like technological and intellectual progress. Indeed we should expect some correlation among these modes of progress, should we not? And isn't that largely what we find?
By critiquing the errors of the past, we may hope to speed up our own progress on all fronts. This is (or should be) the point of labeling the colonization of America (in the way it happened) as "wrong".
It's getting close to the time when you have to justify your bias against universe-tiling.
If you truly believe that happiness/fun/interest/love/utils is the measurement to maximize, and you believe in shutting up and multiplying, then converting all matter to orgasmium sounds right, as a first approximation. You'd want self-improving orgasmium, so it can choose to replace itself with something that can enjoy even more fully and efficiently, of course.
Heh, if I could believe in a limited creator-god, I'd be tempted to think humans might be seed orgasmium. Our job is to get better at it and fill the universe with ourselves.
I'm an atheist who likes singing Song of Hope in church. I'd like to be a wirehead (or enter Nozick's experience machine). I don't know of any reason to delay becoming a superintelligence unless being a wirehead is the alternative.
The Indians were in large part killed by diseases introduced by English fishermen. That's why Plymouth was relatively depopulated when the Pilgrims arrived, and why the Mound-Building civilization collapsed without ever coming into contact with Europeans.
komponisto, as a non-cognitivist I don't find the notion of moral "progress" to be meaningful, and I'd like to hear your argument for why we should expect some sort of empirical correlation between it and, say, technological advancement (which gives the overwhelming power that in turn makes genocide possible).
"...if you would prefer not to become orgasmium, then why should you?"
I'd prefer not to become orgasmium, because I value my consciousness and humanity, my ability to think, decide, and interact. However, it's unclear to me what exactly preference is, other than the traversal of pathways we've constructed, whether we're aware of them or not, leading to pleasure, or away from pain. To drastically oversimplify, those values exist in me as a result of beliefs I've constructed, linking the lack of those things to an identity that I don't want, which in turn is eventually linked to an emotional state of sadness and loss that I'd like to avoid. There's also probably a link to a certain identity that I do want, which leads to a certain sense of pride and rightness, which leads me to a positive emotional state.
Eliezer, you said there was nothing higher to override your preference to increase intelligence gradually. But what about the preferences that led you to that one? What was it about the gradual increase of intelligence, and your beliefs about what that means, that compelled you to prefer it? Isn't that deeper motivation closer to your actual volition? How far down can we chase this? What is the terminal value of fun, if not orgasmium?
Or is "fun" in this context the pursuit specifically of those preferences that we're consciously aware of as goals?
TGGP, I'm afraid you've committed the moral analogue of replying to some truth claim with a statement of the form: "As a non-X-ist, I don't find the notion of truth to be meaningful".
By "moral progress" I simply mean the sense in which Western civilization is nicer today than it used to be. E.g. we don't keep slaves, burn live cats, etc. (If you have any doubts about whether such progress has occurred, you underestimate the nastiness of previous eras.) In particular, please note that I am not invoking any sort of fancy ontology, so let's not get derailed that way.
As for why we should expect moral progress to correlate with other kinds: well, maybe for arbitrary minds we shouldn't. But we humans keep trying to become both smarter and nicer, so it shouldn't be surprising that we succeed in both dimensions more and more over time.
Eliezer: Isn't your possible future self's disapproval one highly plausible reason for not spending lots of resources developing slowly?
Honestly, the long-recognized awfulness of classic descriptions of heaven seems like counter-evidence to the thesis of "Stumbling on Happiness". I can't be confident about how good I am at knowing what would make me happy, so if the evidence is that people in general are bad at knowing what will make them happy, I should expect to be bad at it too. But if I know that people in general are comically awful at knowing what will make them happy, compared to myself and to most people whose judgment I respect, then that fact basically screens off the standard empirical evidence of bad judgment as it applies to me.
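To spell out the "screens off" step in rough, made-up notation: let M be "my predictions of my own happiness are poor", E the population-level evidence that people in general predict poorly, and R the specific evidence about my own track record and about the people whose judgment I respect. The claim is that R screens off E from M, i.e. $P(M \mid E, R) \approx P(M \mid R)$: once I've conditioned on the specific evidence about me, the base rate adds little.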
Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?
Komponisto: "Moral progress takes work, just like technological and intellectual progress. Indeed we should expect some correlation among these modes of progress, should we not?" Honestly, this seemed obvious before the 20th century, when the Germans showed that it was possible to be plausibly the world's most scientifically advanced culture and yet morally backward. Our civilization still doesn't know what to make of that. We obviously see correlation, but also outliers.
"because you don't actually want to wake up in an incomprehensible world"
Isn't that what all people do each morning anyway?
I don't know if this comment will get past the political correctness criterion. May the webadmin have mercy on my soul :)
Eliezer, I am very much tempted to go into personal comments. I will do so on one premise only – that the title of this blog is “Overcoming Bias”. I would like to contribute to that purpose in good faith.
Having read some of Eliezer’s posts, I was sure that he had been treated with a high dose of Orthodox Judaism. In this post he specifically points to that fact, thus confirming my analysis. To other readers: Orthodox Judaism requires that every action and thought be informed by divinely revealed law and ethics. It is one of the most sophisticated religious dogmas imaginable, and in its complexity and depth it is comparable only to Buddhism.
Another important feature of Orthodox Judaism is its compartmentalization. This provides adherents of the religion with a very special belief system centered on the indisputable sacredness of all things Jewish. It is so strong a system, indeed, that it sometimes leads to well-documented obsessive-compulsive disorders.
Happily, it appears that Eliezer has evaded the intricate chains of that belief system. My wild guess here is that he needs a substitute system. That is why he is so keen on the Singularity. That is why he would like to have his Fun Theory – to counter his lack of security after leaving the warm house of Jahveh. So he is building a new, quite complex and evolutionary belief system that looks to me like a modern-day חסידות.
I can only sympathize.
Michael, I take the point about outliers -- but claims like the one I made are inherently statistical in nature.
Furthermore, it is worth noting that (1) pre-WWI Germany would indeed have to be considered one of the more morally enlightened societies of the time; and (2) the Nazi regime ultimately proved no help to the cause of German scientific and cultural advancement -- and that's putting it way too mildly.
So perhaps this episode, rather than undermining the proposed correlation, merely illustrates the point that even advanced civilizations remain vulnerable to the occasional total disaster.
Doug S.: if it were 20 lines of Lisp... it isn't; see http://xkcd.com/224/ :)
Furthermore... it seems to me that an FAI which creates a nice world for us needs the whole human value system AND its coherent extrapolation. And knowing how complicated the human value system is, I'm not sure we can accomplish even the former task. So what about creating a "safety net" AI instead? Let's upload everyone who is dying or suffering too much, create advanced tools for us to use, but otherwise preserve everything until we come up with a better solution. This would fit into 20 lines; "be nice" wouldn't.
Were the people burning cats really trying to become non-cat-burners? Wasn't slavery viewed as divinely ordained for some time?
Regarding the Germans: winners write the history books. That is why the Soviet Union is not the anathema that Nazi Germany is to us today. If the Germans had won we would not consider them quite so evil. Technological advancement aids in winning wars.
V.G., good theory but I think it's ethnic rather than religious. Ayn Rand fell prey to the same failure mode with an agnostic upbringing. Anyway this is a kind of ad hominem called the Bulverism fallacy ("ah, I know why you'd say that"), not a substantive critique of Eliezer's views.
Substantively: Eliezer, I've seen indications that you want to change the utility function that guides your everyday actions (the "self-help" post). If you had the power to instantly and effortlessly modify your utility function, what kind of Eliezer would you converge to? (Remember each change is influenced by the resultant utility function after the previous change.) I believe (but can't prove) you would either self-destruct, or evolve into a creature the current you would hate. This is a condensed version of the FAI problem, without the AI part :-)
Vladimir, Kant once advised: "free yourself from your self-incurred tutelage".
I think that even if you consider Eliezer's Fun Theory as a somehow independent ethical construct (whatever that means), you still fail to account for the lack of evidentialism in it. To me it appears as a mash-up of sporadic belief and wishful thinking, and it is definitely worth considering the ad hominem explanation for it.
V.G., see my exchange with Eliezer about this in November: http://lesswrong.com/lw/vg/building_something_smarter/ , search for "religion". I believe he has registered our opinion. Maybe it will prompt an overflow at some point, maybe not.
The discussion reminds me of Master of Orion. Anyone remember that game? I usually played as Psilons, a research-focused race, and by the endgame my research tree got maxed out. Nothing more to do with all those ultra-terraformed planets allocated to 100% research. Opponents still sit around but I can wipe the whole galaxy with a single ship at any moment. Wait for the opponents to catch up a little, stage some nice space battles... close the game window at some point. What if our universe is like that?
Eliezer:
Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral, and above all a failure of imagination, to talk about human-level minds still running around the day after the Singularity.
It won't be any of the ghastly things sometimes professed by the enthusiastic, but I think you should expect a creative surprise, and antipredict specific abstractions not obviously relevant to the whole of morality, such as gradual change in intelligence.
"Wait for the opponents to catch up a little, stage some nice space battles... close the game window at some point. What if our universe is like that?"
Wow, what a nice elegant Fermi paradox solution:)
Michael Vassar: "Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?"
Huh? No need. Why would you think I'm unaware of that?
I notice that several people replied to my question ("Why not colonize America?"), yet no one addressed it. I think they fail to see the strength of the analogy. Humans use many more resources than ems or AIs. If you take the resources from the humans and give them to the AI, you will at some point be able to support 100 times as many "equivalent", equally happy people. Make an argument for not doing that. And don't, as komponisto did, just say that it's the right thing to do.
Everybody says that not taking the land from the Native Americans would have been the right thing to do; but nobody wants to give it back.
An argument against universe-tiling would also be welcome.
TGGP, I'm not going to argue the point that there has been moral progress. It isn't the topic of this post.
Phil Goetz:
Everybody says that not taking the land from the Native Americans would have been the right thing to do; but nobody wants to give it back.
The whole point of my original comment was to refute this very inference. Arguing that taking land from the Native Americans was wrong is not the same as arguing that it should be "given back" now (whatever that would mean). Nor is it the same as wishing we lived in a world where it never happened.
What it means is wishing we lived in a world where the Europeans had our moral values -- and thus also in all probability our science and technology -- centuries ago. Before committing misdeeds against Native Americans.
Also, an argument that the actual colonization of America was "wrong" is not the same as an argument that America should never have been turned into a civilization. Surely there are ways to accomplish this without imposing so much disutility on the existing inhabitants*. Likewise for creating nice worlds with ems and AIs.
*There lies the implicit moral principle, in case you didn't notice.
I can't believe the discussion has got this far and no-one has mentioned The Land of Infinite Fun.
Yes, thank you, I was expecting someone to mention the Culture. I'll discuss it explicitly at some point.
komponisto, we can leave aside the question of whether moral progress is possible or actual and focus on why we should expect it to be associated with technological progress. We can easily see that in the Middle Ages people were trying to create tougher armor and more powerful weaponry. Ethically, they seemed to strive to be more obedient Christians. That includes setting as goals things that many of us today consider IMMORAL. Rather than hoping for progress along that axis, many instead thought that mankind was Fallen from an earlier golden age and if anything sought to turn the clock back (that is how the early Protestants and Puritans viewed themselves). It was never the case that anybody made moral discoveries that were simply proven to all who would listen, as in Eliezer's silly example of At'gra'len'ley. It was often the case that two sides considered each other immoral and one of them outcompeted the other militarily and shut up its propagandists. For what reason should we think it most likely that the victor actually was more moral?
So I apologize, Vladimir, for bringing this up again, but I'm sort of a newcomer :)
However, notice that even in "Building Something Smarter" Eliezer does NOT deny his underlying need for a religious foundation (he simply declines to comment, which, among other things, suggests his own dissatisfaction with that, well, bias).
How odd, I just finished reading The State of the Art yesterday. And even stranger, I thought 'Theory of Fun' while reading it. Also, nowhere near the first time that something I've been reading has come up here in a short timeframe. Need to spend less time on this blog!
Trying to anticipate the next few posts without reading:
Any Theory of Fun will have to focus on that elusive magical barrier that distinguishes what we do from what orgasmium does. Why should it be that we place a different value on earning fun than on simply mainlining it? The intuitive answer is that 'fun' is the synthesis of endeavour and payoff. Fun is what our brains do when we are rewarded for effort. The more economical and elegant the effort we put in for higher rewards, the better. It's more fun to play Guitar Hero when you're good at it, right?
But it can't just be about the ratio of reward to effort, since orgasmium has an infinite ratio in this sense. So we want to put in a quantity of elegant, efficient effort, and get back a requisite reward. Still lots of taboo-able terms in there, but I'll think further on this.
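To make that explicit with a rough, made-up notation: if fun were just the ratio $F = R/E$ of reward to effort, then orgasmium, with positive reward and effort approaching zero, would come out maximally fun, which is exactly the conclusion to avoid. Any effort-sensitive account seems to need something more like $F = R \cdot g(E)$ with $g(0) = 0$, where $g$ rewards elegant, efficient effort rather than no effort at all.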
V.G., since you seem to be an intelligent newcomer, I direct you to Is Humanism A Religion-Substitute? and suggest that you also browse the Religion tag.
Hmm. I wonder if this indicates that we may expect to see an exposition on the topic of Eliezer's preferred attempt at a solution to the wirehead problem. That would be fun.
We don't need to transform the universe into something we feel dutifully obligated to create, but isn't really much fun - in the same way that a Christian would feel dutifully obliged to enjoy heaven - or that some strange folk think that creating orgasmium is, logically, the rightest thing to do.
It doesn't seem impossible to me (only unlikely) that orgasmium is really the best thing there could be according to our idealized preferences, and far better than anything we could be transformed into while preserving personal identity, such that we would be dutifully obligated to create it, even though it's no fun for us or anything we identify with. I think this would stretch the point somewhat.
In other words, "what is well-being?", in such terms that we can apply it to a completely alien situation. This is an important issue.
One red herring, I think, is this:
One major set of experimental results in hedonic psychology has to do with overestimating the impact of life events on happiness.
That could be read two ways. One is the way that you and these psychologists are reading it. Another interpretation is that the subjects estimated the impact on their future well-being correctly, but after the events, they reported their happiness relative to their new baseline, which had adjusted to their new situation. The second quantity is effectively the derivative of the first. In this interpretation, the subjects' mistake is confusing the two.
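To put that reading in rough, illustrative notation: let $W(t)$ be actual well-being and $b(t)$ an adaptive baseline that drifts toward $W(t)$. Reported happiness is roughly $h(t) = W(t) - b(t)$, which decays back toward zero as the baseline catches up; so the later reports track the recent change in well-being (roughly $dW/dt$) rather than its level, while the original forecast was about the level.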
I'm finding it hard to choose between letting the AI decide what to do with the piece of matter/energy/information that allegedly is me, or having it give me some Master-PC-like console / wishing well through which I can gradually change myself and explore reality at my own pace. I feel quite certain that if I chose the latter, once I got close enough to the AI's level of understanding I would wish I had just let it take charge from the very beginning. I mean, come on, how can anything a puny human like me chooses to do possibly be better than the decision of a God-like AI?
Btw, why do so many of you appear to be so certain that the heat death of this universe would be the end of everything? Don't the anthropic principle, quantum mechanics, the overall exciting weirdness of reality (which has recently become increasingly apparent), Eastern philosophy, and a plethora of other things make it seem quite likely that this is not the only universe that exists?
Btw, why do so many of you appear to be so certain that the heat death of this universe would be the end of everything? Don't . . . make it seem quite likely that this is not the only universe that exists?
Yes, but (as we understand it now, and for all practical purposes) this particular universe contains the entirety of what I will experience, what everyone I could ever know will experience, and what I can ever have an effect on. As such, I don't much care about what happens in other universes.
For some reason I've "always" found this extraordinarily beautiful, and some kind of focal point of all the morality-related posts. Maybe because it wraps up the intuitions behind Fun Theory, and Fun Theory helped me grasp naturalistic metaethics and morality better than anything else.
A possible problem with a fun universe: it seems to me that a good many people get their sense of their own value by doing things to make the world better, with "better" somewhat framed as making it less bad for other people, or making it good in ways which prevent badness. That is, you feed a baby both because the baby is happy being fed and because the baby will be miserable if it isn't fed.
This is called "wanting to be needed". What makes this desire go away? It's possible that people will stop feeling it if they're sure that they don't need to prove their value, or it might be that they'd feel adrift and pointless in a universe where they feel that there's nothing important for them to do.
As for a fast upgrade, I think being intelligent is fun, and I assume (perhaps wrongly) that being more intelligent would be more fun. A fast upgrade (if safe, I don't think I'd be a fast adopter) sounds good to me. I'd be waking up in a world which would be incomprehensible to me as I am now, but presumably manageable for me as I would be then, or at least no worse than being a baby in this world.
Fun Theory, in my imagination, would cover "wanting to be needed". I'd bet that's part of why you'd not want an FAI to instantly make everything as good as possible.
How do we know that our own preferences are worth trusting? Surely you believe in possible preference systems that are defective (I'm reminded of another post involving giant cheesecakes). But how do we know that ours isn't one of them? It seems plausible to me that evolution would optimize for preferences that aren't morally optimal, because its utility function is inclusive fitness.
This requires us to ask what metric we would use, outside our own preferences; not an easy question, but one I think we have to face up to asking. Otherwise, we'll end up making giant cheesecakes.
I can personally testify that praising God is an enormously boring activity, even if you're still young enough to truly believe in God.
To each eir own. Praying was actually pretty fun, given that I thought I was getting to talk to an all-powerful superbeing who was also my best friend. Think of Calvin talking to Hobbes.
As for group singing praising God, I loved that. Singing loudly and proudly with a large group of friends is probably what I miss most of all about Christianity.
As someone who said his share of prayers back in his Orthodox Jewish childhood upbringing, I can personally testify that praising God is an enormously boring activity, even if you're still young enough to truly believe in God. The part about praising God is there as an applause light that no one is allowed to contradict: it's something theists believe they should enjoy, even though, if you ran them through an fMRI machine, you probably wouldn't find their pleasure centers lighting up much.
I think this is typical-minding. It really can be joyful to exult in how wonderful something is, and that is how many people relate to God, even if this exultation is based on some confused beliefs and they don't know it.
Just imagine singing a song about something that does have deep meaning for you.
Raise the topic of cryonics, uploading, or just medically extended lifespan/healthspan, and some bioconservative neo-Luddite is bound to ask, in portentous tones:
"But what will people do all day?"
They don't try to actually answer the question. That is not a bioethicist's role, in the scheme of things. They're just there to collect credit for the Deep Wisdom of asking the question. It's enough to imply that the question is unanswerable, and therefore, we should all drop dead.
That doesn't mean it's a bad question.
It's not an easy question to answer, either. The primary experimental result in hedonic psychology—the study of happiness—is that people don't know what makes them happy.
And there are many exciting results in this new field, which go a long way toward explaining the emptiness of classical Utopias. But it's worth remembering that human hedonic psychology is not enough for us to consider, if we're asking whether a million-year lifespan could be worth living.
Fun Theory, then, is the field of knowledge that would deal in questions like: "How much fun is there in the universe?", "Will we ever run out of fun?", "Are we having fun yet?", and "Could we be having more fun?"
One major set of experimental results in hedonic psychology has to do with overestimating the impact of life events on happiness. Six months after the event, lottery winners aren't as happy as they expected to be, and quadriplegics aren't as sad. A parent who loses a child isn't as sad as they think they'll be, a few years later. If you look at one moment snapshotted out of their lives a few years later, that moment isn't likely to be about the lost child. Maybe they're playing with one of their surviving children on a swing. Maybe they're just listening to a nice song on the radio.
When people are asked to imagine how happy or sad an event will make them, they anchor on the moment of first receiving the news, rather than realistically imagining the process of daily life years later.
Consider what the Christians made of their Heaven, meant to be literally eternal. Endless rest, the glorious presence of God, and occasionally—in the more clueless sort of sermon—golden streets and diamond buildings. Is this eudaimonia? It doesn't even seem very hedonic.
As someone who said his share of prayers back in his Orthodox Jewish childhood upbringing, I can personally testify that praising God is an enormously boring activity, even if you're still young enough to truly believe in God. The part about praising God is there as an applause light that no one is allowed to contradict: it's something theists believe they should enjoy, even though, if you ran them through an fMRI machine, you probably wouldn't find their pleasure centers lighting up much.
Ideology is one major wellspring of flawed Utopias, containing things that the imaginer believes should be enjoyed, rather than things that would actually be enjoyable.
And eternal rest? What could possibly be more boring than eternal rest?
But to an exhausted, poverty-stricken medieval peasant, the Christian Heaven sounds like good news in the moment of being first informed: You can lay down the plow and rest! Forever! Never to work again!
It'd get boring after... what, a week? A day? An hour?
Heaven is not configured as a nice place to live. It is rather memetically optimized to be a nice place for an exhausted peasant to imagine. It's not like some Christians actually got a chance to live in various Heavens, and voted on how well they liked it after a year, and then they kept the best one. The Paradise that survived was the one that was retold, not lived.
Timothy Ferriss observed, "Living like a millionaire requires doing interesting things and not just owning enviable things." Golden streets and diamond walls would fade swiftly into the background, once obtained—but so long as you don't actually have gold, it stays desirable.
And there are two lessons required to get past such failures, and these lessons are in some sense opposite to one another.
The first lesson is that humans are terrible judges of what will actually make them happy, in the real world and the living moments. Daniel Gilbert's Stumbling on Happiness is the most famous popular introduction to the research.
We need to be ready to correct for such biases—the world that is fun to live in, may not be the world that sounds good when spoken into our ears.
And the second lesson is that there's nothing in the universe out of which to construct Fun Theory, except that which we want for ourselves or prefer to become.
If, in fact, you don't like praying, then there's no higher God than yourself to tell you that you should enjoy it. We sometimes do things we don't like, but that's still our own choice. There's no outside force to scold us for making the wrong decision.
This is something for transhumanists to keep in mind—not because we're tempted to pray, of course, but because there are so many other logical-sounding solutions we wouldn't really want.
The transhumanist philosopher David Pearce is an advocate of what he calls the Hedonistic Imperative: The eudaimonic life is the one that is as pleasurable as possible. So even happiness attained through drugs is good? Yes, in fact: Pearce's motto is "Better Living Through Chemistry".
Or similarly: When giving a small informal talk once on the Stanford campus, I raised the topic of Fun Theory in the post-talk mingling. And someone there said that his ultimate objective was to experience delta pleasure. That's "delta" as in the Dirac delta—roughly, an infinitely high spike (that happens to be integrable). "Why?" I asked. He said, "Because that means I win."
(I replied, "How about if you get two times delta pleasure? Do you win twice as hard?")
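(For readers who don't know the reference: the Dirac delta is the idealized spike that is zero everywhere except at a single point yet integrates to one: $\delta(x) = 0$ for $x \neq 0$, while $\int_{-\infty}^{\infty} \delta(x)\,dx = 1$. Hence an infinitely intense, but still integrable, burst of pleasure.)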
In the transhumanist lexicon, "orgasmium" refers to simplified brains that are just pleasure centers experiencing huge amounts of stimulation—a happiness counter containing a large number, plus whatever minimum surrounding framework is needed to experience it. You can imagine a whole galaxy tiled with orgasmium. Would this be a good thing?
And the vertigo-inducing thought is this—if you would prefer not to become orgasmium, then why should you?
Mind you, there are many reasons why something that sounds unpreferred at first glance, might be worth a closer look. That was the first lesson. Many Christians think they want to go to Heaven.
But when it comes to the question, "Don't I have to want to be as happy as possible?" then the answer is simply "No. If you don't prefer it, why go there?"
There's nothing except such preferences out of which to construct Fun Theory—a second look is still a look, and must still be constructed out of preferences at some level.
In the era of my foolish youth, when I went into an affective death spiral around intelligence, I thought that the mysterious "right" thing that any superintelligence would inevitably do, would be to upgrade every nearby mind to superintelligence as fast as possible. Intelligence was good; therefore, more intelligence was better.
Somewhat later I imagined the scenario of unlimited computing power, so that no matter how smart you got, you were still just as far from infinity as ever. That got me thinking about a journey rather than a destination, and allowed me to think "What rate of intelligence increase would be fun?"
But the real break came when I naturalized my understanding of morality, and value stopped being a mysterious attribute of unknown origins.
Then if there was no outside light in the sky to order me to do things—
The thought occurred to me that I didn't actually want to bloat up immediately into a superintelligence, or have my world transformed instantaneously and completely into something incomprehensible. I'd prefer to have it happen gradually, with time to stop and smell the flowers along the way.
It felt like a very guilty thought, but—
But there was nothing higher to override this preference.
In which case, if the Friendly AI project succeeded, there would be a day after the Singularity to wake up to, and myself to wake up to it.
You may not see why this would be a vertigo-inducing concept. Pretend you're Eliezer2003 who has spent the last seven years talking about how it's forbidden to try to look beyond the Singularity—because the AI is smarter than you, and if you knew what it would do, you would have to be that smart yourself—
—but what if you don't want the world to be made suddenly incomprehensible? Then there might be something to understand, that next morning, because you don't actually want to wake up in an incomprehensible world, any more than you actually want to suddenly be a superintelligence, or turn into orgasmium.
I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.
You may find it hard to sympathize. Well, Eliezer1996, who originally made the mistake, was smart but methodologically inept, as I've mentioned a few times.
Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral, and above all a failure of imagination, to talk about human-level minds still running around the day after the Singularity.
That's the frame of mind I used to occupy—that the things I wanted were selfish, and that I shouldn't think about them too much, or at all, because I would need to sacrifice them for something higher.
People who talk about an existential pit of meaninglessness in a universe devoid of meaning—I'm pretty sure they don't understand morality in naturalistic terms. There is vertigo involved, but it's not the vertigo of meaninglessness.
More like a theist who is frightened that someday God will order him to murder children, and then he realizes that there is no God and his fear of being ordered to murder children was morality. It's a strange relief, mixed with the realization that you've been very silly, as the last remnant of outrage at your own selfishness fades away.
So the first step toward Fun Theory is that, so far as I can tell, it looks basically okay to make our future light cone—all the galaxies that we can get our hands on—into a place that is fun rather than not fun.
We don't need to transform the universe into something we feel dutifully obligated to create, but isn't really much fun—in the same way that a Christian would feel dutifully obliged to enjoy heaven—or that some strange folk think that creating orgasmium is, logically, the rightest thing to do.
Fun is okay. It's allowed. It doesn't get any better than fun.
And then we can turn our attention to the question of what is fun, and how to have it.