A concrete theory of transhuman values. How much fun is there in the universe? Will we ever run out of fun? Are we having fun yet? Could we be having more fun? Part of the complexity of value thesis. Also forms part of the fully general answer to religious theodicy.

This page is copied directly from The Fun Theory Sequence:

(A shorter gloss of Fun Theory is "31 Laws of Fun", which summarizes the advice of Fun Theory to would-be Eutopian authors and futurists.)

Fun Theory is the field of knowledge that deals in questions such as "How much fun is there in the universe?", "Will we ever run out of fun?", "Are we having fun yet?" and "Could we be having more fun?"

Fun Theory is serious business. The prospect of endless boredom is routinely fielded by conservatives as a knockdown argument against research on lifespan extension, against cryonics, against all transhumanism, and occasionally against the entire Enlightenment ideal of a better future.

Many critics (including George Orwell) have commented on the inability of authors to imagine Utopias where anyone would actually want to live. If no one can imagine a Future where anyone would want to live, that may drain off motivation to work on the project. But there are some quite understandable biases that get in the way of such visualization.

Fun Theory is also the fully general reply to religious theodicy (attempts to justify why God permits evil). Our present world has flaws even from the standpoint of such eudaimonic considerations as freedom, personal responsibility, and self-reliance. Fun Theory tries to describe the dimensions along which a benevolently designed world can and should be optimized, and our present world is clearly not the result of such optimization - there is room for improvement. Fun Theory also highlights the flaws of any particular religion's perfect afterlife - you wouldn't want to go to their Heaven.

Finally, going into the details of Fun Theory helps you see that eudaimonia is complicated - that there are many properties which contribute to a life worth living. This helps you appreciate just how worthless a galaxy would (with very high probability) end up looking if it were optimized by something with a utility function rolled up at random. The narrowness of this target is the motivation to create AIs with precisely chosen goal systems (Friendly AI).

Fun Theory is built on top of the naturalistic metaethics summarized in Joy in the Merely Good; as such, its arguments ground in "On reflection, don't you think this is what you would actually want (for yourself and others)?"

Posts in the Fun Theory sequence (reorganized by topic, not necessarily in the original chronological order):

  • Prolegomena to a Theory of Fun: Fun Theory is an attempt to actually answer questions about eternal boredom that are more often posed and left hanging. Attempts to visualize Utopia are often defeated by standard biases, such as the attempt to imagine a single moment of good news ("You don't have to work anymore!") rather than a typical moment of daily life ten years later. People also believe they should enjoy various activities that they actually don't. But since human values have no supernatural source, it is quite reasonable for us to try to understand what we want. There is no external authority telling us that the future of humanity should not be fun.
  • High Challenge: Life should not always be made easier for the same reason that video games should not always be made easier. Think in terms of eliminating low-quality work to make way for high-quality work, rather than eliminating all challenge. One needs games that are fun to play and not just fun to win. Life's utility function is over 4D trajectories, not just 3D outcomes. Values can legitimately be over the subjective experience, the objective result, and the challenging process by which it is achieved - the traveller, the destination and the journey.
  • Complex Novelty: Are we likely to run out of new challenges, and be reduced to playing the same video game over and over? How large is Fun Space? This depends on how fast you learn; the faster you generalize, the more challenges you see as similar to each other. Learning is fun, but uses up fun; you can't have the same stroke of genius twice. But the more intelligent you are, the more potential insights you can understand; human Fun Space is larger than chimpanzee Fun Space, and not just by a linear factor of our brain size. In a well-lived life, you may need to increase in intelligence fast enough to integrate your accumulating experiences. If so, the rate at which new Fun becomes available to intelligence is likely to overwhelmingly swamp the amount of time you could spend at any fixed level of intelligence. The Busy Beaver sequence is an infinite series of deep insights not reducible to each other or to any more general insight (a formal definition follows this list).
  • Continuous Improvement: Humans seem to be on a hedonic treadmill; over time, we adjust to any improvements in our environment - after a month, the new sports car no longer seems quite as wonderful. This aspect of our evolved psychology is not surprising: it is a rare organism in a rare environment whose optimal reproductive strategy is to rest with a smile on its face, feeling happy with what it already has. To entirely delete the hedonic treadmill seems perilously close to tampering with Boredom itself. Is there enough fun in the universe for a transhuman to jog off the treadmill - improve their life continuously, leaping to ever-higher hedonic levels before adjusting to the previous one? Can ever-higher levels of pleasure be created by the simple increase of ever-larger floating-point numbers in a digital pleasure center, or would that fail to have the full subjective quality of happiness? If we continue to bind our pleasures to novel challenges, can we find higher levels of pleasure fast enough, without cheating? The rate at which value can increase as more bits are added, and the rate at which value must increase for eudaimonia, together determine the lifespan of a mind. If minds must use exponentially more resources over time in order to lead a eudaimonic existence, their subjective lifespan is measured in mere millennia even if they can draw on galaxy-sized resources (a rough arithmetic sketch follows this list).
  • Sensual Experience: Much of the anomie and disconnect in modern society can be attributed to our spending all day on tasks (like office work) that we didn't evolve to perform (unlike hunting and gathering on the savanna). Thus, many of the tasks we perform all day do not engage our senses - even the most realistic modern video game is not the same level of sensual experience as outrunning a real tiger on the real savanna. Even the best modern video game is low-bandwidth fun - a low-bandwidth connection to a relatively simple challenge, which doesn't fill our brains well as a result. But future entities could have different senses and higher-bandwidth connections to more complicated challenges, even if those challenges didn't exist on the savanna.
  • Living By Your Own Strength: Our hunter-gatherer ancestors strung their own bows, wove their own baskets and whittled their own flutes. Part of our alienation from our design environment is the number of tools we use that we don't understand and couldn't make for ourselves. It's much less fun to read something in a book than to discover it for yourself. Specialization is critical to our current civilization. But the future does not have to be a continuation of this trend in which we rely more and more on things outside ourselves which become less and less comprehensible. With a surplus of power, you could begin to rethink the life experience as a road to internalizing new strengths, not just staying alive efficiently through extreme specialization.
  • Free to Optimize: Stare decisis is the legal principle which binds courts to follow precedent. The rationale is not that past courts were wiser, but jurisprudence constante: The legal system must be predictable so that people can implement contracts and behaviors knowing their implications. The purpose of law is not to make the world perfect, but to provide a predictable environment in which people can optimize their own futures. If an extremely powerful entity is choosing good futures on your behalf, that may leave little slack for you to navigate through your own strength. Describing how an AI can avoid stomping your self-determination is a structurally complicated problem. A simple (possibly not best) solution would be the gift of a world that works by improved rules, stable enough that the inhabitants could understand them and optimize their own futures together, but otherwise hands-off. Modern legal systems fail along this dimension; no one can possibly know all the laws, let alone obey them.
  • Harmful Options: Offering people more choices that differ along many dimensions may diminish their satisfaction with their final choice. Losses are more painful than the corresponding gains are pleasurable, so people think of the dimensions along which their final choice was inferior, and of all the other opportunities passed up. If you can only choose one dessert, you're likely to be happier choosing from a menu of two than from a menu of fourteen. Refusing tempting choices consumes mental energy and decreases performance on other cognitive tasks. A video game that contained an always-visible easier route through would probably be less fun to play, even if that easier route were deliberately foregone. You can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken. And what if a worse option is taken due to a predictable mistake? There are many ways to harm people by offering them more choices.
  • Devil's Offers: It is dangerous to live in an environment in which a single failure of resolve, throughout your entire life, can result in a permanent addiction or in a poor edit of your own brain - for example, a civilization which constantly offers people tempting ways to shoot off their own feet, such as a cheap escape into eternal virtual reality, or customized drugs. Resisting such temptations requires a constant stern will that may not be much fun. And it's questionable whether a superintelligence that descends from above to offer people huge dangerous temptations they wouldn't encounter on their own is helping.
  • Nonperson Predicates, Nonsentient Optimizers, Can't Unbirth a Child: Discusses some of the problems of, and justification for, creating AIs that are knowably not conscious / sentient / people / citizens / subjective experiencers. We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it. So we need to be able to tell that something is definitely not a person; and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living.
  • Amputation of Destiny: C. S. Lewis's Narnia has a problem, and that problem is the super-lion Aslan - who demotes the four human children from the status of main characters, to mere hangers-on while Aslan does all the work. Iain Banks's Culture novels have a similar problem; the humans are mere hangers-on of the superintelligent Minds. We already have strong ethical reasons to prefer to create nonsentient AIs rather than sentient AIs, at least at first. But we may also prefer in just a fun-theoretic sense that we not be overshadowed by hugely more powerful entities occupying a level playing field with us. Entities with human emotional makeups should not be competing on a level playing field with superintelligences - either keep the superintelligences off the playing field, or design the smaller (human-level) minds with a different emotional makeup that doesn't mind being overshadowed.
  • Dunbar's Function: Robin Dunbar's original calculation showed that the maximum human group size was around 150 (the regression behind this number is sketched after this list). But a typical size for a hunter-gatherer band would be 30-50, cohesive online groups peak at 50-60, and small task forces may peak in internal cohesiveness around 7. Our attempt to live in a world of six billion people has many emotional costs: We aren't likely to know our President or Prime Minister, or to have any significant influence over our country's politics, although we go on behaving as if we did. We are constantly bombarded with news about improbably pretty and wealthy individuals. We aren't likely to find a significant profession where we can be the best in our field. But if intelligence keeps increasing, the number of personal relationships we can track will also increase, along with the natural degree of specialization. Eventually there might be a single community of sentients that really was a single community.
  • In Praise of Boredom: "Boredom" is an immensely subtle and important aspect of human values, nowhere near as straightforward as it sounds to a human. We don't want to get bored with breathing or with thinking. We do want to get bored with playing the same level of the same video game over and over. We don't want changing the shade of the pixels in the game to make it stop counting as "the same game". We want a steady stream of novelty, rather than spending most of our time playing the best video game level so far discovered (over and over) and occasionally trying out a different video game level as a new candidate for "best". These considerations would not arise in the utility functions of most expected utility maximizers (a toy contrast is sketched after this list).
  • Sympathetic Minds: Mirror neurons are neurons that fire both when performing an action oneself, and watching someone else perform the same action - for example, a neuron that fires when you raise your hand or watch someone else raise theirs. We predictively model other minds by putting ourselves in their shoes, which is empathy. But some of our desire to help relatives and friends, or be concerned with the feelings of allies, is expressed as sympathy, feeling what (we believe) they feel. Like "boredom", the human form of sympathy would not be expected to arise in an arbitrary expected-utility-maximizing AI. Such an agent would regard any other agents in its environment as a special case of complex systems to be modeled or optimized; it would not feel what they feel.
  • Interpersonal Entanglement: Our sympathy with other minds makes our interpersonal relationships one of the most complex aspects of human existence. Romance, in particular, is more complicated than being nice to friends and kin, negotiating with allies, or outsmarting enemies - it contains aspects of all three. Replacing human romance with anything simpler or easier would decrease the peak complexity of the human species - a major step in the wrong direction, it seems to me. This is my problem with proposals to give people perfect, nonsentient sexual/romantic partners, which I usually refer to as "catgirls" ("catboys"). The human species does have a statistical sex problem: evolution has not optimized the average man to make the average woman happy or vice versa. But there are less sad ways to solve this problem than both genders giving up on each other and retreating to catgirls/catboys.
  • Failed Utopia #4-2: A fictional short story illustrating some of the ideas in Interpersonal Entanglement above. (Many commenters seemed to like this story, and some said that the ideas were easier to understand in this form.)
  • Growing Up is Hard: Each piece of the human brain is optimized on the assumption that all the other pieces are working the same way they did in the ancestral environment. Simple neurotransmitter imbalances can result in psychosis, and some aspects of Williams Syndrome are probably due to having a frontal cortex that is too large relative to the rest of the brain. Evolution creates limited robustness, but often stepping outside the ancestral parameter box just breaks things. Even if the first change works, the second and third changes are less likely to work as the total parameters get less ancestral and the brain's tolerance is used up. A cleanly designed AI might improve itself to the point where it was smart enough to unravel and augment the human brain. Or uploads might be able to make themselves smart enough to solve the increasingly difficult problem of not going slowly, subtly insane. Neither path is easy. There seems to be an irreducible residue of danger and difficulty associated with an adult version of humankind ever coming into being. Being a transhumanist means wanting certain things; it doesn't mean you think those things are easy.
  • Changing Emotions: Creating new emotions seems like a desirable aspect of many parts of Fun Theory, but this is not to be trivially postulated. It's the sort of thing best done with superintelligent help, and slowly and conservatively even then. We can illustrate these difficulties by trying to translate the short English phrase "change sex" into a cognitive transformation of extraordinary complexity and many hidden subproblems.
  • Emotional Involvement: Since the events in video games have no actual long-term consequences, playing a video game is not likely to be nearly as emotionally involving as much less dramatic events in real life. The supposed Utopia of playing lots of cool video games forever, is life as a series of disconnected episodes with no lasting consequences. Our current emotions are bound to activities that were subgoals of reproduction in the ancestral environment - but we now pursue these activities as independent goals regardless of whether they lead to reproduction. (Sex with birth control is the classic example.) A transhuman existence would need new emotions suited to the important short-term and long-term events of that existence.
  • Serious Stories: Stories and lives are optimized according to rather different criteria. Advice on how to write fiction will tell you that "stories are about people's pain" and "every scene must end in disaster". I once assumed that it was not possible to write any story about a successful Singularity because the inhabitants would not be in any pain; but something about the final conclusion that the post-Singularity world would contain no stories worth telling seemed alarming. Stories in which nothing ever goes wrong are painful to read; would a life of endless success have the same painful quality? If so, should we simply eliminate that revulsion via neural rewiring? Pleasure probably does retain its meaning in the absence of pain to contrast it; they are different neural systems. The present world has an imbalance between pain and pleasure; it is much easier to produce severe pain than correspondingly intense pleasure. One path would be to address the imbalance and create a world with more pleasures, and free of the more grindingly destructive and pointless sorts of pain. Another approach would be to eliminate pain entirely. I feel like I prefer the former approach, but I don't know if it can last in the long run.
  • Eutopia is Scary: If a citizen of the Past were dropped into the Present world, they would be pleasantly surprised along at least some dimensions; they would also be horrified, disgusted, and frightened. This is not because our world has gone wrong, but because it has gone right. A true Future gone right would, realistically, be shocking to us along at least some dimensions. This may help explain why most literary Utopias fail; as George Orwell observed, "they are chiefly concerned with avoiding fuss". Heavens are meant to sound like good news; political utopias are meant to show how neatly their underlying ideas work. Utopia is reassuring, unsurprising, and dull. Eutopia would be scary. (Of course the vast majority of scary things are not Eutopian, just entropic.) Try to imagine a genuinely better world in which you would be out of place - not a world that would make you smugly satisfied at how well all your current ideas had worked. This proved to be a very important exercise when I tried it; it made me realize that all my old proposals had been optimized to sound safe and reassuring.
  • Building Weirdtopia: Utopia and Dystopia both confirm the moral sensibilities you started with; whether the world is a libertarian utopia of government non-interference, or a hellish dystopia of government intrusion and regulation, either way you get to say "Guess I was right all along." To break out of this mold, write down the Utopia, and the Dystopia, and then try to write down the Weirdtopia - an arguably-better world that zogs instead of zigging or zagging. (Judging from the comments, this exercise seems to have mostly failed.)
  • Justified Expectation of Pleasant Surprises: A pleasant surprise probably has a greater hedonic impact than being told about the same positive event long in advance - hearing about the positive event is good news in the moment of first hearing, but you don't have the gift actually in hand. Then you have to wait, perhaps for a long time, possibly comparing the expected pleasure of the future to the lesser pleasure of the present. This argues that given a choice between two worlds in which the same pleasant events occur - one where you are told about them long in advance, and one where they are kept secret until they occur - you would prefer to live in the second world. The importance of hope is widely appreciated - people who do not expect their lives to improve in the future are less likely to be happy in the present - but the importance of vague hope may be understated.
  • Seduced by Imagination: Vagueness usually has a poor name in rationality, but the Future is something about which, in fact, we do not possess strong reliable specific information. Vague (but justified!) hopes may also be hedonically better. But a more important caution for today's world is that highly specific pleasant scenarios can exert a dangerous power over human minds - suck out our emotional energy, make us forget what we don't know, and cause our mere actual lives to pale by comparison. (This post is not about Fun Theory proper, but it contains an important warning about how not to use Fun Theory.)
  • The Uses of Fun (Theory): Fun Theory is important for replying to critics of human progress; for inspiring people to keep working on human progress; for refuting religious arguments that the world could possibly have been benevolently designed; for showing that religious Heavens show the signature of the same human biases that torpedo other attempts at Utopia; and for appreciating the great complexity of our values and of a life worth living, which requires a correspondingly strong effort of AI design to create AIs that can play good roles in a good future.
  • Higher Purpose: Having a Purpose in Life consistently shows up as something that increases stated well-being. Of course, the problem with trying to pick out "a Purpose in Life" in order to make yourself happier, is that this doesn't take you outside yourself; it's still all about you. To find purpose, you need to turn your eyes outward to look at the world and find things there that you care about - rather than obsessing about the wonderful spiritual benefits you're getting from helping others. In today's world, most of the highest-priority legitimate Causes consist of large groups of people in extreme jeopardy: Aging threatens the old, starvation threatens the poor, extinction risks threaten humanity as a whole. If the future goes right, many and perhaps all such problems will be solved - depleting the stream of victims to be helped. Will the future therefore consist of self-obsessed individuals, with nothing to take them outside themselves? I suggest, though, that even if there were no large groups of people in extreme jeopardy, we would still, looking around, find things outside ourselves that we cared about - friends, family; truth, freedom... Nonetheless, if the Future goes sufficiently well, there will come a time when you could search the whole of civilization, and never find a single person so much in need of help, as dozens you now pass on the street. If you do want to save someone from death, or help a great many people, then act now; your opportunity may not last, one way or another.
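
To make the Busy Beaver claim in Complex Novelty precise, here is the standard definition from computability theory (the formalization is not spelled out in the original post):

```latex
\mathrm{BB}(n) \;=\; \max\bigl\{\, s(M) \;:\; M \text{ is an } n\text{-state, 2-symbol Turing machine that halts on a blank tape} \,\bigr\}
```

where s(M) is the number of steps M executes before halting. BB(n) grows faster than any computable function, so no algorithm - no single "more general insight" - can generate the whole sequence; pinning down each further value requires resolving halting problems that the earlier values leave open.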
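A rough arithmetic sketch of the lifespan bound in Continuous Improvement. Every number below is an assumed round figure for illustration, not a figure from the original post; the point is how little the conclusion depends on them.

```python
# Illustrative only: all quantities are assumed order-of-magnitude numbers.
import math

galaxy_atoms = 1e69        # assumed: rough atom count of a large galaxy
mind_atoms = 1e27          # assumed: starting budget, roughly human-body scale
years_per_doubling = 10    # assumed: resources must double every 10 subjective years

doublings = math.log2(galaxy_atoms / mind_atoms)  # ~140 doublings available
lifespan_years = doublings * years_per_doubling

print(f"~{doublings:.0f} doublings -> ~{lifespan_years:,.0f} subjective years")
# ~140 doublings -> ~1,395 subjective years: mere millennia, despite a galaxy.
```

Exponential growth makes the result insensitive to the assumptions: granting the mind a trillion times more resources adds only forty more doublings.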
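The figure of 150 in Dunbar's Function comes from a cross-primate regression of group size on neocortex ratio, extrapolated to humans. A minimal sketch using the coefficients commonly cited from Dunbar (1992); treat the exact numbers as approximate:

```python
import math

def dunbar_group_size(neocortex_ratio: float) -> float:
    """Predicted mean group size from Dunbar's primate regression:
    log10(N) = 0.093 + 3.389 * log10(neocortex ratio)."""
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# The human neocortex ratio is roughly 4.1, which extrapolates to ~148.
print(round(dunbar_group_size(4.1)))
```

Whatever the precise coefficients, predicted group size rises steeply with neocortex size, which is the post's point: minds with more capacity to track relationships could sustain genuinely larger communities.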
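A toy contrast for In Praise of Boredom (this model is invented here, not taken from the post): a plain expected-utility maximizer with a fixed utility over outcomes replays its best option forever, while an agent whose utility penalizes repetition keeps seeking novelty.

```python
from collections import Counter

LEVELS = {"A": 10.0, "B": 8.0, "C": 7.0}  # assumed fixed payoff per play

def next_choice(boredom_penalty: float, plays: Counter) -> str:
    # Each prior play of a level subtracts `boredom_penalty` from its value.
    return max(LEVELS, key=lambda lv: LEVELS[lv] - boredom_penalty * plays[lv])

for penalty, label in [(0.0, "plain maximizer"), (1.5, "boredom-prone agent")]:
    plays, history = Counter(), []
    for _ in range(6):
        choice = next_choice(penalty, plays)
        plays[choice] += 1
        history.append(choice)
    print(f"{label}: {' '.join(history)}")

# plain maximizer: A A A A A A
# boredom-prone agent: A A B A C B
```

The first agent's output is exactly the "best level over and over" future the post warns about; human-style boredom is extra structure in the utility function that rules it out.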