Utopias are hard to believe in.  

Our sense of normality - and thus our frame of reference for the world - follows from what we've seen and experienced in the world around us.  But utopias are hard to write, so, never having seen one, we're left with a glaring emptiness in place of cached thoughts about a future that actually reaches the ceilings of hedonism.  

When you're a writer trying to create a utopian world, historically, you'll end up with one of a few kinds of stories.  You either envision a world that seems perfect to its people - as in Brave New World, for the most part - but that we can see is flawed by our own morality, and which therefore hardly qualifies as a utopia to us.  Or you create a seemingly utopian world, filled with hedonically satisfied people, but one that harbors a sinister secret that breaks the illusion - in some stories, that secret arises as a natural consequence of trying to build a utopia at all.  And if you really tried to think of a perfect world, you'd probably realize you were wasting your time; utopias are boring.  Writing needs challenges for its characters.

This criticism carries over to the real world.  The idea of eternal boredom is commonly touted as an argument against life extension, transhumanism, and sometimes the very ideal of a better future.  At first glance, it may sound like understandable rhetoric - people routinely get bored with things even today, when opportunities for happiness are far below their upper limits (though still higher than at any point in history).

But the future wouldn't resemble the present, and in less obvious ways.  The scientific prowess we would have to develop before we could viably imagine actually creating a utopia is still leagues beyond us (from a technical perspective - from a chronological perspective, it might be only a few decades before we have AGI).  Humans are marvelously bad at predicting the growth of new technologies, but there are a few we can speculate about today that we have reason to believe can be achieved with time and sufficiently large amounts of resources (which we could obtain through interstellar exploration or, on a long enough timeline, structures like Dyson spheres).  Some modern ideas may run against what we consider today to be fundamental laws of physics, but those are outside the scope of this post.

The technologies we would conceivably need to build a utopia center around exerting absolute control over our environment.  Most if not all of the problems we face in modern society originate from givens of our native environment that we take for granted - the physical weaknesses of our species, resource scarcity, natural disasters, and, on a meta level, characteristics of humanity such as greed or hatred that resulted from evolution.  The former seem more obviously workable - genetic modification or transitioning to digital minds, along with interstellar resources, all weaken the direct hold of the natural world on us.  But its indirect hold - the results of natural processes that form a part of us - poses problems that are harder even to define properly.

But humans aren't special, not in a purely physical sense.  The human mind can be described in terms of signals, subroutines, and memory storage, even if doing so is beyond us now.  So it is possible, given enough technological superiority, to alter the presuppositions of the human mind.  In other words, we might have the power to eliminate the negative results of evolution - hate, anger, a plethora of psychological effects we try to overcome in our daily lives anyway, and yes, boredom.  There have been posts along these lines before that advise heavy caution about changing emotions, but I see this as one of a class of possible solutions to the boredom problem, some of which are pointed out in the Fun Theory Sequence.

Boredom was, and is, useful in our native environment for many reasons, including driving scientific advancement.  But in a future where advancing technology's returns on the human condition stop compensating for a state of less-than-perfect hedonism, we can imagine editing boredom out of our lives, or giving each person direct control over when and how they experience it.

This seems like an alarming proposition.  We're talking about changing what we fundamentally are, after all.  But consider that we already do that in our daily lives.  People distance themselves from certain others because they feel terrible in their company, or from certain communities, even well-aligned ones, because constant feelings of hatred and anger are unhealthy.  The most common argument I foresee against this is that giving us root control over our own emotions removes unpredictability, and thus a vital component of what excites us.  Whether that's true is a discussion beyond the scope of this post, but I will say that if it is, the obvious follow-up is that we can change that as well, and make our subjective experience post-transition as enjoyable as before.

Going back a bit, what would a utopia then look like?  We've assumed something approximating perfect control over our environment.  By that point we'd most likely have moved beyond physical bodies and any sort of self easily recognizable to our present selves, but for the sake of clarity, I'll assume that we're still born into human bodies, and live in a society that, while not similar to ours in content, is similar in structure.

From the very beginning of life, we'd want to remove all potential for defects in the bodies inhabited by future humans.  In fact, while we're at it, why not give everyone a perfect body?  And since by that point we'd have the precision to distinguish even extremely minor variations in the genetic code that give one individual an advantage over another, it seems humane and moral to work from a literally perfect, ideal genetic template for everyone (this already carries dystopian overtones of Brave New World, but not every idea expressed in a dystopian novel is necessary to the dystopia; some are just used to fit its negative ideals).

What happens after birth?  Even today, we've eliminated a great number of hindrances to proper human growth, such as malnutrition, from many countries.  It thus makes sense to remove every hindrance we can from a child's life, such as the potential for disease or injury (by this point, we'd be past the stage where life is inherently unfair and an occasional source of pain that children must learn to tolerate in order to survive as adults), and to provide nothing less than the absolute best care for instilling the values we cherish.  Because children gain different levels of advantage from different levels of care (children of the rich often end up better equipped to accomplish their goals than children of the poor, even if the financial difference is removed after the fact), it again seems morally right to give everyone the exact same, maximal level of care.  This would be an environment filled with substantially more joy and delight than even our best nurseries today, while still nurturing the ideals we value.

What would the world look like after adolescence?  Everything in our space would be of our design; leaving anything up to the temptations of chance would seem unfair to the people affected by it.  So from birth onward, their lives would be a perfectly arranged paradise.

At this point, you might realize that every person created by this system, while what we would today call an ideal individual, is identical to every other.  Personalities, ideas, interests, and skills are formed by genetics and nurture, both of which we've now optimized to the same ideal.  What we're left with is a society of identical individuals, in body and mind.  (By that point we, the inhabitants of Old Earth, would most likely have altered ourselves toward perfection of our own volition - and given time and the social nature of humanity, that idea of perfection would probably converge.  If it looks like it might not, that's where this analogy of still-physical, human-like bodies fails: what we'd have then would be so far removed from what we know today that we hold no strong preconceptions about which forms, in that nearly infinite possibility space, we'd consider ideal.)

I'll admit to a fair level of personal horror at that thought when it first occurred to me, in the middle of a heated argument about Brave New World.  But utopias are scary by design.  Even a reasonably smart person from the past would be horrified at our present.  It isn't just values dissonance - someone intelligent enough would probably agree that life is better now - it also demonstrates the extent to which normality biases our judgement.

When you view the future as a perfect paradise filled with innumerable copies of functionally the same sentient being, it doesn't seem like a utopia.  Not even like a flawed utopia.  Our first reaction probably screams dystopia.  But declining to control nature when we have that power can be seen as a great act of evil, considering the relative pain, or foregone happiness, that choice inflicts on people.  Maybe if we break the transition from our modern society to that world down into steps more tangible to us, the choices leading there will look more individually consistent with our moral values.

Comments

How do you motivate the embedded assumption that there is no such thing as harmless variation?

I was thinking about less ideal variations more than explicitly harmful ones.  If we're optimizing for a set of values - like happiness, intelligence, virtuousness - through birth and environment, then I thought it unlikely that we'd have multiple options with the exact same maximal optimization distribution.  If there are, then yeah, the identical-people part doesn't hold - and if there's more than one option, there are likely many, so there might be no identical people at all.

Yes, it's unlikely that the utility turns out literally identical. However, people enjoy having friends that aren't just clones of themselves. (Alright, I don't have evidence for this, but it seems like something people might enjoy.) Hence it is possible for a mixture of different types of people to be happier than either type on its own.

On some computational theories of consciousness, there is no morally meaningful difference between one mind and two copies of the same mind.

https://slatestarcodex.com/2015/03/15/answer-to-job/

Given the large but finite resources of reality, it is optimal to create a fair bit of harmless variation.

However, people enjoy having friends that aren't just clones of themselves.

This is true, yeah, but I think that's more a natural human trait than something intrinsic to sentient life.  It's possible that entirely different forms of life would still be happier with different types of people, but if that happiness is what we value, wouldn't replicating it directly achieve the same effect?

Maybe there's a combination of birth and environment conditions that maximizes utility for an individual, but we may have different values for society in general, which would lead to a lower overall utility for a society of identical people. For example, we generally value diversity, and I think the utility function we use for society in general would probably return a lower result for a population of identical, optimally born and raised people than for a diverse population of slightly-less-than-optimally born and raised people.

If we hold diversity as a terminal value then yes, a diverse population of less-than-optimal people is better.  But don't we generally see diversity less as a terminal value than as something useful because it approximates terminal values?

I think at least some people do, but I don't have a good argument or evidence to support that claim. Even if your only terminal values are more traditional conceptions of utility, diversity still serves those values really well. A homogeneous population is not just more boring, but also less resilient to change (and to pathogens, depending on the degree of homogeneity). I think it would be shortsighted and overconfident to design an optimal, identical population, since they would lack the resilience and variety of experience needed to maintain that optimum once any problems appeared.

Boring matters only if they consider it a negative, which isn't a necessity (boredom being something we can edit if needed).

Re: resilience, I agree that those are good reasons to not try anything like this today or in the immediate future. But at a far enough point where we understand our environment with enough precision to not have to overly worry about external threats, would that still hold? Or do you think that kind of future isn't possible? (Realistically, and outside the simplified scenario, AGI could take care of any future problems without our needing to trouble ourselves).

The final chapter of "The Worm Ouroboros" has something to say of one failed utopia.

"All were silent awhile. Then the Lord Juss spake saying, "O Queen Sophonisba, hast thou looked ever, on a showery day in spring, upon the rainbow flung across earth and sky, and marked how all things of earth beyond it, trees, mountain-sides, and rivers, and fields, and woods, and homes of men, are transfigured by the colours that are in the bow?"

"Yes," she said, "and oft desired to reach them."

"We," said Juss, "have flown beyond the rainbow. And there we found no fabled land of heart's desire, but wet rain and wind only and the cold mountain-side. And our hearts are a-cold because of it."

The Queen said, "How old art thou, my Lord Juss, that thou speakest as an old man might speak?"

He answered, "I shall be thirty-three years old tomorrow, and that is young by the reckoning of men. None of us be old, and my brethren and Lord Brandoch Daha younger than I. Yet as old men may we now look forth on our lives, since the goodness thereof is gone by for us." And he said, "Thou O Queen canst scarcely know our grief; for to thee the blessed Gods gave thy heart's desire: youth for ever, and peace. Would they might give us our good gift, that should be youth for ever, and war; and unwaning strength and skill in arms. Would they might but give us our great enemies alive and whole again. For better it were we should run hazard again of utter destruction, than thus live out our lives like cattle fattening for the slaughter, or like silly garden plants."

If you ask people how much income would be "enough" for them, they generally pick (so I have heard) about 10 times their present income, whatever their present income is. This is the largest amount they can imagine being able to spend. They cannot imagine how anyone could need more, and they think anyone who wants more is wrong to want it.

To a homeless person on the streets, paradise is a house of their own and a steady job. To someone with the house and the job, paradise is a big house and a pile of cash to retire on. Of those with the big house and the pile, maybe some do just laze around, a mere two utopias up from living on the streets - especially if they're peasants with a huge lottery prize - but I think most people at that level use their lives more purposefully. The utopia above that level might be daydreaming of being CEO of a major conglomerate, with underlings to do all the work, and spending one's life flying around to parties on tropical islands in a private jet. But people at that level don't live like that either. Elon Musk is three or four utopias up from living on the streets, and he spends his time doing big things, not lazing in luxury.

When people imagine utopia, their imagination generally goes no further than imagining away the things in their life that they don't like, that they can imagine being gone, and imagining more of the things that they do like, that they can imagine having. And what they can imagine goes no further than what they see around them. But having more wealth and using it brings new possibilities, good and bad, that the poorer person never knew existed. The more you know, do, and experience, the more you discover there is to know, do, and experience.

The concept of a final "utopia" is a distraction. Reach the next utopia above ours and there will still be things worth doing. If the sequence does end somewhere, I hope that we climb up through as many utopias as it takes to find out.

We value new experiences now because without that prospect we'd be bored, but is there any reason why new experiences necessarily form part of a future with entirely different forms of life?  There could be alien species that value experiences more the more they repeat them; to them, new experiences may be seen as unnecessary (I think that's unlikely under evolutionary mechanisms, but not under sentient design).

I'm not interested in the values of hypothetical aliens, especially those dreamt up only to provide imaginary counterexamples.

My point was that we value new experiences. Future forms of life, like humans after we can alter our preferences at root level, might not find that preference as necessary. So we could reach a level where we don't have to worry about bad possibilities, and call it "paradise".

I don't really think endless boredom is as much of a risk as others seem to.  Certainly not enough to be worth lobotomizing the entire human race in order to achieve some faux state of "eternal bliss".  Consider, for example, that Gödel's incompleteness implies there are a literally infinite number of math problems to be solved.  Math not your thing?  Why would we imagine there are only a finite number of advancements that can be made in dance, music, poetry, etc.?  Are these fields less rich than mathematics somehow?

In my mind the only actual "utopia" is one of infinite endless growth and adventure.  Either we continue to grow forever, discovering new and exciting things, or we die out.  Any kind of "steady state utopia" is just an extended version of the latter.

I don't think I see how Gödel's theorem implies that.  Could you elaborate?  Concept space is massive, but I don't see it being literally unbounded.

Certainly not enough to be worth lobotomizing the entire human race in order to achieve some faux state of "eternal bliss".

If we reach the point where we can safely add and edit our own emotions, I don't think removing one emotion that we deem counterproductive would be seen as negative.  We already actively try to suppress negative emotions today; why would removing one altogether be more significant in an environment where its old positives don't apply?

Either we continue to grow forever, discovering new and exciting things, or we die out.  Any kind of "steady state utopia" is just an extended version of the latter.

Why is a steady state utopia equal to us dying out?  I can see why that would be somewhat true given the preference we give now to the state of excitement at discovery and novelty, but why objectively?

Gödel's incompleteness implies that the general question "is statement X true?" (for arbitrary X) can never be answered by a finite set of axioms.  Hence, finding new axioms and using them to prove new sets of statements is an endless problem.  Similar infinite problems exist in computability ("Does program X halt?"), computational complexity ("What is the Kolmogorov complexity of string X?"), and topology ("Are two structures which have properties X, Y, Z... in common homeomorphic?").
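
To make the computability example concrete, here is a minimal Python sketch of the classic diagonalization argument (the names halts and diagonal are hypothetical; the whole point is that no total, always-correct halts can actually be written):

```python
def halts(program, arg) -> bool:
    """Hypothetical oracle: True iff program(arg) eventually halts."""
    raise NotImplementedError  # the argument below shows no such function can exist

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about running
    `program` on its own source."""
    if halts(program, program):
        while True:        # oracle says "halts", so loop forever instead
            pass
    else:
        return "halted"    # oracle says "loops", so halt immediately

# Feed diagonal to itself: if halts(diagonal, diagonal) returned True,
# diagonal(diagonal) would loop forever; if it returned False, it would
# halt.  Either answer is wrong, so no such oracle exists - and questions
# of the form "does program X halt?" never run out.
```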

Why is a steady state utopia equal to us dying out?  I can see why that would be somewhat true given the preference we give now to the state of excitement at discovery and novelty, but why objectively?

I should clarify, this is a value judgement.  I personally consider existing in a steady state (or a finitely repeating set of states) morally equivalent to death, since creativity is one of my "terminal" values.

If we reach the point where we can safely add and edit our own emotions, I don't think removing one emotion that we deem counterproductive would be seen as negative.

Again, this is a value judgement.  I would consider modifying my mind so that I no longer cared about learning new things morally repugnant.

It's probably worth noting that my moral opinions seem to be in disagreement with many of the people around here: I place much less weight on avoiding suffering and experiencing physical bliss, and much more on novelty of experience, helping others, and seeking truth, than the general feeling I get from people who want to maximize qualia or who don't consider orgasmium morally repugnant.

Hence, finding new axioms and using them to prove new sets of statements is an endless problem.  Similar infinite problems exist in computability ("Does program X halt?"), computational complexity ("What is the Kolmogorov complexity of string X?"), and topology ("Are two structures which have properties X, Y, Z... in common homeomorphic?").

Aren't these single problems that deal with infinities, rather than each being an infinite sequence of problems?  Would that kind of infinity bring about any more sense of excitement or novelty than discovering, say, the nth digit of pi?

It's probably worth noting that my moral opinions seem to be in disagreement with many of the people around here: I place much less weight on avoiding suffering and experiencing physical bliss, and much more on novelty of experience, helping others, and seeking truth, than the general feeling I get from people who want to maximize qualia or who don't consider orgasmium morally repugnant.

Out of curiosity, if we did run out of new exciting truths to discover, and there was a way to directly feel the exact same thrill and novelty that you would have felt in those situations, would you take it?

Aren't these single problems that deal with infinities, rather than each being an infinite sequence of problems?  Would that kind of infinity bring about any more sense of excitement or novelty than discovering, say, the nth digit of pi?

The n-th digit of pi is computable, meaning there exists a deterministic algorithm that runs in finite time and always gives you the right answer.  The n-th Busy Beaver number is not, meaning that discovering it will require new advancements in mathematics.  I'm not claiming that you personally will find that problem interesting (although mathematicians certainly do).  I'm claiming that whatever field you do find interesting probably has similar classes of problems - a literally inexhaustible supply of interesting ones.
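
For contrast, here is a minimal Python sketch of that computability (pi_digits and arctan_inv are illustrative names, and the precision handling is simplified): a deterministic procedure that yields as many digits of pi as you ask for, via Machin's formula - exactly the kind of finite algorithm that provably cannot exist for Busy Beaver numbers.

```python
from decimal import Decimal, getcontext

def pi_digits(n: int) -> Decimal:
    """Compute pi to n decimal places via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = n + 10              # working precision plus guard digits
    eps = Decimal(10) ** -(n + 5)           # stop once terms are negligible

    def arctan_inv(x: int) -> Decimal:
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
        x2 = x * x
        term = Decimal(1) / x               # unsigned magnitude of current term
        total = term
        k, sign = 1, 1
        while term > eps:
            term /= x2
            k += 2
            sign = -sign
            total += sign * term / k
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return pi.quantize(Decimal(10) ** -n)   # round to exactly n places

print(pi_digits(50))  # 3.14159265358979323846...
```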

Out of curiosity, if we did run out of new exciting truths to discover, and there was a way to directly feel the exact same thrill and novelty that you would have felt in those situations, would you take it?

No.  I would consider such a technology abhorrent for the same reason I consider taking a drug that would make me feel infinitely happy forever abhorrent.  I would literally prefer death to such a state.  If such a mindset seems unfathomable to you, consider reading about the death of Socrates, since he expresses the idea that there are things worse than death much more eloquently than I can.

I would consider such a technology abhorrent for the same reason I consider taking a drug that would make me feel infinitely happy forever abhorrent.

What reasons are those?  I can understand the idea that there are things worse than death, but I don't see what part of this makes it qualify.

What reasons are those?  I can understand the idea that there are things worse than death, but I don't see what part of this makes it qualify.

Can you imagine why taking a drug that made you feel happy forever but cut you off from reality might be perceived as worse than death?