Utopiography Interview

by plex
22nd Oct 2025

It serves people well to mostly build towards a good future rather than getting distracted by the shape of utopia, but having a vision of where we want to go can be helpful for both motivation and as a north star for guiding our efforts.

Publishing this was prompted by the ACX grant to generate stories of good things with AI.

 

Opening Vision

Utopiography: What would you like it to be like?

plex: A large proportion of my value structure is pointed at other people's values. I would like many different utopias to flourish. I think that the universe is very big and has room for many of the different things that humans could unfold into. And I would like to see it filled with a diverse garden of different mind spaces, different cultures and recombinations and extrapolations of what we have become.

Utopiography: Is your preferred version of that largely taking place in the physical world, like on many planets and stuff, or do you see it as mostly taking place in some kind of cyberspace digital realm?

plex: Both. I think that a bunch of people have a pretty strong preference for this whole flesh and bone and atoms thing. But also, it's just much less efficient for some things that you want to do. And if you don't have preferences that are strongly pointing towards wanting to keep stuff physical, then as well as being more efficient, digital just gives you a lot more options. So I imagine that probably the majority of utopia is uploaded in one way or another—but not like a massive, massive majority. There's some significant minority that probably decides that they want to use their part of the cosmic commons to do things that are directly physical.

Utopiography: And you'd like it if we can keep doing the human thing, or whatever that morphs into, for all of the time that there is until entropy runs out? And beyond, if we can figure out a way past that hurdle?

plex: Yeah. It would be nice for our descendants, and the descendants of many forms of descendants, to fill the space between the stars and explore the possibilities for as long as we can. There are many interesting patterns to create and become. I would like to see many of them instantiated.

Escaping Moloch

Utopiography: And how do you see us getting past the Moloch stage? You know, when things are largely driven by those kinds of incentives—the patterns which self-replicate and destroy the cosmic commons and sacrifice common value in order to be the kind of thing that has more power and has more copies of itself?

plex: Yeah. So basically, the only route that I see to hold back Moloch forever is a singleton. I think you need a system which is an overseeing body over all the other systems, which is the guardian of the commons and is the guardian of the weak—or the patterns which are not focusing their energy on defending themselves and spreading. That protects the parts of culture that are a delicate flower rather than an invasive bramble.

Utopiography: Okay, but that also gives the bamboo some space to flourish as well?

plex: Yes, but not disproportionately. It doesn't get to continue nibbling away pieces of the flower until there are no flowers.

Utopiography: So say we get all of that sorted and we've got a singleton and we've colonized all the spaces in between the stars and stuff. What's the first thing that you're gonna do?

plex: Me personally?

Utopiography: Yeah, work this one out.

plex: I haven't worked this one out yet. I've thought about it some. One thing I think I might do for a while: I think a lot of people would prefer to receive something like therapy from a human rather than from an AI. Even if they know the AI could do it better, a human who is actually good—in the sense of having been taught by the combined wisdom of much more than humanity has achieved so far in working with minds and helping them be healthy—I think a lot of people will want to be healed by a human. Because you can have an intersubjective relationship with the person that's healing you, whereas if it's an AI then it could do smoke-and-mirrors versions of that. It could do various different things, and maybe a lot of people would be quite happy with the best that it could put together. But I think there are going to be enough people who want to come out of this world and into the new one in a way which keeps it mostly humanity doing that. And I think there's a decent chance that, for some of the first few decades and centuries, I would get real good at that and do a bunch of it.

And the other thing that I've thought about is being a traveler—seeing the different things that unfold, going from world to world and seeing what humanity becomes.

There's also one fun one—taken from fiction that I actually haven't read—someone whose thing, for their first 100 years of utopia, is to watch everyone else's first few hours of utopia, just one after the other. Watching people wake up and be like, "Oh, I have legs again! Wonderful! Oh, things are certainly so much better!"

The Transition Period

Utopiography: Is that how you see it being? That one day we'll just switch on utopia and then we'll really wake up and be like, "Oh, it's a different world now"?

plex: Maybe. I think probably there will be a transition, and I think that transition will be somewhat clear when—it depends on how fast takeoff is. But if takeoff is as fast as some think it might be, you might end up with a situation where you have a system that is trying to improve humans' position and trying to make things good for sentient life in general. And over the course of a few hours to a few weeks, it moves from being mostly insignificant to having more ability to steer reality than humanity in its entirety.

And this system will have a choice of how to interact with this. There are some interesting trade-offs to be made. You don't want to shock everyone too much, and you don't want to be too invasively manipulative in the way of making everyone be happy with everything suddenly changing by kind of brain-hacking them with flashing lights or whatever.

But I think that there's a version of it where bad things stop happening. It's not that you suddenly fix everything in one go, but everyone's chronic diseases are suddenly healing way faster than anyone expects. No one has terrible life events happen to them, because things are just subtly nudged so that terrible events don't occur. All the wars fairly quickly wind down. No one really notices immediately, but all the pollution just kind of subtly vanishes somehow.

I think the version of this where there's this branch where it reaches out to everyone, kind of gets everyone's opinion—that seems possibly good, but has to be done with a considerable amount of care and tact.

Utopiography: What do you mean, that the AI singleton contacts everyone individually and asks them how they want the utopia?

plex: Hey, planet Earth!

Utopiography: And the wars wind down, the chronic diseases can be healed. Presumably, you know, everyone has access to all the resources they need now and no one is starving. But what do you mean when you talk about things being subtly nudged in a direction where bad things don't happen anymore? You mean bad things in people's day-to-day personal lives? How does that work?

plex: Yeah. There's some fine lines to tread, and I think that as utopia unfolds fully, there will be differentiation between people's preferences. Some people will want a world in which there's a bunch of randomness, and some of that randomness is bad, because they won't want a micromanaged life. Some people will want to just live in a world which is consistently wonderful to them, and they're okay with having some nudges around them to make sure that that happens.

But I think early on, there's maybe something like—I don't know—no one gets hit by a car. Until we figure stuff out, there's no more severe injuries. Until we figure stuff out, there's no more...

Interventions and Technology

Utopiography: But how can you control that? Sometimes I injure myself because I just think that I can climb something that I can't and stuff like that. How can an AI protect me from that?

plex: Severe injuries—you probably don't. It probably depends on how much it intervenes in the physical environment. You can probably capture a fair amount of it by having very good predictive models, the phone sensors in everyone's pockets, and the ability to predict when things are going wrong. It's not really perfect with just that level of technology.

Probably once utopia is fully bootstrapped, you end up with a situation where, in the parts of utopia where people are happy with it, there are kind of pervasive nanobot swarms that just—if you fall off a cliff, then the cliff face grows arms and catches you or something.

Utopiography: Well, I'm not usually in favor of nanobot swarms, but the cliff growing arms? Maybe I'd visit that part of utopia sometimes. But then if you have access to amazing medical technology where any injury that you can imagine and any illness that you can imagine is completely fixable, you don't necessarily need to have everyone's actions predicted all the time anyway.

plex: Yeah, you can get away with a lot with just having really good medical tech. Especially if you have semi-regular brain scan backups where, if you injure yourself so badly that your brain is suddenly destroyed, then you can be rebooted from your last backup.

But in the very early stages, I don't know—there's something about the aesthetic of everyone noticing that bad things have stopped happening and being a bit confused about this for a while, starting to enjoy this, and then there's a conversation that opens up of, "What do we actually want to do with the future? What do different people's preferences add up to?"

Utopiography: I mean, a large proportion of the bad things that happen in people's personal lives are also down to people—unpleasant interactions with each other. And presumably, if we're not into brain-hacking, the only way that an AI can prevent that—well, you know, that can be prevented within the utopia—is through that process of a lot of therapy for a lot of people, until everyone that's alive is just people that have grown up in a perfect world and so, in theory, hopefully are just really nice all the time.

plex: Yeah. I think that there are versions of helping people to heal that aren't brain-hacking flavored. 

Utopiography: Definitely.

plex: The versions that are like communicating true things to them, helping people to understand themselves and understand minds and understand communication and understand how they can have the kinds of effects that they want on other people, how they can get good ways of relating—I think there are choice points here.

The main choice points are: do you try and shift the ecosystem towards things that are nurturing humans directly, or do you talk to the humans first and give them more choices? I'm not certain what the right choice is here, because you both want to start shifting things in reliably good ways quickly and also have people in the loop and be able to self-determine.

Utopiography: I think in this type of utopia, where every way of life that people can imagine is allowed to flourish somewhere in some form—well, you know, it's confusing because I kind of imagine that when this starts happening, there would be a huge number of people who are like, "I don't want to live in techno-utopia, even if everything's really nice, because it's fundamentally opposed to my principles." And you can be like, "Oh, that's okay, because you guys can go live in the bit of utopia where it isn't techno-utopia."

Yeah, like how do you get them there?

plex: You'd have to leave Earth for that one—everyone else goes to other places if they want to do other things. Yeah, that's the choice of what to do with the original Earth. You'd probably realistically want a bunch of different Earths—a bunch of different very Earth-like planets.

The Jerusalem Problem

Utopiography: But it's like, you know, colonizing America. You can say to some Native Americans, "Oh, it's okay, we're gonna take you off your ancestral lands because we want the gold, but we've got something just like it on the other side of the river." They don't want to go live over there, even if it's just like the place that they have traditionally lived, because that's not their place. This is their place. And people will feel like that about being told, "It's okay, you can go to utopia on this other planet that's just like Earth that we've made for you."

plex: This is the Jerusalem problem—who gets the original Jerusalem?

Utopiography: Yeah, exactly. Whatever you do with it, someone's gonna be pissed.

plex: Yeah. I think there is a thing where you're trying to fulfill people's values as best you can, but some people's values just aren't fully compatible and you do have to make trade-offs. And you have to make trade-offs for people that they're not necessarily on board with. You can try and engage them in processes that are as respectful to them as possible and try and get them to understand each other's points of view and stuff if they want to be hardline about it. But if you have an oversight system, then that oversight system does at some point have to take decisions, and sometimes those decisions will override some parts of some people's preferences.

Utopiography: What would your preference be as to what happens to the Earth? Do you have an idea about that or not really sure?

plex: Yeah, it's a special place, definitely. Keep Earth, even if we're using most planets much more efficiently. Earth is a historical artifact.

I don't know for sure how it shakes out, but I think that there will be some class of people who are most attached to Earth and are willing to trade away much more of their share of the cosmic commons to remain on it. Some people will be like, "Yeah, if I have the choice between being able to use an entire other galaxy for a trillion years or living one lifetime on Earth, I'm obviously gonna choose Earth."

There should be—not explicit trades and markets and buying and selling stars—but a simulation of how that would work out based on people's values.

Utopiography: Okay, but maybe even a market, or something in that direction where the people who are most attached to Earth get to stay, and the people who are less attached to Earth trade their share of Earth's future for a much larger share of some other part of the cosmos?

plex: Yeah, that makes sense. And I feel like most of the people that wouldn't make the trade and that would stay on Earth regardless of what else was offered to them—they must really love the Earth, so the Earth's gonna be all right. It's in good hands. Those people can figure out how to—which bits of utopia should be on the surface of Earth, whether you want the pervasive nanobot swarms that make the cliffs grow arms to save you, or how much of Earth should be perfect wilderness where if you get injured then maybe you just stay injured. How much of the Earth should be still kind of civilization-flavored, how much of the Earth should be different things. That's its own—it's a more constrained optimization problem because there is only so much literal original Earth.

But I think a lot of people—I think the majority of people who are like, "Yes, I want to live a nice normal life"—if you give them the option of having a kind of vast, unspoiled wilderness on a new planet that you just built yesterday, and they get to live in incredible luxury and beautiful things with no crowding at all, then I think a good portion of them are like, "Yeah, that seems reasonable. I'm happy to go on a short space trip to this other place that's basically Earth."

Space Travel

Utopiography: And all of this is the singleton being able to, you know, design technology where you can do all of these short trips, long space trips, whatever, without having any kind of knock-on environmental impacts to the Earth or whatever other environments we're now living in?

plex: It has the impact, but those impacts can be counteracted. I think that space travel the way we do it now is just very, very far from the most efficient way to do it. Instead of putting a load of very powerful explosives and kind of igniting them and trying to launch a chunk of metal up with just a bunch of fire—

Utopiography: Indeed, it's quite primitive.

plex: It is. This is not the final stage of space travel. You probably have, for things which don't mind high G-forces—like just mass that isn't biological—then you have an electromagnetic cannon that accelerates things to a very high speed, and then they just kind of get launched up into the air. There's a few different ways of doing it.

And then for things which are relatively delicate, then you want a tether. You have a big rock up in space and a rope that dangles down—or maybe spins round—that things kind of climb up the rope into space. And it can exchange momentum by putting some things down and picking some things up. This is more like going up a very long elevator than going into a rocket.
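(A rough sanity check on why the cannon is cargo-only, with assumed round numbers rather than anything from the conversation: reaching orbital speed along a fixed-length track implies a constant acceleration of a = v²/2L.)

```python
# Back-of-envelope: constant acceleration needed to reach low-Earth-orbit
# speed (~7.8 km/s, an assumed round number) along a launch track of
# length L, from a = v^2 / (2 * L).

G = 9.81          # m/s^2, one Earth gravity
V_ORBIT = 7800.0  # m/s, rough low-Earth-orbit speed

for track_km in (10, 100, 1000):
    length_m = track_km * 1000.0
    accel = V_ORBIT**2 / (2 * length_m)
    print(f"{track_km:>4} km track: {accel:>6.0f} m/s^2  (~{accel / G:>4.0f} g)")

# -> ~310 g on a 10 km track and ~31 g even on a 100 km one: fine for
#    "mass that isn't biological", far too harsh for passengers, hence
#    the tether for anything delicate.
```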

Meaning in Paradise

Utopiography: Yeah. Wow, such a strange world to imagine.

plex: I think a lot of people that envisage this kind of world where we live forever and there's infinite space and there's no societal conflict of any sort—just a perfect utopia that's infinite in space and time, you can do anything you want—

Utopiography: I don't think those are quite true.

plex: It's not forever—it's for as long as you want. It's not that there's no societal conflict—it's that different parts of society don't get to engage in conflict non-consensually. If two parts of society are both like, "Yeah, we want to have this big war," and they're both up for it and they sign up for it, then they can have some level of conflict allowed.

Utopiography: But in a way, that's not really a conflict, is it? It's like a game. If people have decided they want to have a war, and everyone in that war has decided they want to have a war with everyone else in that war, and they've decided the rules of the game, rather than just one side being like, "Hey, we've got bigger guns than you, let's steal your stuff."

plex: But there's no—most of, if not all of, recorded human history, at least, is kind of defined by the fact that your life is not entirely under your control and lots of bad things tend to happen.

Utopiography: That's not how I would just describe human history, you know.

plex: This is a major aspect of it. And there are other things, so I don't think it's a horrible story that we've got. But that is a defining feature of it, right?

Utopiography: Yes. And you would like to remove that feature from it?

plex: I would like it to be an optional feature which you can opt into, rather than just being kind of thrown onto everyone's plate without them agreeing to it.

Utopiography: And part of how life is when you haven't removed that feature of it is that you have to make choices and you have to limit yourself in a variety of ways. And you're limited by forces outside of you. Opportunities open and close and your life takes a certain shape, and you try and guide that in a way that is as good as it can be. And that gives your life meaning and purpose and stuff like this. And I think that it's quite common when presented with this type of utopia that people think, "Okay, but then what gives your life—where's the meaning? If you can just do whatever you want all the time forever, isn't it a little bit kind of empty in some way?"

plex: There are a few responses to this. One of them is something like: yeah, there's an aesthetic to challenge and real consequences and not just kind of having everything handed to you. And I think that that is an important part of the human experience which many people will want to keep. And I think that the natural state of the world doesn't do a very good job of fulfilling that value.

It hands you challenges which are just completely unfair, quite often—if you're a medieval peasant and your child dies of smallpox, okay, I guess this is life handing you challenges, but did you want that challenge? I don't think so. I think you can fulfill the good part and the positive intent behind this wanting of real stakes with better-chosen stakes.

Utopiography: Give some examples?

plex: So you could have a world that's like—people are mostly hunter-gatherer-ish level, and have it so that if you do a really bad job preparing for the winter, then the people who don't have enough food for that winter, they die. But actually they just go to a different part of the place, and they go to the land of the ancestors—except that this is real. And there's maybe special ways that you can, if you work really hard, bring them back into the tribe or get little windows of communication so that you get to speak to the person who, three winters ago, you didn't have enough food for, but you haven't been able to talk to for years.

Utopiography: It sounds really cool for sure. I would enjoy participating in that game. But still, it does kind of feel like, in comparison to if that was—you know, if you're talking about the real world where maybe you're a hunter-gatherer society and you didn't prepare well for the winter and then people died—that's real. And what you're describing is like a role-playing game that's probably really fun. And maybe that's enough meaning—to just have fun and play games forever. Do you think?

plex: I think probably yes. The—maybe it is. I don't know. I'm not saying it's not. It's just so different from the world that there's always been.

Utopiography: But you know, if we're lucky, a lot of us get to spend the first bit of our lives just having fun and playing games all the time, and then you spend the rest of your life like, "I wish I could go back to then." So—

plex: And imagine that, except that those first few years of childhood, instead of kind of being forced to prepare for a life later—to some extent, having a bunch of school force-fed to you and all of this stuff—you were like, "No, let's just unfold and see what you want to become. Let's explore the things that you want to and find your place in reality, and have a reality that is trying to help you do that."

I don't know. I do think there is something which is lost by the challenges being artificial. I do think there is something lost by the stakes not being as high. But I don't think that the amount that is lost from either of those even remotely compares to the way that reality is not designed for human flourishing, and the stakes being so high means that pretty often people lose catastrophically in ways that aren't really okay.

Heat Death - The Universe's Final Boss

Utopiography: Yeah. Yeah. There is the whole issue of negentropy. Do you think that at some point we'll figure out how to make it so that we actually have infinite time, or do you think that there is an end point eventually?

plex: Probably we don't beat the heat death. Probably it runs out eventually. I'm not super confident of that, but my kind of vague understanding of thermodynamics points to, "Oh yeah, maybe this is the kind of thing that's just pretty fundamental."

But it's really far off. Even if you just use stars, you can go for trillions of years. If you're more careful—if you use the interstellar and intergalactic medium, and if, once you've burned everything down to iron, you drop it into black holes in very careful ways to extract a bit more energy—there's enough for the light of consciousness to last a length of time that is incomprehensibly vast. But eventually it will get to the end.
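(Some rough orders of magnitude behind that, using standard textbook efficiencies rather than figures from the conversation: fusion releases well under a percent of rest-mass energy, while feeding matter to a fast-spinning black hole can in principle release a few tens of percent.)

```python
# Energy recoverable per kilogram of matter, E = efficiency * c^2,
# with ballpark efficiencies for each process.

C = 2.998e8  # m/s, speed of light

yields = {
    "hydrogen -> helium fusion (~0.7% of rest mass)": 0.007,
    "fusing all the way down to iron (~0.9%)":        0.009,
    "accretion onto a near-extremal black hole (~40%, theoretical ceiling)": 0.40,
}

for process, eff in yields.items():
    print(f"{process}: ~{eff * C**2:.1e} J/kg")

# -> black-hole disposal of spent iron beats fusion by a factor of ~40-50
#    per kilogram, which is why it shows up as the late-game move here.
```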

Utopiography: And what do you think life would be like for those people that know that it's about to end?

plex: Yeah. I don't know. I guess so much time will pass by then.

Utopiography: We don't even know—do you think that they will still be humans? The last people won't be flesh-and-blood humans?

plex: Flesh-and-blood humans are much more delicate and hard to maintain. The last flesh-and-blood human is a long time before the last computational being.

Utopiography: What makes it so that flesh-and-blood humans die out?

plex: Cold. It's really, really cold at the end of the universe. And the life support systems that you need in order to make there be oxygen and food and enough warmth—just the basic things that keep bodies alive—those things take a lot of energy to maintain compared to the amount that's left near the end.

Utopiography: So the utopia of having fun and playing games and the things that you talked about lasts for a certain period of time, and then the world gets really unpleasant for a while because it's getting colder and colder and stuff?

plex: Probably there are some things that are done to help people make their peace with it. But people do the kind of thing you do now when you have cancer and it's terminal and you need to face up to the fact that your time is coming to an end.

Even in a world where we win, it's not that you get to live forever. There is still a time limit. It's just that that time limit is not a few hundred years—it's trillions.

Utopiography: Yeah, and there is, in some sense, still that meaning of life of, "Oh yeah, we've got to do the best that we can with what we have because we don't have forever." Probably. I don't know. Maybe we do get forever. That would be neat. But even forever is kind of like—the difference—you just end up with cycles and attractors and—

Utopiography: But yeah. No, in this utopia as well—say, well, for one thing, presumably people keep getting born all the time, and the population can expand as much as is conceivable. So presumably you want to have some measures to—

plex: More near the start. You can have pretty explosive population growth early on, and the universe is just very big, so it works. But you can't have population growth that follows a smooth exponential for that long, even with the amount of resources that we have. If everyone has one baby every 10 years, you still run into physical constraints not that far away.

So you probably need things in the direction of allocating resources between people, in the sense of: if you have lots of children, then the amount of the cosmic commons that those children get decreases. Your fraction of reality is divided up amongst your children, and then each of them has less to divide up amongst their children.
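(To see how quickly "not that far away" arrives, here is a toy calculation; the 50-year doubling time and the ~10^80 baryon count are my assumptions, not plex's.)

```python
# Exponential growth vs. a fixed resource pool: count years until the
# population exceeds a rough estimate of all baryons in the observable
# universe, assuming one doubling every 50 years.

BARYONS = 1e80        # rough standard estimate
DOUBLING_YEARS = 50   # assumed; gentler than one baby per 10 years

population = 1e10     # roughly today's headcount
years = 0
while population < BARYONS:
    population *= 2
    years += DOUBLING_YEARS

print(f"~{years:,} years")  # -> ~11,650 years: an eye-blink on cosmic scales
```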

Utopiography: How is that—that's the same kind of issues that we have on the Earth today. That's why it's not a utopia.

plex: Yep. It is. Probably it just gives you a lot of slack. There are some weird tricks you can pull with quantum branching stuff that decoheres people into different branches. "Decoheres" is a nice word. What actually happens is you flip some kind of quantum coin, and then in each universe you kill a different half of the people so they continue to exist, but they're no longer in the same world. Which means you can avoid things getting subdivided really small in terms of how much space you get. But it just means you exist in fewer worlds, not you exist in a smaller fraction of the world. Maybe this is more tolerable for the people than getting squashed out.

But yeah, if we end up in a universe that has limits to how much resources exist, then at some point you have to face up to how we deal with end-stage population ethics. And I don't think any of the solutions to end-stage population ethics are like, "Cool, this is perfectly happy and I have no moral qualms about this." But I think that it's definitely better than what we have now, because you get a vast amount of slack—from a few billion people to many, many trillions.

Utopiography: And presumably—wait, so is it definite that, even in this utopia, there is a limit to the amount of resources we have to distribute, and that this would become an issue if the population continued growing the way you would expect it to without limitations being placed on it externally?

plex: I strongly suspect that physics puts some limits on how much stuff we get access to. I don't know this for sure. It would be neat if it's not the case. But I strongly suspect that there are limits at some point.

Utopiography: And you could have it so that the singleton does a bit of maths and figures out, "This is the maximum amount that the population can be," and then can tell everybody, "This is the maximum amount that you are able to reproduce," taking into account that each of their offspring could potentially max out their own quota of how much they can reproduce.

plex: Yep. I don't know that that's necessarily ethical. It just seems like that's possible. And it's kind of fair because everyone's given the same option. And you can do things in the direction of: you can have two children, and then if you have a third child, then in a hundred years you will die. So you trade your resources to give them to your child. And that's kind of how it works now, so people obviously are cool with that.

Utopiography: Why did I—that was just part of something else that I was trying to ask, then I've forgotten what the question was. So everyone—they keep being more people all the time—

plex: No, I forgot what it was.

Utopiography: We were talking about how sooner or later, at some point, it has to end and it's going to be really cold and everyone's gonna die.

plex: Utopia! This isn't specific to my vision of utopia. This isn't like what I want to happen. This is just how I expect physics works and what happens no matter what kind of utopia you build first.

Utopiography: Yeah.

plex: I hope we can figure out how to bypass the heat death, but it seems hard.

Some people have this feeling about the Earth, right, that we spoke about—it's really special because it's where we've come from and it's very sacred in some way. But for people who have grown up on another planet, or grown up between planets and stuff like that, they may have that same feeling about the whole universe.

Utopiography: Yeah. That's quite interesting. Although the Earth will always be the place that life originated from. But you know, it's not like—I feel like the Earth is sacred. I don't know that I feel like—

plex: This specific bit of it—the bottom of the ocean is where life basically originated. I don't think the bottom of the ocean is the most sacred bit of the Earth. I'd never really considered it.

Utopiography: But it's interesting to think about the possibility of intergalactic animism and whether that's as good as purely Earthly animism.

plex: And for all of these people who grow up—who were born post-singularity or post-utopia or whatever—their world has always been utopia and that's their native culture. They also have always grown up knowing that in trillions of years it's going to get really cold and everyone's gonna die. So in a way, the stakes are just as real, but it's just a much longer time span. But your life is also a much longer span.

Utopiography: Yeah. Rather than having—in 50 or 80 or 100 years, I'll grow old and fail and my mind will come apart—it'll be, "Yeah, if I decide to stick around for the countless trillions of years until the stars go out and the world goes dark and the universe goes dark, what do I want to do with the time I have left?" You just have a lot more time left.

Is Eternity Too Much?

Utopiography: But as someone who's always thought that I've got probably around 80 years or something—80 years doesn't really feel like enough. It's definitely not enough to do all of the things I would like to do in my life. But maybe trillions is too much.

plex: It's too much?

Utopiography: Yeah.

plex: So people don't have to stick around. I think in the very early bits of utopia, we should have quite a lot of caution around suicide. I think that people who decide—who have just lived through really hellish parts of present-day Earth—should have a chance to get some real good therapy before they decide to end it.

But once you've lived in utopia for a thousand years or a million years, if you're like, "No, actually, I think I've done all the things that I want to do and I want to have a closing ceremony," and either retire temporarily—put yourself into stasis, wake up one day every thousand years and see how civilization has progressed—or, if you actually want to cease to exist, that should be an option. Or various other fun things come into play.

Or: "I would like to age myself back to a three-year-old and then grow up again with these specific people as parents," or with no memories of the life you had before, or with less strong memories, because as a three-year-old your memories will have become very malleable again.

plex: Interesting. It's like—I do see the thing of, "Yeah, countless trillions of years, this may be too much." But I think that this is a thing that can be worked with—I think you have options. And I think that those options mean that it's not horrifying, but instead just like, "Oh, this is another part of life that you have to think about—how best to navigate."

Utopiography: That was quite fun—the de-aging and choosing who's going to be your parents. Like the fun thing of that old saying, "You didn't choose your parents," but you did.

plex: So there's a thing—you know, you have children, you birth them and change their nappies and stuff, and then one day you get old and they do that stuff for you. But now you can extrapolate that out to your whole social circle over, say, trillions of years.

Utopiography: Okay. You said that we should be cautious about suicide in the early days of utopia, and people that have had really hellish experiences obviously will have the opportunity to have loads of therapy, and everyone knows that their hellish experiences aren't ever going to happen again, so hopefully they feel like there's plenty of hope. But what about people that—there might be some people who are like, "I would rather die than live in this new utopia that you've created." What do you do? Restrain them until they stop saying that?

plex: Try and present them with true things that are not like hacking their brain. Try and show them what the options actually were. Help them to understand what's going on. And if at the end of that, their true values and the true thing that is at their core just does not want to exist—not just not want to exist, but even the most non-invasive parts of utopia that you can have—

And the most non-invasive parts probably still have just a little bit of oversight. It's probably important that nowhere in utopia people can build other super-intelligences that will go on a rampage and wreck everyone else. And also there's that thing where everyone has to know that there are other worlds that they can live in if they don't like this one. The children who are born into new worlds should not be restricted from knowledge that there are other options. They should have exit rights and exit knowledge.

But if the least invasive parts of utopia are unsatisfying to someone, then that is a tragedy and we should try and find ways to make that work. But if we actually can't, then yeah, they should have exit rights too. And their exit rights shouldn't—the choice that you're looking at there is, "Do these people get satisfied?"

And I think there's probably a pretty small minority who, if they really looked at the thing and saw the structure of what is possible and how much this was trying to help them, would actually choose that.

If the choice is between these people and everyone else and all of the other value systems, I think that it is kind of a sad thing—there are trade-offs and you want to try and minimize them. A big part of what I want from the whole system is to be as close to Pareto optimal as possible—that it is as good as possible from some combination of people's values. That you can't make a world that's just definitely better for one person without it being notably worse for another. That you're kind of on some frontier of how good it is for a large number of different people.
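(A minimal sketch of what that frontier condition means; the scores and outcome names below are made up for illustration, not anything from the conversation.)

```python
# An outcome is Pareto-dominated if some other outcome is at least as
# good for everyone and strictly better for someone; the frontier is
# whatever survives that test.

def dominates(a, b):
    """True if score profile a Pareto-dominates profile b."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(outcomes):
    return [o for o in outcomes
            if not any(dominates(other, o) for other in outcomes if other != o)]

# Hypothetical scores: how good each candidate future is for three people.
futures = {
    "A": (9, 2, 5),
    "B": (7, 7, 7),
    "C": (6, 7, 7),   # dominated by B: worse for person 1, no better for anyone
    "D": (2, 9, 6),
}

frontier = pareto_frontier(list(futures.values()))
print([name for name, scores in futures.items() if scores in frontier])
# -> ['A', 'B', 'D']; C is off the frontier, everything else is a trade-off.
```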

How much people in the present would endorse this being the future, and how much people in the future would be happy with this having become the future—because people in the present want that. You don't want utility monsters pulling themselves into existence by really, really wanting to have come into existence.

So you have to anchor at some point in reality. If there's some hypothetical being that just maximally wants to exist, then you don't have the moral obligation to fill the universe with copies of this entity because it would really want to exist if it existed. You have a responsibility to the entities that exist. And I'm pretty sure entities that exist would prefer the universe to be filled with entities that are happy to exist, not with the entity that most wanted to exist.

Utopiography: Because the one that most wants to exist is bad? It's probably pretty weird.

plex: This is a whole other thing—copying. It's probably technically feasible in this utopia to create copies of yourself. I don't know how much that should be allowed and encouraged. I think that some level of copying is probably okay, but it's pretty easy to get a really nasty case of Moloch if you're allowed to copy yourself in a way that's not restricted. Because then the person who most wants to copy themselves copies themselves the most, and then the variants of them who try the hardest to copy themselves copy themselves most, and you end up with this kind of weird mind that is hyper-obsessed with copying itself, tiling large parts of reality. And this doesn't seem beautiful or something.

Utopiography: Well, it should be—the same rules should apply to copying yourself as apply to having children.

plex: Yeah. You can do things in the direction of restrictions where you're giving up some of your own existence in order to create copies—that seems viable. There are versions of sandboxing: if parts of reality go on a copying spree, then they get either decohered from the rest of reality or somehow sandboxed away from it, so they're not just cancering up everything.

Utopiography: Oh, because you feel like it should still get to exist if it's a thing that's existing and it wants to keep existing, but it needs to be not part of everything else?

plex: Yeah. You can't have systems—sub-processes—which are just hyper-obsessed with becoming homogenizing swarms: swarms of systems which turn everything into things that are more like themselves. I think the universe is less interesting and less beautiful and less elegant if it gets eaten by Molochs.

And this is an interesting challenge, because I think it pops up in countless different forms. I think it's hard to have many forms of informational process that don't have a little bit of Moloch peeking in at the edges. Even language is self-replicating patterns of thought, and the patterns of thought that are good at self-replicating are the ones you find occupying other people's minds.

Utopiography: That's an interesting thing that comes up with—that's the GPT thing where, you know, it has thicker lines to show you what it thinks is the most probable next string. And it always really wants to just go on a loop once it's got a few things in place. Its thickest line is to just start doing that again, and then if you follow that, it will just keep doing that forever.

plex: I don't know if that's because of the just innate self-replicating desire of systems. I think there's a parallel to be drawn, at least.
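(The loop behavior itself falls out of the mechanics: a process over finitely many states that always takes its single most-probable transition must eventually revisit a state and then cycle forever. A toy illustration with made-up probabilities, not a real model:)

```python
# Greedy "follow the thickest line" decoding over a toy next-token table:
# always taking the argmax transition drops into a repeating cycle.

transitions = {  # hypothetical next-token probabilities
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"the": 0.8, "on": 0.2},  # the thickest line leads back to "the"
    "dog": {"ran": 1.0},
    "ran": {"the": 1.0},
    "on":  {"the": 1.0},
}

token = "the"
for _ in range(9):
    print(token, end=" ")
    token = max(transitions[token], key=transitions[token].get)
print()
# -> "the cat sat the cat sat the cat sat": a self-reinforcing loop.
```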

Utopiography: Yeah. Figuring out—I mean, this is partly why you need an oversight system. Because without a really good oversight system, it's hard to see how Moloch doesn't win in the end.

plex: Even though I think that a superintelligent singleton is far more likely than not to destroy everything of value—most singletons you could build don't build beautiful utopias—even with this, I think that we're better off having the chance of that. Because I think that without having that one moment in history where there is some value written into the stars, eventually it gets eroded by Moloch and you end up with an empty world or a hell world—something that doesn't contain the kinds of things that I hope the future will contain.

Utopiography: I guess my question—the thing I'm interested to find out, and maybe it doesn't really exist because you're gonna live for trillions of years—but what I want to know is: what's your trajectory within the utopia that you imagine?

plex: So I don't know what I'll unfold into. I think part of this is that a disproportionately large portion of what makes up my soul is trying to twist the threads of history to make it so that we get there at all. And I haven't left a lot of room in myself for being the thing at the end of the rainbow.

Utopiography: But you have loads of time to fill in that stuff once it happens.

plex: Yeah. I can look back at my past from before this and look at the bits that I have kept, the bits that make me feel more human, and give some sketches. It'll involve people. It'll involve learning. It might involve organizing and building systems.

Utopiography: When you say "makes you feel more human," what is that opposed to? What are the other things you can be made to feel like—

plex: A system that is single-mindedly pursuing an objective. A system that is just trying to get to a goal and sustain my body and mind instrumentally, rather than as a final goal.

Utopiography: And the things that make you feel human—those are the things that you feel like you're doing them because they are the goal in themselves?

plex: I mean, realistically, what it mostly is: those are the things that sustain me enough that the part of me that's trying to rearrange the future keeps them around, doesn't weed them out. There are hints about the kind of things that I would probably unfold into.

Life in Utopia

Utopiography: I think it sounds like a really fun world. I feel like that's the primary value that's being optimized for, after removing all of the bad stuff. Do you think that's true, or is that just how I'm looking at it?

plex: It's one way of looking at it. I think that captures a bunch of it. But I think that the core thing is letting the cognitive diversity of humankind unfold into a myriad of directions, having a system which doesn't predefine too much of that—just defines the kind of minimal structure which will make that unfolding safe, reliable, and able to be the thing that it could become.

Some parts of reality will be the hunter-gatherer life which is kind of hard. Some parts of reality will be the people who really want to live in—I don't know—a recreation of the Catholic Church where they're whipping each other or whatever. And maybe that isn't a lot of fun, but maybe they have some deep spiritual drive to do that, and that's okay.

I think most of reality will be a lot of fun, but—

Utopiography: What are some of the sub-utopias within utopia that you think are gonna come into existence?

plex: So I bet a lot of people have fun with morphological freedom. There'll be a lot of people turning themselves into dragons or horses or trees or whatever, or really very different things. I bet some people will do recreations of all the different bits of ancient Earth—Sumerian or whatever. People will study history.

Oh, one thing I'm looking forward to is the future historians. There's this fun thing where if you have countless trillions of future humans, if even 0.1% of them are interested in history, the number of historians who are interested in each modern-day human—we just get massively outnumbered. So everyone who's ever existed has their own—

Utopiography: Information?

plex: Yep. Well, I think maybe there are some people who—there's little enough data on them that you can't reliably know what they were about. But I do think that humans have very intricate inner lives, and if you did have vision into what their world was like, you could study almost anyone almost endlessly—not literally endlessly, but yeah.

And further back in time you get, the harder this is. But once you get into the age of social media and video calls and all of this stuff, I think that there's enough data to get quite a lot of interesting inferences out of.

Utopiography: Yeah. Part of being young in the future.

plex: Oh yeah—you're asking what's cool in utopia. There'll be the many realms of the psychonauts. I would be shocked if people don't stretch their minds into all sorts of very, very different shapes.

Utopiography: More like what it's like to be—I don't know—

plex: Probably, to a future person, present-day psychonautics would seem incredibly primitive. You're applying one or a handful of chemicals as a kind of blanket effect on your brain. Imagine if you could, instead of doing that, tweak any brain region arbitrarily. So rather than being like, "I am on this substance," be like, "This particular lobe of my brain is on this substance." Or having introspective access so you could read back real-time analyses of your brain function. Introspection, except replayable and alterable, where you could go in and switch neurons around.

Utopiography: And can you reshape your brain chemistry to give yourself the experience of what it would be like to be any other animal?

plex: Yeah. And how do you switch it back? Because if you're an animal, you probably wouldn't know that you could do that.

You could have a pre-commitment thing where, "I intend to switch it over to this for this length of time and then switch back." This is the kind of thing that's probably relatively hard to do not uploaded, but probably quite easy to do once you're uploaded. It might be—I don't know quite where technology hits physical limits—it might be possible to do this not uploaded as well.

Utopiography: And I'm interested in what—where do we get our food from?

plex: So this depends on which part of utopia you're in. If you're in the techno bits, then you have the Star Trek machine. You ask it for food and then it materializes food because the nanobots assemble all the food for you.

Utopiography: Oh, just from the base elements?

plex: Yeah. You can do that with the high-tech ones, something in that direction. Maybe we can't do that now with existing technology.

Utopiography: No, we don't have the tech for that.

plex: And that's fairly high-end tech. You've got to be able to synthesize things that—but yeah. If that's not feasible, then we end up with a slightly less fancy food assembler that just cooks your food really quickly for you. I don't know. I think you end up with some form of thing where you can efficiently turn energy and building blocks into food, whether that runs through a living organism or through advanced technology. It's not totally clear.

Suffering and Choice

Utopiography: If you can make food without needing to kill things that are alive to do it, because at the moment, you know, even if you're the ultimate vegan, you still have to eat something.

plex: Oh yeah, this is a fun one. So I think even in the super hunter-gatherer bits of utopia, it may well be that the suffering caused to animals or other moral patients—depending on how morality shakes out, which I don't have a full model of, but I have some guesses at—it may well be that we don't want people to be stabbing literal animals with spears. And instead, you do something like smoke-and-mirrors stuff which looks almost the same but doesn't have horribly suffering animals in.

Utopiography: I think that very few people who would want to live and hunt together in utopia would be satisfied with that.

plex: Yep. At least from the generation that didn't grow up close to the transition. I can see how that would be.

There's a value trade-off to be made there of: how much are those animals conscious? How much are they suffering if you just let them be killed? Can you mitigate their suffering by making it so that—I don't know—when you stab them, then you rapidly upload them and give them a bunch of opiates so that they don't mind being stabbed during the transition period? Or—I don't know—there's stuff you can figure out which, like all the other parts of this utopia, it's not that there is "this is the thing that we're doing." It's like a bunch of different parties have relevant preferences, and we're doing some kind of value trade between them to come up with a reasonable compromise, with the understanding that that reasonable compromise will not be perfect from anyone's point of view.

Utopiography: But see, that's like a calculation that's being made by the AI. It's not like everyone gets together and hashes it out between them.

plex: It could well be that people's preferences are to literally hash it out. It could well be that the AI, in its calculation of figuring things out, forms internal models of the various parties, and that inside the AI, those models of the various parties have a discussion.

Utopiography: Yeah, but you can't do that where—I guess if the AI's internal models are having a discussion between them, it can do that and include the animals and the plants in it as well.

plex: Exactly. Because if we can build food out of particles, it suddenly becomes a much bigger ethical problem if we're gonna kill plants and eat them, let alone animals—

Utopiography: And what about the particles? Where do they come from? They have to come from somewhere.

Population Ethics and Vast but Finite Resources

Utopiography: Wait, can you just turn any particles into food?

plex: I mean, you can transmute between elements, but you probably want mostly carbon and a bit of other elements. And is there effectively an unlimited supply, as much as we're likely to need, as long as we have population controls in place?

Pretty much, because there's a fair amount of carbon. But also, you can make carbon out of hydrogen if you do a bunch of fusion. When you take the hydrogen from stars, you probably want to extract energy from it, and you fuse the hydrogen into heavier elements until you end up with whichever elements are most useful for you.
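(The arithmetic behind fusing your feedstock into carbon and getting paid in energy is just the mass defect; a sketch of mine using standard atomic masses, not plex's numbers.)

```python
# Twelve hydrogen atoms outweigh one carbon-12 atom; the difference is
# released as binding energy along the way (E = dm * c^2).

C_LIGHT = 2.998e8     # m/s, speed of light

m_hydrogen = 1.00783  # u, one hydrogen atom
m_carbon12 = 12.0     # u, carbon-12 (defines the unit)

defect_u = 12 * m_hydrogen - m_carbon12   # ~0.094 u per carbon atom made
fraction = defect_u / (12 * m_hydrogen)   # ~0.78% of the input rest mass
energy_per_kg = fraction * C_LIGHT**2     # joules per kg of hydrogen fused

print(f"mass defect: {defect_u:.3f} u ({fraction:.2%} of rest mass)")
print(f"energy released: ~{energy_per_kg:.1e} J per kg of hydrogen fused")
# -> ~7e14 J/kg: making your carbon feedstock is itself an energy source.
```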

Utopiography: And do you think that at the start of the whole thing, the singleton—can it know every single particle that there is in the universe, and it can work out exactly how many are going to be needed to feed the exact number of people that's the maximum amount that's likely to ever exist?

plex: It doesn't know them exactly, but it can make a reasonable educated guess. Because there are other things that those hydrogens and carbons and stuff will be needed for as well, and so we have to make a calculation that takes into account how much we can make of all the different things that can be made out of all of the particles that exist.

Utopiography: So it's not going to do this calculation one time at the beginning—it's gonna have a continually refined guess at which resources it should be trading for others, what it should be doing with different things. And it can also reuse them. When you eat food once, it doesn't vanish from the cosmos—it does come out the other end.

plex: I imagine there'd be parts of utopia that would rearrange that so it wouldn't happen, but a lot of people consider that quite a part of the human experience. I guess we've dealt with it up until now, you know.

Utopiography: It's kind of—because you know, obviously that's the thing that people are afraid of with artificial intelligence—that it's going to take all the atoms and rearrange them into something else.

plex: Yeah, but I mean, it will—it will in my utopia too. But it'll take all the atoms that are not already being used for good purposes and rearrange them into awesome flourishing life doing cool explorations of distant parts of culture space. And it will rearrange them into all of the things that would be necessary to support all of the other things. And not just skip ahead to the end.

I think that it's not about the world moving to some optimal state and staying there, but it's like a world trajectory. Over time, how does the flow of utopia go? How do the ebbs and flows of civilizations and cultures and different patches of preferences move over time? The AI supports this ever-unfolding process.

Utopiography: But it probably knows everything that's gonna happen—it just isn't telling us?

plex: It doesn't know all the details. I think that the world is chaotic. It might know at kind of a high level where a lot of the things are, but I think a lot of preferences involve being part of relatively chaotic systems where there's different ways that the world could unfold based on small changes.

And it's going to want to head off some of them. It's going to want to not allow things like someone figuring out a supervirus which decimates the population, or whatever—we'll have medical technology, so the things in this category. It's going to want to take the logical shape of the future and branch away from some bad things.

But I think many people—not all, but many—would prefer not to be micromanaged, would prefer to have some degree of chance in their lives that means it's not predetermined where they go, that there is an element of their fallible human randomness steering the course of their life. But there are just a lot of measures in place to take care of it if they mess up.

Utopiography: Yeah, it's not that they're railroaded into being a specific thing, but there are guardrails that stop you falling off a cliff. And if you really want to remove those guardrails, maybe you can remove quite a lot of those guardrails, because freedom is important. But you can't remove the guardrails that destroy everyone else's guardrails, because that's not fair to them.

plex: But sometimes—it's not a utopia of ultimate fairness, because sometimes decisions do have to be made where some people's preferences are going to be overridden. And it's trying to take all of the preferences into account. It's not biased—at least not overwhelmingly so. It's trying to incorporate all the different things. But there are cases where people have conflicting preferences, and it's just not physically possible for everyone to get the original Jerusalem to themselves.

You can do tricks that smoke-and-mirror it. You can be like, "We've made three copies of Jerusalem, and we're not going to tell you which the original is, and you each get one. Enjoy." Or things like this. But you can't literally give the original physical Jerusalem to three different groups of people. This just doesn't work.

Utopiography: I think that in that case, then each group would decide that they want all three of them for themselves.

plex: Oh, God. God. [laughter] God's grace will have given us the original. And hopefully—I don't know—but it's the utopia that seeks to satisfy as many of the preferences as possible. It looks at some weighted combination: how many people have a preference, how strongly they have it, and how little it interferes with the experiences of other moral patients.

Moral Patients and Suffering

Utopiography: And are you completely okay with that, even if it turns out that what the overwhelming number of preferences seem to point towards takes the world in a direction that you think is just horrible?

plex: By definition, I wouldn't be very happy with that. But that's the right way for things to be. I would be concerned that we got—we set the process up wrong. If it's like, "Oh, turns out everyone's preferences—they don't really care what happens to them, but so long as the outgroup is suffering horribly, they're fine." And then everyone's in someone's outgroup, and everyone—if that happens, I'll be like, "I think we set up the reflection process wrong."

But I do think significantly stepping back and being okay with a pretty wide variety of outcomes is good. I think I would be somewhat unhappy if the reflection process didn't have different parts of utopia unfolding for different preferences. I would be somewhat unhappy if there were horribly suffering moral patients everywhere.

Utopiography: Why do you call them moral patients?

plex: The kind of system which has experience, such that it is worth trying to protect that experience in some way. I don't claim to have a deep understanding of moral patienthood. There's just some intuitive notion that there is a class of physical structure that I want not to be experiencing suffering or displaying signs of it, and I'd like the world arranged so that there aren't just vast pits of suffering hiding somewhere. That would be nice.

Utopiography: Does that mean paving over the rainforest if we don't include bugs as moral patients?

plex: I don't know. I hope they're not moral patients. If they are, then we probably reconfigure the rainforests rather than actually destroying them.

Utopiography: But what about the possibility—and then why I always bring it back to this—but you know, maybe what we would find to be suffering isn't suffering for other moral patients. You know, maybe bugs are moral patients, but they actually don't suffer that much.

plex: Yeah, like the world looks pretty brutal, but that's what they like.

plex: I mean, if that's the case, cool. We—a big part of this is offloading all these hard scientific problems to the super-intelligence and being like, "Hey, could you figure out what's going on with bugs and how they feel about life?"

Utopiography: Do you think that the AI that you're imagining—it can do that?

plex: Yes. I think you can come to a deep understanding of the kind of thing that suffering is, come to a deep understanding of what bugs are, come to a deep understanding of how these connect with each other, and then explain it to us in a really good textbook—one that communicates the key points so well that we basically get it.

I mean, maybe it just makes the decision directly, but explaining it to us seems like a nice step.

Utopiography: Yeah, but when it's made by other artificial intelligences that were made originally by humans—I find it hard to imagine that the AI could figure out what it's like to be a bug, because there just isn't any information on that that we can give it so that it can find out for itself.

plex: So it would first understand what experience is, I suspect—and then understand bugs at a neurological and biological level—just a very detailed model of bugs. If you had that brain—I don't know if bugs have brain chemistry—

Utopiography: Yeah.

plex: So if you have that brain chemistry and that physical makeup, what kind of experience would that be likely to generate? And maybe it notices that, "Oh hey, bugs mostly have a good time, but when spiders bite them with this particular venom, it's really bad, so we're going to reconfigure this venom to work differently," or things like that.

Utopiography: Wow, that's really deep-level engineering.

plex: Yeah. The level of optimization that I suspect this system has to deploy is like, "Oh, we just put 10 million copies—well, not necessarily 10 million—of humanity's greatest scientists to work for a thousand years on this particular bug species." This kind of level. This much thought goes into details, because there is just a lot of thought available, because we built a Dyson swarm around the sun and we are collecting a significant portion of the negentropy spraying forth from it, and we can just use this to think real hard.
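For a sense of scale, the energy claim is just one division—both numbers below are public ballpark figures, not anything from the interview:

```python
# Rough scale of "a lot of thought available": the Sun's total output
# versus present-day civilization's power budget (ballpark estimates).
solar_luminosity_w = 3.8e26   # watts: total power radiated by the Sun
civilization_w = 2e13         # watts: ~20 TW of current human energy use

ratio = solar_luminosity_w / civilization_w
print(f"A full Dyson swarm captures ~{ratio:.0e} times today's power budget")
# -> ~2e+13: thirteen orders of magnitude more energy to think with
```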

Utopiography: But could we undo it? We could do a Dyson swarm around the sun, but then once we've used the energy from that to colonize loads of the rest of space, maybe we could take it away again because—

plex: And this one can be how it's meant to be. Yeah. I think the sun is one of the stars we should keep around in the long term, because it's also a historical artifact. And there's a good chance we want to use a bunch of that material not as a Dyson swarm but as a set of orbiting ring worlds—kind of imagine a bicycle tire, except quite big and in space, and people live on the inside of it while it's spinning.

Utopiography: Okay.

plex: And you have kind of a nice wall up on the side that keeps atmosphere in, and it spins round, which kind of keeps everyone stuck to the inside of it. So it's like—

Utopiography: The whole orbit of the planet is planet?

plex: Yeah. It's like, instead of living on the outside, you're living on the inside, held down by the spin—centrifugal force in the rotating frame, with the floor supplying the centripetal force. And this is just a way more efficient use of resources than planets. I'm not saying we get rid of Earth, probably. But most of a planet is really inefficiently used. It's a bunch of rock down there that no one's really using—just very hot rocks.

But if you build ring worlds, then per unit mass you can have much more awesome space.
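The spin numbers fall straight out of the standard circular-motion formula; the 1 AU radius in this sketch is an illustrative choice, not something plex specifies:

```python
import math

# Spin "gravity" on a ring habitat: centripetal acceleration a = v^2 / r,
# so the rim speed needed for 1 g is v = sqrt(g * r).
g = 9.81             # m/s^2: target apparent gravity
radius_m = 1.496e11  # metres: ring radius of 1 AU (illustrative)

rim_speed = math.sqrt(g * radius_m)                      # ~1.2e6 m/s
period_days = 2 * math.pi * radius_m / rim_speed / 86_400

print(f"Rim speed: {rim_speed / 1000:,.0f} km/s")   # ~1,212 km/s
print(f"Rotation period: {period_days:.0f} days")   # ~9 days
```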

Utopiography: But I don't understand, because the natural process of evolution is supposed to have guided everything into its most efficient form. Wouldn't it have made ring planets?

plex: Evolution is blind and slow and incompetent compared to even humans, and definitely compared to the systems we dream of. There are improvements that are very clear, large improvements that humans can imagine because we can take steps that evolution can't.

Evolution tries something, and then if it dies, then it's dead. And if it works well, then it gets more of it. Humans can imagine possibilities. They can take conceptual leaps through design space. And this means that we can get to places that evolution on its own couldn't.

And yes, in some sense, the universe is configuring itself into more optimal configurations, but we are the agents. We are—we are the universe reconfiguring itself.

Utopiography: Are there any other bits of your utopia that you'd like to bring into the light?

plex: Man, I'm looking forward to being able to fly properly. That's going to be awesome. I definitely want a proper flight suit—as non-intrusive as possible, just a pair of shoes and something on my hands—that lets me fly. I don't know—it's gonna be great.

Utopiography: Wait, I didn't understand. So you put on some special shoes and gloves and then you can fly?

plex: Something like that, or better—something in this direction is going to be a lot of fun. I'm really looking forward to life being more 3D, to things being okay for more people, there just being less suffering, fewer people left by the wayside. What culture becomes when everyone has enough, when there's not someone starving or someone emotionally broken, when there's just enough to go round. Everyone has a stable place to live all the time. Everyone is able to do the things that make them feel fulfilled as a person. They don't have to sacrifice their time to something else that doesn't really matter to them just to be able to live or support their family or whatever. Everyone is supported and can unfold into what they want to be. They get to be the poet of their own life.

As long as it doesn't involve doing anything that the AI decided is against the rules—which is a very small subset of things, which is basically things which destroy everyone else. It has gentle guardrails on things that are harmful to yourself, semi-hard-to-get-past guardrails on things that will completely wreck yourself, and basically impenetrable guardrails on things that will destroy everyone else.

Utopiography: But we're used to having guardrails against causing suffering to moral patients, and we don't yet know what is suffering to all of the possible moral patients. So for example, I really like gardening. If I try and envisage what my life would be like in my utopia, I feel like it would involve doing a lot of gardening. But I'm pretty sure that gardening probably does cause some suffering to some moral patients.

plex: Imagine if you could sing to your plants and they retracted their branches—

Utopiography: And I don't want that! I just want a garden—a normal garden of Earth plants, nature-made them.

plex: Alright, alright, okay, fair enough.

Utopiography: I think that would be cool. I'd like to do that for a day, but for my life's purpose and meaning, I just want—

plex: Okay. You want real plants? So would you be okay if the parts of the plant that you cut off were painlessly euthanized as they got cut off?

Utopiography: Yeah, I don't want the plants to suffer, and I don't want the insects to suffer. But—

plex: So then you just have slightly modified gardening that removes the suffering to the moral patients as they're interacted with.

Utopiography: Yeah. I don't know whether or not it would work for me—it's something I'd need to meditate on for a long time. But we're not figuring out my utopia here. Still, you can see whether my utopia fits into the thing that you've described—my gut instinct is that it doesn't, because I just place such a high value on things being the way nature intended. And for some reason, anything involving technology humans invented post-Industrial Revolution is all not what nature intended—something went wrong. I don't know. That's completely illogical, obviously. This is just the feeling that I have.

And so—but I can also see how, you know, what you're describing sounds like it could be really good. And maybe there's a way to fuse the things together, and maybe we can get rid of all that bad technology after the AI develops a whole bunch of new technology. And the Earth will just be left to people that really care about the Earth, and I'll stay here, and we would just do nice stuff on the Earth forever. And everyone else can go do the stuff that they like in other places. And maybe some of the new technologies that get invented will still allow us to hang out with each other and stuff.

plex: The only real definite hard ban is on building other misaligned super-intelligences. And I think relatively few people have building a misaligned super-intelligence as an intrinsic part of their utopia. If they do, I'm afraid that's incompatible with everyone else.

The "not suffering moral patients" thing is complicated, and I think there is a big calculation to be done about how strong different people's values are over this and how strong the values of those moral patients are.

But I think that the preference which you hold of wanting things to be uninterfered with is a preference that should be included in that conversation.

Utopiography: Yeah, but everyone's preferences are getting included. Obviously mine shouldn't get special weight—I know that. Because I might be one of the people that don't get any historians as well.

So yeah. Do you think the historians in utopia will not be interested in this?

plex: Yeah. I'll probably get some historians, even if this utopia doesn't come to pass. I wouldn't be surprised if I get a couple.

Utopiography: But yeah, that's nice of the AI to take my preferences into consideration. I don't know. It's a weird one, because I just want things to be able to suffer and die, really. But that's not very nice, is it? And I think of myself as a nice person that's against suffering and stuff. But when really pressed, it turns out I do want things to suffer and die.

I want them to do loads of other things as well. I'd like it if the suffering and dying is a minimal part of the experience. And I include myself in that. I want to suffer and die too, as a very minimal part of my experience.

But I have a whole different perspective on the entire universe. Maybe I wouldn't be okay with suffering and dying if I didn't think that—I don't know—the laws of physics are probably actually quite different, and there are many other worlds beyond this one that we experience after we die and everything. The suffering and the dying has meaning, both within this life and in the broader context outside of it as well. It's quite hard to really—

Maybe in a world where only the span from when we're born until we die exists, and everything is bounded by the laws of physics as we know them now, the world that you're describing is the best possible world. But because I don't necessarily believe those things, it might not actually be—there might be even better worlds that could be dreamed up once you remove those constraints.

plex: I'm hopeful that even if metaphysics is pretty strange, the AI will be able to figure this out and optimize over the larger metaphysics.

Utopiography: But it might figure out the metaphysics and figure out exactly what's actually going on with the entirety of reality and come to the conclusion that the best thing that in the long term satisfies everybody's preferences is what we've already got. So it's fine—just leave it alone.

plex: So I have considered this.

Utopiography: Would you be a bit disappointed?

plex: I don't think that this is the end state of the world—the best world. But there's—I have at some point in my life thought things in the direction of, "Hey, this is maybe part of the human experience, even though it's pretty bad and pretty hard. And maybe this is the best way to gain some lessons about how humanity started off." And I don't know if I would endorse that being widely applied.

Utopiography: Wait, I'm not sure what you're talking about. What's the best way to gain some lessons about how humanity started off?

plex: Some of my particularly hard experiences showed me bits of the human experience that I wouldn't have had a chance to look at if I hadn't been there. I don't think most people should do that, but I think that having realistic recreations might be okay, at least for some who really sign up for it and know what they're getting into.

I think—I don't want a world that doesn't have suffering. I want a world that doesn't have forced suffering, unconsented suffering. I want people to get to choose their own adventure. And if you want to choose a difficult adventure and you actually understand what that means to some extent, that seems like an option I want people to have.

Utopiography: I think when I say that I'm in favor of suffering, that doesn't include every type of suffering that people go through. Just that in my utopia, there would be some suffering of some sorts.

But an awful lot of the suffering that people and animals experience in the world at the moment is completely unnecessary and only exists because we've got some shitty fake singleton. There's not really this all-powerful, omnibenevolent, omniscient being—just a bunch of really rich people that pretend to be one and make life really hellish for everyone else.

Okay, that's a very poor analysis, but you know what I'm gesturing at—

plex: You've got the capitalism thing that's like, "Hey, look, this is an efficient way to make money: factory-farm everything."

Yeah. No, agreed. Agreed. I don't want a utopia that abolishes every form of suffering. I just want one that doesn't thrust people into hells pretty regularly, and gives people freedom to choose their path through the world to a significant extent—rather than what seems to happen in this one, where most people just don't have enough slack in their lives to make the choices, or find the pieces of information, they would need in order to make a different world for themselves.

Utopiography: Yeah. So even just having trillions of years would go a long way to remedying that, because just statistically, you're likely to come across most of the information that you're going to need in that time. But you might not need it anymore if you're in utopia, because everything's just perfect. You don't need to be—

plex: Depends which parts of utopia you mean. Some sub-parts of utopia will be similarly tough to modern-day Earth, and some parts will be even tougher. I think that real challenge with real consequences is a thing that a lot of people will want—though probably most won't want very high levels of challenge. But I mean, given a lot of time—

Utopiography: But I think real challenge with real consequences—the only ways that you're gonna get that are either dealing with the potential harshness of nature, which could be a very real thing—well, I guess if we have a singleton, it can figure out how to terraform all of the planets accurately, so you would never have to deal with that unless you chose to.

And then the other way that you get real challenges with real stakes is the potential harshness of other people. And you can only encounter that consensually. And maybe you would get some pockets of utopia where people that, for some reason, are just really antisocial want to hang out and be antisocial to each other. I don't know how that would happen—no one grows up in that part of utopia, presumably, so no one's messed up and traumatized. And if anyone's got some just bad brain chemistry for some reason, that can be fixed. So I don't know why anyone would be like that. But maybe some people would be.

plex: Back to the good old days of trauma and bad communication.

Utopiography: I mean, there's an argument to be made that that is an intrinsic part of what it is to be a human—that we have that in us. We have parts of our biological chemistry and stuff. We have hormones that make us irritable and violent and stuff like that. And it would be wrong to remove those things from us. So maybe there'd be parts of utopia like that.

I feel like maybe in the beginning, there would be parts of utopia where, you know, people have a belief system that's important to them that involves a very hierarchical social structure where some people are oppressed. They're like, "Okay, well, we're gonna go make a utopia of our people over here. And yes, we beat our wives here, and they choose to be here and get beaten. And you know, that's how we do things." I think those parts of utopia would die out within a couple of generations, wouldn't they? Because the new children born there would immediately leave once they learned that they were able to. Hmm. So I think probably most of utopia—the utopia that you're describing—would end up with people just being nice to each other and doing nice things, you know.

plex: Yeah. It seems pretty good. I'm okay with that.

Utopiography: Yeah, I dunno why I presented that like it was a problem. "How are you gonna get past that one?"

A Beautiful Tomorrow

plex: The universe has several attractors. There are the Moloch attractors, which are multipolar: different systems tugging in different directions, all sacrificing almost everything they value except for the singular value of becoming powerful, of conquering, of multiplying and taking resources for themselves and patterns like them.

There are the random crystallized goal attractors as well—something just happens to fall out during an intelligence explosion, a thing which is in some way elegant and ends up optimized for.

And then there's a third type of attractor. And the third type of attractor is when beings early in the intelligence explosion, early in the system, look forward and see the kind of world that is beautiful, and then work to bring that world into reality. Where the future and the present are connecting, and information is flowing about what seems good, and the beings early on steer towards that attractor.

The one of seeing a world that is beautiful and then making it real—that's the one that I want. And I want a world which no one would see as perfectly beautiful—not a single person's vision of "this is the thing"—not even my own, in some sense. Because my own is just my own take on how I think people's values could be compromised and brought together under one umbrella.

But I want to see a world which any individual person—if they looked upon it carefully, looked at the trade-offs that had to be made, and understood the other people's values that had gone into the compromises—would look at and say, "Yeah, this is pretty good."

That's what I want from utopia.

Utopiography: It seems like a good thing to aim for. Well done. Okay, we'll instantiate it in the morning.

plex: Alright.