To get an exact copy of yourself,
An exact copy of you is still not you, even ignoring the immediate divergence.
If you even want to get to the point of thinking about how many people you'd have to create to get every possible person, you first have to justify the step where you collapse the entire equivalence class of copies into "one person". Otherwise, the number of potential people is definitely infinite, and the probability of any one of them existing is definitely zero.
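(To make the measure point explicit, a sketch: there is no uniform probability distribution over a countably infinite set of candidates, because giving each candidate the same chance $p$ yields a total probability of

$$\sum_{n=1}^{\infty} p = \begin{cases} 0, & p = 0,\\ \infty, & p > 0,\end{cases}$$

and neither equals 1. So the only way to treat infinitely many potential people "equally" is to give each of them probability zero, at which point the probabilities stop adding up to anything.)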
Your hack of creating them in some order that eventually reaches all of them assumes that you'll be able to keep doing that for infinite time, which is false. The energy isn't there. You probably don't even have trillions of years, let alone infinity. And I'm not even sure it works with infinite time if you refuse to collapse those equivalence classes; I think that infinity is probably not countable.
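(A sketch of the countability worry: if you refuse to collapse copies, a "person" is in effect a point in a continuous state space, and a continuum has cardinality $2^{\aleph_0} > \aleph_0$. A one-being-per-step creation schedule only ever produces a countable sequence, so even with literally infinite time it misses almost everyone.)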
So, anyway, you have to get away from having an infinite number of entities of concern, and especially from having an infinite number of entities of "equal dignity", or the measure of your impact is going to be zero and you'll have no defensible way of saying when you've done "enough". And you have to escape the infinity without losing your ability to believe in what you're constructing.
Even if you do collapse equivalence classes, in any universe with a continuous space of states (like the one we probably live in), there are no exact copies. So you're stuck in your "sufficiently similar" case even if you just want to escape the infinities and instantiate any fraction of the entities of concern who could exist, let alone a meaningful fraction.
I don't think you solve that in a way that's going to settle anything. I don't think many people who'd buy into any of this to begin with are going to be ready to accept exploring the combinations of discrete values of a two-digit number of "personality parameters" as equivalent to "creating everybody who could possibly exist". You might get me to accept that outcome... although frankly a lot of them would be assholes who by my lights probably should not exist. But I think my acceptance would be related to the reasons I don't accept the whole premise that every possible person should exist. I'm outside of the intended audience for the solution.
The really fundamental problem is that the entire idea of a "right to exist" is bad and a dead end. You're in a swamp as soon as you start to think that way. There's no obvious, compelling reason you should extend your circle of concern without bound. It's even less clear that you should extend your concern to people who don't and never will exist before you extend your concern to, say, rocks that do exist. Remember, if you don't exist, you don't have any experiences... so how is any of this about "experiencing beings", again?
To make it anything you can act on or get people to agree on, you also have to be pretty careful about which sense of "exist" you pick. If you "Tegmarkmaxx", and possibly even if you "Everettmaxx", every possible being, sentient or otherwise, already exists, and you have no power to change that anyway. If you start counting being simulated as "existing", you're going to lose a lot of people, and you'll lose more of them the less fidelity you demand of the simulation. If you demand the sort of embodied existence we all have right now, you're going to lose a lot of "fellow travelers" who'd see that as a waste of resources that could be used to produce some other "perfectly good" kind of existence. And that's on top of all the disagreements you'll get about which beings count as distinct. So it's impractical as a way of guiding collective action, as well as being philosophically bizarre.
The whole thing has the feel of the "ontological argument" for the existence of God. I don't think it's fallacious the same way. After all, it's arguing for adopting an ethical stance, not a factual belief, and I hope we can agree that those are separate categories of argument. But it feels like the same kind of wandering off into the weeds.
I think people spend way too much time and energy on this "right to exist" thing. It wouldn't be worrying except that they also seriously seem to be trying Order The Entire Future according to it. It's probably still not a problem because there's approximately zero chance that anybody can actually influence the future in that direction, but it's kind of nervous-making to see people take that kind of thing seriously. Especially because when you realize you can't achieve it, the next step is to start trying to approximate it...
If you even want to get to the point of thinking about how many people you'd have to create to get every possible person, you first have to justify the step where you collapse the entire equivalence class of copies into "one person". Otherwise, the number of potential people is definitely infinite, and the probability of any one of them existing is definitely zero.
I didn't consider this, because the point about the infeasibility of getting an exact copy is very strong anyway. The point of the "solution" is to show how hard it is to get there. In this thought experiment, I think the pre-incarnation intelligence would be happy to get one person out of the equivalence class.
Your hack of creating them in some order that eventually reaches all of them assumes that you'll be able to keep doing that for infinite time, which is false.
Yes, this is why I wrote "If we solve entropy problems". I know the current physical understanding doesn't let one run this process forever, but I wanted to note that I am not arguing for a mathematical impossibility, only a (very probable) physical one. I should have been clearer on this point.
Another thing I should have mentioned more clearly is that when I started thinking about this issue, the "right to exist" claim sounded interesting but possibly dubious, and I wanted to see if I could figure out what makes it interesting. I'm not saying that the original position argument captures all of what people mean when they refer to the right to exist. Rather, the application of the original position argument is interesting in itself here.
It's even less clear that you should extend your concern to people who don't and never will exist before you extend your concern to, say, rocks that do exist.
Most people value future humans' happiness, even though they don't yet exist. I don't think I get very far away from that position? The reason we need to think about every possible human is that we don't know which ones will get to exist. So in a way, most of the persons of concern will never exist, but this seems beside the point, as all considerations about future humans have this same issue.
If you "Tegmarkmaxx", and possibly even if you "Everettmaxx", every possible being, sentient or otherwise, does exist, and you have no power to change that anyway.
I didn't want to go there, as infinite ethics is pretty hard. Even though we couldn't affect whether or not every possible sentient being exists, Carlsmith (linked) says that we might still be able to affect the overall utility of all those existences.
I think people spend way too much time and energy on this "right to exist" thing. It wouldn't be worrying except that they also seriously seem to be trying Order The Entire Future according to it.
I was happy with the conclusion I reached, because it sounds so reasonable: more people is good, but if there's a trade-off with quality of life, then value QoL much more than total utilitarianism does. As I mentioned, this isn't a complete solution, as one would at least need to determine the utility as a function of the number of people and their QoLs, and very probably consider other arguments and their corresponding modifications to the utility function. Pretty much the only thing I currently use this utility function for is to check what it says about some thought experiments (like the Repugnant Conclusion mentioned in the footnotes).
Thanks for the reply.
Most people value future humans' happiness, even though they don't yet exist. I don't think I get very far away from that position?
Usually what I hear on this is that people want to take a timeless view, and I'm actually kind of sympathetic to that. But it's always followed by that assumed "yet".
Somebody who doesn't exist yet seems really, really, qualitatively different from somebody who will never exist. One stakes out an actual presence in the "space-time continuum", and the other doesn't. You can point at a 4D volume occupied by one, and not by the other. Insofar as you believe experience is connected with physics, you can point at the experiences of one and not the other.
I'm actually really confused about why people just blithely slip in the "yet", because the distinction seems so obvious to me, and the "yet" denies it.
The reason we need to think about every possible human is that we don't know which ones will get to exist. So in a way, most of the persons of concern will never exist, but this seems beside the point, as all considerations about future humans have this same issue.
If you don't know who will exist (but doesn't yet), then you might want to apply your best guess about it, and if you can't do that, you might want to try to leave them a world approximating what the "average person" would want to live in. You don't know who they'll be, but it's pretty likely that somebody will exist, and you can make some reasonable guesses about what they'd prefer.
Talking about a "right to exist" makes it a question about whether you're obliged to act to move people from the category of "never will exist" to "will exist". If you don't care about "never will exist" people to begin with, then you can have no such obligation toward them. So to even bring up such a right, you first have to extend your concern to people who never will exist. Not "not yet", but never.
In steps[1]:
... on the other hand, if you do create them, then you have reason to be concerned about them. In that case, they will exist, and not only that but their experiences will in some part be your doing.
It's only when you do assign a "right to exist" that you start having to make repugnant conclusion tradeoffs, or try to find ways to get around those tradeoffs by arguing about which beings are diverse enough to count. That's the point at which you're accepting a really serious obligation to create more people just for the sake of doing so.
I tend to get worked up about that because it has an actual practical impact[2]. Most of the people who see a "right to exist" don't seem to end up where you are, and I don't think they'll be very amenable to being convinced to go there. Many more of them seem to end up absolutely wallowing in the repugnant conclusion. I find the Bostromian program of tiling the light cone with people (for whatever value of "people") to be really, really creepy.
I'm sorry for the repetition, but I seem to have trouble finding the right words to point out the distinction I'm trying to make in a way that people actually understand, so I thought I'd make more than one attempt. ↩︎
Well, quasi-practical. As I said, I don't think anybody's actually in a position to do much about any of this in a truly practical sense. ↩︎
The idea that "The most fundamental right is the right to exist" seems to come from following the idea of expanding the moral circle: first, we step by step included more and more humans in the moral circle. At some point we got to including animals, and now we are thinking about the moral status of AIs. The next natural extension is an extension in time, so that we consider future people (and other sentient beings) as well.
I think John Rawls' "Original position" is a good way to ground many considerations. Here's a description by Scott Alexander:
This kind of thinking is somewhat harder when considering animals or AIs. On the other hand, when thinking about the future, we are also thinking about future humans, so I think it could work quite well there. This view, for example, tells us that we should not rush to use up almost all the possible resources in the next one billion years and condemn everyone else to scarcity for the trillions of years to come. Or at least that we shouldn't condemn later people to suffering for only marginal gain to ourselves.
However, this reasoning doesn't work that well on the right to exist. The original position takes for granted that we will be one of the beings in the universe. The right to exist, on the other hand, seems to point to a probability of getting to exist. This leads to some difficulties.
So, say that one is designing the universe. They might or might not appear in the universe depending on the specifications. One could think that by creating a universe with more sentient beings (or more humans) one would increase the chances of getting to appear in the universe. But what is the "99% probability" like here? To get an exact copy of yourself, the universe would need a very large number of humans in it, and so the difference between creating more people (or getting humans to live that much longer)[1] in option B rather than option A might only increase the chance from something like $10^{-2.6 \times 10^{11}}$ to $2 \cdot 10^{-2.6 \times 10^{11}}$.[2]
In particular, if the most fundamental right is the right to exist, it seems to me that we are completely failing if we cannot create every possible sentient being.
(If there's a finite upper bound $M$ on the number of sentient beings we ever get to create, then the fraction we cover is $M$ divided by a countable infinity, which is zero. Note that this is not a fundamental mathematical problem: if we solve the entropy problems, we could just create all possible sentient beings in order of simplicity. Continuing this indefinitely, every possible sentient being would get to exist at some point, so I'd say this would count as solving the problem. On the other hand, this solution highlights how (infinitely) far off we are if we only create more beings over a finite time span.)
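To put the ordering trick in symbols (a sketch, assuming possible beings are discrete and can be enumerated by description length as $b_1, b_2, b_3, \dots$): creating $b_n$ at step $n$ guarantees that each $b_k$ exists by step $k$, so every being in the list is eventually created. By contrast, stopping after any finite number $M$ of steps covers only a fraction

$$\lim_{n \to \infty} \frac{M}{n} = 0$$

of the enumeration.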
----------------------
You could try to salvage the situation by saying that you only want a person sufficiently similar to you to get to exist. This might at first look like a useless goal to maximize: increasing the count of subjective person-years by a factor of 10 would probably shrink the distance between your brain and the most similar brain state ever to come into existence by only an extremely small amount.
On the other hand, quite a lot of a person's personality can be expressed using a two-digit number of discrete parameters. Thus, creating ten times more people might get one more parameter right in the most similar person, which doesn't sound that useless.
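As back-of-the-envelope arithmetic (a sketch, assuming for illustration that each parameter takes about ten values and that people's parameters are independent and uniformly distributed): the expected number of people among $N$ who match a fixed target on all of $m$ given parameters is

$$N \cdot 10^{-m},$$

which first reaches 1 around $N \approx 10^m$. So multiplying the population by ten buys roughly one more matched parameter in the most similar person.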
So one can argue that the right to exist implies that we should have a very large number of people!
Before starting to optimize only for the number of people, it is good to note that the original-position-type argument tells us more than this. In the same way as in the original application, the pre-incarnation angelic intelligences also want to consider the quality of life of the people in the universe to come.
This is important, as there is likely a trade-off between the quality of life and the number of people in the (spatially and temporally) finite universe we are considering. A completely selfish person would probably choose 2x quality of life over increasing the number of correct parameters from 49 to 50.
If the latter option means increasing the number of people tenfold, then the sum of utilities of persons is five times lower in the option we are choosing. This is not only a reason not to create maximally many humans, but also an argument against total utilitarianism.[3]
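Spelling out that arithmetic (with $u$ the baseline utility per person, $N$ the baseline population, and utilities assumed to simply add up): doubling quality of life gives a total of $2uN$, while a tenfold population at baseline quality gives $10uN$, so the forgone option has

$$\frac{10\,u\,N}{2\,u\,N} = 5$$

times the total utility of the one chosen.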
It is so annoying when your nice argument for rebutting a claim leads to a possible building block of a utilitarian ethical theory, with a hard-to-think-about trade-off parameter.
Human brains change over time, and it might be that the person just wants their current brain state to appear at some point. Hence, having people live 10 times longer should be at least almost as useful as having 10 times more people.
Here's a way to get a very crude lower estimate of the number of possible human brain configurations capable of sentience: take an adult human brain, which has about 86 billion neurons. Choosing for each neuron $x$ one neuron $y$ out of the 1000 neurons closest to $x$, and adding or deleting a connection from $x$ to $y$, results in $1000^{86 \times 10^9} = 10^{2.58 \times 10^{11}}$ different configurations. If we assume that the initial brain is sentient, then probably quite many of these new configurations will also be sentient (one neuron has about 7000 connections to other neurons, so adding or removing one shouldn't change things too much).
Maybe one could even reject Parfit's Repugnant Conclusion with this argument, but this seems quite dubious, as we would be reasoning "I don't like the Repugnant Conclusion, so I don't choose a universe where it would come true".
Of course, in the finite case it can easily happen that one cannot create enough agents to justify letting the quality of life drop too low (when we measure the utility of the universe using the similarity + average utility method described).
Also, even if the option were to have so many people that one of them would end up with the same neural structure as the angelic designer down to the last memory (through some quantum effects, say), that person would pretty quickly notice the change in their surroundings and reason that they have no way to trust their previous memories, which probably wouldn't lead to a very enjoyable life. So one cannot tempt the angel with this offer.