Infinity is big. You just won't believe how vastly, hugely, mindbogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to infinity.
And there are a lot of paradoxes connected with infinity. Here we'll be looking at a small selection of them, connected with infinite ethics.
Suppose that you had some ethical principles that you would want to spread to infinitely many different agents - maybe through acausal decision making, maybe through some sort of Kantian categorical imperative. So even if the universe is infinite, filled with infinitely many agents, you have potentially infinite influence (which is more than most of us have most days). What would you do with this influence - what kind of decisions would you like to impose across the universe(s)'s population? What would count as an improvement?
There are many different ethical theories you could use - but one thing you'd want is that your improvements are actual improvements. You wouldn't want to implement improvements that turn out to be illusory. And you certainly wouldn't want to implement improvements that could be undone simply by relabeling people.
How so? Well, imagine that you have a countable infinity of agents, with utilities (..., -3, -2, -1, 0, 1, 2, 3, ...). Then suppose everyone gets +1 utility. You'd think that giving an infinity of agents one extra utility each would be fabulous - but the utilities are exactly the same as before. The current -1 utility belongs to the person who had -2 before, but there's still currently someone with -1, just as there was someone with -1 before the change. And this holds for every utility value: an infinity of improvements has accomplished... nothing. As soon as you relabel who is who, you're in exactly the same position as before.
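This relabeling argument can be checked concretely on a finite window of agents. Here is a minimal Python sketch (the names `u`, `u_after` and `sigma` are purely illustrative): agent n starts with utility n, everyone gains +1, and the shift sigma(n) = n - 1 makes the "improved" population indistinguishable from the original.

```python
def u(n):
    """Utility of agent n before the change: ..., -2, -1, 0, 1, 2, ..."""
    return n

def u_after(n):
    """Utility of agent n after everyone gains +1."""
    return u(n) + 1

def sigma(n):
    """Relabeling: agent n takes over the old label of agent n - 1."""
    return n - 1

# After relabeling, the new utilities match the old ones exactly.
for n in range(-1000, 1000):
    assert u_after(sigma(n)) == u(n)
```

Since sigma is a bijection of the integers, the relabeled "improved" population assigns exactly the same utility to each label as the original did.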
But things can get worse. Subtracting one utility from everyone also leaves the outcome the same, after relabeling everyone. So this universal improvement is completely indistinguishable from a universal regression.
Conditions for improvement
So the question is, under what conditions can we be sure that an improvement is genuine?
We'll assume that we have a countable infinity of agents, and we'll make a few highly non-trivial assumptions (the only real justification being that these assumptions are traditional). First, we'll assume that everyone's personal preferences/welfare/hedonistic levels (or whatever we're using) are expressed in terms of a utility function (unrealistic). Secondly, we'll assume that each of these utility functions has a defined zero point (dubious). Finally, we'll assume that these utility functions can be put on a common scale, so they can be compared with each other (extremely dubious).
So, what counts as an improvement? We've already seen that adding +1 to everyone's utility is not an improvement. What about multiplication? If everyone's utility is above 0, then surely multiplying everyone's utility by 2 must make things better?
Not so. Assume everyone's utilities are (..., 1/8, 1/4, 1/2, 1, 2, 4, 8, ...) - the powers of two. Multiplying by 2 (or dividing by 2) just shifts everyone one place along the sequence, so after relabeling, the overall situation is exactly the same as before.
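The same shift relabeling handles doubling, since these utilities are exactly the powers of two. A quick sketch over a finite window (names again illustrative):

```python
def u(n):
    # utilities ..., 1/8, 1/4, 1/2, 1, 2, 4, 8, ... : agent n has 2^n
    return 2.0 ** n

def u_doubled(n):
    # everyone's utility is multiplied by 2
    return 2 * u(n)

def sigma(n):
    # doubling just shifts everyone one step along the sequence
    return n - 1

# 2 * 2^(n-1) = 2^n, so the "doubled" population relabels to the original.
for n in range(-50, 50):
    assert u_doubled(sigma(n)) == u(n)
```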
Since addition and multiplication are out, what about increasing the number of happy people? It is even easier to see that this can have no impact: simply assume that everyone's utility is some constant c>0. Then if we get everyone to construct a copy of themselves with the same utility, we end up with twice as many people with utility c - but since we started with countably infinitely many people and ended up with countably infinitely many people, we've accomplished nothing.
Bounding the individual
To avoid being completely ineffective, we need to make some stronger assumptions. For instance, we could bound the personal utilities. If the utilities are bounded above (or, indeed, below), then adding +1 will have a definite impact: we can't undo that effect by relabeling people. This is because the set of utilities now has a finite supremum (a generalisation of maximum) or infimum (a generalisation of minimum). And when we add +1, we increase the supremum or infimum by 1, so no relabeling can make the two collections of utilities match up.
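The supremum argument fits in one line. A sketch, assuming the utilities $u_i$ are bounded above:

```latex
S = \sup_i u_i < \infty
\quad\Longrightarrow\quad
\sup_i \,(u_i + 1) = S + 1 > S .
```

A relabeling merely permutes the collection of utilities, and the supremum is invariant under permutation, so no relabeling can disguise the +1 shift. The mirror-image argument with the infimum works when the utilities are bounded below.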
Bounding people's utilities on one side is not enough to ensure that multiplication has an impact, as the example above shows: everyone's utility was above 0, yet doubling changed nothing. But if people's utilities are bounded both above and below, then we can ensure that multiplying will change the overall situation (unless everyone's utility is at zero). The supremum and infimum argument works just as above, as long as one of them is non-zero.
Ok, so adding +1 utility to everyone is now a definite improvement (or at the least a definite change, and it certainly feels like an improvement). What about adding different amounts to different people? Is this an improvement?
Not so. Assume people's utilities are (..., 1/8, 1/4, 1/2, 1, 3/2, 7/4, 15/8, ...). This is bounded below (by 0) and bounded above (by 2). Yet if you move everyone's utility up to that of the person just above them, you will have increased everyone's utility and changed... absolutely nothing.
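Here too the relabeling can be written down explicitly. A sketch using Python's `Fraction` for exact arithmetic (the names are illustrative): agents n ≤ 0 have utility 2^n, agents n ≥ 1 have utility 2 - 2^(-n), and moving everyone up to the next value is just a shift.

```python
from fractions import Fraction

def u(n):
    # ..., 1/8, 1/4, 1/2, 1 (agents n <= 0: 2^n),
    # then 3/2, 7/4, 15/8, ... (agents n >= 1: 2 - 2^-n);
    # bounded below by 0 and above by 2
    if n <= 0:
        return Fraction(2) ** n
    return 2 - Fraction(1, 2) ** n

def u_after(n):
    # everyone moves up to the utility of the person just above them
    return u(n + 1)

def sigma(n):
    # ...yet the shift relabeling restores the original pattern
    return n - 1

for n in range(-200, 200):
    assert u_after(n) > u(n)            # everyone strictly gained
    assert u_after(sigma(n)) == u(n)    # ...and yet nothing changed
```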
To make sure that your improvement is genuine, you need to increase everyone's utility by at least ε, for some fixed ε>0. But you need not increase literally everyone's utility - for instance, you can skip a finite number of people and still get a clear improvement.
What about skipping an infinite number of people? This won't work in general. Assume you have infinitely many people at utility 1, and infinitely many at utility 2. Then if you move infinitely many people from 1 to 2 (while still leaving infinitely many at 1), you will have accomplished nothing.
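Even this case admits an explicit relabeling. In the sketch below (names illustrative), even-numbered agents sit at utility 2 and odd-numbered agents at utility 1; we "improve" the agents congruent to 1 mod 4 up to utility 2, leaving those congruent to 3 mod 4 at 1, and a bijection sigma undoes the whole thing.

```python
def u(n):
    # before: even agents at utility 2, odd agents at utility 1
    return 2 if n % 2 == 0 else 1

def u_after(n):
    # after: agents n = 1 (mod 4) raised from 1 to 2;
    # only agents n = 3 (mod 4) remain at utility 1
    return 1 if n % 4 == 3 else 2

def sigma(n):
    # an explicit bijection of the integers undoing the change:
    if n % 2 == 1:
        # odds map onto the agents still at utility 1 (residue 3 mod 4)
        return 2 * n + 1
    # evens map onto the agents now at utility 2 (residues 0, 1, 2 mod 4)
    k = n // 2
    return 4 * (k // 3) + k % 3

for n in range(-500, 500):
    assert u_after(sigma(n)) == u(n)
```

(Python's `%` always returns a non-negative residue here, so the residue tests work for negative agent labels too.)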
This leads to a rather curious form of egalitarianism: the only way of ensuring that you've got an improvement overall is to ensure that (almost) everyone shares in the improvement - to at least a small extent.
Duplicating happy people is still ineffective in the bounded cases.
Bounding the collective
What if not only the individual utilities are bounded, but the sum of utilities is also bounded - just as 1+1/2+1/4+1/8+... sums to 2? This is a very unlikely situation (most people's utilities would have to be arbitrarily close to zero). But, if it were to occur, everything becomes easy. Any time you increase anyone's utility by any amount, the overall situation is no longer equivalent to the initial one. The same goes for increasing everyone's utility by any amounts, whether or not these stay above some ε>0. We can slightly generalise this situation by changing the zero point of everyone's utility: if the sum of everyone's utility is bounded, for any choice of the zero point, then any increase of utility changes the situation. Relabeling cannot undo these improvements.
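When the total is finite, the sum itself is the invariant: rearranging an absolutely convergent series never changes its value, so any boost shows up. A finite-truncation sketch (the truncation length `N` and the `boost` are arbitrary illustrative choices):

```python
from fractions import Fraction

def u(n):
    # agents n = 1, 2, 3, ... with utilities 1/2, 1/4, 1/8, ... summing to 1
    return Fraction(1, 2) ** n

N = 60                         # finite truncation, for illustration only
total_before = sum(u(n) for n in range(1, N + 1))

boost = Fraction(1, 1000)      # raise a single agent's utility by any amount
total_after = total_before + boost

# The sum is invariant under relabeling, and it has strictly increased,
# so no relabeling can undo this improvement.
assert total_after > total_before
```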
Similarly, making finitely many extra copies of happy people now finally produces an inarguable change. Unlike the results above, however, this no longer holds if we move the zero point.
More versus happier people
It is interesting that improving overall happiness succeeds in more situations than duplicating happy people does. Infinity, it seems, often wants happier people, not more of them.