Diagram:

See https://en.wikipedia.org/wiki/Mere_addition_paradox for the diagram. In case its key parts are edited into a different form in the future, I'll provide a description here, adding numbers of my own invention chosen to reflect the heights of the bars in the diagram.

Description of diagram:

Population A might have 1000 people in it with a quality of life of 8, which we'll call Q8.

Population A+ is a combination of 1000 people at Q8 (population A) plus another 1000 people at Q4 (population A', though this population is not normally named).

Population B- is a combination of two lots of 1000 people which are both at Q7.

Population B has 2000 people at Q7.

The distinction between group B and B- is that B- keeps the two lots of 1000 people apart, which should reduce their happiness a bit as they have fewer options for friends, but we're supposed to imagine that they're equally happy whether they're kept apart (as in B-) or merged (in B).
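The scenario numbers above can be tallied with a short sketch. The group sizes and Q-values are the stand-in numbers from my description, not Parfit's own; the diagram itself gives only relative bar heights:

```python
# Illustrative totals for the four populations, using the stand-in
# numbers from the description above (not Parfit's own figures).
populations = {
    "A":  [(1000, 8)],             # 1000 people at Q8
    "A+": [(1000, 8), (1000, 4)],  # A plus 1000 extra people at Q4
    "B-": [(1000, 7), (1000, 7)],  # two separate groups, both at Q7
    "B":  [(2000, 7)],             # one merged group of 2000 at Q7
}

for name, groups in populations.items():
    size = sum(n for n, q in groups)
    total = sum(n * q for n, q in groups)
    print(f"{name:>3}: size={size}, total={total}, average={total / size:.2f}")
```

On these numbers, A+ has a higher total but lower average than A, while B- and B share the highest total of all four.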

Parfit's argument (to illustrate the paradox):

"Parfit observes that i) A+ seems no worse than A. This is because the people in A are no worse-off in A+, while the additional people who exist in A+ are better off in A+ compared to A [where they simply wouldn't exist] (if it is stipulated that their lives are good enough that living them is better than not existing)."

"Next, Parfit suggests that ii) B− seems better than A+. This is because B− has greater total and average happiness than A+."

"Then, he notes that iii) B seems equally as good as B−, as the only difference between B− and B is that the two groups in B− are merged to form one group in B."

"Together, these three comparisons entail that B is better than A. However, Parfit observes that when we directly compare A (a population with high average happiness) and B (a population with lower average happiness, but more total happiness because of its larger population), it may seem that B can be worse than A."

Paradox Lost:

First of all, we need to understand why there should be an optimal population size for a given amount of available resources, and if the population grows too high, total happiness goes down rather than up. This must be the case because the happiness of a population falls to zero long before the resources per person approach zero, and if you drag people out of poverty by giving them a modest increase in resources, their happiness shoots up, so it isn't a linear relationship either. The paradox superficially appears to deny this, but it only does so by introducing a fundamental error.
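To make the claim of an optimal population size concrete, here is a toy model. The functional form is entirely my own assumption, chosen only to capture the two stated properties: happiness rises steeply when people escape poverty, and falls below zero before resources per person reach zero. Under any such concave form, total happiness peaks at an intermediate population size rather than growing without bound:

```python
# Toy model (an illustrative assumption, not a claim about real economies):
# fixed resources R shared equally among n people, with per-person
# happiness a concave function of resources per head that goes negative
# below a subsistence threshold.
import math

R = 1000.0           # total available resources (arbitrary units)
SUBSISTENCE = 1.0    # resources per person at which happiness hits zero

def happiness(per_person):
    # Concave: rapid gains when escaping poverty, diminishing returns
    # later; negative (suffering) below the subsistence threshold.
    return math.log(per_person / SUBSISTENCE)

def total_happiness(n):
    return n * happiness(R / n)

# Total happiness peaks at an intermediate population size, then declines
# as further growth drags everyone towards subsistence.
best_n = max(range(1, 1000), key=total_happiness)
print(best_n, round(total_happiness(best_n), 2))
```

With this particular form the optimum lands near R/e people; the exact location is an artefact of the assumed function, but the existence of an interior peak is what matters for the argument.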

The error in the argument is hidden in the allocation of resources for A. Initially, A has access only to the resources of A and not to the resources of A'. When A' is added to A to make A+, new resources are brought in at the same time.

We can see now that A with access to all the resources of A+ (but without the population A') is inferior to A+ in terms of happiness because it's failing to use all the resources available to it, whereas A with access only to the resources of A is superior to A+ in terms of happiness per unit of resources. This is the key difference which Parfit missed.

When we look at A+, we see an unfair distribution of resources, and if we fixed that by sharing things evenly for all members of A+, A+ might well end up looking like B- because so many people would be lifted out of poverty and gain greatly in happiness without dragging A down very far.

We can thus see that A+ is inferior to an adjusted A+ in which resources have been redistributed to even them out. A+ is inferior to B- and B if it lacks that even distribution of resources, or it might be on a level with B- and B if it has redistributed its resources to even them out.

B is superior to A if A has access to the resources of A' while population A'=0, but A is superior to B if it doesn't have access to those extra resources of A' when you compare A and B per unit of available resources (which B has a lot more of).

So, the paradox evaporates: A is better than B if A only has the resources of A; but B is better than A if A has access to the full resources of A+ while it fails to use the component of those resources relating to A'.

(Note: If A was to use the resources of A' as well, it might go up to Q9 and have happier people in it, but the optimum population would then be higher and so it would have to grow to maximise total happiness per unit of resources.)


Adding resources to this thought experiment is just adding noise. If something other than life quality values matters in this model, then the model is bad.

A>B is correct in average utilitarianism and incorrect in total utilitarianism. The way to resolve this is to send average utilitarianism into the trash can, because it fails so many desiderata.

I'm not adding resources - they are inherent to the thought experiment, so all I've done is draw attention to their presence and their crucial role which should not be neglected. If you run this past a competent mathematician, they will confirm exactly what I've said (and be aware that this applies directly to total utilitarianism).

Think very carefully about why the population A' should have a lower level of happiness than A if this thought experiment is resources-independent. How would that work? Why would the quality of life for individuals fall as the population goes up and up infinitely if there's no dependence on resources?

If you don't have access to a suitable mathematician, I have access to some of the best ones via Hugh Porteous, so I could bring one in if necessary. It would be better though if you could find your own (so avoid any connected with Sheffield and the Open University, and you'd better avoid Liverpool and Cambridge too, because they could be accused of being biased in my favour as well).

I think that you have missed the point of the thought experiment. We can compare the utilities of a scenario without having to consider the mechanics that could produce each scenario. Just imagine that Omega comes up to you and says "You can choose which world I implement: A or A+." Which one would you rather have Omega instantiate?

The key point of the paradox is that preferences seem to be circular, which is very bad. If U(A)<U(A+)<U(B-)=U(B)<U(A), then the utility function is fundamentally broken. It doesn't matter that there's usually no way to get from B to A, or anything like that.

On the basis you just described, we actually have

U(A)<U(A+) : Q8x1000 < Q8x1000 + Q4x1000

U(A+)<U(B-) : Q8x1000 + Q4x1000 < Q7x2000

U(B-)=U(B) : Q7x2000 = Q7x2000

U(B)>U(A) : Q7x2000 > Q8x1000

In the last line you put "<" in where mathematics dictates that there should be a ">". Why have you gone against the rules of mathematics?

You changed to a different basis to declare that U(B)<U(A), and the basis that you switched to is the one that recognises the relation between happiness, population size and resources.

"In the last line you put "<" in where mathematics dictates that there should be a ">". Why have you gone against the rules of mathematics?"

That's my point! My entire point is that this circular ordering of utilities violates mathematical reasoning. The paradox is that A+ seems better than A, B- seems better than A+, B seems equal to B-, and yet B seems worse than A. (Dutch booking problem!) Most people do not consider "a world with the maximal number of people such that they are all still barely subsisting" to be the best possible world. Yet this is what you get when you carry out the Parfit operation repeatedly, and each individual step of the Parfit operation seems to increase preferability.

"You changed to a different basis to declare that U(B)<U(A), and the basis that you switched to is the one that recognises the relation between happiness, population size and resources."

No, it's not. It is a brute fact of my utility function that I do not want to live in a world with a trillion people that each have a single quantum of happiness. I would rather live in a world with a billion people that are each rather happy. The feasibility of the world doesn't matter - the resources involved are irrelevant - it is only the preferability that is being considered, and the preference structure has a Dutch book problem. That and that alone is the Parfit paradox.

"That's my point! My entire point is that this circular ordering of utilities violates mathematical reasoning."

It only violated it because you had wrongly put "<" where it should have been ">". With that corrected, there is no paradox. If you stick to using the same basis for comparing the four scenarios, you never get a paradox (regardless of which basis you choose to use for all four). You only get something that superficially looks like a paradox by changing the basis of comparison for different pairs, and that's cheating.

"The paradox is that A+ seems better than A, B- seems better than A+, B seems equal to B-, and yet B seems worse than A."

Only on a different basis. That is not a paradox. (The word "paradox" is ambiguous though, so things that are confusing can be called paradoxes even though they can be resolved, but in philosophy/logic/mathematics, the only paradoxes that are of significance are the ones that have no resolution, if any such paradoxes actually exist.)

"Most people do not consider "a world with the maximal number of people such that they are all still barely subsisting" to be the best possible world. Yet this is what you get when you carry out the Parfit operation repeatedly, and each individual step of the Parfit operation seems to increasepreferability."

That's because most people intuitively go on the basis that there's an optimal population size for a given amount of resources. If you want to do the four comparisons on that basis, you get the following: U(A)>U(A+)<U(B-)=U(B)<U(A), and again there's no paradox there. The only semblance of a paradox appears when you break the rules of mathematics by mixing the results of the two lots of analysis. Note too that you're introducing misleading factors as soon as you talk about "barely subsisting" - that introduces the idea of great suffering, but that would lead to a happiness level <0 rather than >0. For the happiness level to be just above zero, the people must be just inside the range of a state of contentment.

""You changed to a different basis to declare that (B)<(A), and the basis that you switched to is the one that recognises the relation between happiness, population size and resources." --> "No, it's not."

If you stick to a single basis, you get this:-

8000 < 12000 < 14000 = 14000 > 8000

No paradox.
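That single-basis chain can be checked mechanically, using the same stand-in numbers as before:

```python
# Totals for each scenario on one fixed basis (total happiness),
# using the stand-in Q-values and group sizes from the description.
A = 8 * 1000                  # 1000 people at Q8
A_plus = 8 * 1000 + 4 * 1000  # A plus 1000 people at Q4
B_minus = 7 * 2000            # two groups of 1000 at Q7
B = 7 * 2000                  # one merged group of 2000 at Q7

# On this single basis the chain runs one way with no circularity:
assert A < A_plus < B_minus == B
assert B > A  # the same basis gives B > A, not B < A
print(A, A_plus, B_minus, B)
```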

But, you may indeed be using a different basis from the different basis I've chosen (see below).

"It is a brute fact of my utility function that I do not want to live in a world with a trillion people that each have a single quantum of happiness."

Don't let that blind you to the fact that it is not a paradox. There are a number of reasons why you might not like B, or a later example Z where happiness for each person is at Q0.00000...0000001. One of them may be that you're adding unstated conditions to happiness, such as the idea that if happiness is more spaced out, you'll feel deprived of happiness during the long gaps between happy moments, or that if there is only one happy moment reserved for you in total, you'll feel sad after it has passed because you know there won't be another one coming. For the stats to be correct, these would have to be populations of modified people who have been stripped of many normal human emotions. For real people to have a total happiness of a single quantum, that would need to be an average in which they actually have a lot of happiness in their lives - enough to keep the negative feeling of being deprived of happiness at low levels much of the rest of the time and to cancel out those negatives overall, which means they're living good lives with some real happiness.

"I would rather live in a world with a billion people that are each rather happy."

Well, if that isn't driven by an intuitive recognition of there being optimal population sizes for a given amount of resources, you're still switching to a different basis where you will eliminate people who are less happy in order to increase happiness of the survivors. So, why not go the whole hog and extend that to a world with just one person who is extremely happy but where total happiness is less than in any other scenario? Someone can then take your basis for choosing smaller populations with greater happiness for each individual and bring in the same fake paradox by making the illegal switch to a different basis to say that a population with a thousand people marginally less happy than that single ecstatic individual is self-evidently better, even though you'd rather be that single ecstatic person.

All you ever have with this paradox is an illegal mixing of two bases, such as using one which seeks maximum total happiness while the other seeks maximum happiness of a single individual. So, why is it that when you're at one extreme you want to move away from it? The answer is that you recognise that there is a compromise position that is somehow better, and in seeking that, you're bringing in undeclared conditions (such as the loneliness of the ecstatic individual which renders him less happy than the stated value, or the disappointing idea of many other people being deprived of happiness which could easily have been made available to them). If you declare all of those conditions, you will have a method for determining the best choice. Your failure to identify all your undeclared conditions does not make this a paradox - it merely demonstrates that your calculations are incomplete. When you attempt to do maths with half your numbers missing, you shouldn't bet on your answers being reliable.

However, the main intuition that's actually acting here is the one I identified at the top: that there is an optimal population size for a given amount of available resources, and if the population grows too big (and leaves people in grinding poverty), decline in happiness will accelerate towards zero and continue accelerating into the negative, while if the population grows too small, happiness of individuals also declines. Utilitarianism drives us towards optimal population size and not to ever-larger populations with ever-decreasing happiness, because more total happiness can always be generated by adjusting the population size over time until it becomes optimal.

That only breaks if you switch to a different scenario. Imagine that for case Z we have added trillions of unintelligent sentient devices which can only handle a maximum happiness of the single quantum of happiness that they are getting. They are content enough and the total happiness is greater than in an equivalent of case A where only a thousand unintelligent sentient devices exist, but where these devices can handle (and are getting) a happiness level of Q8. Is the universe better with just a thousand devices at Q8 or trillions of them at Q0.000000001? The answer is, it's better to have trillions of them with less individual but greater total happiness. When you strip away all the unstated conditions, you find that utilitarianism works fine. There is no possible way to make these trillion devices feel happier, so reducing their population relative to the available amount of resources reduces total happiness instead of becoming more optimal, so it doesn't feel wrong in the way that it does with humans.

"The feasibility of the world doesn't matter - the resources involved are irrelevant - it is only the preferability that is being considered, and the preference structure has a Dutch book problem. That and that alone is the Parfit paradox."

If you want a version with no involvement of resources, then use my version with the unintelligent sentient devices so that you aren't bringing a host of unstated conditions along for the ride. There is no paradox regardless of how you cut the cake. All we see in the "paradox" is a woeful attempt at mathematics which wouldn't get past a school maths teacher. You do not have a set of numbers that shows a paradox where you use the same basis throughout (as would be required for it to be a paradox).