epistemic status: musings on a book I read over a decade ago
My lay formulation of the repugnant conclusion is as follows:
Imagine an island utopia, a wonderful land of grace, happiness, and surplus. A sea full of fish. Trees filled with fruit. There are 100 people on it, and each gets their slice of this small utopia. It would seem preferable, though, if 101 people were on it, even if the slice of the original 100 would be reduced by the newcomer's presence. We then continue adding people until everyone is left with the barest resources needed to just slightly thrive. Every new person seems worth adding. But the final, Malthusian state seems repugnant to many people, though famously not to Robin Hanson and Nick Land.
Derek Parfit thinks of it in terms of the balance of pleasures and pains in each individual life and the sum of total pleasures and pains of the population:
"For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living."
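The total-utilitarian arithmetic behind Parfit's claim can be made concrete with a toy calculation. The welfare units, population sizes, and welfare levels below are invented for illustration; they are not Parfit's numbers:

```python
# Toy total-utilitarian comparison behind the repugnant conclusion.
# Welfare units are arbitrary; all numbers are illustrative.

def total_welfare(population: int, welfare_per_life: float) -> float:
    """Total utility under simple aggregation: people times welfare each."""
    return population * welfare_per_life

# World A: 10 billion excellent lives.
world_a = total_welfare(10_000_000_000, 100.0)

# World Z: 2 trillion lives barely worth living (welfare just above zero).
world_z = total_welfare(2_000_000_000_000, 1.0)

# Under total utilitarianism, Z beats A even though every life in Z
# is barely worth living -- this is the repugnant conclusion.
print(world_z > world_a)  # True
```

The point of the sketch is only that simple aggregation is monotone in population: as long as each added life has welfare above zero, some sufficiently large population of barely-worth-living lives outscores any fixed population of excellent ones.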
Parfit calls the act of adding a new person "mere addition." But people's intuitions seem to rely on an assumption that I think is false. Many assume each individual's life must contain some proportion of pain: that precarity is an intrinsic side effect of stretching all available resources to sustain as many lives as possible. And I think they assume this because they are assuming the surplus the population enjoys must be created by the population itself. People are assuming as a background fact the following false thing: the entities that embody the value of a society must also produce the wealth that sustains it.
This assumption is defeated if economic surplus can be created by non-sentient robots, which seems pretty possible to me. If for some odd reason sentience is needed to create surplus, it may be possible to create motivational structures that do not rely on suffering to function.[1] If either of these is possible, then as far as I can tell each new life added can consist entirely of positive experience-moments, at no cost.
One way to see this is to imagine a world of brain emulations. Imagine a race of non-sentient robots that desire to maximize the number of positive human experience-moments. They convert the universe into computronium. When choosing which minds and experiences to simulate, they have no reason to add any suffering; it isn't buying them anything.
Does this solve the problem? No, but it redirects it in a productive way. The repugnance still lives here: if you are maximizing the number of happy lives, smaller minds (by some definition of "small" related to the cost of computing them) will let you get more of them. So these robots would pick the cheapest computations that meet their definitions of sentient and human. We have moved the problem rather than killed it utterly, but removing the suffering moments seems no small win, and so we should query our intuitions with this frame in mind.
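The pressure toward cheapest-qualifying minds falls out of simple division. A minimal sketch, assuming a fixed compute budget and hypothetical per-mind costs (every number and name below is invented for illustration):

```python
# Under a fixed compute budget, maximizing the count of happy minds
# pushes the optimizer toward the cheapest mind design that still
# clears the sentience bar. All numbers here are hypothetical.

COMPUTE_BUDGET = 1_000_000  # arbitrary compute units

# Candidate mind designs: (name, compute cost per mind).
# Assume each qualifies as sentient and human by the robots' criteria.
candidates = [
    ("rich_emulation", 1000),
    ("modest_emulation", 100),
    ("minimal_emulation", 10),
]

# Minds instantiable under the budget for each design.
for name, cost in candidates:
    print(f"{name}: {COMPUTE_BUDGET // cost} minds")

# A count-maximizer simply picks the cheapest qualifying design.
cheapest = min(candidates, key=lambda c: c[1])
print("chosen:", cheapest[0])
```

This is why the repugnance relocates rather than disappears: the optimization gradient now runs over mind size instead of welfare level, and everything hangs on where the "qualifies as sentient and human" threshold is drawn.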
There may be more pathological implications of this line of thought that have not occurred to me, but thinking in this way does dull much of the bite of mere addition for me.
[1] David Pearce argues that coherent and effective motivational structures that rely only on "gradients of bliss" rather than a pleasure-pain axis are possible. I am slightly suspicious of this notion but can't rule it out.