# 106

The following is a dialogue intended to illustrate what I think may be a serious logical flaw in some of the conclusions drawn from the famous Mere Addition Paradox.

EDIT:  To make this clearer, the interpretation of the Mere Addition Paradox this post is intended to criticize is the belief that a world consisting of a large population full of lives barely worth living is the optimal world. That is, I am disagreeing with the idea that the best way for a society to use the resources available to it is to create as many lives barely worth living as possible.  Several commenters have argued that another interpretation of the Mere Addition Paradox is that a sufficiently large population with a lower quality of life will always be better than a smaller population with a higher quality of life, even if such a society is far from optimal.  I agree that my argument does not necessarily refute this interpretation, but think the other interpretation is common enough that it is worth arguing against.

EDIT: On the advice of some of the commenters I have added a shorter summary of my argument in non-dialogue form at the end.  Since it is shorter I do not think it summarizes my argument as completely as the dialogue, but feel free to read it instead if pressed for time.

Bob:  Hi, I'm with R&P cable.  We're selling premium cable packages to interested customers.  We have two packages to start out with that we're sure you'll love.  Package A+ offers a larger selection of basic cable channels and costs $50.  Package B offers a larger variety of exotic channels for connoisseurs and costs $100.  If you buy Package A+, however, you'll get a 50% discount on B.

Alice:  That's very nice, but looking at the channel selection, I just don't think that it will provide me with enough utilons.

Bob: Utilons?  What are those?

Alice:  They're the unit I use to measure the utility I get from something.  I'm really good at shopping, so if I spend my money on the things I usually buy I get about 1.5 utilons for every dollar I spend.  Now, looking at your cable channels, I've calculated that I will get 10 utilons from buying Package A+ and 100 utilons from buying Package B.  Obviously the total is 110, significantly less than the 150 utilons I'd get from spending $100 on other things.  It's just not a good deal for me.

Bob:  You think so?  Well, it so happens that I've met people like you in the past and have managed to convince them.  Let me tell you about something called the "Mere Cable Channel Addition Paradox."

Alice:  Alright, I've got time.  Make your case.

Bob:  Imagine that the government is going to give you $50.  Sounds like a good thing, right?

Alice:  It depends on where it gets the $50 from.  What if it defunds a program I think is important?

Bob:  Let's say that it would defund a program that you believe is entirely neutral.  The harms the program causes are exactly outweighed by the benefits it brings, leaving a net utility of zero.

Alice:  I can't think of any program like that, but I'll pretend one exists for the sake of the argument.  Yes, defunding it and giving me $50 would be a good thing.

Bob:  Okay, now imagine the program's beneficiaries put up a stink, and demand the program be re-instituted.  That would be bad for you, right?

Alice:  Sure.  I'd be out $50 that I could convert into 75 utilons.

Bob:  Now imagine that the CEO of R&P Cable Company sleeps with an important senator and arranges a deal.  You get the $50, but you have to spend it on Package A+.  That would be better than not getting the money at all, right?

Alice:  Sure.  10 utilons is better than zero.  But getting to spend the $50 however I wanted would be best of all.

Bob:  That's not an option in this thought experiment.  Now, imagine that after you use the money you received to buy Package A+, you find out that the 50% discount for Package B still applies.  You can get it for $50.  Good deal, right?

Alice:  Again, sure.  I'd get 100 utilons for $50.  Normally I'd only get 75 utilons.

Bob:  Well, there you have it.  By a mere addition I have demonstrated that a world where you have bought both Package A+ and Package B is better than one where you have neither.  The only difference between the hypothetical world I imagined and the world we live in is that in one you are spending money on cable channels.  A mere addition.  Yet you have admitted that that world is better than this one.  So what are you waiting for?  Sign up for Package A+ and Package B!  And that's not all.  I can keep adding cable packages to get the same result.  The end result of my logic, which I think you'll agree is impeccable, is that you purchase Package Z, a package where you spend all the money other than what you need for bare subsistence on cable television packages.

Alice:  That seems like a pretty repugnant conclusion.

Bob:  It still follows from the logic.  For every world where you are spending your money on whatever you have calculated generates the most utilons, there exists another, better world where you are spending all your money on premium cable channels.

Alice:  I think I've found a flaw in your logic.  You didn't perform a "mere addition."  The hypothetical world differs from ours in two ways, not one.  Namely, in this world the government isn't giving me $50.  So your world doesn't just differ from this one in terms of how many cable packages I've bought; it also differs in how much money I have to buy them.
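For readers who want the arithmetic of Bob's ladder spelled out, here is a minimal sketch using the hypothetical numbers from the dialogue (1.5 utilons per dollar of ordinary spending, 10 utilons for Package A+, 100 for the discounted Package B); the scenario names are invented for illustration:

```python
NORMAL_RATE = 1.5           # utilons per dollar of ordinary spending (Alice's figure)
A_PLUS, B = 10, 100         # utilon payoffs from the two packages (dialogue's numbers)

# Alice has $50 of her own in every scenario; the windfall is another $50.
no_windfall    = 50 * NORMAL_RATE           # 75: own money spent normally
forced_a_plus  = A_PLUS + 50 * NORMAL_RATE  # 85: windfall buys A+, own money spent normally
add_discount_b = A_PLUS + B                 # 110: own $50 buys discounted B instead
free_windfall  = 100 * NORMAL_RATE          # 150: both $50s spent normally

# Each step of Bob's ladder really is an improvement on the last...
assert no_windfall < forced_a_plus < add_discount_b
# ...but the ladder never compares itself against spending the windfall freely:
assert free_windfall > add_discount_b       # 150 > 110
```

Every rung genuinely improves on the one before it, which is what makes the pitch persuasive; the comparison the ladder never makes is against the world where the same windfall arrives with no strings attached.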

Bob:  So can I interest you in a special form of the package?  This one is in the form of a legally binding pledge.  You pledge that if you ever make an extra $50 in the future you will use it to buy Package A+.

Alice:  No.  In the scenario you describe, the only reason buying Package A+ has any value is that it is impossible to get utility out of that money any other way.  If I just get $50 for some reason, it's more efficient for me to spend it normally.

Bob:  Are you sure?  I've convinced a lot of people with my logic.

Alice:  Like who?

Bob:  Well, there were these two customers named Michael Huemer and Robin Hanson who both accepted my conclusion.  They've both mortgaged their homes and started sending as much money to R&P cable as they can.

Alice:  There must be some others who haven't.

Bob:  Well, there was this guy named Derek Parfit who seemed disturbed by my conclusion, but couldn't refute it.  The best he could do is mutter something about how the best things in his life would gradually be lost if he spent all his money on premium cable.  I'm working on him though, I think I'll be able to bring him around eventually.

Alice:  Funny you should mention Derek Parfit.  It so happens that the flaw in your "Mere Cable Channel Addition Paradox" is exactly the same as the flaw in a famous philosophical argument he made, which he called the "Mere Addition Paradox."

Bob:  Really? Do tell?

Alice:  Parfit posited a population he called "A" which had a moderately large population with large amounts of resources, giving them a very high level of utility per person.  Then he added a second population, which was totally isolated from the first.  How they were isolated wasn't important, although Parfit suggested maybe they were on separate continents and couldn't sail across the ocean or something like that.  These people don't have nearly as many resources per person as the other population, so each person's level of utility is lower (their lack of resources is the only reason they have lower utility).  However, their lives are still just barely worth living.  He called the two populations "A+."

Parfit asked if "A+" was a better world than "A."  He thought it was: since the extra people were totally isolated from the original population, they weren't hurting anyone over there by existing.  And their lives were worth living.  Follow me so far?

Bob: I guess I can see the point.

Alice:  Next Parfit posited a population called "B," which was the same as A+, except that the two populations had merged together.  Maybe they got better at sailing across the ocean; it doesn't really matter how.  The people share their resources.  The result is that everyone in the original population had their utility lowered, while everyone in the second had it raised.

Parfit asked if population "B" was better than "A+" and argued that it was because it had a greater level of equality and total utility.

Bob: I think I see where this is going.  He's going to keep adding more people, isn't he?

Alice:  Yep.  He kept adding more and more people until he reached population "Z," a vast population where everyone had so few resources that their lives were barely worth living.  This, he argued, was a paradox: most people would believe that Z is far worse than A, yet he had made a convincing argument that it was better.

Bob:  Are you sure that sharing their resources like that would lower the standard of living for the original population?  Wouldn't there be economies of scale and such that would allow them to provide more utility even with fewer resources per person?

Alice: Please don't fight the hypothetical.  We're assuming that it would for the sake of the argument.

Now, Parfit argued that this argument led to the "Repugnant Conclusion," the idea that the best sort of world is one with a large population with lives barely worth living.  That confers on people a duty to reproduce as often as possible, even if doing so would lower the quality of their and everyone else's lives.

He claimed that the reason his argument showed this was that he had conducted "mere addition."  The populations in his paradox differed in no way other than their size.  By merely adding more people he had made the world "better," even if the level of utility per person plummeted.  He claimed that "For every population, A, with a high average level of utility there exists another, better population, B, with more people and a lower average level of utility."

Do you see the flaw in Parfit's argument?

Bob:  No, and that kind of disturbs me.  I have kids, and I agree that creating new people can add utility to the world.  But it seems to me that it's also important to enhance the utility of the people who already exist.

Alice: That's right.  Normal morality tells us that creating new people with lives worth living and enhancing the utility of people that already exist are both good things to use resources on.  Our common sense tells us that we should spend resources on both those things.  The disturbing thing about the Mere Addition Paradox is that it seems at first glance to indicate that that's not true, that we should only devote resources to creating more people with barely worthwhile lives.  I don't agree with that, of course.

Bob:  Neither do I. It seems to me that having a large number of worthwhile lives and a high average utility are both good things and that we should try to increase them both, not just maximize one.

Alice:  You're right, of course.  But don't say "having a high average utility."  Say "use resources to increase the utility of people who already exist."

Bob:  What's the difference? They're the same thing, aren't they?

Alice:  Not quite.  There are other ways to increase average utility than enhancing the utility of existing people.  You could kill all the depressed people, for instance.  Plus, if there was a world where everyone was tortured 24 hours a day, you could increase average utility by creating some new people who are only tortured 23 hours a day.

Bob:  That's insane!  Who could possibly be that literal-minded?

Alice:  You'd be surprised.  The point is, a better way to phrase it is "use resources to increase the utility of people who already exist," not "increase average utility."  Of course, that still leaves some stuff out, like the fact that it's probably better to increase everyone's utility equally, rather than focus on just one person.  But it doesn't lead to killing depressed people, or creating slightly less tortured people in a Hellworld.
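Alice's two counterexamples are just averaging arithmetic. A sketch with invented utility numbers makes both failure modes concrete:

```python
def avg(utilities):
    return sum(utilities) / len(utilities)

# Invented numbers: a hellworld of 100 people at utility -24 each.
hellworld = [-24] * 100

# "Improving" the average by adding 50 slightly-less-tortured people:
expanded = hellworld + [-23] * 50
assert avg(expanded) > avg(hellworld)   # the average rises...
assert all(u < -20 for u in expanded)   # ...yet every life is still terrible

# "Improving" the average by removing the least happy (invented mixed world):
mixed = [5] * 90 + [-10] * 10
culled = [u for u in mixed if u > 0]    # "kill all the depressed people"
assert avg(culled) > avg(mixed)         # average rises without helping anyone
```

In neither case does any existing person's life get better, which is why Alice prefers "use resources to increase the utility of people who already exist" over "increase average utility."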

Bob:  Okay, so what I'm trying to say is that resources should be used to create people and to improve people's lives.  Also, equality is good.  And none of these things should completely eclipse the others; each is too valuable to be sacrificed to maximizing just one.  So a society that increases all of those values should be considered more efficient at generating value than a society that just maximizes one value.  Now that we're done getting our terminology straight, will you tell me what Parfit's mistake was?

Alice:  Population "A" and population "A+" differ in two ways, not one. Think about it.  Parfit is clear that the extra people in "A+" do not harm the existing people when they are added.  That means they do not use any of the original population's resources.  So how do they manage to live lives worth living?  How are they sustaining themselves?

Bob:  They must have their own resources.  To use Parfit's example of continents separated by an ocean;  each continent must have its own set of resources.

Alice:  Exactly.  So "A+" differs from "A" both in the size of its population, and the amount of resources it has access to.  Parfit was not "merely adding" people to the population.  He was also adding resources.

Bob: Aren't you the one who is fighting the hypothetical now?

Alice:  I'm not fighting the hypothetical.  Fighting the hypothetical consists of challenging the likelihood of the thought experiment happening, or trying to take another option than the ones presented.  What I'm doing is challenging the logical coherence of the hypothetical.  One of Parfit's unspoken premises is that you need some resources to live a life worth living, so by adding more worthwhile lives he's also implicitly adding resources.  If he had just added some extra people to population A without giving them their own continent full of extra resources to live on then "A+" would be worse than "A."
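Alice's distinction can be put in toy numbers. This sketch invents all quantities (1000 resource units, utility as an even per-person share) purely to illustrate the two-variable point:

```python
def average_utility(resources, population):
    # Toy assumption: each person's utility is an even share of resources.
    return resources / population

A = (1000, 100)                    # original continent: average utility 10.0

# Parfit's "A+": the newcomers arrive *with* a continent of resources --
# otherwise their lives could not be worth living at all.
A_plus = (1000 + 300, 100 + 200)   # average utility ~4.3

# A genuinely mere addition -- more people, no new resources -- only
# dilutes everyone's share, making the world worse, as Alice says:
mere = (1000, 100 + 200)           # average utility ~3.3

assert average_utility(*A_plus) < average_utility(*A)
assert average_utility(*mere) < average_utility(*A_plus)
```

The move from A to A+ changes both coordinates of the pair, not just the population; that is the hidden second difference.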

Bob:  So the Mere Addition Paradox doesn't confer on us a positive obligation to have as many children as possible, because the amount of resources we have access to doesn't automatically grow with them.  I get that.  But doesn't it imply that as soon as we get some more resources we have a duty to add some more people whose lives are barely worth living?

Alice:  No.  Adding lives barely worth living uses the extra resources more efficiently than leaving Parfit's second continent empty for all eternity.  But it's not the most efficient way.  Not if you believe that creating new people and enhancing the utility of existing people are both important values.

Let's take population "A+" again.  Now imagine that instead of having a population of people with lives barely worth living, the second continent is inhabited by a smaller population with the same very high level of resources and utility per person as the population of the first continent.  Call it "A++."  Would you say "A++" was better than "A+"?

Bob:  Sure, definitely.

Alice:  How about a world where the two continents exist, but the second one was never inhabited?  The people of the first continent then discover the second one and use its resources to improve their level of utility.

Bob:  I'm less sure about that one, but I think it might be better than "A+."

Alice:  So what Parfit actually proved was: "For every population, A, with a high average level of utility there exists another, better population, B, with more people, access to more resources and a lower average level of utility."

And I can add my own corollary to that:  "For every population, B, there exists another, better population, C, that has the same access to resources as B, but a smaller population and higher average utility."
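Stated in the same kind of toy model (all numbers invented), Alice's corrected claim and corollary come out as:

```python
def average_utility(resources, population):
    return resources / population

A = (1000, 100)   # high average utility: 10.0
B = (1300, 300)   # more people *and* more resources, lower average: ~4.3
C = (1300, 130)   # Alice's corollary: B's resources, fewer people: 10.0

# The corrected claim: B differs from A in resources as well as population.
assert B[0] > A[0] and B[1] > A[1]
assert average_utility(*B) < average_utility(*A)

# The corollary: same resources as B, smaller population, higher average.
assert C[0] == B[0] and C[1] < B[1]
assert average_utility(*C) > average_utility(*B)
```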

Bob: Okay, I get it.  But how does this relate to my cable TV sales pitch?

Alice:  Well, my current situation, where I'm spending my money on normal things, is analogous to Parfit's population "A."  High utility, and very efficient conversion of resources into utility, but not as many resources.  We're assuming, of course, that using resources to both create new people and improve the utility of existing people is more morally efficient than doing just one or the other.

The situation where the government gives me $50 to spend on Package A+ is analogous to Parfit's population A+.  I have more resources and more utility.  But the resources aren't being converted as efficiently as they could be.

The situation where I take the 50% discount and buy Package B is equivalent to Parfit's population B.  It's a better situation than A+, but not the most efficient way to use the money.

The situation where I get the $50 from the government to spend on whatever I want is equivalent to my population C.  A world with more access to resources than A, but more efficient conversion of resources to utility than A+ or B.

Bob: So what would a world where the government kept the money be analogous to?

Alice: A world where Parfit's second continent was never settled and remained uninhabited for all eternity, its resources never used by anyone.

Bob: I get it.  So the Mere Addition Paradox doesn't prove what Parfit thought it did?  We don't have any moral obligation to tile the universe with people whose lives are barely worth living?

Alice:  Nope, we don't.  It's more morally efficient to use a large percentage of our resources to enhance the lives of those who already exist.

Bob:  This sure has been a fun conversation.  Would you like to buy a cable package from me?  We have some great deals.

Alice: NO!

SUMMARY:

My argument is that Parfit's Mere Addition Paradox doesn't prove what it seems to.  The argument behind the Mere Addition Paradox is that you can make the world a better place by the "mere addition" of extra people, even if their lives are barely worth living.  In other words: "For every population, A, with a high average level of utility there exists another, better population, B, with more people and a lower average level of utility."  This supposedly leads to the Repugnant Conclusion, the belief that a world full of people whose lives are barely worth living is better than a world with a smaller population where the people lead extremely fulfilled and happy lives.

Parfit demonstrates this by moving from world A, consisting of a population with lots of resources and high average utility, to world A+.  World A+ has an additional population of people who are isolated from the original population and not even aware of the other's existence.  The extra people live lives just barely worth living.  Parfit argues that A+ is a better world than A because everyone in it has a life worth living, and the additional people aren't hurting anyone by existing because they are isolated from the original population.

Parfit then moves from World A+ to World B, where the populations are merged and share resources.  This lowers the standard of living for the original people and raises it for the newer people.  Parfit argues that B must be better than A+ because it has higher total utility and more equality.  He then keeps adding people until he reaches Z, a world where everyone's life is barely worth living and the population is vast.  He argues that this is a paradox because most people would agree that Z is not a desirable world compared to A.

I argue that the Mere Addition Paradox is a flawed argument because it does not just add people, it also adds resources.  The fact that the extra people in A+ do not harm the original people of A by existing indicates that their population must have a decent amount of resources to live on, even if it is not as many per person as the population of A.  For this reason what the Mere Addition Paradox proves is not that you can make the world better by adding extra people, but rather that you can make it better by adding extra people and resources to support them.  I use a series of choices about purchasing cable television packages to illustrate this in concrete terms.

I further argue for a theory of population ethics that values both using resources to create lives worth living and using resources to enhance the utility of already existing people, and considers the best sort of world to be one where neither of these two values totally dominates the other.  By this ethical standard A+ might be better than A because it has more people and resources, even if the average level of utility is lower.  However, a world with the same amount of resources as A+, but a lower population and the same or higher average utility as A, is better than A+.

The main unsatisfying thing about my argument is that while it avoids the Repugnant Conclusion in most cases, it might still lead to it, or something close to it, in situations where creating new people and getting new resources are, as one commenter noted, a "package deal."  In other words, a situation where it is impossible to obtain new resources without creating some new people whose utility levels are below average.  However, even in this case, my argument holds that the best world of all would be one where it is possible to obtain the resources without creating new people, or by creating a smaller number of people with higher utility.

In other words, the Mere Addition Paradox does not prove that: "For every population, A, with a high average level of utility there exists another, better population, B, with more people and a lower average level of utility." Instead what the Mere Addition Paradox seems to demonstrate is that: "For every population, A, with a high average level of utility there exists another, better population, B, with more people, access to more resources and a lower average level of utility."  Furthermore, my own argument demonstrates that: "For every population, B, there exists another, better population, C, which has the same access to resources as B, but a smaller population and higher average utility."

COMMENTS:

cousin_it:  Okay, so Parfit's paradox doesn't prove that we should make more people if our resources are constant.  And it doesn't prove that we should make more people when we get more resources.  But it might still prove that we should agree to make more people and more resources if it's a package deal.

More concretely, if you had a button that created (or made accessible) one additional unit of resource and a million people using that resource to live lives barely worth living, would you press that button?  Grabbing only the resources and skipping the people isn't on the menu of the thought experiment.  It seems to me that if you would press that button, and also press the next button that redistributes all existing resources equally among existing people, then the repugnant conclusion isn't completely dead...

Ghatanathoah:

> But it might still prove that we should agree to make more people and more resources if it's a package deal.

You're right, my argument does not prohibit the particular hypothetical you offered up.  The one quibble I have is that I'm not sure how much resources "one unit" is, but it would have to be a sizable amount for a million people to live lives barely worth living on it.

In fact, your hypothetical is pretty much structurally identical to the cable bill hypothetical that Bob offers up.  And, if you recall, Alice does not disagree that buying Package A+ would be irrational if the government really was going to give her $50 if she did it.  So I might have only killed the repugnant conclusion 99.9% dead.  For now I'm content with that; I've eliminated it as a possibility from any situation that is remotely likely to happen in real life, and that's good enough for now.

As for whether I'd push the button?  I probably wouldn't, even though my argument doesn't exclude it.  However, I don't know if that's because there is some other moral objection to the repugnant conclusion that I haven't articulated yet, or if it's just because I can be kind of selfish sometimes.

cousin_it:

> I've eliminated it as a possibility from any situation that is remotely likely to happen in real life

Hmm, I can imagine situations where you can't extract the resources without adding people.  For example, should humans settle a place if it can support life, but only at a low level of comfort, and exporting resources from there isn't economically viable?

Ghatanathoah:  It seems to me that if the settlement is done voluntarily it must fulfill some preference that the settlers value more than comfort.  Freedom, adventure, or the feeling that you're part of something bigger, to name three possibilities.  For that reason their lives couldn't really be said to have lowered in quality.  If it's done involuntarily my first instinct is to say that no, we shouldn't do it, although you could probably get me to say yes by introducing some extenuating circumstance, like it being the only way to prevent extinction.  Of course, this then brings up the issue of whether or not the settlers should have children who might not feel the same way they do.  I'm much less sure about the morality of doing that.

cousin_it:  Yes, the scenario involves adding people, not just moving them around.
That's what makes population ethics tricky. 3RichardKennaway9ySuch as, for example, the Moon or Mars? -1[anonymous]9yI would say yes, to the extent that it reduces species ex-risk to have those extra people. (For instance, having a Mars colony as per RichardKennaway's example would reduce Ex-Risk) However, it is possible that adding extra people in some cases might instead increase Ex-Risk (say, a slum outside of a city which might breed disease that spreads to the city) and in that case I might say No. That's a separate problem with the repugnant conclusion that bothers me sometimes. It appears to be the case that at some point the average function starts greatly increasing ex-risk at a later point even though it doesn't do that at the beginning. If you are down to Muzak and Potatoes, a potato famine wipes you out. So if you have "Potatoes, Carrots and Muzak" in Pop Y, and "Potatoes" in Pop Z, averaging it out to "Potatoes and Muzak" for everyone might increase average happiness, and Pop Z wouldn't mind, but it wouldn't be safe for Pop Y and Z together as a species, because they lose the safety of being able to come back from a potato famine. That also seems to come with a built in idea of what kind of averaging is acceptable and where there are limits on averaging. Taking from a richer populations status to improve a poorer populations health would be fine. Taking from a richer populations health or safety to improve a lower populations safety would be unreasonable. And a life where your health and safety are well guaranteed certainly sounds a hell of a lot better than "barely worth living." so it doesn't descend down into repugnance. Basically, if instead of just looking at Population and Utility, you look at Population, Utility, and Ex-risk, the problem seems to vanish. It seems to say "Yes, add" and "Yes average" when I want it to add and average and say "no, don't add" and "No don't average" when I want it to now add and not average. 
You could also just say "Well, Ex-Risk is part of my utility function" but that seems to lead to tricky calculation questions such as: A 0Ghatanathoah9yThis criticism has been made before. I think the standard reply was that it may indeed be the case that we would need to have a life somewhat above the level of "barely worth living" in order to guard against the possibility that some sort of disaster would lower the quality of the people's lives to such an extent that they were no longer worth living. However, such a standard of living would likely still be low enough for the Repugnant Conclusion to remain repugnant. 6shokwave9yIt does, but by definition. Let X and Y be populations. Each population has a number of people and an amount of resources. Resources are distributed evenly, so the average utility of a population and each individual's utility is given by: resources over people. We will say the "standard of living", the level at which a life is 'barely worth living', is a utility of 1. And we will say that Z is when the utility is below the standard of living. These are our definitions. For numbers, let's say X and Y start out with 100 people and 500 resources, giving each a utility of 5. This is good! In X, we will perform the false method: simply adding people. In one step, we go to 105 people (utility 4.7, still good), then 110 (utility 4.5) and in 80 steps we will have reached our repugnant Z, with 505 people and 500 resources giving us a utility of 0.99. Now in Y, we will perform the strengthened method: absorb a small population with bare minimum living standards, thus bringing everyone down slightly. In one step, we got to 105 people and 505 resources (4.8 utility, still good) then 110 and 510 (4.6, still good) and then Z arrives .... No, it doesn't. Utility in Y will asymptotically approach 1 from above and we will never reach Z. Thus, the repugnant conclusion is dead. 
You may argue that "just barely above the absolute bare minimum" is not worth living, but you won't get very far: previously, we defined any life above the minimum standard as worth living. So if you say that, instead, 2 utility is the minimum worth living for, Y will asymptotically approach 2. And you can hardly argue that "just above 2" isn't worth living for, because you just said before that 2 is the minimum! So yes, the repugnant conclusion is truly dead. (An analogy for this population Y is colonising new planets: the older planets will be affluent, but the frontier new colonies will be hardscrabble and just barely worth it. But this is not a repugnant conclusion! This is like Firefly, and that 0magfrump9yI used almost this exact line in a discussion with my girlfriend about a week ago (talking about Everything Matters! [http://www.amazon.com/Everything-Matters-Ron-Currie-Jr/dp/0670020923]. 0tgb9yI dislike this post. I don't mean this to be a personal attack and I don't want to come off as hostile, but I do want to make my objections known. I am choosing to state my reasons in lieu of downvoting. First, "It does, but by definition." is clearly false, otherwise you wouldn't spend 6 paragraphs explaining it. This is something of a pet peeve of mine from grading homework, but whatever, it's not important. More importantly, its not really addressing the problems being discussed here. The discussion is whether 100 people at 500 resources is better than your asymptotically-worthless massive population, which is something that you don't mention at all. Instead, you argue that if we have N+400 resources and N people and each person needs 1 resource to barely survive, then everyone survives when resources are evenly distributed, no matter what N you pick. Okay, but the conclusion is somehow "the repugnant conclusion is dead"? 
To be honest, I thought you were trying to argue in favor of the repugnant conclusion, at least in the specialized case of a universe that offers you N resources for every additional N people. But the only conclusion I see you really reaching is that a lot of people at a better-than-dead state is better than a world where there aren't people - this doesn't strike me as very exciting. It seems fairly clear to me that one way in which Y is better than Y+ is that Y has greater average utility. That said, I think most of my dislike for this post is caused by the tone and manner of expression. It was fairly disorganized and overly long. The tone was demeaning and combative: assuming the reader will disagree with basic premises and the use of phrases like "thus the student became enlightened". Note how the top-level post gives the opposing voice to a fictional character rather than forcing it upon the reader - this is a much friendlier approach. Lastly, can you tell me where you bought your Halting Machine? I wouldn't mind one for myself... ;) 0shokwave9yYeah, on reflection the post is very unclear. I agree with the quoted sentiment, but the point I should have made was that we get to Y+ by a process that reduces average utility (redistributing resources evenly), so it doesn't seem surprising or confusing that Y has greater average utility. 5steven04619yAs in, "human resources". 0asparisi9yOr any scenario where adding more people increases our capacity to take advantage of available resources. (such as most agricultural communities throughout history) 4shminux9yI find it repugnant to even consider creating people with lives worse than the current average. So some resources will just have to remain unused, if that's the condition. 1jefftk9yWhat do you find repugnant about it? 0shminux9yIntentionally creating people less happy than I am. Think about it from the parenting perspective. 
Would you want to bring unhappy children into the world (your personal happiness level being the baseline), if you could predict their happiness level with certainty?

"Intentionally creating people less happy than I am." That is, your life is the least happy life worth living? If you reflectively endorse that, we ought to have a talk on how we can make your life better.

4Benquo3yThis, in conjunction with some other stuff I've been working on, prompted me to rethink some things about my priorities in life. Thanks!

-1shminux9yAgain, a misunderstanding. See my other reply [http://lesswrong.com/lw/dso/the_mere_cable_channel_addition_paradox/7407].

4Vaniver9yIt's not clear to me that this is a misunderstanding. I think that my life is pretty dang awesome, and I would be willing to have children that are significantly less happy than I am (though, ceteris paribus, more happiness is better). If you aren't, reaching out with friendly concern seems appropriate.

0shminux9yRemember, not "provided I already have children, I'm OK with them being significantly less happy than I am", but "Knowing for sure that my children will be significantly less happy than I am, I will still have children". May not give you pause, but probably will to most (first-world) people.

0magfrump9yI suspect that most first-world people are significantly less happy than many happy people on LW, and that those people on LW would still be very happy to have children who were as happy as average first-worlders, though reasonably hoping to do better.

1TheOtherDave9yWell... hrm. I have evidence that if my current happiness level is the baseline, I prefer the continued existence of at least one sub-baseline-happy person (myself) to their nonexistence. That is, when I go through depressive episodes in which I am significantly less happy than I am right now, I still want to keep existing. I suspect that generalizes, though it's really hard to have data about other people's happiness.
It seems to me that if I endorse that choice (which I think I do), I ought not reject creating a new person whom I would otherwise create, simply because their existence is sub-baseline-happy. That said, it also seems to me that there's a level of unhappiness below which I would prefer to end my existence rather than continue my existence at that level. (I go through periods of those as well, which I get through by remembering that they are transient.) I'm much more inclined to treat that level as the baseline.

1shminux9yThis does not contradict what I said. Creation != continued existence, as emphasized in the OP. There is a significant hysteresis between the two. You don't want to have children less happy than you are, but you won't kill your own unhappy children.

1TheOtherDave9yAgreed that creation != continued existence. There are situations under which I would kill my own unhappy children. Indeed, there are even such situations where, were they happier, I would not kill them. However, "less happy than I am" does not describe those situations.

0shminux9yLooks like we agree, then.

0jefftk9yThis probably isn't the same as "creating people with lives worse than the current average". Why would that be the baseline? I'm lucky enough to have a high happiness set point, but that doesn't mean I think everyone else has lives that are not worth living. Unhappy as in net negative for their life? No. Unhappy as in "less happy than average"? Depends what the average is, but quite possibly.

1Ghatanathoah9yI've considered this possibility as well. One argument that's occurred to me is that adding more people in A+ might actually be harming the people in population A, because the people in population A would presumably prefer that there not be a bunch of desperately poor people who need their help kept forever out of reach, and adding the people in A+ violates that preference.
Of course, the populations are not aware of each others' existence, but it's possible to harm someone without their knowledge; if I spread dirty rumors about someone I'd say that I harmed them even if they never find out about it. However, I am not satisfied with this argument; it feels a little too much like a rationalization to me. It might also suggest that we ought to be careful about how we reproduce in case it turns out that there are aliens out there somewhere living lives far more fantastic than ours are.

1shminux9yInstrumentally, if absolutely no interaction, not even indirect, is possible between the two groups, there is no way one group can harm another. True, but only because rumors can harm people, so the "no interaction" rule is broken.

1Ghatanathoah9yI'm not sure about that. I don't think most people would want rumors spread about them, even if the rumors did nothing other than make some people think worse of them (but they never acted on those thoughts). Similarly, it seems to me that someone who cheats on their spouse and is never caught has wronged their spouse, even if their spouse is never aware of the affair's existence, and the cheater doesn't spend less money or time on the spouse because of it. Now, suppose I have a strong preference to live in a universe where innocent people are never tortured for no good reason. Now, suppose someone in some far-off place that I can never interact with tortures an innocent person for no good reason. Haven't my preferences been thwarted in some sense?

0shminux9yHow do you know it is not happening right now? Since there is no way to tell, by your assumption, you might as well assume the worst and be perpetually unhappy. I warmly recommend instrumentalism as a workable alternative.

4Ghatanathoah9yThere is no need to be unhappy over situations I can't control.
I know that awful things are happening in other countries that I have no control over, but I don't let that make me unhappy, even though my preferences are being perpetually thwarted by those things happening. But the fact that it doesn't make me unhappy doesn't change the fact that it's not what I'd prefer.

4[anonymous]9yIndeed, I immediately thought “what's the difference between the government giving you $50 that you can only spend on cables, and it just giving you cables?”.
2Ghatanathoah9yThere isn't one. The reason I phrased it that way was to help keep the link between the various steps in the thought experiment as clear as possible.
0GLaDOS9yI think a button redistributing all existing resources equally among existing people is one I'd almost certainly not press.
0Kaj_Sotala9yThis might be getting into semantics, but I don't think your proposed dilemma really qualifies as the RC anymore. The RC was interesting because it seemed to derive an obviously unacceptable conclusion (a world full of people whose lives are barely worth living) from premises / steps that were all individually obviously acceptable. Yours employs a step (create people whose lives are barely worth living, without getting enough extra resources to make up for it) that's already ethically ambiguous, due to clearly leading to a world with a population dominated by people whose lives are barely worth living.
0cousin_it9yIn my argument the button could create people and resources leading to a standard of living just below the current average, like in the original RC.
0Kaj_Sotala9yPoint taken, though that's still a more morally ambiguous step than the equivalent in the original RC. There are already plenty of people today who think that people shouldn't have more children due to the Earth's resources being limited. That's not an exact mapping to "creating new people that gave us some small amount of extra resources", but it's close and brings to mind the same arguments.

Nice dialogue!

I think that the term "barely worth living" is a terrible source of equivocation that underlies a lot of the apparent paradoxicalness. "Barely worth living" can mean that, if you're already alive and don't want to die, your life is almost but not quite horrible enough that you would rather commit suicide than endure. But if you're told that somebody like this exists, it is sad news that you want to hear as little as possible. You may not want to kill them, but you also wouldn't have that child if you were told that was what your child's life would be like. What Parfit postulates should be called, to avoid equivocation, "A life barely worth celebrating" - it's good news and you say "Yay!" but very softly. I'd even argue that this should be a universal standard for all discussions of the Repugnant Conclusion.

I think 'barely worth living' is universally applicable. Anyone's life can be seen as 'barely worth living' by a sufficiently advanced spoiled child. E.g. we would see all cavemen's lives as 'barely worth living', all while those guys say, ohh, the hunting's been great this year.

8Kaj_Sotala9yReading your comment (and others in this vein) and realizing that the RC isn't as bad as I'd thought it was, and therefore doesn't show human morals to be so inconsistent as I'd thought them to be, makes me update towards human morals in general maybe not being so inconsistent at all. (At least within an individual; not so much between cultures.)
3Ghatanathoah9yExcellent point. I'll try to remember to do that if I end up discussing this again.
9shminux9y"barely worth creating" is probably a less ambiguous term.
2[anonymous]9yYes, I had thought about setting the zero of the function to be summed across individuals to a higher level than “just barely good enough for them not to want to die”. The problem with that is that then there would be people who don't want to die but still have a negative utility, and even a total utilitarian would conclude they had better die (at least in “dry water” models when you neglect the grief of their friends and family, and the cessation of the externalities of their life). Edit: It looks like “dry water” has acquired a meaning totally unrelated to the one I had in mind. (It was the derogatory term John von Neumann used to refer to models of fluids without viscosity, whose properties are very different from those of real fluids.)

Parfit argued that this argument led to the "Repugnant Conclusion," the idea that the best sort of world is one with a large population with lives barely worth living.

So the Mere Addition Paradox doesn't prove what Parfit thought it did? We don't have any moral obligation to tile the universe with people whose lives are barely worth living?

I'm pretty sure that this is not what Parfit was arguing.

As I understand it, Parfit's Repugnant Conclusion was that, given any possible world (even one with billions of people who each have an extremely high quality of life), there is a better possible world in which everyone has a life that is barely worth living (better because the population is much larger, and "barely worth living" is better than nothing). The argument he made was that the Repugnant Conclusion followed from most theories of population ethics (that is, most attempts to define "better" in this context), but most people refused to accept it.

That does not mean that a high-population low-quality-of-life world is the best possible world; a possible world with the same high population and higher quality of life would be even better. And it doe... (read more)

6steven04619yExactly. The original post is straightforwardly wrong, and doesn't even do its readers the courtesy of including a one-line summary that lets them avoid having to read the whole thing. The fact that it's at +40 is a damning indictment of LessWrong's ability to tell good arguments from bad.

The only serious mistake I see in the original post is that it misinterprets Parfit. I agree with Unnamed that it does. But LessWrongers haven't necessarily read Parfit, and they may have seen his ideas misused to argue in the way the post criticizes, so they can't really be expected to detect the misinterpretation.

-1Ghatanathoah9yThe Mere Addition Paradox was the main argument Parfit used to argue that a possible world with a larger population and a lower quality of life was necessarily better. My argument is that the MAP doesn't show this at all. I am aware that it was not the only argument Parfit used, but it was the most effective, in my opinion, so I wanted to take it on.

It helps that I am already using a somewhat abnormal theory of population ethics. Alice and Bob elucidate it to a limited extent, but it's somewhat similar to the "variable value principle" described in Stanford's page on the subject. Basically I argue that having high total and high average utility are both valuable and that it's morally good to increase both. I use the somewhat clunkier phrases "use resources to create lives worth living" and "use resources to enhance the utility of existing people" to avoid things like Ng's Sadistic Conclusion and Parfit's Absurd Conclusion.

According to the theory I am using, possible World 1 is worse than hypothetical World 2, providing both worlds have access to the same level of resources. My solution to the Mere Addition Paradox seems to indicate that World 1 might be better than World 2 if it has access to many more resources to convert into utility. However, a world with a smaller population, higher average utility, and the same level of resources as World 1 would always be better (providing the higher average utility was obtained by spending resources enhancing existing people's utility, not by killing depressed people or something like that).
6Unnamed9yWhat Parfit argued is that, given any possible world, there is a better world with a larger population and a lower quality of life (according to most people's definitions of "better"). There is even a better world with a much larger population and a quality of life that is barely above zero.

It sounds like you agree, but you're just noting that the higher-population, lower-quality-of-life, better world also differs in other ways; in particular, it has more resources. At least that's how I read it when you say: "For every population, A, with a high average level of utility there exists another, better population, B, with more people, access to more resources and a lower average level of utility." To me, that sounds like you are biting the bullet and accepting the Repugnant Conclusion. You just think that the conclusion isn't so repugnant, because those worlds also differ in amount of resources.

Is the following a fair summary of your position?: When looking at the possible future worlds that are reachable from a given starting point, a barely-worth-living world will never be the best world to aim for, because there is always a better option which has higher quality of living (i.e., an option that makes better use of the resources available at the starting point).
1Ghatanathoah9yMy understanding of Parfit is that he believed the Mere Addition Paradox showed that a world that differed in no other way besides having a larger population size and a lower quality of life was better than one with a smaller population and a higher quality of life. That's why it's called the Mere Addition Paradox: you arrive at the Paradox by adding more people, redistributing resources, and doing nothing else. That is what I understand to be the Repugnant Conclusion. What makes it especially repugnant is that it implies that people in the here and now have a duty to overpopulate the world.

You seem to have understood the Repugnant Conclusion to be the belief that there is any possible society that has a larger population and lower quality of life than another society, but is also better than that society. To avoid quibbling over which of us has an accurate understanding of the topic I'll just call my understanding of it RC1 and your understanding RC2.

I do not accept RC1. According to RC1 a world with a high population and low quality of life is better than a world that has the same amount of resources as the first world, a lower population, and a higher quality of life. I do not accept this. To me the second world is clearly better.

I might accept RC2. If I get your meaning, RC2 means that there is always a better population that is larger and with lower quality of life, but it might have to be quite a bit larger and have access to many more resources in order to be better. For instance, according to RC2 a planet of 10 billion people with lives barely worth living might not be better than a planet of 8 billion people with wonderful lives. However, a galaxy full of 10 trillion people with lives barely worth living and huge amounts of resources might be better than the planet of 8 billion people with wonderful lives.

Would you agree that I have effectively refuted RC1, even if you don't think I refuted RC2?
Again, I think I might accept what you think t
1Nisan9yNo, the statement is that for any world with a sufficiently high quality of life, there is some world that differs in no other way besides having a larger population size and a lower quality of life which is better.
0Ghatanathoah9yI don't see how your phrasing is significantly different from mine. In any case, I completely disagree with that statement. I believe that for any world with a large population size and a very low quality of life there is some world that differs in no other way besides having a smaller population size and a higher quality of life which is better. The reason I believe this is that I have a pluralist theory of population ethics that holds that a world that devotes some of its efforts to creating lives worth living and some of its efforts to improve lives that already exist is better than a world that only does the former, all other things being equal.
0Ghatanathoah9yYou're right. It doesn't contradict it 100%. A world with a trillion people with lives barely worth living might still be better than a world with a thousand people with great lives. However, it could well be worse than a world with half a trillion people with great lives.

What my theory primarily deals with is finding the optimal world, the world that converts resources into utility most efficiently. I believe that a world with a moderately sized population with a high standard of living is the best world, all other things being equal. However, you are quite correct that the Mere Addition Paradox could still apply if all things are not equal. A world with vastly more resources than the first one that converts all of its resources into building a titanic population of lives barely worth living might be better if its population is huge enough, because it might produce a greater amount of value in total, even if it is less optimal (that is, it converts resources into value less efficiently). However, a world with the same amount of resources as that one, but a somewhat smaller population and a higher standard of living, would be both better and more optimal.

So I think that my statement does contradict the Mere Addition Paradox in ceteris paribus situations, even if it doesn't in situations where all things aren't equal. And I think that's something.
0[anonymous]9yNo. Your statement does not contradict the Mere Addition Paradox, even in, as you say, "ceteris paribus situations". This is really a matter of first-order logic.
0Ghatanathoah9yAlright, I think I found where we disagree. I am basically going to just repeat some things I just said in a reply to Thrasymachus, but that's because I think the sources of my disagreement with him are pretty much the same as the sources of my disagreement with you:

I interpreted the Repugnant Conclusion to mean that a world with a large population with lives barely worth living is the optimal world, given the various constraints placed on it. In other words, given a world with a set amount of resources, the optimal way to convert those resources to value is to create a huge population with lives barely worth living. I totally disagree with this.

You interpreted the Repugnant Conclusion to mean that a world with a huge population of lives barely worth living may be a better world, but not necessarily the optimal world. I may agree with this.

To use a metaphor, imagine a 25 horsepower engine that works at 100% efficiency, generating 25 horsepower. Then imagine a 100 horsepower engine that works at 50% efficiency, generating 50 horsepower. The second engine is better at generating horsepower than the first one, but it is less optimal at generating horsepower; it does not generate it the best it possibly could.

So, if you accept my pluralist theory of value (that places value on both creating new people, and improving the lives of existing ones), we might also say that a population Z, consisting of a galaxy full of 3 quadrillion people that uses the resources of the galaxy to give them lives barely worth living, would be better than A, a society consisting of a planet full of ten billion people that uses the planet's resources to give its inhabitants very excellent lives. However, Z would be less morally optimal than A, because A uses all the resources of the planet to give people excellent lives, while Z squanders its resources creating more people.
We could then say that Y, a galaxy full of 1 quadrillion people with very excellent lives is both better than Z and

Front page worthy.

Not even close. The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.

The conclusion of what Parfit actually demonstrated goes something more like this:

For any coherent mathematical definition of utility such that there is some additive function which allows you to sum the utility of many people to determine U(population), the following paradox exists:

Given any world with positive utility A, there exists at least one other world B with more people and less average utility per person which your utility system will judge to be better, i.e.: U(B) > U(A).

Parfit does not conclude that you necessarily reach world B by maximizing reproduction from world A nor that every world with more people and less average utility is better. Only worlds with a higher total utility are considered "better". This of course implies either more resources, or more utility efficient use of resources in the "better" world.

The cable channel analogy would be to say "As long as every extra cable channel I add provides at least some constant positive utility epsilon>0, even if it i... (read more)
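To make the formal claim concrete, here is a toy calculation of a single mere-addition-plus-redistribution step. All population sizes and utility levels are invented for illustration; only the orderings matter.

```python
# Toy illustration of one step of the Mere Addition Paradox under a
# purely additive (total) utility function. All numbers are invented.

def total_utility(world):
    """The additive U(population) described above: sum of per-person utilities."""
    return sum(world)

def average_utility(world):
    return sum(world) / len(world)

A      = [10] * 100     # world A: 100 people at utility 10 each
A_plus = A + [1] * 100  # mere addition: 100 extra lives barely worth living
B      = [6] * 200      # redistribution: everyone a bit above A+'s average

for name, world in [("A", A), ("A+", A_plus), ("B", B)]:
    print(name, total_utility(world), average_utility(world))

# Totals run 1000 < 1100 < 1200, so an aggregate maximizer prefers
# B to A+ to A, even though B's average (6) is far below A's (10).
# Iterating the two steps drives the average toward "barely worth living".
```

Note that the step that lowers the average (redistribution) is also the step that raises the total, which is exactly why a purely additive utility function never objects.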

4Ghatanathoah9yEven if that is the case, I think that that strawman is commonly accepted enough that it needs to be taken down.

I believe that creating a life worth living and enhancing the lives of existing people are both contributory values that form Overall Value. Furthermore, these values have diminishing returns relative to each other, so in a world with low population creating new people is more valuable, but in a world with a high population improving the lives of existing people is of more value. Then I shut up and multiply and get the conclusion that the optimal society is one that has a moderately sized population and a high average quality of life. For every world with a large population leading lives barely worth living there exists another, better world with a lower population and higher quality of life.

Now, there may be some "barely worth living" societies so huge that their contribution to overall value is larger than that of a much smaller society with a higher standard of living, even considering diminishing returns. However, that "barely worth living" society would in turn be much worse than a society with a somewhat smaller population and a higher standard of living. For instance, a planet full of lives barely worth living might be better than an island full of very high quality lives. However, it would be much worse than a planet with a somewhat smaller population, but a higher quality of life.

I'm not interested in maximizing total utility. I'm interested in maximizing overall value, of which total utility is only one part. To me it would, in many cases, be morally better to use the resources that would be used to create a "life that someone would choose to have" to instead improve the lives of existing people so that they are above that threshold. That would contribute more to overall value, and therefore make an even bigger improvement in the world. It's not that it wouldn't improve the world.
It's that it would improve the world less than enhancing the
0koning_robot9yWhat is this Overall Value that you speak of, and why do the parts that you add matter? It seems to me that you're just making something up to rationalize your preconceptions.
0Ghatanathoah9yOverall Value is what one gets when one adds up various values, like average utility, number of worthwhile lives, equality, etc. These values are not always 100% compatible with each other; often a compromise needs to be found between them. They also probably have diminishing returns relative to each other.

When people try to develop moral theories they often reach insane-seeming normative conclusions. One possible reason for this is that they have made genuine moral progress [http://lesswrong.com/lw/s9/whither_moral_progress/] which only seems insane because we are unused to it. But another possible (and probably more frequent) reason is that they have an incomplete theory that fails to take something of value into account.

The classic example of this is the early development of utilitarianism. Early utilitarian theories that maximized pleasure sort of suggested the insane conclusion that the ideal society would be one full of people who are tended by robots while blissed out on heroin. It turned out the reason it drew this insane conclusion was that it didn't distinguish between types of pleasure, or consider that there were other values than pleasure. Eventually preference utilitarianism [http://en.wikipedia.org/wiki/Preference_utilitarianism] came along and proved far superior because it could take more values into account. I don't think it's perfected yet, but it's a step in the right direction.

I think that there are likely multiple values in aggregating utility, and that the reason the Repugnant Conclusion is repugnant is that it fails to take some of these values into account. For instance, total number of worthwhile lives and high average utility are likely both of value. A world with higher average utility may be morally better than one with lower average utility and a larger population, even if it has lower total aggregate utility.
Related to this, I also suspect that the reason that it seems wrong to sacrifice people to a utility monster [http://en
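The pluralist "Overall Value" idea in the comment above can be sketched as a toy function. The square-root diminishing-returns form and the equal weights below are my own invented stand-ins, not anything Ghatanathoah or Parfit specifies.

```python
import math

def overall_value(world, w_total=1.0, w_avg=1.0):
    """Toy pluralist value function: diminishing returns on total utility
    plus a separate term rewarding high average utility. The functional
    form and weights are illustrative assumptions only."""
    total = sum(world)
    avg = total / len(world)
    return w_total * math.sqrt(total) + w_avg * avg

# Two worlds with the same total utility (same "resource budget"):
crowded = [1] * 1000   # 1000 lives barely worth living
modest  = [10] * 100   # 100 lives of high quality

print(overall_value(crowded))  # sqrt(1000) + 1  ~ 32.6
print(overall_value(modest))   # sqrt(1000) + 10 ~ 41.6

# With equal totals the higher-average world wins, but a vastly larger
# world with far more resources can still come out ahead overall:
huge = [1] * 100_000           # sqrt(100000) + 1 ~ 317.2
print(overall_value(huge) > overall_value(modest))
```

This matches the position taken in the thread: for a fixed resource budget the barely-worth-living world is never optimal, while a sufficiently enormous world with far more resources can still be "better" in total.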
1Cyan9yDitto.
4Cyan9yYikes [http://lesswrong.com/r/discussion/lw/dso/the_mere_cable_channel_addition_paradox/73j0?context=1#comments] . I'm reverting to neutrality on this post until I assess it more carefully (if I ever get around to it).
0jsalvatier9yAgreed. This is an argument I haven't heard before.

I liked this, but found that the dialog format made the argument you're making excessively drawn out and hard to follow. I would have preferred there to be a five-paragraph (say) recap of your criticism after the dialogue.

0Kaj_Sotala9yAwesome, thanks. :-)
6FiftyTwo9yThat's interesting; after reading this I thought it was one of the best examples I'd seen of the dialogue format being used to explain an argument and its objections. If our difference of opinion isn't just due to some arbitrary factors about our aesthetic preferences, it might be that you are more familiar with the original argument, so didn't benefit as much from having it explained in detail. Do you think that's accurate?
2magfrump9yI found that the dialog format made it easy for me to follow, but still overly drawn out.

Why do I never have discussions like this with telemarketers?

8FiftyTwo9yThey've discovered long enlightening dialogues are not cost effective in time usage to sales made. Therefore they've established a rationalist blacklist who they avoid calling.
3Jayson_Virissimo9yHave you ever tried? As it turns out, I haven't. On the other hand, when I was a kid, I remember my dad once giving an entire sales lecture to a telemarketer (he was a sales manager at the time) who was demonstrating poor marketing skills.

I agree with Unnamed that this post misunderstands Parfit's argument by tying it to empirical claims about resources that have no relevance.

Just imagine God is offering you choices between different universes with inhabitants of the stipulated level of wellbeing: he offers you A, then offers you to take A+, then B, then B+, etc. If you are interested in maximizing aggregate value you'll happily go along with each step to Z (indeed, if you are offered all the worlds from A to Z at once, an aggregate maximizer will go straight for Z). This is what the repugnant conclusion is all about: it has nothing whatsoever to do with whether or not Z (or the 'mechanism' of mere addition to get from A to Z) is feasible under resource constraint, but that, if this were possible, maximizing aggregate value obliges us to take this repugnant conclusion. I don't want to be mean, but this is a really basic error.

The OP offers something much better when offering a pluralist view to try and get out of the mere addition paradox by saying we should have separate term in our utility function for average level of well-being (further, an average of currently existing people), and that will stop us reaching the repugn... (read more)
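The "happily go along with each step to Z" dynamic in the comment above can be simulated. The group sizes, the 60% utility level of each added group, and the 5% total-utility gain from equalizing are all arbitrary numbers chosen only to satisfy the paradox's premises (every step raises total utility).

```python
# Simulate an aggregate maximizer accepting each offer A -> A+ -> B -> ...
# A world is a list of (group_size, per_person_utility) pairs.
# All numeric parameters are arbitrary illustrations.

def total(world):
    return sum(n * u for n, u in world)

def average(world):
    return total(world) / sum(n for n, _ in world)

world = [(100, 10.0)]  # world A: one group, 100 people at utility 10
for step in range(5):
    # Mere addition: a new, equally large group at 60% of the average.
    world = world + [(sum(n for n, _ in world), average(world) * 0.6)]
    # Equalization: level everyone at slightly above the new average,
    # raising total utility (the step that makes B better than A+).
    people = sum(n for n, _ in world)
    world = [(people, average(world) * 1.05)]
    print(people, round(world[0][1], 2), round(total(world), 1))

# Population doubles each step while average utility shrinks by a
# factor of 0.84, so the chain marches toward a huge population of
# lives barely worth living -- with total utility rising throughout.
```

Nothing here depends on resource constraints; the simulation only assumes, as the paradox stipulates, that each offered world has higher total utility than the last.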

1Alicorn8yIt is traditionally held in ethics that "ought implies can" - that is, that you don't have to do any things that you cannot in fact do.
-2Ghatanathoah8yThat is true, but I think that the discrepancy arises from me foolishly using a deontologically-loaded word like "obligation" in a consequentialist discussion. I'll try to recast the language in a more consequentialist style.

Instead of saying that, from a person-affecting perspective: "Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off." We can instead say: "An action that adds new people to the world, from a person-affecting perspective, makes the world a worse place if, after the action is taken, the world would be made a better place if all the previously existing people did something that harmed them."

Instead of saying: "It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it." We can instead say: "It seems to me that a world where it is physically impossible for someone to undertake an action that would improve it is worse than one where it is physically possible for someone to undertake that action."

If you accept these premises then A+ is worse than A, from a person-affecting perspective anyway. I don't think that the second premise is at all controversial, but the first one might be.

I also invite you to consider a variation of the Invincible Slaver AI variant of the problem I described. Suppose you had a choice between 1. creating the slaves and the Invincible Slaver AI, and 2. doing nothing. You do not get the choice to create only the slaves; it's a package deal, slaves and Slaver AI or nothing at all. Would you do it? I know I wouldn't.
0Thrasymachus8yDon't have as much time as I would like, but short and (not particularly) sweet: I think there is a mix-up between evaluative and normative concerns here. We could say that the repugnant conclusion world is evaluated as better than the current world, but some fact about how we get there (via benign addition or similar) is normatively unacceptable. But even then that seems a big bullet to bite - most of us think the RC is worse than a smaller population with high happiness (even if lower aggregate), not that it is better but it would be immoral for us to get there.

Another way of parsing your remarks is to say that when the 'levelling' option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, A+ < A as well. This has the unfortunate side-effect of violating irrelevance of independent alternatives (if only A and A+ are on offer, we should say A+ > A, but once we introduce B, A > A+). Maybe that isn't too big a bullet to bite, but (lexically prior) person-affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick). That said, everything in population ethics has nasty conclusions...

However, I don't buy the idea that we can rule out benign addition because the addition of moral obligation harms someone independent of the drop in utility they take for fulfilling it. It seems plausible that a fulfilled moral obligation makes the world a better place. There seem to be weird consequences if you take this to be lexically prior to other concerns for benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners), or indeed, depending on the redistribution ethic you take, everyone who isn't the most well-off person.
Even if you don't and say it is
0Ghatanathoah8yThat's right, my new argument doesn't avoid the RC for questions like "if two populations were to spontaneously appear at exactly the same time which would be better?" What I'm actually arguing is that A+ is [person-affecting] worse than A, even when B is unavailable. This is due to following the axiom of transitivity backwards instead of forwards. If A>B and A+<B then A+<A. If I was to give a more concrete reason for why A+<A I would say that the fact that the A people are unaware that the + people exist is irrelevant, they are still harmed. This is not without precedent in ethics, most people think that a person who has an affair harms their spouse, even if their spouse never finds out. However, after reading this essay by Eliezer [http://lesswrong.com/lw/t0/abstracted_idealized_dynamics/], (after I wrote my November 8th comment) I am beginning to think the intransitivity that the person affecting view seems to create in the Benign Addition Paradox is an illusion. "Is B better than A?" and "Is B better than A+" are not the same question if you adopt a person affecting view, because the persons being affected are different in each question. If you ask two different questions you shouldn't expect transitive answers. I know, the example I gave was that we all might be harming unimaginably affluent aliens by reproducing. I think you are right that even taking the objections to it that I gave into account, it's a pretty weird conclusion. I don't know, I've heard people complain about poor people reproducing and increasing the burden on the welfare system before. Most of the time I find these complainers repulsive, I think their complaints are motivated by ugly, mean-spirited snobbery and status signalling, rather than genuine ethical concerns. But I suspect that a tiny minority of the complainers might have been complaining out of genuine concern that they were being harmed. Again, I agree. 
My main point in making this argument was to try to demonstrate that a pu
0Thrasymachus9yIt seems weird to say A+ < A on a person affecting view even when B is unavailable, in virtue of the fact that A now labours under an (unknown to them, and impossible to fulfil) moral obligation to improve the lives of the additional persons. Why stop there? We seem to suffer infinite harm by failing to bring into existence people we stipulate have positive lives but necessarily cannot exist. The fact that (unknown to them, impossible to fulfil) obligations are non-local also leads to alien-y reductios. Further, we generally do not want to say impossible to fulfil obligations really obtain, and furthermore that being subject to them harms us - why believe that? Intransitivity: I didn't find the Eliezer essay enlightening, but it is orthodox to say that evaluation should have transitive answers ("is A better than A+, is B better than A+?"), and most person affecting views have big problems with transitivity: consider this example. World 1: A = 2, B = 1; World 2: B = 2, C = 1; World 3: C = 2, A = 1. By a simple person affecting view, W1>W2, W2>W3, W3>W1. So we have an intransitive cycle. (There are attempts to dodge this via comparative harm views etc., but ignore that). One way person affecting views can avoid normative intransitivity (which seems really bad) is to give normative principles that set how you pick available worlds. So once you are in a given world (say A), you can say that no option is acceptable that leads to anyone in that world ending up worse off. So once one knows there is a path to B via A+, taking the first step to A+ is unacceptable, but it would be okay if no A+ to B option was available. This violates irrelevance of independent alternatives and leads to path dependency, but that isn't such a big bullet to bite (you retain within-choice ordering). Synthesis: I doubt there is going to be any available synthesis between person affecting and total views that will get out of trouble.
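The intransitive cycle in Thrasymachus's three-world example can be checked mechanically. Below is a minimal sketch (my own construction, not Thrasymachus's exact formalism) of one simple person-affecting comparison: only the people who exist in the starting world count, newly created people are ignored, and anyone who would not exist in the target world counts as losing all their welfare.

```python
def person_affecting_delta(w_from, w_to):
    """Net welfare change for the people of w_from if we move to w_to.
    People created in w_to are ignored; people absent from w_to count
    as losing everything they had in w_from."""
    return sum(w_to.get(p, 0) - u for p, u in w_from.items())

w1 = {"A": 2, "B": 1}
w2 = {"B": 2, "C": 1}
w3 = {"C": 2, "A": 1}

# Each transition harms the starting population on net (one person loses 2,
# one gains 1), so each world is preferred to its successor: a cycle.
assert person_affecting_delta(w1, w2) < 0  # W1 > W2
assert person_affecting_delta(w2, w3) < 0  # W2 > W3
assert person_affecting_delta(w3, w1) < 0  # W3 > W1
```

This is only one way of cashing out "simple person affecting view", but any view with this shape, where the set of people who count changes with the starting world, produces the same cycle.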
One can get RC so long as the 'total term' has some weight (i
0Ghatanathoah8yThat is because I don't think the person affecting view asks the same question each time (that was the point of Eliezer's essay). The person-affecting view doesn't ask "Which society is better, in some abstract sense?" It asks "Does transitioning from one society to the other harm the collective self-interest of the people in the original society?" That's obviously going to result in intransitivity. I think I might have been conflating the "person affecting view" with the "prior existence" view. The prior existence view, from what I understand, takes the interests of future people into account, but reserves present people the right to veto their existence if it seriously harms their current interest. So it is immoral for existing people to create someone with low utility and then refuse to help or share with them because it would harm their self-interest, but it is moral [at least in most cases] for them to refuse to create someone whose existence harms their self-interest. Basically, I find it unacceptable for ethics to conclude something like "It is a net moral good to kill a person destined to live a very worthwhile life and replace them with another person destined to live a slightly more worthwhile life." This seems obviously immoral to me. It seems obvious that a world where that person is never killed and lives their life is better than one where they were killed and replaced (although one where they were never born and the person with the better life was born instead would obviously be best of all). On the other hand, as you pointed out before, it seems trivially right to give one existing person a pinprick on the finger in order to create a trillion blissful lives who do not harm existing people in any other way. I think the best way to reconcile these two intuitions is to develop a pluralist system where prior-existence concerns have much, much, much larger weight than total concerns, but not infinitely large weight. 
In more concrete terms, it's wrong
5Thrasymachus9y1) I don't think anyone in the entire population ethics literature reads Parfit as you do: the moral problem is not one of feasibility via resource constraint, but rather just that Z is a morally preferable state of affairs to A, even if it is not feasible. Again, the paradoxical nature of the MAP is not harmed even if it demands something utterly infeasible or even nomologically impossible: the claim is that were we able to actualize Z, we should do it. Regardless, I don't see how the 'resource constraint complaint' you make would trouble the reading of Parfit you make. Parfit could just stipulate that the 'gain' in resources required from A to A+ is just an efficiency gain, and so A -> Z (or A->B, A->Z) does not involve any increase in consumption. Or we could stipulate that the original population in A, although giving up some resources, is made happier by knowing there is this second group of people, etc. etc. So it hardly seems necessarily the case that A to A+ demands increased consumption. Denying these alternatives looks like hypothetical fighting. 2) I think the pluralist point stands independently of the resource constraint complaint. But you seem to imply that you value efficient resource consumption independently: you prefer A because it is a more efficient use of resources, you note there might be diminishing returns to the value of 'added lives' so adding lives becomes a merely inefficient way of adding value, etc. Yet I don't think we should care about efficiency save as an instrument of getting value. All things equal, a world with 50 utils burning 2 million units of resources is better than one with 10 utils burning 10. So (again) objections to feasibility or efficiency shouldn't harm the MAP route to the repugnant conclusion. 3) I take it your hope for escaping the MAP is that some sort of weighted sum or combination of total utility, the utility of those who already exist, and possibly average utility of lives will give us our 'total value'. 
However, unless you hold that the 'average
-2Ghatanathoah9yThe view I am criticizing is not that Z may be preferable to A, under some circumstances. It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked. Again, my complaint with the paradox is not that, if Z and A are our only choices, A is preferable to Z. Rather my complaint is the interpretation that if we were given some other alternative Y, which has a much larger population than A but a smaller population and higher quality of life than Z, Z would be preferable to Y as well. Again, I admitted [http://lesswrong.com/r/discussion/lw/dso/the_mere_cable_channel_addition_paradox/73e6] that my solution might allow a MAP route to the repugnant conclusion under some instances like the one you describe. My main argument is that under circumstances where our choices are not constrained in such a manner, it is better to pick a society with a higher quality of life and lower population. Again, my objection is not to the claim that going this route is the best choice if it is the only choice we are allowed. My objection is to people who interpret Parfit to mean that even under circumstances where we are not in such a hypothetical and have more options to choose from, we should still choose the world with lives barely worth living (i.e. Robin Hanson). Again, those people may be interpreting Parfit incorrectly, which in turn makes my criticism seem like an incorrect interpretation of Parfit. But I think it is a common enough view that it deserves criticism. 
In light of your and Unnamed's comments I have edited my post and added an explanatory paragraph at the beginning, which says: "EDIT: To make this clearer, the interpretation of the Mere Addition Paradox this post is intended to criticize is the belief that two societies that differ in no way other th
3Michael_Sullivan9y" It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked." Generally it's a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid. Your edit doesn't help much at all. You talk about what others "seem to claim", but the argument that you have claimed Parfit is making is so obviously nonsensical that it would lead me to wonder why anyone cites his paper at all, or why any philosophers or mathematicians have bothered to refute or support its conclusions with more than a passing snark. A quick google search on the term "Repugnant Conclusion" leads to a wikipedia page that is far more informative than anything you have written here.
0Ghatanathoah9yIt doesn't seem any less obviously stupid to me than the more moderate conclusion you claim that Parfit has drawn. If you really believe that creating new lives barely worth living (or "lives someone would barely choose to live," in your words) is better than increasing the utility of existing lives, then the next logical step is to confiscate all the resources people are using to live standards of life higher than "a life someone would barely choose to live" and use them to make more people instead. That would result in a society identical to the previous one except that it has a lower quality of life and a higher population. Perhaps it would have sounded a little better if I had said "It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A, providing that Z's larger population is large enough that it has higher total utility than A." I disagree with this of course; it seems to me that total and average utility are both valuable, and one shouldn't dominate the other. Also, I'm sorry to have retracted the comment you commented on, I did that before I noticed you had commented on it. I decided that I could explain my ideas more briefly and clearly in a new comment and posted that one in its place.
-4Ghatanathoah9yOkay, I think I finally see where our inferential differences are and why we seem to be talking past each other. I'm retracting my previous comment in favor of this one, which I think explains my view much more clearly. I interpreted the Repugnant Conclusion to mean that a world with a large population with lives barely worth living is the optimal world, given the various constraints placed on it. In other words, given a world with a set amount of resources, the optimal way to convert those resources to value is to create a huge population with lives barely worth living. I totally disagree with this. You interpreted the Repugnant Conclusion to mean that a world with a huge population of lives barely worth living may be a better world, but not necessarily the optimal world. I may agree with this. To use a metaphor, imagine a 25 horsepower engine that works at 100% efficiency, generating 25 horsepower. Then imagine a 100 horsepower engine that works at 50% efficiency, generating 50 horsepower. The second engine is better at generating horsepower than the first one, but it is less optimal at generating horsepower: it does not generate it the best it possibly could. So when you say: We can say (if you accept my pluralist theory) that the first world is better, but the second one is more optimal. The first world has generated more value, but the second has done a more efficient job of it. So, if you accept my pluralist theory, we might also say that a population Z, consisting of a galaxy full of 3 quadrillion people that uses the resources of the galaxy to give them lives barely worth living, would be better than A, a society consisting of a planet full of ten billion people that uses the planet's resources to give its inhabitants very excellent lives. However, Z would be less morally optimal than A, because A uses all the resources of the planet to give people excellent lives, while Z squanders its resources creating more people. 
We could then say that Y, a

I think people find the repugnant conclusion repugnant because they are using two different definitions for a "life barely worth living".

Society has very strong social norms against telling people to commit suicide. When someone's life is really miserable, almost no one tells them that killing themselves is the best thing they can do. Even euthanasia for people who are permanently and unavoidably suffering is controversial. So from a utilitarian perspective, you could say that people tend to have a strong "pro-life" bias, even when more life means more suffering.

But let's consider which lives actually have marginal benefit. Is it really a morally positive thing to bring into the world someone whose life is going to be miserable?

Consider someone who is going to have an unpleasant childhood, an unpleasant adulthood, work too many hours at a job they don't enjoy as an adult, have their spouse die early, and finally die a lonely and isolated death themselves. Would you really bring someone like that into the world given the choice? (Assuming no positive externalities from them working at their job.)

But let's say you meet this person in college and y... (read more)

5torekp9yAnother dimension to the ups and downs, as I mentioned [http://lesswrong.com/lw/4z3/put_yourself_in_manual_mode_aka_shut_up_and/3thh] elsewhere:
0Ghatanathoah9yI think a world where I felt a huge amount of awful downs over the course of my life is also pretty darn repugnant. Yes, I'd feel some ups as well, but it seems like a world where a smaller population felt almost no downs is probably better than a larger population with lots of downs.
-1aelephant9yOne problem I see is that there is no way to tell. You may have an idea, but there is no way to know with 100% certainty that they won't turn things around and lead a net happy life down the line.
0aelephant9yI'm not sure why I am getting voted down for the above comment. Is it because I am being perceived as "attacking the hypothetical"? In this case, maybe I just interpreted John_Maxwell_IV's comment differently. By "you can tell" does that mean that we have perfect knowledge of the entirety of the future of that person's life? Even if this were true, we are also an agent that can influence that future. I would prefer to act to alter the future (i.e. make that person happier) than to act to motivate them to commit suicide. Maybe I'm just weird in that I'd rather make people happy than make them dead.
2magfrump9yI didn't downvote, but to me it feels like attacking the hypothetical; that would be my guess. Obviously in real life most people (certainly, I think, most LWers) are VERY VERY HIGH above the "zero line" or whatever, so these sorts of questions feel pretty abstract to me.
1John_Maxwell9yI don't think that's obvious. Something like 1/5 of LW is depressed. I suspect commuting is below the zero line, i.e. people would fast-forward through their commutes even if it meant never getting those hours back.

So to summarise, we have A, where there's a large affluent population.
Then we move to A+, where there's a large affluent population and separately, a small poor population (whose lives are still just barely worth living). We intuit that A+ is better than A.
Then we move to B, where we combine A+'s large affluent population with the small poor population and get a very large, middle-upper-class population. We intuit that B is better than A+, and transitively, better than A.
This reasoning suggests that hypothetical C would be better than B or A, and so on unt... (read more)

0Thrasymachus9yWe can make the same dance of moves from B to B+ (more people, worthwhile lives) and then B+ to C (redistribution and aggregate value increase). So, unless you are willing to deny transitivity, moving from B to C is what we should do. Rinse and repeat until Z. (This is assuming you mean resources as well-being. However, the OP's resources point isn't responsive to Parfit's argument.)
0shokwave9yThe thing is, you never actually get to Z. If you do add people and enough resources for their bare minimum, you approach Z from above but never actually reach it - the standard of living never drops below the bare minimum. It is perhaps cheating to say that Z is when average utility drops below the bare minimum. If the Repugnant Conclusion is that we prefer A to Z, even though all the lives in both are worth living, then that is another matter.
0Thrasymachus9yLives in Z are stipulated to be above the neutral level so they are better lived than not. The repugnancy is that they are barely worth living, so just above this level, and most people find that a very large population of lives barely worth living is not preferable to a smaller one with very good lives.
-1shokwave9ySure, so adding poor people to a rich world and averaging out the resources is bad, not good, and we shouldn't do it. It seems to me that the argument for adding people doesn't take into account this preference for a few rich over many poor. Also, there may be anthropic reasons for that preference: would you rather be born as one of 500 rich, or one of 10,000 poor? Now, would you rather a 5% chance of existing as a rich person (95% not-exist) or a 100% chance of existing as a poor person?
1Oscar_Cunningham9yWhich step(s) do you disagree with? Adding poor people or averaging the utility? Parfit defends the first step by saying that it's a "mere addition". Poor people on their own are (somewhat) good. Rich people on their own are good. Therefore the combination of the two is better than either. The second step (averaging the resources) is supposed to be intuitively obvious. We can tweak the mathematics so that the quality of life of the rich only goes down a tiny amount to bring the poor up to their level. If the rich could end all poverty by giving a very small amount wouldn't that be the right thing to do?

Is this anything more than pointing out that Parfit's argument, as generally discussed, doesn't model resource constraints?

Are we assuming that he has never responded to this before? That this materially changes his conclusions?

At least in the wikipedia article, the lack of resource constraint was immediately obvious to me, but I don't think it materially changes the conclusions.

Solve for dU+/dN < 0, where U+=Total Utilons for person with net positive life (are some people only counting utilons above T+?), N = Number of persons, R = Resources, T+ = Ut... (read more)

I further argue for a theory of population ethics that values both using resources to create lives worth living, and using resources to enhance the utility of already existing people, and considers the best sort of world to be one where neither of these two values totally dominate the other.

If you try to formalize this theory of population ethics, I believe you will find that it's susceptible to Mere Addition type paradoxes. See, for example, articles about "population ethics" and "axiological impossibility theorems" on Gustav Arrhenius' website.

0Ghatanathoah9yIt is, but the important point is that the paradoxes that it is vulnerable to do not seem to mandate reproduction, as a more traditional theory would. For instance, my theory would say that Planet A with a moderate population of people living excellent lives is better than Planet B with a larger population whose lives are barely worth living. However, it might also say that Galaxy B full of people with lives barely worth living is better than Planet A, because it has so many people that it produces enough value to swamp Planet A, even if it does so less efficiently. However, my theory would also say that Galaxy A, which has a somewhat smaller population and a higher quality of life than Galaxy B, is better than Galaxy B. My theory is not for finding the best population; it is about finding the optimal population. It is about finding what the best possible population is given whatever resource constraints a society has. It does not bother me that you are able to dream up a better society if you remove those resource constraints; such a society might be better, but it would also be using resources less optimally. The best sort of society is one that uses the resources it has available both to create lives worth living and to enhance the utility of already existing people.
1Nisan9yI've come up with a formalization of everything you've said here that I think is a steel man for your position. Somewhat vaguely: A population with homogeneous utility defines a point in a two-dimensional space — one dimension for population size, one for individual utility. Our preferences are represented by a total, transitive, binary relation on that space. The point of the Mere Addition Paradox is that a set of reasonable axioms rules out all preferences. The point you're making is that if we're restricted by resources to some region of that space, then we only need to think about our preferences on a one-dimensional Pareto frontier. And one can easily come up with preferences on that frontier that satisfy all the nice axioms. Very well. Just so long as the Pareto frontier doesn't change, there is no paradox.
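Nisan's two-dimensional picture can be sketched numerically. This is a rough illustration under my own assumptions (the `pluralist_value` function and the "one resource unit buys one utilon" rule are hypothetical, not anything Nisan specified): each homogeneous population is a point (n, u), a budget R confines us to n * u <= R, and we only need a ranking on the one-dimensional Pareto frontier where n * u == R.

```python
R = 1000  # total resource budget (hypothetical units)

def pluralist_value(n, u, alpha=0.5):
    """Toy value function mixing total utility (n * u) and average utility (u)."""
    return alpha * (n * u) + (1 - alpha) * u

# Points on the Pareto frontier: every (n, u) here spends the budget fully.
frontier = [(n, R / n) for n in range(1, R + 1)]

best = max(frontier, key=lambda p: pluralist_value(*p))
# On the frontier the total term n * u is (essentially) constant at R, so
# under this particular function the average term decides the ranking.
assert best == (1, 1000.0)
```

The point is not this particular answer but that any total, transitive ordering of frontier points behaves well within the fixed-budget region, which is what the formalization claims.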
0Ghatanathoah9yI think you've got it. Thanks for formalizing that for me, I think it will help me a lot in the future! If you're interested in where I got some of these ideas from, by the way, I derived most of my non-Less Wrong inspiration from the work of philosopher Alan Carter. [http://glasgow.academia.edu/AlanCarter/Papers]

Are you sure that sharing their resources like that would lower the average level of utility? Wouldn't there be economies of scale and such...

In the original, the A+ to B- transition RAISES the average. It's the A to A+ transition that lowers the average.

Also, I think it's rather aside from the point to begin talking about resource management.

0Ghatanathoah9yI edited the post to fix that. Thank you for pointing that out. It is, that's why Alice tells Bob to stop fighting the hypothetical and puts the conversation back on track. The reason I added that line was to illustrate a wrong way to approach the MAP.
2Luke_A_Somers9yI'm talking about Alice's challenge later - your criticism of the hypothetical does not meet the standard you apply to it.
0Ghatanathoah9yAfter discussing this with Unnamed and Thrasymachus I think the main issue is that I was attacking the idea that the world of the Repugnant Conclusion represents the optimal society. That is, I was arguing that creating an RC-type world does not represent the most efficient way for a society to convert the resources it has into utility. However, I think I gave the impression that I was talking about the idea that an RC-type world can never be better (have more utility period, regardless of how efficiently it was obtained). I was not disputing this. I concede that a very small society that converts all the resources it has into utility as optimally as possible may still have less utility than a society that is so huge and has so many resources that it can produce more utility by pure brute force. Keep in mind that I regard utility as being generated most effectively by having a combo of high average and total wellbeing, rather than just maximizing one or the other. For instance, let's say a small island with a moderate-sized population who have wonderful lives converts resources into utility at 100 utilons per resource point, and has access to 10 resource points. Result: 1000 utilons. Then let's imagine a huge continent with a huge population that has somewhat less pleasant lives and converts resources into utility at 50 utilons per resource point, and has access to 30 resource points. Result: 1500 utilons. So the continent could be regarded as better, even though it is less optimal. I believe that talking about resource management is relevant when talking about optimality. You are right, however, that it is not very relevant when talking about betterness, since when postulating better possible societies you can postulate that they have any amount of resources you want.
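The island/continent arithmetic above can be tabulated as a quick check (this just verifies the numbers as given; it takes no stand on the right value function):

```python
# Utilons generated = (utilons per resource point) * (resource points available)
island_total    = 100 * 10   # small island: very efficient, few resources
continent_total = 50 * 30    # huge continent: less efficient, more resources

assert island_total == 1000
assert continent_total == 1500

# "Better" here tracks total utilons produced; "more optimal" tracks
# efficiency in utilons per resource point.
assert continent_total > island_total   # the continent is "better"
assert 100 > 50                         # the island is "more optimal"
```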

I like the idea here, but it seems like the paradox survives.

Say that A consists of 2 populations and 2 sets of resources. (Px, Py) (Rx, Ry) Rx is enough to grant some high number of utilons per person in Px: say 100, while Ry is only enough to grant some very low number of utilons per person: say .0001.

In A+ we combine them. Assuming for the moment that Px and Py are the same size, the average utility lowers to 50.00005 per person. And with each further group and its resources added, you get closer to .0001.
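The averaging step here can be checked directly (reading the setup as equally sized groups, one at 100 utilons per person and the rest at 0.0001; that reading is my assumption):

```python
def avg_after_groups(k):
    """Average welfare after merging one group at 100 utilons/person with
    k equally sized groups at 0.0001 utilons/person."""
    return (100 + k * 0.0001) / (k + 1)

# Combining the two original groups roughly halves the average:
assert abs(avg_after_groups(1) - 50.00005) < 1e-9

# Each further low-welfare group pulls the average toward 0.0001:
assert avg_after_groups(99) < 1.1
```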

And it doesn't have to stop there, if we imagine some future population Sx who h... (read more)

-1Ghatanathoah9yPart of the original premise of the paradox is that even the people of population Z, the vast population the paradox leads to, still have lives that are "barely worth living." So you can't go that far, the population has to have enough utilons per person that their lives are barely worth living. Yes, one weakness in this argument is that it still allows the Mere Addition Paradox to happen if the following criteria are met:
1. The addition of a new person also adds more resources.
2. The amount of new resources added is enough to give the new person a life "barely worth living."
3. The only way to obtain those new resources is to create that new person. The people currently existing would be unable to obtain the resources without creating that person.
I think that my argument is still useful because of the low odds of encountering a situation that fulfills all those criteria, especially 3, in real life. It means that people no longer need to worry that they are committing some horrible moral wrong by not having as many children as possible.
0asparisi9yI think I mostly agree with you, although we'd have to define how many utilons per person were "worth living" for your criticism against example Sx to work. And actually, for most of human history, I think that adding a new person was, on the whole, more likely to add resources, particularly in agricultural communities and in times of war. (Which is why we've only seen the reversal of this trend relatively recently) I am not sure that your third criterion is required: it would seem that as long as adding a new person added more utilons than not, adding a new person would be preferable. But in those cases, it might form a curve rather than a line, where you get diminishing returns after getting a population of a certain size, eliminating (at least) the paradoxical element. I do think that talking about converting resources to utility is a good insight here, but it's good to know where it is weak.
1gwern9yWell, yes; Malthusian models would even predict this, since if another person didn't add resources, that reduces resources per capita (the denominator increased, the numerator didn't), and this could continue until resources per capita fall below subsistence, at which point every additional person must cause an additional death/failure-to-reproduce/etc. and the population has reached a steady state. So every new additional person does allow new resources to be opened up or exploited - more marginal farmland farmed - but every new resource is (diminishing marginal returns, the best stuff is always used first) worse than the previous new resource...
0Ghatanathoah9yThat might be correct. However, my argument also deals with the most efficient way to create people who add resources (when I argued A++ was better than A+). For instance, suppose enough resources to sustain 100 people at a life barely worth living can be extracted from a mine, and you need to create some people to do it. A person working by hand can extract 1 person worth of resources, enough for their own subsistence. A person with mining equipment can extract 10 people worth of resources. You can either create 100 people who do it by hand, or you can create 10 people and make them mining equipment (assume that creating and maintaining the mining equipment is as expensive as creating 15 people with lives barely worth living). Which should you do? I would argue that, unless the human population is near extinction levels, you should create the 10 people with the mining equipment. This is because it will create a large surplus of 75 people worth of resources to enhance the lives of the 10 people, and other people who already exist.
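A back-of-envelope check of the mining numbers as given in the comment ("people-worth" meaning the resources needed to sustain one life barely worth living):

```python
def surplus(workers, output_per_worker, equipment_cost):
    """Resources left over after workers' subsistence and equipment costs,
    all measured in 'people-worth' of resources."""
    extracted = workers * output_per_worker
    subsistence = workers * 1   # each worker consumes 1 people-worth
    return extracted - subsistence - equipment_cost

by_hand  = surplus(100, 1, 0)    # 100 hand-miners, no equipment
equipped = surplus(10, 10, 15)   # 10 miners, equipment costs 15 people-worth

assert by_hand == 0    # everything extracted goes to subsistence
assert equipped == 75  # surplus left to enhance existing lives
```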
0asparisi9yTechnology can alter these economies, and I am certainly not saying we should all go to subsistence farming to avoid the paradox. I think making the calculation "equipment=making X lives" is a little off the mark: typically, you'd subtract utilons (if you are trading for the mining equipment) and add workers (for repair/maintenance), so you might end up with, say, 12 people, 10 who mine and 2 who repair, and 85 utilons rather than 100. But the end math of who gets how much ends up about the same as your hypothetical.

This belongs on the front page. Very well done.

I have never agreed with the Repugnant Conclusion, but I have always had trouble putting my disagreement into words. Your dialog makes several important points very clearly:

But don't say "having a high average utility." Say "use resources to increase the utility of people who already exist."

...

So "A+" differs from "A" both in the size of its population, and the amount of resources it has access to. Parfit was not "merely adding" people to the population. He was also adding resources.

Upvoted for being the kind of post I want on LessWrong, but I agree with the posters above who say that you misunderstand the point of the paradox. Thrasymachus articulates why most clearly. You do however make a compelling argument that even if we accept that A<Z we should still spend some resources on increasing happiness. The hypothetical Z presumes more resources than we have. Given that we can't reach Z even by using all our resources, knowing A<Z doesn't tell us anything because Z isn't one of our options. If we spent all our resources on... (read more)

0Thrasymachus9yThat's really interesting. Why? And would you also take A+ &lt; A if we fiddled the numbers to get: A: P1 at 10; A+: P1 at 20, P2-20 at 8; B: P1-20 at 9. So we can still get to the Repugnant Conclusion, yet A+ seems a really good deal versus A.
2Oscar_Cunningham9yWhat I actually value is average happiness. All else being equal, I don't think adding people whose lives are just worth living is a good thing. (Often all else is not equal; I do support adding more people if it will create more interesting diversity, for example.) I don't quite understand your example, what does "P2-20" mean? I'd also need to know the populations. Anyway, I think your point is that we can increase the happiness of P1 as we go from A to A+. In that case we might well have A&lt;A+, but then we would have B&lt;A+ also.
2Thrasymachus9y[Second anti-average util example]: It also means that if the average value of a population is below zero, adding more lives that are below zero (but not as far below zero as the average of the population) is a good thing to do.
2Thrasymachus9ySorry, P2-20 means 19 persons all at 8 units of welfare. The idea was to intuition-pump the person-affecting restriction: A+ is now strictly better for everyone, including the person who was in A, and so it might be more intuitively costly to say that in fact A&gt;A+. You may well have thought about all the 'standard' objections to average util in population ethics cases, but just in case not: Average util seems to me implausible, particularly in different-number cases: for example, why would bringing lives into existence which are positive (even really positive) be wrong just because they would be below the average of the lives who already exist? Related to averaging is dealing with separability: if we're just averaging all-person happiness, then whether it is a good thing to bring a person onto Earth will depend on the wellbeing of aliens in the next supercluster (if they're happier than us, then anti-natalism seems to follow). Biting the bullet here seems really costly, and I'm not sure what other answers one could give. If you have some in mind, please let me know!
2wedrifid9yie. Death To All The Whiners! Be happy or die!
1Oscar_Cunningham9yEach death adds its own negative utility. Death is worse than the difference in utilities between the situations before and after the death.
1wedrifid9yIt sounds like you may have similar actual preference as I. (I just wouldn't dream of calling it "average happiness".)
1Oscar_Cunningham9yCool. I don't really believe average happiness either (but I'm a lot closer to it than valuing total happiness). I wouldn't steal from the poor to give to the rich, even if the rich are more effective at using resources.
0Ghatanathoah9yI think that saying that "I value improving the lives of those who already exist," is a good way to articulate your desire to increase average utility, but also spell out the fact that you find it bad to increase it by other means, like killing unhappy people. It also articulates the fact that you would (I assume) be opposed to creating a person who is tortured 23 hours a day in a world filled completely with people being tortured 24 hours a day, even though that would increase average utility. I also assume that while you believe in something like average utility, you don't think that a universe with only one person with a utility of 100 is just as morally good as a universe with a trillion people who each have a utility of 100. So you probably also value having more people to some extent, even if you value it incrementally much less than average utility (I refer to this value as "number of worthwhile lives"). It sounds like you must also value equality for its own sake, rather than as a side-effect of diminishing marginal utility. I think I am also coming around to this way of thinking. I don't think equality is infinitely valuable, of course, it needs to be traded off against other values. But I do think that, for example, a world where people are enslaved to a utility monster is probably worse than one where they are free, even if that diminishes total aggregate utility. In fact, I'm starting to wonder if total utility is a terminal value, or if increasing it is just a side effect of wanting to simultaneously increase average utility and the number of worthwhile lives.
2Oscar_Cunningham9yAgreed on all counts. (Apart from: I wouldn't say that I was maximising others utilty. I'd say I was maximising their happiness, freedom, fulfilment, etc. A utility function is an abstract mathematical thing. We can prove that rational agents behave as if they were trying to maximise some utility function. Since I'm trying to be a rational agent I try and make sure my ideas are consistent with a utility function, and so I sometimes talk of "my utility function". But when I consider other people I don't value their utility functions. I just directly value their happiness, freedom, fulfilment, and so on. I don't value their utility functions because, One, they're not rational and so they don't have utility functions. Two, valuing each other's utility would lead to difficult self-reference. But mostly Three, on introspection I really do just value their happiness, freedom, fulfilment, etc. and not their utility. The sense in which they do have utility is that each contributes utility to me. But then there's no such thing as "an individual's utility" because (as we've seen) the utility other people give to me is a combined function of all of their happiness, freedom, fulfilment, and so on.)
1Ghatanathoah9yI think I understand. I tend to use the word "utility" to mean something like "the sum total of everything a person values." Your use is probably more precise, and closer to the original meaning. I also get very nervous about the idea of maximizing utility, because I believe wholeheartedly that value is complex. [http://wiki.lesswrong.com/wiki/Complexity_of_value] So if we define utility too narrowly and then try to maximize it, we might lose something important. So right now I try to "increase" or "improve" utility rather than maximize it.

Parfit was not "merely adding" people to the population. He was also adding resources.

Parfit could easily reply that in world A, there are unused resources beyond the reach of the population, and in world A+ these resources are used by the extra people.

0Ghatanathoah9yFrom the perspective of the thought experiment adding resources that didn't exist before and making resources that are already there available are not different in any significant way.

Can I try my own summary?

According to Parfit's premises, the bestest imaginable world is one with an enormous number of extremely happy people. This world isn't physically possible though due to resource constraints.

The mere addition thing shows that in general we are indifferent between small numbers of happy people and large numbers of unhappy people (actually the argument just shows "no worse than" - you'd need to run a similar argument in the other direction to show you're actually indifferent).

Now consider the (presumably finite?) space of a... (read more)

Alice: Let's take population "A+" again. Now imagine that instead of having a population of people with lives barely worth living, the second continent is inhabited by a smaller population with the same very high level of resources and utility per person as the population of the first continent. Call it "A++." Would you say "A++" was better than "A+"?

Bob: Sure, definitely.

I don't find this obvious. I also don't find it obvious that A+ is better than A, or even that some people existing is better than no... (read more)

4[anonymous]9yWhat's the difference between "On a purely selfish basis, is it better for me, personally, to exist or not to exist?" and "Would I commit suicide, all other things being equal?"?
3CronoDAS9y"Would I commit suicide, all other things being equal?"? My suicide affects other people. I have both selfish and altruistic desires; "not wanting other people to grieve for me" is a good enough reason not to kill myself.
2[anonymous]9yI read "not existing" as "not ever existing", so the difference is everything that happened between when you would have started existing and when you would have committed suicide.
2[anonymous]9y(English badly needs separate words for ‘physically exist at a particular time’ and ‘exist, in an abstract timeless sense’. Lots of philosophical discussion such as A-theory vs B-theory would then be shown to be meaningless: does the past exist? Taboo “exist” [http://www.youtube.com/watch?v=j4XT-l-_3y0]: the past no longer exists_1, but it exists_2 nevertheless.)
0Nisan9yYou can tell whether a timeless decision agent would prefer to have been born by giving it opportunities to make decisions that acausally increase its probability of being born. EDIT: For example, you can convince the agent that it was created because its creator believed that the agent would probably make paperclips. If the TDT agent values its existence, it will make paperclips. I don't think a causal decision agent has anything that can be called a "preference to have been born".
2Adele_L9ySo once your misery goes below one unit, you get insane gains in utility for small reductions in misery?
0CronoDAS9yI don't think my actual utility in real life follows that equation, but it's an example that has properties that make the example work. (Another analogy would be that the utility of being dead comes out to the square root of minus one, which can't be directly compared with real numbers.)

"use resources to increase the utility of people who already exist," not "increase average utility."

I agree, and I'd like to see a formal treatment of this idea.

OK... I have another criticism of the repugnant conclusion not based on resource constraints.

We can imagine a sequence of worlds A, B, C, D... each with a greater population, lower average happiness and greater utility than the previous. But did anyone say that the happiness has to converge to zero?

If we're indifferent between worlds with the same number of people and the same average happiness then yes, it does converge to zero. But if we choose some other averaging function then not necessarily. When going from A+ to B- we might lower the left column only a tiny amount, and when we repeat the process all the way to Z then people might be only modestly less happy than they were in A.
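A minimal sketch of the convergence point above (the multiplicative factors here are purely illustrative, not from Parfit): if each step from A toward Z lowers average happiness by a factor like 1 − 1/(n+1)², the infinite product of those factors is 1/2, so happiness converges to half its starting value rather than to zero.

```python
# Start with average happiness 10 in world A, then apply a small
# multiplicative reduction at each step of the A, B, C, ... sequence.
# The factors shrink fast enough that the product stays bounded away
# from zero: it converges to 1/2, so happiness converges to 5, not 0.
happiness = 10.0
for n in range(1, 1_000_000):
    happiness *= 1 - 1 / (n + 1) ** 2

print(happiness)  # just above 5.0, nowhere near zero
```

So "lower at every step" does not imply "arbitrarily low": without an extra premise forcing happiness toward zero, the sequence can bottom out at a level only modestly below A's.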

what the Mere Addition Paradox proves is not that you can make the world better by adding extra people, but rather that you can make it better by adding extra people and resources to support them.

Your conclusion seems to be that the "repugnant conclusion" is not actually repugnant. That is, the "dystopian" world of a large population leading barely worthwhile lives is better than the original world of a smaller population leading fulfilled lives. You argue that this is possible because the larger world has more resources.