Non-personal preferences of never-existed people

Some people see never-existed people as moral agents, and claim that we can talk about their preferences. Generally this means their personal preference for existing versus not existing. Formulations such as "it is better for someone to have existed than not" reflect this way of thinking.

But if the preferences of the never-existed are relevant, then their non-personal preferences are also relevant. Do they prefer a blue world or a pink one? Would they want us to change our political systems? Would they want us to not bring into existence some never-existent people they don't like?

It seems that those who are advocating bringing never-existent people into being in order to satisfy those people's preferences should be focusing their attention on their non-personal preferences instead. After all, we can only bring into being so many trillions of trillions of trillions; but there is no theoretical limit to the number of never-existent people whose non-personal preferences we can satisfy. Just get some reasonable measure across the preferences of never-existent people, and see if there's anything that sticks out from the mass.


So I haven't read this because there's all this other stuff higher in my queue that's based on the assumption that he's wrong and life is good, but David Benatar wrote "Better Never to Have Been: The Harm of Coming into Existence" claiming, roughly, that because humans seem to innately experience a loss of X as three times as bad as a gain of X, it would be better on average for a person not to be brought into existence.

From the reviews it seems Benatar's practical upshot is that people shouldn't kill themselves right away, nor kill their existing children, but neither should any more children be brought into existence.

If anyone is really into this topic it would be awesome for someone to read through the book and write a review that assumes the audience understands cognitive biases, astronomical waste, and the coming possibility of revising human nature so that "human nature" arguments aren't acceptable unless they cover "all possible human natures we could eventually edit ourselves into".

Still haven't had time to read the book, and probably never will, but John Danaher has been covering some arguments over the book on his (excellent) blog: Part One, Part Two, Part Three.

I have the book. Maybe I'll publish a reply from the transhumanist perspective if I ever find the time.

I'm a classical utilitarian, so I don't have this problem.

If I were to accept preference utilitarianism, I'd say that fulfilled preferences are worth utility, and by bringing them into being I'd allow them to have fulfilled preferences.

Of course, I'd also say that you should lock people in small, brightly lit spaces to make them prefer big, empty, dark spaces, like most of the universe. Then they'd have really fulfilled preferences. Perhaps I just don't understand preference utilitarianism.

In general, I think that most desires aren't fulfilled on a viscerally emotional level by the mere existence of something so much as by actually receiving it. I'm not nearly as fulfilled by ice-cream's existence as I am when I'm eating it.

I don't think those people would prefer having their preferences changed in that way.

If you mean they have to get the emotion of a preference being fulfilled, isn't that happiness?

Care to specify that utility function that you claim to follow? :-)

Now I have two undefined terms, rather than one.

I'm not trying to be a sophist here, I'm just pointing out that "classical utilitarians" are following a complicated, mostly unspecified utility function. This is ok! There is nothing wrong with it.

But there's also nothing wrong with having a different, complicated utility function that captures more of your values. Classical utilitarians do not have some special utility function, selected on some abstract simplicity criterion; they're in there with the rest of us (as long as we are utilitarians of some type).

Most people's ethics are based on their desires. People's desires are based on what makes them happy. That's as far down as it goes.

A somewhat simplistic definition of happiness is positive reinforcement. If you alter your preferences towards what's happening now, you're happy. If you alter them away, you're sad.

A utility function is quantitative, not qualitative.

How would you go about transforming these vague statements into a precise mathematical definition?

(I'll grant you "black box rights"; you can use terms - anger, doubt, etc. - that humans can understand, without having to define them mathematically. So if you come up with a scale of anger with generally understandable anecdotes attached to each level, that will be enough to specify the "anger" term in your overall utility function. Which we will need when we start talking quantitatively about trading anger off against pain, love, pleasure, embarrassment...). Indirect ways of measuring utility - "utility is money" being the most trivial - are also valid if you don't want to wade into the mess of human psychology, but they come with their own drawbacks (e.g. instrumental versus terminal goals).

Utility is the dot product of the derivative of desires and the observations. Desires are what you attempt to make happen.

If you start trying to make what's currently happening happen more often, then you're happy.
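The "dot product" claim can be made concrete with a toy sketch (the function name, features, and numbers are my invention, not the commenter's):

```python
# Toy sketch of "utility is the dot product of the derivative of desires
# and the observations". All names and numbers are purely illustrative.
def utility(desire_gradient, observations):
    """Dot product: marginal desire for each feature times how much
    of that feature is observed."""
    return sum(d * o for d, o in zip(desire_gradient, observations))

# An agent who wants more ice cream (+2 per unit) and less noise (-1 per unit),
# observing 3 units of ice cream and 1 unit of noise:
print(utility([2.0, -1.0], [3.0, 1.0]))  # 2*3 - 1*1 = 5.0
```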

I don't think most utilitarians claim to follow (or even know) their utility function so much as assert that utility maximization is the proper way to resolve moral conflicts.

Kind of like how physicists claim that there would be a theory of everything without actually knowing what it is.

I perfectly agree that utility maximisation is indeed the proper way to resolve common moral conflicts.

But utility functions can be as complex as you need them to be! Saying you have a utility function constrains you virtually not at all. But sometimes total utilitarians like to claim that their version is better because it is "simpler" or "more intuitive".

First of all, simplicity is not a virtue comparable with, say, human lives or happiness; secondly, I have different intuitions from them; and thirdly, their actual real utility function, if it were specified, would be unbelievably complex anyway.

I don't want to pour important moral insights down the drain, based on specious simplicity arguments....

Some people see never-existed people as moral agents, and claim that we can talk about their preferences. Generally this means their personal preference for existing versus not existing. Formulations such as "it is better for someone to have existed than not" reflect this way of thinking.

It might just reflect the speaker's preference for people to exist rather than not exist, rather than referencing the preferences of the potentially hypothetical person.

Well yes, it does, in my opinion. But it's not often phrased honestly.

What might be going on is that people are tempted to use a person's preference for existing as a proxy for the value of their life, in the same way that a person's preferences for birthday presents can inform us about what kinds of birthday presents will make them happier.

I would certainly think twice about having a child if I knew the child would grow up to express a wish to never have been born. And I'm not even a preference utilitarian. But this approach seems problematic, and it's probably better to just ask ourselves what kind of people we want to bring into existence.

I personally don't feel compelled to help non-existing people accomplish their goals.

I suspect that this has something to do with the fact that I seem to mostly care about things which activate my empathy, all of which have physically instantiated qualia (happiness, pain, etc.) that I care about, or are highly anthropomorphic and fake that effectively (like Wall-E).

Since non-existing people don't physically exist, I have yet to feel bad for them. This could just be a failure of my moral circuitry though, sort of how if I never found out about starving people in Africa, they wouldn't interact with me in a way that would make me feel bad about them. However, I am confused about how mathematically specified but non-existing people work.

This is how I imagine the general form of utilitarianism's utility function: assign all states of being a quantity of fun, which can be positive or negative. Multiply all fun-values times the amount of time over which they are being experienced times the number of beings experiencing them.

This utility function could be optimized by bringing people into the universe when the total fun-impact that they would have on the universe (including themselves) is higher than would be the total fun-impact of not bringing them into the universe.
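A minimal sketch of this decision rule, assuming the fun-times-time formalization above (the numbers are invented):

```python
# Total utility = sum over beings of (fun-level * duration experienced).
# Fun may be negative; all numbers here are invented for illustration.
def total_fun(experiences):
    """experiences: list of (fun_per_unit_time, duration) pairs, one per being."""
    return sum(fun * time for fun, time in experiences)

world_without = [(2.0, 50.0)]               # one existing being
world_with = [(2.0, 50.0), (1.5, 40.0)]     # the same being plus a new one
# Bring the new person into existence iff total fun goes up:
print(total_fun(world_with) > total_fun(world_without))  # True (160.0 > 100.0)
```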

However, this utility function only cares about people's preferences if they either already exist or may be brought into existence. It could act on data which suggested that sentiences with certain preferences were more likely to exist than sentiences with other preferences, but I don't know if we have any strong data in that area. (I would suppose not, but I don't think I've thought about it long enough to make a positive claim.)

As to why I'm discussing utilitarianism: any utility function which assigns utility to the fulfillment of others' preferences is a form of utilitarianism -- if you object to valuing all sentiences equally, then add a multiplier in front of each sentience indicating how much you value its fun relative to that of other sentiences. Either way, I think that the above conclusions still apply.
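The multiplier variant amounts to a weighted sum; a one-line sketch (weights and fun values invented):

```python
# Weighted utilitarianism: weight each sentience's fun by how much you
# value it. Weights and fun values are invented for illustration.
def weighted_total(weights, fun_values):
    return sum(w * f for w, f in zip(weights, fun_values))

# Valuing the first being's fun twice as much as the second's:
print(weighted_total([2.0, 1.0], [3.0, 5.0]))  # 2*3 + 1*5 = 11.0
```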

Do you have any candidates for a "reasonable measure"? It occurs to me that if you use something like the universal distribution, you introduce a circularity in your decision algorithm, because your decisions influence which persons have more measure, which in turn influences the preferences that go into making those decisions.

Are non-existent people simply extensions of the existent people who created them? Is not the reason for their creation to forward a point supported or opposed by their creator?

I would advocate the ethical principle that we should effectively take into account the values of non-existent people only to the extent that we can expect them to effectively take our own interests into account.

So, for example, as a recent retiree, I should take into account the preferences of people to be born next year to the extent that I expect them to keep Social Security solvent. On the other hand, I have a much lower obligation to people to be born next century, since I don't expect them to contribute much to me.

As for my obligations to counter-factual people - those flow through counterparts. I have obligations to the Down-syndrome child next door because his healthy, but counterfactual, counterpart would have had some obligations to me and to my counterfactual Down-syndrome counterpart. But neither of us has particularly strong obligations to (nor claims on) some poor peasant in Kerala, because of the length of the chain of counterfactual assumptions and common acquaintances that connect us.

If I understand correctly, you're saying that if you had Down Syndrome and your neighbor were healthy then you would want your neighbor to help you; so therefore in reality you are healthy and help your neighbor who has Down Syndrome; and this constitutes your obligation to them.

Is this correct?

Yes. Roughly speaking, a Nash bargain could-have/should-have been made to that effect in the "Original Position" when we were both operating under Rawls's "Veil of Ignorance". I don't completely buy Rawls's "Theory of Justice", but it makes a lot more sense to me than straight utilitarianism.

I want to bring people into existence to satisfy my own preferences. Of course, everything I want tautologically satisfies my own preferences, but I decide that bringing people into existence is good because of the value of their lives, not because they would have chosen to exist.

Good

I'd somewhat disagree with you (at least in the strong, repugnant conclusion form of your argument), but this is a much more defensible argument than ones that implicitly rely on the preferences of non-existent people.

at least in the strong, repugnant conclusion form of your argument

What do you mean by this? When I talked about the value of people's lives, I was referring to people's lives, insofar as they have value, not implying that all lives inherently have value just by existing.

I was referring to this type of argument: http://en.wikipedia.org/wiki/Repugnant_conclusion and making unwarranted assumptions about how you would handle these cases.

Oh, the original repugnant conclusion. I thought you were just drawing an analogy to it. Anyway, I think that people only find this conclusion repugnant because of scope insensitivity.

I find it repugnant because I find it repugnant. Any population ethic that is utilitarian is as good as any other; mine is of a type that rejects the repugnant conclusion. Average utilitarianism, to pick one example, is not scope insensitive, but rejects the repugnant conclusion (I personally think you need to be a bit more sophisticated).

I find it repugnant because I find it repugnant.

You sound a bit like Self-PA here. You do realize that it is possible to misjudge your preferences due to factual mistakes? That's what the people in Eliezer's examples of scope insensitivity were doing. I don't see how you could determine the utility of one billion happy lives just by asking a human brain how it feels about the matter (i.e. without more complex introspection, preferably involving math).

Average utilitarianism leads to the conclusion that if someone of below-average personal experiential utility (meaning the utility that they experience, rather than the utility function that describes their preferences) can be removed from the world without affecting anyone else's personal experiential utility, then this should be done. My mind can understand one person's experiences, and I think that, as long as their personal experiential utility is positive*, doing so is wrong.

* Since personal experiential utility must be integrated over time, it must have a zero, unlike the utility functions that describe preferences.
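The claimed conclusion is easy to check numerically (the utilities are invented):

```python
# Numeric check of the claim: under naive average utilitarianism, removing
# a below-average but still-positive life raises the average. Numbers invented.
lives = [10.0, 8.0, 3.0]      # experiential utilities; 3.0 is below the average
avg_before = sum(lives) / len(lives)         # 7.0
remaining = [10.0, 8.0]                      # the 3.0-utility life removed
avg_after = sum(remaining) / len(remaining)  # 9.0
print(avg_after > avg_before)  # True: the average endorses the removal
```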

Average utilitarianism leads to the conclusion that if someone of below-average personal experiential utility (meaning the utility that they experience, rather than the utility function that describes their preferences) can be removed from the world without affecting anyone else's personal experiential utility, then this should be done.

I suspect you've allowed yourself to be confused by the semantics of the scenario. If you rule out externalities, removing someone from the world of the thought experiment can't be consequentially equivalent to killing them (which leaves a mess of dangling emotional pointers, has a variety of knock-on effects, and introduces additional complications if you're using a term for preference satisfaction, to say nothing of timeless approaches); it's more accurately modeled with a comparison between worlds where the person in question does and doesn't exist, Wonderful Life-style.

With that in mind, it's not at all self-evident to me that the world where the less-satisfied-than-average individual exists is more pleasant or morally perfect than the one in which they don't. Why not bite that bullet?

No, I was not making that confusion. I based my decision on a consideration of just that person's mental state. I find a "good" life valuable, though I don't know the specifics of what a good life is, and ceteris paribus, I prefer its existence to its nonexistence.

As evidence to me clearly differentiating killing and "deleting" someone, I am surprised by how much emphasis Eliezer puts on preserving life, rather than making sure that good lives exist. Actually, thinking about that article, I am becoming less surprised that he takes this position because he focuses on the rights of conscious beings rather than on some additional value possessed by already-existing life relative to nonexistent life.

Hmm. Yes, it does appear that a less-happy-than-average person presented with a device that would remove them from existence without externalities would be compelled to use it if they are an average utilitarian with a utility function defined in terms of subjective quality of life, regardless of the value of their experiential utility.

The problem is diminished, though not eliminated, if we use a utility function defined in terms of expected preference satisfaction (people generally prefer to continue existing), and I'm really more of a preference than a pleasure/pain utilitarian, but you can overcome that by making the gap between your and the average preference satisfaction large enough that it overcomes your preference for existing in the future. Unlikely, perhaps, but there's nothing in the definition of the scenario that appears to forbid it.

That's the trouble, though; for any given utility function except one dominated by an existence term, it seems possible to construct a scenario where nonexistence is preferable to existence: Utility Monsters for pleasure/pain utilitarians, et cetera. A world populated by average-type preference utilitarians with a dominant preference for existing in the future does seem immune to this problem, but I probably just haven't thought of a sufficiently weird dilemma yet. The only saving grace is that most of the possibilities are pretty far-fetched. Have you actually found a knockdown argument, or just an area where our ethical intuitions go out of scope and stop returning good values?

I don't think that existence is always preferable to nonexistence. A good life seems to have value, but a bad life is, ceteris paribus, not preferable to nonexistence. The problem with average utilitarianism is that a good life is declared worse than nonexistence if it is below average, but I want to preserve the value in that life, not eliminate it. In general, the solution is not to add a large existence term, but to make sure that the utility function as stated matches your actual preference for whether or not someone should exist.

Have you actually found a knockdown argument, or just an area where our ethical intuitions go out of scope and stop returning good values?

Argument for what exactly? We've touched on a few different issues and it's not clear to me what you mean here.

One can use a simple hack to get round this problem: set any being that has existed but no longer does as having utility zero (for some zero level). In that case, average utilitarians won't want to bring them into existence, but won't want to eliminate them afterwards.
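A sketch of the hack (numbers invented): the dead stay in the denominator at utility zero, so eliminating a positive life can no longer raise the average.

```python
# Dead beings keep utility 0 but remain in the average's denominator.
# All numbers are invented for illustration.
def avg_with_dead(alive_utilities, dead_count):
    return sum(alive_utilities) / (len(alive_utilities) + dead_count)

before = avg_with_dead([10.0, 8.0, 3.0], dead_count=0)   # 7.0
# "Eliminating" the 3.0-utility person zeroes them but keeps them counted:
after_kill = avg_with_dead([10.0, 8.0], dead_count=1)    # 6.0
print(after_kill < before)  # True: killing a positive life lowers the average
```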

There are other ways of achieving the same thing. I think that people are far too inclined to over-simplify their moral intuitions, based on over-simple mathematical models. Once your preferences are utility functions, and with some decent way of dealing with copies/death, then there are no further reasons to expect simplicity.

There are so many problems with this.

  1. Relativity: simultaneity is frame-dependent, so there is no absolute fact about whether different people's experiences happen before or after each other.

  2. On another planet, a happy civilization with our values existed billions of years ago. Over the course of its existence, there were quadrillions of people. Everything we do now is almost worthless by comparison, because they are all bringing down the average.

  3. Oog, a member of the first sentient tribe, which has just come into existence, is going to be hit very hard with a club. Thag, a future member of the same tribe who does not yet exist, is going to be hit very hard with a club three times, 100 years from now. If you can prevent only one of these, your choice depends on the number of people who live in the tribe between now and then, even though by then they would no longer exist.

2 is irrelevant, because it doesn't matter whether our current utility is in some absolute sense low, because we will still make the same decisions! U(A)=1 U(B)=100 gives the same outcome as U(A)=0.001 and U(B)=0.1.
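The invariance in the comment is just that rescaling all utilities by the same positive constant never changes which option wins; with the comment's own numbers:

```python
# Rescaling all utilities by the same positive constant (here 0.001)
# leaves the chosen option unchanged.
original = {"A": 1.0, "B": 100.0}
rescaled = {"A": 0.001, "B": 0.1}   # both values multiplied by 0.001
best = lambda u: max(u, key=u.get)  # pick the option with highest utility
print(best(original), best(rescaled))  # B B
```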

1 and 3 can be solved by taking average utility over all agents, past and future. But that's irrelevant to me because... I'm not an average utilitarian :-)

I'm waiting until I can capture my moral values in some (complicated) utility function. Until then, I'm refining my position.

2 is irrelevant, because it doesn't matter whether our current utility is in some absolute sense low, because we will still make the same decisions! U(A)=1 U(B)=100 gives the same outcome as U(A)=0.001 and U(B)=0.1.

I was more thinking of people who needed to choose between benefiting our society and benefiting the ancient society, for whom this distinction would be relevant. I guess this is mathematically equivalent to 3.

1 and 3 can be solved by taking average utility over all agents, past and future. But that's irrelevant to me because... I'm not an average utilitarian :-)

This brings back the original problem of killing people to bring up the average.

This brings back the original problem of killing people to bring up the average.

That can be dealt with in some usual way, by setting the utility of someone dead to zero but keeping them in the average.

It feels odd to take an average over a possibly infinite future in this manner. It might work, but I feel like how well it matches our preferences will depend on the specifics of physics.

EDIT: This also implies that a world with 10 happy immortal people and 10 happy people who die is much worse than one with just 10 immortal people. Would you agree with that and all similar statements of that type implied by this solution?

I agree with that to some extent (as in I disagree, but replace both 10 with 10 trillion and I'd agree). But I'm still firming up my intuition at the moment.

One can use a simple hack to get round this problem: set any being that has existed but no longer does as having utility zero (for some zero level). In that case, average utilitarians won't want to bring them into existence, but won't want to eliminate them afterwards.

I'm sympathetic to this line of thought, but setting the number to zero creates unnecessary problems. I think it would make more sense to set the utility of a dead person to be whatever the total amount of utility they experienced over their lifetime was. This has the same result of removing the incentive to kill unhappy people, since (for instance) people who died in their 20s would normally have much lower utility than people who lived to be 80. And it would remove certain counterintuitive results that zeroing produces, such as having someone who was tortured to death end up making the same contribution to the average as someone who died from excessive sex.
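A sketch of the contrast (lifetime totals invented): zeroing makes a tortured-to-death life and a happy completed life contribute identically, while the lifetime-total convention separates them.

```python
# Compare the two conventions for scoring a dead person in the average.
# All numbers are invented for illustration.
def avg(utilities):
    return sum(utilities) / len(utilities)

living = [50.0]                             # one living person's utility
tortured_total, happy_total = -30.0, 70.0   # lifetime totals of two dead people

# Zeroing convention: either dead person counts as 0, so both worlds score:
print(avg(living + [0.0]))             # 25.0, regardless of how they lived
# Lifetime-total convention: the worlds come apart.
print(avg(living + [tortured_total]))  # 10.0
print(avg(living + [happy_total]))     # 60.0
```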

In general, the solution is not to add a large existence term, but to make sure that the utility function as stated matches your actual preference for whether or not someone should exist.

What do you mean by "actual preference"? I can't think of many interpretations of that comment that don't implicitly define a utility function (making the statement tautological): even evaluating everything in terms of how well it matches our ethical intuitions constitutes a utility function, albeit one that'll almost certainly end up being inconsistent outside a fairly narrow domain.

Our intuitions evolved to make decisions about existing entities in a world dense with externalities. I'd expect them to choke on problems that don't deal with either one in a conventional way, and I don't trust intuition pumps that rely on those problems.

I do mean a utility function, but one not necessarily known to the agent. If one values all good lives but wants to consider average utilitarianism, they could make average utilitarianism less different from their real utility function by adding existence terms. However, if their real utility function says that a life should exist, ceteris paribus, if it meets a certain standard of value, regardless of the value of other lives, then they are not really an average utilitarian; average utilitarianism is incompatible with that statement.

That seems logically valid, but it doesn't tell us very much about those utility functions that we don't already know.

Well, we must already know something about our utility functions, or it is meaningless to say that we desire for them to be maximized. I feel considerably more confident that I want people to live good lives than I feel toward any of the arguments for average utilitarianism that I have seen.

I may misjudge my preferences, but unless someone else has convincing reasons to claim they know my preferences better than me, I'm sticking with them :-)

Btw, total utilitarianism has a problem with death as well. Most total utilitarians do not consider "kill this person, and replace them with a completely different person who is happier/has easier to satisfy preferences" as an improvement. But if it's not an improvement, then something is happening that is not captured by the standard total utility. And if total utilitarianism has to have an extra module that deals with death, I see no problem for other utility functions to have a similar module.

I may misjudge my preferences, but unless someone else has convincing reasons to claim they know my preferences better than me, I'm sticking with them :-)

Do you think that Eliezer's arguments about scope insensitivity here should have convinced the Israelis donating to sick children to reevaluate their preferences? Isn't your average utilitarianism based on the same intuition?

I am neither a classical nor a preference utilitarian, but I am reasonably confident that my utility function is a sum over individuals, so I consider myself a total utilitarian. Ceteris paribus, I would consider the situation that you describe an improvement.

Do you think that Eliezer's arguments about scope insensitivity here should have convinced the Israelis donating to sick children to reevaluate their preferences? Isn't your average utilitarianism based on the same intuition?

Only if they value saving more children in the first place. If the flaw is pointed out, if they understand fully the problem, and then say "actually, I care about warm fuzzies to do with saving children, not saving children per se", then they are monstrous people, but consistent.

You can't say that people have the wrong utility by pointing out scope insensitivity, unless you can convince them that scope insensitivity is morally wrong. I think that scope insensitivity for existent humans is wrong, but fine over non-existent humans, which I don't count as moral agents - just as normal humans aren't worried about the scope insensitivity over the feelings of sand.

I find the repugnant conclusion repugnant. Rejecting it, is however, non-trivial, so I'm working towards an improved utility that has more of my moral values and less problems.

Only if they value saving more children in the first place. If the flaw is pointed out, if they understand fully the problem, and then say "actually, I care about warm fuzzies to do with saving children, not saving children per se", then they are monstrous people, but consistent.

Would that actually be the best way of getting warm fuzzies? Anyway, any set of actions is consistent with maximizing some utility function; it is sets of preferences that can be inconsistent with utility maximization. I'm not saying that I could convince any possible being that scope insensitivity is wrong. What I do think is that the humans are not acting according to their "real" preferences, and that they would realize this if they understood Eliezer's arguments.

I think that scope insensitivity for existent humans is wrong, but fine over non-existent humans, which I don't count as moral agents.

What moral status do you attach to humans who do not currently exist, but definitely will exist in the future?

I'm working towards an improved utility that has more of my moral values and less problems.

Good luck!

What I do think is that the humans are not acting according to their "real" preferences, and that they would realize this if they understood Eliezer's arguments.

Humans' real preferences aren't utility-based, not even close, and this is a big potential problem. So they have to make their preferences closer to a utility function, using some method or other. But humans should never act according to their messy "real" preferences.

What moral status do you attach to humans who do not currently exist, but definitely will exist in the future?

Same as I do to people today. Simple heuristic: any choice that causes increased utility to any agent that exists at any time is always positive - giving a dollar to somebody in two generation is good, whoever they are.

On the other hand, choices that increase or decrease the number of agents - giving birth to that person in two generations or not - are more complicated.

Good luck!

Thanks!

Have you seen http://meteuphoric.wordpress.com/2011/03/13/if-birth-is-worth-nothing-births-are-worth-anything/ ? It may help you notice any inconsistencies between possible utility functions and your values.

Oh yes, I've seen it - I think the author pointed it out to me. It's a nice point, but it doesn't even undermine average utilitarianism. It only undermines particularly naive "birth means nothing" arguments.

I simply take the position that "only the preferences of people currently existing at the time they have those preferences are relevant" (this means that your current preferences about what happens after you die are relevant, but not your preferences "before you were born"). That leaves a lot of flexibility...

Of course it doesn't apply to many forms of average utilitarianism. It just struck me as a useful consistency check.

  • It is unknown whether or not we should treat nonexistent people as moral agents (like people rather than like trees), but it's an interesting idea to consider.

  • If we do this, we should focus on non-personal preferences rather than personal ones, because we can satisfy infinitely more preferences that way.

  • This contradicts the way most people reason when they treat nonexistent people as moral agents.

  • However, there is a problem: we need to try and figure out the preferences of nonexistent people to see what treating them as moral agents implies.

presumably that it is an error to take non-existing persons' preferences into account.

I was not aware that anyone actually does that.

Counterexamples:

1) All beings that act as if they were pursuing a goal of (pseudo)-self-replication are also acting as if they were taking non-existing beings' preferences into account (specifically, the preference of their future pseudo-copies to exist once they exist).

2) Beings that attempt to withhold resources from entropisation ("consumption") in anticipation of exchanging them later on terms causally influenced by the preferences of not-yet-existing beings ("speculators").

All beings that act as if they were pursuing a goal of (pseudo)-self-replication are also acting as if they were taking non-existing beings' preferences into account (specifically, the preference of their future pseudo-copies to exist once they exist).

I was under the impression that you were arguing here that the goal of self-replication is adequately justified by the "clippiness" of the prospective replica - with the most important component of the property 'clippiness' being a propensity to advance Clippy's values. That is, you weren't concerned with providing utility to the replicas - you were concerned with providing utility to yourself.

My point was that the distinction between "selves" is spurious. Clippys support all processes that instantiate paperclip-maximizing, differentiating between them only by their clippy-effectiveness and the certainty of this assessment of them.

My point here is that different utility functions can explain a certain class of being's behavior, and one such utility function is one that places value on not-yet-existing beings -- even though the replicator may not, on self-reflection, regard this as the value it is pursuing.

Robin Hanson often does when arguing we should have more people in the world.

People not only argue from fictional evidence, they think from it.

Edit: Could the downvoter please explain their dispute?

As long as there is thinking involved....

I fail to see the cases the OP is working from.

For some value of "thinking". I see what the OP is talking about and it isn't pretty.

If I understand the orthogonality thesis properly, then it is possible to have any utility function. So if we were to try to satisfy the nonpersonal preferences of nonexistent people, we would be paralyzed with indecision, because for every nonexistent creature with a utility function saying "Do X" there would be another nonexistent creature with a utility function saying "Don't do X."

This also means that your suggestion to "get some reasonable measure across the preferences of never-existent people, and see if there's anything that sticks out from the mass" probably wouldn't work. For everything that stuck out there would be another thing that stuck out saying to do the opposite.
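The cancellation argument, as a toy calculation (preference strengths invented):

```python
# If for every nonexistent agent preferring X there is a mirror agent
# preferring not-X with equal strength, any symmetric measure sums to zero.
# Strengths are invented for illustration.
prefs_for_x = [+1.0, -1.0, +0.5, -0.5]  # signed preference strengths for X
print(sum(prefs_for_x))  # 0.0 -- nothing "sticks out from the mass"
```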

Now Robin Hanson, who I think was responsible for starting this whole line of thought, replied to similar objections by suggesting that maybe not all non-existent people's preferences should count morally. But if I recall, one of his main arguments for taking non-existent people's preferences into account in the first place was that if you started ignoring people's preferences, where would you stop?

So overall, I think that regarding the preferences of people who don't exist, and never will exist, as morally relevant, isn't a good idea.