Two kinds of population ethics, and Current-Population Utilitarianism

by AlexMennen · 3 min read · 17th Jun 2014 · 21 comments


There are two different kinds of questions that could be considered to fall under the subject of population ethics: “What sorts of altruistic preferences do I have about the well-being of others?”, and “Given all the preferences of each individual, how should we compromise?”. In other words, the first question asks how everyone's experiential utility functions (which measure quality of life) contribute to my (or your) decision-theoretic utility function (which takes into account everything that I or you, respectively, care about), and the second asks how we should agree to aggregate our decision-theoretic utility functions into something that we can jointly optimize for. When people talk about population ethics, they often do not make it clear which of these they are referring to, but they are different questions, and I think the difference is important.


For example, suppose Alice, Bob, and Charlie are collaborating on a project to create an artificial superintelligence that will take over the universe and optimize it according to their preferences. But they face a problem: they have different preferences. Alice is a total utilitarian, so she wants to maximize the sum of everyone's experiential utility. Bob is an average utilitarian, so he wants to maximize the average of everyone's experiential utility. Charlie is an egoist, so he wants to maximize his own experiential utility. As a result, Alice, Bob, and Charlie have some disagreements over how their AI should handle decisions that affect the number of people in existence, or which involve tradeoffs between Charlie and people other than Charlie. They at first try to convince each other of the correctness of their views, but they eventually realize that they don't actually have any factual disagreement; they just value different things. As a compromise, they program their AI to maximize the average of everyone's experiential utility, plus half of Charlie's experiential utility, plus a trillionth of the sum of everyone's experiential utility.
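The compromise above is just a fixed weighted combination of the three aggregation rules, which is easy to write down concretely. Here is a minimal sketch, assuming each person's experiential utility for an outcome is a single real number; the names and weights come from the story, while the particular numbers are illustrative:

```python
# Compromise objective from the Alice/Bob/Charlie story:
# average of everyone's utility (Bob's rule), plus half of Charlie's
# utility (Charlie's rule, at half weight), plus a trillionth of the
# total utility (Alice's rule, at tiny weight).

def compromise_utility(experiential_utilities, charlie="Charlie"):
    values = list(experiential_utilities.values())
    avg = sum(values) / len(values)
    total = sum(values)
    return avg + 0.5 * experiential_utilities[charlie] + 1e-12 * total

# Illustrative outcome: each person's experiential utility is a number.
outcome = {"Alice": 10.0, "Bob": 8.0, "Charlie": 6.0}
print(compromise_utility(outcome))  # ≈ 11.0 (8.0 average + 3.0 from Charlie's term)
```

The tiny weight on the total-utility term means Alice's preferences act only as a tiebreaker, which is the point of the compromise: the terms are commensurable only after the weights are chosen.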


Of course, utility functions can differ in more ways than average versus total utilitarianism and altruism versus egoism. Maybe you care about something other than the experiences of yourself and others. Or maybe your altruistic preferences about someone else's experiences differ from their selfish preferences, like how a crack addict wants to get more crack while their family wants them to stop.


Anyway, the point is, there are many ways to aggregate everyone's experiential utility functions, and not everyone will agree on one of them. In fact, since people can care about things other than experiences, many people might not like any of them. It seems silly to suggest that we would want a Friendly AI to maximize an aggregation of everyone's experiential utility functions; there would be potentially irresolvable disagreements over which aggregation to use, and any of them would exclude non-experiential preferences. Since decision-theoretic utility functions actually take into account all of an agent's preferences, it makes much more sense to try to get a superintelligence to maximize an aggregation of decision-theoretic utility functions.


The obvious next question is which aggregation of decision-theoretic utility functions to use. One might think that average and total utilitarianism could both be applied to decision-theoretic utility functions, but that is actually not so easy. Decision-theoretic utility functions take into account everything the agent cares about, which can include things that happen in the far future, after the agent dies. With a dynamic population, it is unclear which utility functions should be included in the aggregation. Should every agent that does or ever will exist have their utility function included? If so, then the aggregation would indicate that humans should be replaced with large numbers of agents whose preferences are easier to satisfy[1] (this is true even for average utilitarianism, because there needs to be enough of these agents to drown out the difficult-to-satisfy human preferences in the aggregation). Should the aggregation be dynamic with the population, so that at time t, the preferences of agents who exist at time t are taken into account? That would be dynamically inconsistent. In a population of sadists who want to torture people (but only people who don't want to be tortured), the aggregation would indicate that they should create some people and then torture them. But then once the new people are created, the aggregation would take their preferences into account and indicate that they should not be tortured.


I suggest a variant that I'm tentatively calling current-population utilitarianism: aggregate the preferences of the people who are alive right now, and then leave this aggregated utility function fixed even as the population and their preferences change. By “right now”, I don't mean June 17, 2014 at 10:26 pm GMT; I mean the late pre-singularity era as a whole. Why? Because this is when the people who have the power to affect the creation of an AGI that would maximize said aggregated utility function are alive. If it were just up to me, I would program an AGI to maximize my own utility function[2], but one person cannot do that on their own, and I don't expect I'd be able to get very many other people to go along with that. But all the people who will be contributing to an FAI project, and everyone whose support they can seek, live in the near-present. No one else can support or undermine an FAI project, so why make any sacrifices for them for any reason other than that you (or someone who can support or undermine you) care about them (in which case their preferences will show up in the aggregation through your utility function)? Now I'll address some anticipated objections.
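As a sketch of what fixing the aggregated utility function means operationally: snapshot the set of agents at aggregation time, and let later population changes affect the aggregate only through what the frozen agents' utility functions happen to care about. The uniform weighting and the toy utility functions below are assumptions for illustration, not part of the proposal:

```python
# Current-population utilitarianism, sketched: freeze the agent list at
# aggregation time. Agents created later are not added to the aggregate
# (though frozen agents' utility functions may still care about them).

def make_current_population_aggregate(current_agents):
    frozen = list(current_agents)  # snapshot of who exists right now
    def aggregate(world_state):
        # Uniform weighting is an illustrative choice; any fixed
        # weighting would do for the point being made here.
        return sum(agent(world_state) for agent in frozen) / len(frozen)
    return aggregate

# Two agents alive "now", each a utility function over world states.
agents = [lambda w: w["x"], lambda w: -w["x"] ** 2]
U = make_current_population_aggregate(agents)

agents.append(lambda w: 100.0)  # an agent created later: not counted
print(U({"x": 2.0}))  # averages only the two original agents: -1.0
```

The snapshot (`list(current_agents)`) is what makes the aggregation dynamically consistent: the objective the AI optimizes does not shift when new agents appear, which is exactly the property the sadist example above shows the time-indexed aggregation lacks.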


Objection: Doesn't that mean that people created post-singularity will be discriminated against?

Answer: To the extent that you want people created post-singularity not to be discriminated against, this will be included in your utility function.


Objection: What about social progress? Cultural values change over time, and only taking into account the preferences of people alive now would force cultural values to stagnate.

Answer: To the extent that you want cultural values to be able to drift, this will be included in your utility function.


Objection: What if my utility function changes in the future?

Answer: To the extent that you want your future utility function to be satisfied, this will be included in your utility function.


Objection: Poor third-worlders also cannot support or undermine an FAI project. Why include them but not people created post-singularity?

Answer: Getting public support requires some degree of political correctness. If we tried to rally people around the cause of creating a superintelligence that will maximize the preferences of rich first-worlders, I don't think that would go over very well.


[1] One utility function being easier to satisfy than another doesn't actually mean anything without some way of normalizing the utility functions, but since aggregation requires normalizing them somehow anyway, I'll ignore that problem.
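For illustration, one simple normalization scheme (certainly not the only one, and not one the post endorses) rescales each utility function so that, over a fixed set of candidate outcomes, its worst outcome scores 0 and its best scores 1:

```python
# Min-max normalization over a fixed outcome set: an illustrative way
# to make utility functions comparable before aggregating them.

def normalize(utility, outcomes):
    lo = min(utility(o) for o in outcomes)
    hi = max(utility(o) for o in outcomes)
    return lambda o: (utility(o) - lo) / (hi - lo)

u = normalize(lambda o: 2 * o, [0, 1, 2])
print(u(0), u(2))  # 0.0 1.0
```

Under any such scheme, "easier to satisfy" becomes meaningful only relative to the chosen normalization, which is why the footnote can set the problem aside.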

[2] This is not a proclamation of extreme selfishness. I'm still talking about my decision-theoretic utility function, which is defined, roughly speaking, as what I would maximize if I had godlike powers, and is at least somewhat altruistic.