This post is mostly propaganda for the Nash Bargaining solution, but also sets up some useful philosophical orientation. This post is also the first post in my geometric rationality sequence.
Let's pretend that you are a utilitarian. You want to satisfy everyone's goals, and so you go behind the veil of ignorance. You forget who you are. Now, you could be anybody. You now want to maximize expected expected utility. The outer (first) expectation is over your uncertainty about who you are. The inner (second) expectation is over your uncertainty about the world, as well as any probabilities that come from you choosing to include randomness in your action.
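As a toy illustration, the nested expectation can be computed directly. All the numbers below are hypothetical; the point is just the structure of the two expectations:

```python
# Outer uncertainty: who you are, behind the veil of ignorance.
identity_probs = {"alice": 0.3, "bob": 0.7}
# Inner uncertainty: what the world is like (plus any randomness in your action).
world_probs = {"sunny": 0.6, "rainy": 0.4}
# utility[person][world]: made-up, already-comparable utilities.
utility = {
    "alice": {"sunny": 2.0, "rainy": 0.0},
    "bob":   {"sunny": 1.0, "rainy": 3.0},
}

def expected_expected_utility(identity_probs, world_probs, utility):
    # Inner expectation over worlds, then outer expectation over identities.
    return sum(
        q * sum(p * utility[person][w] for w, p in world_probs.items())
        for person, q in identity_probs.items()
    )

eeu = expected_expected_utility(identity_probs, world_probs, utility)
# Alice's inner expectation is 1.2, Bob's is 1.8, so eeu = 0.3*1.2 + 0.7*1.8 = 1.62.
```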
There is a problem. Actually, there are two problems, but they disguise themselves as one problem. The first problem is that it is not clear where you should get your distribution over your identity from. It does not make sense to just take the uniform distribution; there are many people you can be, and they exist to different extents, especially if you include potential future people whose existences are uncertain.
The second problem is that interpersonal utility comparisons don't make sense. Utility functions are not a real thing. Instead, there are preferences over uncertain worlds. If a person's preferences satisfy the VNM axioms, then we can treat that person as having a utility function, but the real thing is more like their preference ordering. When we get utility functions this way, they are only defined up to affine transformation. If you add a constant to a utility function, or multiply a utility function by a positive constant, you get the same preferences. Before you can talk about maximizing the expectation over your uncertainty about who you are, you need to put all the different possible utility functions into comparable units. This involves making a two dimensional choice. You have to choose a zero point for each person, together with a scaling factor for how much their utility goes up as their preferences are satisfied.
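The affine-invariance point can be checked concretely. Below, the outcomes, utilities, and lotteries are all hypothetical; the sketch just verifies that a positive affine transformation of a utility function leaves the preference between any two lotteries unchanged:

```python
# A hypothetical utility function over three outcomes.
u = {"apple": 0.0, "banana": 2.0, "cherry": 5.0}

def affine(u, a, b):
    """Positive affine transform a*u + b (requires a > 0)."""
    return {o: a * v + b for o, v in u.items()}

v = affine(u, a=3.0, b=-7.0)  # same preferences, different numbers

def prefers(u, lot1, lot2):
    """Compare two lotteries (outcome -> probability) by expected utility."""
    eu = lambda lot: sum(p * u[o] for o, p in lot.items())
    return eu(lot1) > eu(lot2)

l1 = {"apple": 0.5, "cherry": 0.5}  # expected utility 2.5 under u
l2 = {"banana": 1.0}                # expected utility 2.0 under u

# The preference between the lotteries is the same under u and v.
assert prefers(u, l1, l2) == prefers(v, l1, l2)
```

The raw expected utilities differ under `u` and `v`, which is exactly why cross-person comparisons of the numbers are meaningless without a choice of zero point and scale.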
Luckily, to implement the procedure of maximizing expected expected utility, you don't actually need to know the zero points, since these only shift expected expected utility by a constant. You do, however, need to know the scaling factors. This is not an easy task. You cannot just say something like "Make all the scaling factors 1." You don't actually start with utility functions, you start with equivalence classes of utility functions.
Thus, to implement utilitarianism, we need to know two things: What is the distribution on people, and how do you scale each person's utilities? This gets disguised as one problem, since the thing you do with these numbers is just multiply them together to get a single weight, but it is actually two things you need to decide. What can we do?
Now, let's pretend you are an egalitarian. You still want to satisfy everyone's goals, and so you go behind the veil of ignorance, and forget who you are. The difference is that now you are not trying to maximize expected expected utility, and instead are trying to maximize worst-case expected utility. Again, the expectation contains uncertainty about the world as well as any randomness in your action. The "worst-case" part is about your uncertainty about who you are. You would like to have reasonably high expected utility, regardless of who you might be.
When I say maximize worst-case expected utility, I am sweeping some details under the rug about what to do if you manage to max out someone's utility. The actual proposal is to maximize the minimum utility over all people. Then if there are multiple ways to do this, consider the set of all people for which it is still possible to increase their utility without bringing anyone below this minimum. Repeat the proposal with only those people, subject to the constraint that you only consider actions that don't bring anyone below the current minimum. (Yeah, yeah, this isn't obviously well defined for infinitely many people. I am ignoring those details right now.)
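For finitely many people and a finite menu of options, the tiered proposal above amounts to a leximin comparison: sort each option's utility vector in increasing order and compare lexicographically. A minimal sketch, with made-up utility numbers, and ignoring randomized mixtures of options:

```python
def leximin_key(utilities):
    """Sort ascending: leximin prefers the lexicographically larger sorted
    vector (first raise the worst-off, then the next-worst-off, ...)."""
    return sorted(utilities)

# Hypothetical feasible options, each giving expected utilities to three people.
options = {
    "A": (1.0, 1.0, 9.0),
    "B": (1.0, 4.0, 4.0),
    "C": (0.5, 8.0, 8.0),
}

best = max(options, key=lambda name: leximin_key(options[name]))
# A and B tie on the minimum (1.0), so the tie is broken by the
# second-worst-off person, where B (4.0) beats A (1.0); C loses outright
# because its minimum (0.5) is lowest.
assert best == "B"
```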
This is called egalitarianism, because assuming you have the ability to randomize, and ignoring complications related to maxing out someone's utility, you will tend to give everyone the same expected utility. (For example, in the two person case, it will always be the case that either it is not possible to increase the expected utility of the person with lower expected utility, or the two people have the same expected utility.)
Unfortunately, there are also two problems with defining egalitarianism. We no longer have to worry about a distribution on people. However, now we have to worry about what the zero point of each person's utility function is, and also what the scaling factor is for each person's utility function.
Unlike utilitarianism, egalitarianism will sometimes recommend randomizing between different outcomes for the sake of fairness.
Utilitarianism and egalitarianism each have their own type of utility monster.
For utilitarianism, imagine Cookie Monster. Cookie Monster gets a bazillion utility for every cookie he gets. This dwarfs everyone else's utility, and you should devote almost all your resources to giving cookies to Cookie Monster.
For egalitarianism, imagine Oscar the Grouch. Oscar hates everything. Worlds range from giving Oscar zero utility to giving Oscar one bazillionth of a utility. Assuming it is possible to give everyone else much more than a bazillionth of a utility simultaneously, you should devote almost all of your resources to maximizing Oscar's utility.
For both utilitarianism and egalitarianism, it is possible to translate and rescale utilities to create arbitrarily powerful utility monsters, which is to say that the choice of how to normalize utility really matters a lot.
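Here is the Cookie Monster phenomenon in miniature, with hypothetical numbers: the same two options and the same underlying preferences, but multiplying one person's utility function by a large constant flips the utilitarian recommendation.

```python
# Two people, two options; utilities before any rescaling.
#             person 1, person 2
options = {
    "share":   (5.0, 5.0),
    "monster": (9.0, 0.0),
}

def utilitarian_choice(options, scale):
    """Pick the option maximizing the scale-weighted sum of utilities."""
    return max(options, key=lambda o: sum(s * u for s, u in zip(scale, options[o])))

# With equal scaling factors, sharing wins (5 + 5 = 10 beats 9 + 0 = 9).
assert utilitarian_choice(options, (1.0, 1.0)) == "share"

# Scale person 1's utility by a million: they are now a utility monster,
# and utilitarianism devotes everything to them (9e6 beats 5e6 + 5).
assert utilitarian_choice(options, (1e6, 1.0)) == "monster"
```

The preferences never changed; only the (arbitrary, without an answer to 3) choice of scaling factors did.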
Filling in the Gaps
For defining either utilitarianism or egalitarianism, there are three hard-to-define parameters we need to consider:
1) The probability (from behind the veil of ignorance) that you expect to be each person,
2) The zero point of each person's utility function, and
3) The scaling factor of each person's utility function.
Utilitarianism requires both 1 and 3. Egalitarianism requires both 2 and 3. Unfortunately, I think that 1 and 2 are the two we have the most traction on.
1 feels more like an empirical question. It is mixed in with the question of where the priors come from. 1 is like asking "With what prior probability would you expect to have observed being any of these people?"
2 feels like it is trying to define a default world: something that is achievable, so that it is possible to give everyone non-negative utility simultaneously. Maybe we can use something like understanding boundaries to figure out what 2 should be.
On 3, I got nothing, which is unfortunate, because we need 3 to define either of the two proposals. Is there anything reasonable we can do if we only have answers to 1 and 2?
Also, people have intuitions pointing towards both Utilitarianism and Egalitarianism. How are we supposed to decide between them?
Why not Both?
Assume that we magically had an answer to both 1 and 2 above, so we both have a distribution over who we are behind the veil of ignorance, and we also have a zero point for everyone's utility function. Assume further we are allowed to randomize in our action, and that it is possible to give everyone positive utility simultaneously. Then, there exists an answer to 3 such that utilitarianism and egalitarianism recommend the same action.
If we take the weakest notion of egalitarianism, which is just that the minimum utility is maximized, then there might be more than one such scaling. However, if we take the strongest notion of egalitarianism, that also everyone ends up with the same utility (arguably the true spirit of egalitarianism), then we will get existence and uniqueness of the scaling factors and the utilities. (I am not sure what the uniqueness situation is for the tiered egalitarianism proposal I gave above.)
Here is a proof sketch of the existence part:
Start with some arbitrary scaling factor on everyone's utility functions.
Consider the action which maximizes the expected logarithm of expected utility, where the outer expectation is over who you are, and the inner expectation is over randomness in the world or in your action. The expected utilities at this point will be unique, because of the strict concavity of the logarithm. Note that everyone will get positive utility (otherwise some logarithm would be negative infinity).
For each person, rescaling their utility function will only add a constant to the logarithm of their expected utility, and will thus have no effect on maximizing the expected logarithm of expected utility.
Thus, we can rescale everyone's utilities so that everyone gets expected utility 1 when we maximize the expected logarithm of expected utility.
First, we need to see that given this rescaling, the utilitarian choice is to give everyone expected utility 1. Assume for the purpose of contradiction that there was some way to achieve expected expected utility greater than 1. Let $A$ be the (randomized) action that gets everyone expected utility 1, and let $B$ be a better action that gets expected expected utility $u > 1$. If you consider the parameterized action $A_p = (1-p)A + pB$, and look at the derivative of expected expected utility with respect to $p$ at $p = 0$, you get $u - 1 > 0$. However, when everyone gets expected utility 1, the expected logarithm of expected utility will have the same derivative as expected expected utility (since the derivative of $\log x$ at $x = 1$ is 1). Thus this derivative will also be $u - 1 > 0$, contradicting the fact that $A$, the action you get when $p = 0$, maximizes the expected logarithm of expected utility.
Next, let us see that given this rescaling, the egalitarian choice is to give everyone expected utility 1. If it were possible to give anyone expected utility greater than 1 without decreasing anyone's expected utility to less than 1, this would be a utilitarian improvement, which we already said was impossible. Thus, the only way to achieve a worst-case expected utility of 1 is to give everyone expected utility 1.
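The rescaling argument can be checked numerically on a toy two-person frontier (all utility numbers hypothetical): grid-search the mixture maximizing the expected logarithm of expected utility, rescale so both people get expected utility 1 there, and confirm that no mixture then achieves expected expected utility above 1.

```python
import math

# Two pure actions; a randomized action mixes them with probability p.
# Hypothetical positive utilities (zero points already chosen).
def utilities(p):
    u1 = (1 - p) * 1.0 + p * 3.0   # person 1: 1 + 2p
    u2 = (1 - p) * 2.0 + p * 1.0   # person 2: 2 - p
    return u1, u2

# Maximize the expected log of expected utility (uniform over the two people).
grid = [i / 10000 for i in range(10001)]
p_star = max(grid, key=lambda p: sum(math.log(u) for u in utilities(p)) / 2)
# Analytically the maximizer is p = 3/4, giving utilities (2.5, 1.25).

u1, u2 = utilities(p_star)
scale = (1 / u1, 1 / u2)  # rescale so both people get expected utility 1

# After rescaling, no mixture achieves expected expected utility above 1,
# so the log-maximizer is simultaneously utilitarian- and egalitarian-optimal.
best_sum = max((scale[0] * utilities(p)[0] + scale[1] * utilities(p)[1]) / 2
               for p in grid)
```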
The procedure above, maximizing the expected logarithm of expected utility, is an alternate characterization of the Nash bargaining solution, generalized to many players with different weights.
Given a zero point and a feasible set of options closed under random mixtures, the Nash bargaining solution gives a way of combining two (or more) utility functions into the choice of a single option.
The arguments in this post are not the most standard arguments for Nash bargaining. Nash bargaining can also be uniquely characterized with some simple axioms like Pareto optimality and independence of irrelevant alternatives.
There are many reasons to consider the Nash bargaining solution as the default way to combine utility functions when you don't have a principled way to do interpersonal utility comparisons. Even if you had a principled way of doing interpersonal utility comparisons, you might want to do Nash bargaining anyway for the sake of fairness.