First of all, I am not a utilitarian. I'm doing this because I think it is valuable to see how different moral intuitions connect to each other, and it's also fun. 

My argument is based on the following claims:

Is claim: A person’s pleasure/pain can be quantified and compared to another person’s pleasure/pain. 

Ought claim 1: A person ought to follow the laws of rational choice theory.

Ought claim 2: A person's decisions ought to only be informed by how they affect the pain/pleasure of other people and themselves.

Ought claim 3: If a person's decision only affects themself, they ought to increase their own pleasure.

Ought claim 4: Morality treats everyone the same. More precisely, a person ought to make the same decision if they knew they would switch places with another person afterward. 

Ought claim 5: The fairness of an outcome ought to be irrelevant (this is probably the most interesting and contentious assumption).

In its present form, this argument only applies to universes where there are two people living in them. It could probably be generalized to n people, but I haven't had time to look into this.

Here is the setup:

There are two people in a room. Person 1 can choose to take an apple or a banana. Whatever person 1 decides not to take is given to person 2. I will prove that person 1 should make their decision based on total hedonistic utilitarianism, which I will make precise in a moment.

Obviously, there is nothing special about this apple-banana conundrum. You can apply the reasoning I am going to give to any decision which affects two people.

Is claim: A person’s pleasure/pain can be quantified and compared to another person’s pleasure/pain. Picture person 1's different experiences laid out along a single pleasure axis, with more pleasant experiences sitting further along it.

There are 4 pleasure/pain numbers involved in this scenario: 

  • The pleasure of person 1 when they get an apple (A1)
  • The pleasure of person 2 when they get a banana (B2)
  • The pleasure of person 1 when they get a banana (B1)
  • The pleasure of person 2 when they get an apple (A2)

I will show that if:

A1 + B2 > B1 + A2

then person 1 ought to take the apple. Otherwise, they ought to take the banana.

Ought claim 2: A person's decisions ought to only be informed by how they affect the pain/pleasure of other people and themselves.

This implies that person 1's decision should depend only on A1, B2, B1, and A2.

Ought claim 1: A person ought to follow the laws of rational choice theory.

By the Von Neumann–Morgenstern utility theorem, this implies that there exists a function

u(scenario)

that assigns a number to each scenario, representing person 1’s preference for that scenario. Person 1 will choose scenario s1 over scenario s2 when u(s1) > u(s2).

There are two scenarios that person 1 must decide between:

  • Scenario 1: person 1 takes the apple, so person 2 gets the banana.
  • Scenario 2: person 1 takes the banana, so person 2 gets the apple.

By ought claim 2, the only thing that is relevant here is pleasure and pain. So u is a function of x and y, where x is the pleasure of person 1 and y is the pleasure of person 2.

Writing this in math: person 1 ought to choose scenario 1 (the apple) over scenario 2 (the banana) exactly when

u(A1, B2) > u(B1, A2)

Ought claim 4: Morality treats everyone the same. More precisely, a person ought to make the same decision if they know they would switch places with another person afterward. 

Person 1 shouldn't care about the distinction between themself and person 2, which means that:

u(x, y) = u(y, x)

Therefore, there exists a function g such that:

u(x, y) = g(|x - y|, x + y)
This can be shown by using a rotated and scaled coordinate system, parameterized by:

s = x + y
d = x - y

Swapping the two people leaves s unchanged and flips the sign of d, so the symmetry u(x, y) = u(y, x) means u can depend on d only through |d|. Expressing that dependence as a function of |d| = |x - y| and s = x + y gives exactly the form above.
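If that step feels too quick, here is a small numeric check (the symmetric example function is made up purely for illustration) that a symmetric u(x, y) really can be recovered from |x - y| and x + y alone:

```python
# Sketch: a symmetric u(x, y) is determined by |x - y| and x + y alone.

# Example symmetric function (made up for illustration).
def u(x, y):
    return x * y + (x + y) ** 2

# Build g from the rotated/scaled coordinates d = |x - y|, s = x + y.
def g(d, s):
    # Invert the coordinate change: the larger input is (s + d) / 2,
    # the smaller is (s - d) / 2; by symmetry the order doesn't matter.
    return u((s + d) / 2, (s - d) / 2)

# Check that g(|x - y|, x + y) reproduces u(x, y) on a grid of points.
for x in range(-3, 4):
    for y in range(-3, 4):
        assert abs(g(abs(x - y), x + y) - u(x, y)) < 1e-9
print("u(x, y) depends only on |x - y| and x + y")
```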

Let’s take a step back and think about what this result means. |x - y| is the amount by which person 1's and person 2's pleasures differ; this is close to the everyday notion of fairness or justice. x + y is the sum of their pleasures. So person 1’s decision ought to depend only on the fairness of the result and the total pleasure produced.


This is a surprisingly good model of common moral intuitions.

Let's finish the proof.

Ought claim 5: The fairness of an outcome ought to be irrelevant.

This is by far the most contentious claim, but I can't see a way around it. Let me know in the comments if you can think of a more agreeable axiom that would complete the proof.

More formally, this implies that there exists a function h such that:

u(x, y) = h(x + y)
We are almost done.

Ought claim 3: If a decision affects no one but yourself, you ought to increase your own pleasure.

This means that if y is held fixed, then u(x, y) ought to be higher if x is higher:

if x1 > x2, then u(x1, y) > u(x2, y)

Therefore, h is monotonic increasing:

if a > b, then h(a) > h(b)

Bringing back the original decision rule u(A1, B2) > u(B1, A2) and using the fact that u(x, y) = h(x + y) with h monotonic increasing:

u(A1, B2) > u(B1, A2)  ⇔  h(A1 + B2) > h(B1 + A2)  ⇔  A1 + B2 > B1 + A2
So person 1 ought to choose the apple if A1 + B2 > B1 + A2. That's total utilitarianism. QED.
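As a sanity check, here is a minimal sketch (the pleasure numbers and the particular choices of h are made up) showing that any utility function of the form h(x + y), with h monotonic increasing, picks whichever option has the larger pleasure total:

```python
import math

# Hypothetical pleasure numbers for the apple/banana scenario (made up).
A1 = 5.0   # person 1's pleasure from the apple
B2 = 2.0   # person 2's pleasure from the banana
B1 = 3.0   # person 1's pleasure from the banana
A2 = 1.0   # person 2's pleasure from the apple

def choose(h):
    """Pick the option whose scenario has the higher utility u = h(x + y)."""
    u_apple = h(A1 + B2)   # person 1 takes the apple, person 2 gets the banana
    u_banana = h(B1 + A2)  # person 1 takes the banana, person 2 gets the apple
    return "apple" if u_apple > u_banana else "banana"

# Any monotonic increasing h agrees with simply comparing the totals.
for h in (lambda t: t, lambda t: 2 * t + 1, lambda t: math.exp(t)):
    assert choose(h) == ("apple" if A1 + B2 > B1 + A2 else "banana")

print(choose(lambda t: t))  # "apple", since 5 + 2 > 3 + 1
```

Changing h rescales the utilities but never changes which side of the comparison is larger, which is why only the totals matter.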
 

11 comments

I doubt all of your ought claims.

I doubt all of the claims, including the "is" claim.

Me too. The claims are doing all the work, while the argument is a triviality.

I agree that the claims are doing all of the work and that this is not a convincing argument for utilitarianism. I often hear arguments for moral philosophies that make a ton of implicit assumptions. I think that once you make them explicit and actually try to be rigorous, the argument always seems less impressive and less convincing.

I think a key principle involves selecting the right set of ought claims as assumptions. Some are more convincing than others. E.g. I believe "The fairness of an outcome ought to be irrelevant (this is probably the most interesting and contentious assumption)." can be replaced with something like "Frequencies and stochasticities are interchangeable; an X% chance of affecting everyone's utility is equivalent to a 100% chance of affecting X% of people's utility".

This is a much more agreeable assumption. When I get a chance, I'll make sure it can replace the fairness one, add it to the proof, and give you credit.
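As a rough sketch of why those two formulations line up (my own made-up numbers, and it assumes utilities simply add across people), compare the expected total utility of the two options:

```python
# Proposed axiom: an X% chance of affecting everyone's utility is treated the
# same as a 100% chance of affecting X% of the people.
# Both options below have the same expected total utility (numbers made up).
n_people = 100   # hypothetical population size
delta = 2.0      # hypothetical utility change per affected person
p = 0.3          # "X%" = 30%

# Option A: with probability p, every one of the n people gains delta.
expected_total_a = p * (n_people * delta)

# Option B: with certainty, a fraction p of the n people gain delta.
expected_total_b = (p * n_people) * delta

assert abs(expected_total_a - expected_total_b) < 1e-9
print(expected_total_a, expected_total_b)  # 60.0 60.0
```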

Another issue:

By the Von Neumann–Morgenstern utility theorem, this implies that there exists a function:

It implies that there exists some such function. It does not imply there exists a single unique function. And indeed the resulting function is not unique.

If I have two choices A and B, and I rank A above B, then any function that assigns A a higher effective value than B is a valid utility function: one that barely separates the two works, and so does one that separates them by an enormous margin.

Since for any two VNM-agents X and Y, their VNM-utility functions uX and uY are only determined up to additive constants and multiplicative positive scalars, the theorem does not provide any canonical way to compare the two. Hence expressions like uX(L) + uY(L) and uX(L) − uY(L) are not canonically defined, nor are comparisons like uX(L) < uY(L) canonically true or false. In particular, the aforementioned "total VNM-utility" and "average VNM-utility" of a population are not canonically meaningful without normalization assumptions[1].

This, unfortunately, rather undermines the rest of your argument.

  1. ^

    https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem#Incomparability_between_agents
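To make the worry concrete, here is a small sketch with made-up numbers (not taken from anything above): two utility functions that represent exactly the same preferences for one agent, yet disagree about which outcome maximizes the sum with a second agent's utilities.

```python
# Two VNM-valid utility functions for agent X over outcomes O1 and O2.
# Both rank O1 above O2, so both represent the same preferences
# (they differ only by a positive rescaling, which VNM permits).
u_x_small = {"O1": 1.0, "O2": 0.0}
u_x_big = {"O1": 100.0, "O2": 0.0}

# One utility function for agent Y, who ranks O2 above O1.
u_y = {"O1": 0.0, "O2": 10.0}

def best_by_total(u_first, u_second):
    """Outcome with the highest summed utility across the two agents."""
    return max(("O1", "O2"), key=lambda o: u_first[o] + u_second[o])

print(best_by_total(u_x_small, u_y))  # "O2"
print(best_by_total(u_x_big, u_y))    # "O1"
# Same preferences for X either way, but a different "total utility" winner.
```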

I don't think I agree that this undermines my argument. I showed that the utility function of person 1 is of the form h(x + y) where h is monotonic increasing. This respects the fact that the utility function is not unique. 2(x + y) + 1 would qualify, as would 3 log(x + y), etc.

Showing that the utility function must have this form is enough to prove total utilitarianism in this case, since when you compare h(x + y) to h(x' + y'), h becomes irrelevant. It is the same as comparing x + y to x' + y'.

I have three agents, A, B, and C, each with the following preferences between two outcomes, O1 and O2:

  1. Agents A and B prefer O1 over O2.
    1. Agent C prefers O2 over O1.
  2. For any two lotteries <L_p: a p chance of getting O1, otherwise O2> and <L_q: a q chance of getting O1, otherwise O2>:
    1. If p > q:
      1. A and B prefer L_p.
      2. C prefers L_q.
    2. If p = q, all three agents are indifferent between L_p and L_q.
    3. If p < q:
      1. A and B prefer L_q.
      2. C prefers L_p.

(2 is redundant given 1, but I figured it was best to spell it out.)

This satisfies the axioms of the VNM theorem.

I'll give you a freebie here: I am declaring agent C's utility function as part of the problem: it assigns O2 more utility than O1, by some fixed margin. This is compatible with the definition of agent C's preferences, above.

As for agents A and B, I'll give you less of a freebie:
I am declaring as part of the problem that one of the two agents, agent [redacted alpha], has a utility function that assigns O1 more utility than O2, but by a margin smaller than agent C's. This is compatible with the definition of agent [redacted alpha]'s preferences, above.
I am declaring as part of the problem that the other of the two agents, agent [redacted beta], has a utility function that assigns O1 more utility than O2, by a margin larger than agent C's. This is compatible with the definition of agent [redacted beta]'s preferences, above.

Now, consider the following scenarios:

  1. Agent [redacted alpha] and agent C are choosing between O1 and O2:
    1. The resulting utility function is the sum of agent [redacted alpha]'s and agent C's utility functions.
    2. The resulting optimal outcome is outcome O2.
  2. Agent [redacted beta] and agent C are choosing between O1 and O2:
    1. The resulting utility function is the sum of agent [redacted beta]'s and agent C's utility functions.
    2. The resulting optimal outcome is outcome O1.
  3. Agent A and agent C are choosing between O1 and O2:
    1. Is this the same as scenario 1? Or scenario 2?
  4. Agent B and agent C are choosing between O1 and O2:
    1. Is this the same as scenario 1? Or scenario 2?

Please tell me the optimal outcome for 3 and 4.

This assumes that the act of evaluating a utility function has no utility cost.

I do not agree with this (implicit) assumption.

Good point, I overlooked this.