First of all, I am not a utilitarian. I'm doing this because I think it is valuable to see how different moral intuitions connect to each other, and it's also fun.

My argument is based on the following claims:

**Is claim:** A person’s pleasure/pain can be quantified and compared to another person’s pleasure/pain.

**Ought claim 1:** A person ought to follow the laws of rational choice theory.

**Ought claim 2:** A person's decisions ought to only be informed by how they affect the pain/pleasure of other people and themselves.

**Ought claim 3:** If a person's decision only affects themself, they ought to increase their own pleasure.

**Ought claim 4:** Morality treats everyone the same. More precisely, a person ought to make the same decision even if they knew they would switch places with another person afterward.

**Ought claim 5:** The fairness of an outcome ought to be irrelevant (this is probably the most interesting and contentious assumption).

In its present form, this argument only applies to universes where there are two people living in them. It could probably be generalized to n people, but I haven't had time to look into this.

Here is the setup:

There are two people in a room. Person 1 can choose to take an apple or a banana. Whatever person 1 decides not to take is given to person 2. I will prove that person 1 should make their decision based on total hedonistic utilitarianism, which I will make precise in a moment.

Obviously, there is nothing special about this apple-banana conundrum. You can apply the reasoning I am going to give to any decision which affects two people.

**Is claim:** A person’s pleasure/pain can be quantified and compared to another person’s pleasure/pain. Imagine person 1’s different experiences laid out along a single pleasure axis, with more pleasant experiences placed further along it.

There are 4 pleasure/pain numbers involved in this scenario:

- The pleasure of person 1 when they get an apple (A1)
- The pleasure of person 2 when they get a banana (B2)
- The pleasure of person 1 when they get a banana (B1)
- The pleasure of person 2 when they get an apple (A2)

I will show that if:

A1 + B2 > B1 + A2

Then person 1 ought to take the apple. Otherwise, they ought to take the banana.

**Ought claim 2:** A person's decisions ought to only be informed by how they affect the pain/pleasure of other people and themselves.

This implies that person 1's decision should only depend on A1, B2, B1, and A2. Let's now construct a function that captures this decision.

**Ought claim 1:** A person ought to follow the laws of rational choice theory.

By the __Von Neumann–Morgenstern utility theorem__, this implies that there exists a function

u(s)

that assigns a number to a scenario s, representing person 1’s preference for that scenario. Person 1 will choose scenario s1 over scenario s2 when u(s1) > u(s2).

There are two scenarios that person 1 must decide between:

- Person 1 takes the apple: person 1 gets pleasure A1 and person 2 gets pleasure B2.
- Person 1 takes the banana: person 1 gets pleasure B1 and person 2 gets pleasure A2.

By ought claim 2, the only thing that is relevant here is pleasure and pain. So, u is a function of x and y where x is the pleasure of person 1 and y is the pleasure of person 2.

Writing this in math: u = u(x, y), and person 1 ought to take the apple exactly when

u(A1, B2) > u(B1, A2)

**Ought claim 4:** Morality treats everyone the same. More precisely, a person ought to make the same decision even if they knew they would switch places with another person afterward.

Person 1 shouldn't care about the distinction between themself and person 2, which means that:

u(x, y) = u(y, x)

Therefore, there exists a function g such that:

u(x, y) = g(|x − y|, x + y)

This can be shown by using a rotated and scaled coordinate system, parameterized by:

s = x + y, d = x − y

In these coordinates, x = (s + d)/2 and y = (s − d)/2. Swapping x and y leaves s unchanged and flips the sign of d, so by the symmetry above, u can only depend on d through |d|.

Let’s take a step back and think about what this result means. |x - y| is the amount by which person 1 and person 2’s pleasure differ. This is similar to the concept of *fairness *or *justice*. x + y is the sum of their pleasures. This means that person 1’s decision ought to only depend on the fairness of the result and the total pleasure produced.
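The coordinate change above can be spot-checked numerically. This is just an illustrative sketch: the particular symmetric u below is my own made-up example, not from the argument. Any u with u(x, y) = u(y, x) can be evaluated through a function g of (|x − y|, x + y):

```python
# Sketch: a symmetric u(x, y) is determined by (|x - y|, x + y).
# Given s = x + y and d = x - y, we recover x = (s + d)/2, y = (s - d)/2;
# symmetry means flipping the sign of d changes nothing, so only |d| matters.

def u(x, y):
    # an arbitrary symmetric example (my assumption, for illustration only)
    return x * y + (x + y) ** 2

def g(abs_diff, total):
    # reconstruct (x, y) from the rotated coordinates and evaluate u
    x = (total + abs_diff) / 2
    y = (total - abs_diff) / 2
    return u(x, y)

for x, y in [(1.0, 4.0), (4.0, 1.0), (-2.0, 3.0), (3.0, -2.0)]:
    assert u(x, y) == g(abs(x - y), x + y)
print("u(x, y) = g(|x - y|, x + y) holds for the sampled points")
```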

This is a surprisingly good model of common moral intuitions.

Let's finish the proof.

**Ought claim 5:** The fairness of an outcome ought to be irrelevant.

This is by far the most contentious claim, but I can't see a way around it. Let me know in the comments if you can think of a more agreeable axiom that would complete the proof.

More formally, this implies that there exists a function h such that:

u(x, y) = h(x + y)

We are almost done.

**Ought claim 3:** If a decision affects no one but yourself, you ought to increase your own pleasure.

This means that if y is held fixed, then u(x, y) ought to be higher if x is higher:

x > x′ implies u(x, y) > u(x′, y)

Therefore, h is monotonic increasing:

a > b implies h(a) > h(b)

Bringing back the original definition of u(x, y) and using the fact that h is monotonic increasing:

u(A1, B2) > u(B1, A2) ⟺ h(A1 + B2) > h(B1 + A2) ⟺ A1 + B2 > B1 + A2

So person 1 ought to choose the apple if A1 + B2 > B1 + A2. That's total utilitarianism. QED.
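The final decision rule is simple enough to write down directly. Here is a minimal sketch in Python (the example numbers are made up purely for illustration):

```python
# Person 1 ought to take the apple exactly when A1 + B2 > B1 + A2,
# i.e. when taking the apple produces more total pleasure.

def person1_choice(a1, b2, b1, a2):
    """Which fruit person 1 ought to take under total utilitarianism.

    a1: person 1's pleasure from the apple, b2: person 2's from the banana
    b1: person 1's pleasure from the banana, a2: person 2's from the apple
    """
    return "apple" if a1 + b2 > b1 + a2 else "banana"

# Illustrative numbers (assumptions, not from the argument):
print(person1_choice(5, 3, 4, 2))  # 5 + 3 > 4 + 2, so "apple"
print(person1_choice(1, 1, 4, 2))  # 1 + 1 < 4 + 2, so "banana"
```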

I doubt all of your ought claims.

I doubt all of the claims, including the "is" claim.

Me too. The claims are doing all the work, while the argument is a triviality.

I agree that the claims are doing all of the work and that this is not a convincing argument for utilitarianism. I often hear arguments for moral philosophies that make a ton of implicit assumptions. I think that once you make them explicit and actually try to be rigorous the argument always seems less impressive, and less convincing.

I think a key principle involves selecting the right set of ought claims as assumptions. Some are more convincing than others. E.g. I believe "The fairness of an outcome ought to be irrelevant (this is probably the most interesting and contentious assumption)." can be replaced with something like "Frequencies and stochasticities are interchangeable; an X% chance of affecting everyone's utility is equivalent to a 100% chance of affecting X% of people's utility".

This is a much more agreeable assumption. When I get a chance, I'll verify that it can replace the fairness one, add it to the proof, and give you credit.

Another issue:

It implies that there exists some such function. It does *not* imply there exists a single unique function. And indeed the resulting function is *not* unique. If I have two choices A and B, and I rank A > B, u(P_A) = 10P_A might be one valid function (effective value of 10 for A and 0 for B). But u(P_A) = 2P_A + 1 might be another (effective value of 3 for A and 1 for B).
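The two example utility functions can be checked mechanically. A small sketch (the helper name `ranks` is mine, not from the comment) showing that both functions induce the same preference ordering:

```python
# VNM utilities are only unique up to positive affine transformation,
# so many different functions encode the same preference ranking.

def ranks(u, outcomes):
    # sort outcomes from most to least preferred under utility u
    return sorted(outcomes, key=u, reverse=True)

outcomes = [1.0, 0.0]        # P_A = 1 represents outcome A, P_A = 0 outcome B
u1 = lambda p: 10 * p        # the first example: values 10 and 0
u2 = lambda p: 2 * p + 1     # the second example: values 3 and 1
assert ranks(u1, outcomes) == ranks(u2, outcomes)  # both rank A above B
print("same ranking under both utility functions")
```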

This, unfortunately, rather undermines the rest of your argument.

https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem#Incomparability_between_agents

I don't think I agree that this undermines my argument. I showed that the utility function of person 1 is of the form h(x + y) where h is monotonic increasing. This respects the fact that the utility function is not unique. 2(x + y) + 1 would qualify, as would 3 log(x + y), etc.

Showing that the utility function must have this form is enough to prove total utilitarianism in this case since when you compare h(x + y) to h(x'+ y'), h becomes irrelevant. It is the same as comparing x + y to x' + y'.
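The claim that h drops out of the comparison can be spot-checked. A small sketch (the particular increasing functions and test cases are arbitrary choices of mine):

```python
# For any strictly increasing h, comparing h(x + y) with h(x' + y')
# gives the same verdict as comparing x + y with x' + y' directly.
import math

def prefers_first(h, x, y, xp, yp):
    # person 1 prefers (x, y) to (xp, yp) iff h(x + y) > h(xp + yp)
    return h(x + y) > h(xp + yp)

increasing_hs = [lambda t: t, lambda t: 2 * t + 1, lambda t: math.exp(t)]
cases = [(5, 3, 4, 2), (1, 1, 2, 3), (0, 2, 2, 0)]
for h in increasing_hs:
    for x, y, xp, yp in cases:
        assert prefers_first(h, x, y, xp, yp) == (x + y > xp + yp)
print("h is irrelevant to the comparison")
```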

I have three agents A, B, and C, each with the following preferences between two outcomes a and b:

(2 is redundant given 1, but I figured it was best to spell it out.)

This satisfies the axioms of the VNM theorem.

I'll give you a freebie here: I am declaring that agent C's utility function is u_C(P_a) = −2P_a as part of the problem. This is compatible with the definition of agent C's preferences, above.

As for agents A and B, I'll give you less of a freebie:

I am declaring as part of the problem that one of the two agents, agent [redacted alpha], has the following utility function: u_[redacted alpha](P_a) = 3P_a. This is compatible with the definition of agent [redacted alpha]'s preferences, above.

I am declaring as part of the problem that the other of the two agents, agent [redacted beta], has the following utility function: u_[redacted beta](P_a) = P_a. This is compatible with the definition of agent [redacted beta]'s preferences, above.

Now, consider the following scenarios:

Please tell me the optimal outcome for 3 and 4.

This assumes that the act of evaluating a utility function has no utility cost.

I do not agree with this (implicit) assumption.

Good point, I overlooked this.