This is a pretty obvious point, which is sort of made here, but I haven't seen anyone lay it out explicitly.

Under average utilitarianism, the morality of having a child depends on whether, a billion light years away, there are any sentient aliens, how many there are, and whether their average happiness is greater or less than your child's would be. This is despite your actions having zero impact on them, and vice versa.
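To make that concrete, here is a toy calculation (all counts and happiness numbers are invented, and the helper name is mine): the sign of the child's effect on the universal average flips with facts about beings outside your light cone.

```python
# Toy numbers only: whether one extra child raises or lowers the universal
# average depends on beings you can never causally affect.

def universal_average(alien_count, alien_avg, human_count=8e9, human_avg=5.0,
                      extra_child_happiness=None):
    """Average happiness of everyone in the universe, optionally with one more child."""
    total = alien_count * alien_avg + human_count * human_avg
    count = alien_count + human_count
    if extra_child_happiness is not None:
        total += extra_child_happiness
        count += 1
    return total / count

for alien_avg in (9.0, 2.0):  # very happy aliens vs. miserable aliens
    without = universal_average(1e12, alien_avg)
    with_child = universal_average(1e12, alien_avg, extra_child_happiness=6.0)
    print(f"alien average {alien_avg}: child changes universal average by {with_child - without:+.2e}")

# With happy aliens (9.0), the child (6.0) lowers the average; with miserable
# aliens (2.0), the same child raises it, even though nothing about the child
# or their life has changed.
```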

This, far more than the sadistic conclusion, seems to me to sound the death knell of average utilitarianism. If your formalisation of utilitarianism both completely diverges from intuition in a straightforward situation (should I have kids?) and is incalculable not just in practice but even in theory (it requires knowing the contents of the whole universe), what's the point?


All consequentialism is kinda non-local: The morality of an action depends on its far-future effects. If you can reason about far-future effects, you should be able to reason about the average happiness of aliens.

Ben

At least with consequentialism the morality of an action only depends on its forward light cone. With average utilitarianism the problem is significantly more extreme.

(I haven't read or thought deeply about details of utilitarianism, this might be a 101 level question.)

Does it work to have a variant where whether one action is better than another depends on "the average utility of your future light cone conditional on each action"?

Then it would be likely bad to have a kid who'd have lower utility than the average human alive today, because (I would guess) that's likely to lower the average utility of your future light cone.
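If I've understood the proposal, a minimal sketch looks something like this (utilities are made up and the helper name is mine):

```python
# Sketch of the proposed variant: rank actions by the average utility of the
# agents inside your own future light cone, conditional on each action.
from statistics import mean

def lightcone_average(utilities):
    """Average utility over the agents this action can still causally affect."""
    return mean(utilities)

# Made-up utilities of reachable agents under each action; the prospective
# child's utility (4.0) is below the current average of those agents (6.0).
dont_have_child = [5.0, 6.0, 7.0]
have_child = [5.0, 6.0, 7.0, 4.0]

better = "have the child" if lightcone_average(have_child) > lightcone_average(dont_have_child) else "don't"
print(better)  # "don't": 5.5 < 6.0, matching the intuition described above
```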

Sure, it would work. But why? Why on earth would you say that? It's just a completely random definition of a utility function.

The universe has no concept of ethics. Ethics are purely in the mind.

The purpose of utilitarianism is to try to formalise some of our intuitions about ethics so that we can act consistently. If the formalised utility function doesn't match our intuitions, why bother?

Fwiw it doesn't feel random to me. It feels like what I get if I think (briefly, shallowly) about, like...

  • What are the intuitions that seem to lead to someone advocating average utilitarianism?
  • Okay, and average utilitarianism as naively described clearly doesn't match those for the reasons you describe.
  • This adjustment feels like maybe it gets closer to matching those intuitions?

(But also I think a formalized utility function is never going to match all our intuitions and that's not necessarily a problem.)

As Joe Rocca implied, since utilitarianism is a version of consequentialism, we only care about the consequences of our actions given what we know now (that is, our forward lightcone, given our past lightcone), so

Under average utilitarianism, the morality of having a child depends on whether, a billion light years away, there are any sentient aliens

is straightforwardly false.

Average utilitarianism is consequentialist, but not locally consequentialist. Your actions affect the global average utility of the universe, but you have no way of measuring the global average utility of the universe. Trying to make any version of utilitarianism local is even more doomed than utilitarianism in general.

For example, you can aggregate utility only over your own future light-cone, which does address the locally consequentialist objection. But now you have an unknown number of agents who all have different future light-cones, and therefore different aggregated measures of utility for the same consequences. That subjectivity throws away the one positive trait that any form of utilitarianism offers in the first place: that the aggregated utility (assuming complete information, and using whatever aggregation rule happens to apply) is objective.
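A toy illustration of that subjectivity (names, utilities, and light-cone memberships all invented): two evaluators score the very same outcome, but each averages over a different set of reachable agents, so they get different answers.

```python
# Same world, same consequences; each evaluator averages only over the agents
# inside their own future light cone, so the aggregate is no longer objective.

world_utilities = {"alice": 3.0, "bob": 9.0, "charlie": 5.0}

# Hypothetical light-cone memberships: which agents each evaluator can affect.
reachable = {
    "evaluator_1": {"alice", "bob"},      # average = 6.0
    "evaluator_2": {"bob", "charlie"},    # average = 7.0
}

for evaluator, agents in reachable.items():
    avg = sum(world_utilities[a] for a in agents) / len(agents)
    print(f"{evaluator}: {avg}")
```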

Well, that is a very good point, huh. There is no such thing as non-local consequentialism in the world we live in, so global consequentialism ought to be discarded as an unworkable model in normative ethics. I assume that local consequentialism is still meaningful. 

subjectivity throws away the one positive trait that any form of utilitarianism offers in the first place: that the aggregated utility (assuming complete information, and using whatever aggregation rule happens to apply) is objective.

Huh, where does this "objective" part come from? One can be locally consequentialist and act based on all available information, subject to the laws of physics, no?

Not all consequentialism is utilitarianism.

The main principle of utilitarianism (beyond basic consequentialism) is that the consequences can be measured by some form of utility that can be objectively aggregated. That is, that there is some way to combine these utilities into a totally ordered "greatest good for the greatest number" instead of just considering all points on the Pareto frontier to be mutually incomparable.
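For instance (utility profiles invented), two outcomes can be Pareto-incomparable while an aggregation rule, here a simple sum or mean, still ranks them:

```python
# Outcome A gives person 1 more utility, outcome B gives person 2 more:
# neither Pareto-dominates the other, but an aggregation rule still orders them.

outcome_a = (3.0, 1.0)
outcome_b = (2.0, 3.0)

def pareto_dominates(x, y):
    return all(xi >= yi for xi, yi in zip(x, y)) and any(xi > yi for xi, yi in zip(x, y))

print(pareto_dominates(outcome_a, outcome_b), pareto_dominates(outcome_b, outcome_a))  # False False
print(sum(outcome_a), sum(outcome_b))          # 4.0 vs 5.0  (total view prefers B)
print(sum(outcome_a) / 2, sum(outcome_b) / 2)  # 2.0 vs 2.5  (average view also prefers B)
```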

The various types of utilitarianism can be viewed as specific means to define and measure the utility of various types of consequences, and variations in how to carry out the aggregation.

I'm not arguing for any particular view, but the causal reach of possible policies/actions should determine the reach of the averaging function, right? It makes as little sense to include other possible universes in our calculation of optimal policies for this universe, as it does to try to coordinate policies between two space-time points (within the same universe) that can't causally interact with one another.

If we're trying to find and implement good policies (according to whatever definition of "good"), then in deciding on goodness of a policy, we should only care about the things that we can actually affect, and in proportion to the degree to which we can affect them.

That wipes out the only good thing about any form of utilitarianism: that whatever aggregated measure of utility you have, it is objective. Anyone with complete information of the universe could in principle collect the individual utilities for all inhabitants, aggregate them according to the given rules, and get the same result.

[...] the only good thing about any form of utilitarianism: [...] Anyone with complete information of the universe could [...]

It doesn't make sense to use an impossibility as part of the judging criteria for the goodness of something.

An action/utility function can only ever be a function over (at most) all information available to the agent.