*Restatement of: If you don't know the name of the game, just tell me what I mean to you. Alternative to: Why you must maximize expected utility. Related to: Harsanyi's Social Aggregation Theorem.*

*Summary: This article describes a theorem, previously described by Stuart Armstrong, that tells you to maximize the expectation of a linear aggregation of your values. Unlike the von Neumann-Morgenstern theorem, this theorem gives you a reason to behave rationally. ^{1}*

The von Neumann-Morgenstern theorem is great, but it is descriptive rather than prescriptive. It tells you that if you obey four axioms, then you are an optimizer. (Let us call an "optimizer" any agent that always chooses an action that maximizes the expected value of some function of outcomes.) But you are a human and you don't obey the axioms; the VNM theorem doesn't say anything about you.

There are Dutch-book theorems that give us reason to want to obey the four VNM axioms: E.g., if we violate the axiom of transitivity, then we can be money-pumped, and we don't want that; therefore we shouldn't want to violate the axiom of transitivity. The VNM theorem is somewhat helpful here: It tells us that the *only* way to obey the four axioms is to be an optimizer.^{2}
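
To make the money pump concrete, here is a minimal sketch (the outcomes A, B, C and the one-dollar fee are made up for illustration) of how an agent with circular preferences can be cycled around and charged at every step:

```python
# A minimal money-pump sketch.  The outcomes A, B, C and the one-dollar fee are
# made up: the agent prefers B to A, C to B, and A to C, and will pay a small
# fee for each "upgrade" to an outcome it prefers over the one it holds.
upgrades = {"A": "B", "B": "C", "C": "A"}

holding, money, fee = "A", 0.0, 1.0
for _ in range(6):               # offer six upgrades in a row
    holding = upgrades[holding]  # the agent accepts the preferred outcome...
    money -= fee                 # ...and pays the fee each time
print(holding, money)            # prints: A -6.0  (back where it started, poorer)
```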

So now you have a reason to become an optimizer. But there is an infinitude of decision-theoretic utility functions^{3} to adopt — which, if any, ought you adopt? And there is an even bigger problem: If you are not already an optimizer, then any utility function that you're considering will recommend actions that run counter to your preferences!

To give a silly example, suppose you'd rather be an astronaut when you grow up than a mermaid, and you'd rather be a dinosaur than an astronaut, and you'd rather be a mermaid than a dinosaur. You have circular preferences. There's a decision-theoretic utility function that says

$$u(\text{mermaid}) < u(\text{astronaut}) < u(\text{dinosaur}),$$

which preserves some of your preferences, but if you have to choose between being a mermaid and being a dinosaur, it will tell you to become a dinosaur, even though you really really want to choose the mermaid. There's another decision-theoretic utility function that will tell you to pass up being a dinosaur in favor of being an astronaut even though you really really don't want to. Not being an optimizer means that any rational decision theory will tell you to do things you don't want to do.
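
A quick way to see the problem, as a sketch (the brute-force check below is purely illustrative): enumerate every possible utility ordering of the three options and count how many of the circular preferences it preserves.

```python
from itertools import permutations

options = ("mermaid", "astronaut", "dinosaur")
# The circular preferences: astronaut over mermaid, dinosaur over astronaut,
# mermaid over dinosaur.
preferences = [("astronaut", "mermaid"), ("dinosaur", "astronaut"), ("mermaid", "dinosaur")]

for ranking in permutations(options):
    utility = {option: rank for rank, option in enumerate(ranking)}  # later = higher utility
    preserved = sum(utility[a] > utility[b] for a, b in preferences)
    print(ranking, "preserves", preserved, "of 3 preferences")
# No ordering preserves all three, so any decision-theoretic utility function
# must override at least one of the circular preferences.
```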

So why would you ever want to be an optimizer? What theorem could possibly convince you to become one?

# Stuart Armstrong's theorem

Suppose there is a set $P$ (for "policies") and some functions $V_1, \dots, V_n$ ("values") from $P$ to $\mathbb{R}$. We want these functions to satisfy the following **convexity property**:

For any policies $\pi_1, \pi_2 \in P$ and any $p \in [0, 1]$, there is a policy $\pi \in P$ such that for all $i$, we have $V_i(\pi) = p \, V_i(\pi_1) + (1 - p) \, V_i(\pi_2)$.

For policies $\pi_1, \pi_2$, say that $\pi_2$ is a *Pareto improvement* over $\pi_1$ if for all $i$, we have $V_i(\pi_2) \geq V_i(\pi_1)$. Say that it is a *strong Pareto improvement* if in addition there is some $i$ for which $V_i(\pi_2) > V_i(\pi_1)$. Call a policy $\pi$ a *Pareto optimum* if no policy is a strong Pareto improvement over it.

**Theorem.** Suppose $P$ and $V_1, \dots, V_n$ satisfy the convexity property. If a policy in $P$ is a Pareto optimum, then it is a maximum of the function $c_1 V_1 + \cdots + c_n V_n$ for some nonnegative constants $c_1, \dots, c_n$.
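
To see what the conclusion looks like concretely, here is a sketch with made-up numbers: three base policies scored on two values (mixtures of them are also policies, so the convexity property holds), and a small linear program using scipy that finds nonnegative weights $c_1, c_2$ for which a Pareto-optimal policy maximizes $c_1 V_1 + c_2 V_2$.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical value vectors (V_1, V_2) for three base policies.
V = np.array([[3.0, 1.0],    # policy A
              [1.0, 4.0],    # policy B  (a Pareto optimum in this toy example)
              [0.5, 0.5]])   # policy C
star = 1                     # index of the candidate Pareto optimum

# Find weights c >= 0 (normalized to sum to 1) with c . v* >= c . v_j for every
# base policy j; a linear function maximized at every base policy is also
# maximized over all of their mixtures.
A_ub = V - V[star]           # constraints (v_j - v*) . c <= 0
b_ub = np.zeros(len(V))
result = linprog(c=np.zeros(V.shape[1]), A_ub=A_ub, b_ub=b_ub,
                 A_eq=np.ones((1, V.shape[1])), b_eq=[1.0],
                 bounds=[(0, None)] * V.shape[1])
print(result.x)              # one valid set of aggregation weights (c_1, c_2)
```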

This theorem previously appeared in If you don't know the name of the game, just tell me what I mean to you. I don't know whether there is a source prior to that post that uses the hyperplane separation theorem to justify being an optimizer. The proof is basically the same as the proofs of the complete class theorem, the hyperplane separation theorem, and the second fundamental theorem of welfare economics. Harsanyi's utilitarian theorem has a similar conclusion, but it assumes that you already have a decision-theoretic utility function. The second fundamental theorem of welfare economics is virtually the same theorem, but it's interpreted in a different way.

# What does the theorem mean?

Suppose you are a consequentialist who subscribes to Bayesian epistemology. And in violation of the VNM axioms, you are torn between multiple incompatible decision-theoretic utility functions. Suppose you can list all the things you care about, and the list looks like this:

1. Your welfare
2. Your family's welfare
3. Everyone's total welfare
4. The continued existence of human civilization
5. All mammals' total welfare
6. Your life satisfaction
7. Everyone's average welfare
8. ...

Suppose further that you can quantify each item on that list with a function $V_i$ from world-histories to real numbers, and you want to optimize for each function, all other things being equal. E.g., $V_1(x)$ is large if $x$ is a world-history where your welfare is great; and $V_5(x)$ somehow counts up the welfare of all mammals in world-history $x$. If the expected value of $V_1$ is at stake (but none of the other values are at stake), then you want to act so as to maximize the expected value of $V_1$.^{4} And if only $V_5$ is at stake, you want to act so as to maximize the expected value of $V_5$. What I've said so far doesn't specify what you do when you're forced to trade off value 1 against value 5.
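
As a sketch of what "quantifying each item with a function from world-histories to real numbers" might look like in code (the world-history representation and the field names are entirely hypothetical):

```python
from dataclasses import dataclass

@dataclass
class WorldHistory:
    your_welfare: float    # a made-up summary of how well your life goes
    mammal_welfares: list  # the welfare of each mammal in this history

def V1(x: WorldHistory) -> float:
    """Value 1: your welfare."""
    return x.your_welfare

def V5(x: WorldHistory) -> float:
    """Value 5: the total welfare of all mammals."""
    return sum(x.mammal_welfares)
```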

If you're VNM-rational, then you are an optimizer whose decision-theoretic utility function is a linear aggregation $c_1 V_1 + \cdots + c_n V_n$ of your values and you just optimize for that function. (The $c_i$ are nonnegative constants.) But suppose you make decisions in a way that does not optimize for any such aggregation.

You will make many decisions throughout your life, depending on the observations you make and on random chance. If you're capable of making precommitments and we don't worry about computational difficulties, it is as if today you get to choose a policy for the rest of your life that specifies a distribution of actions for each sequence of observations you can make.^{5} Let $P$ be the set of all possible policies. For each policy $\pi \in P$ and each $i$, let us say that $V_i(\pi)$ is the expected value of $V_i$ given that we adopt policy $\pi$. Let's assume that these expected values are all finite. Note that if $\pi$ is a policy where you make every decision by maximizing a decision-theoretic utility function $U$, then the policy itself maximizes the expected value of $U$, compared to other policies.
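
Here is a sketch of the policy formalism (the toy environment, observations, and payoffs below are all made up): a policy maps the observations seen so far to a distribution over actions, and $V_i(\pi)$ can be estimated as the average of $V_i$ over world-histories generated while following $\pi$.

```python
import random

def policy(observations):
    """A hypothetical policy: a distribution over actions for each observation sequence."""
    if "storm" in observations:
        return {"stay_home": 0.9, "go_out": 0.1}
    return {"stay_home": 0.2, "go_out": 0.8}

def sample_history(policy):
    """Generate one world-history (observation, action) while following the policy."""
    obs = "storm" if random.random() < 0.3 else "sunny"
    dist = policy([obs])
    action = random.choices(list(dist), weights=list(dist.values()))[0]
    return obs, action

def your_welfare(history):
    """A made-up value function on world-histories."""
    obs, action = history
    return 1.0 if (obs == "storm" and action == "stay_home") else 0.5

def value_of_policy(V, policy, n=100_000):
    """Monte Carlo estimate of V_i(pi): the expected value of V_i under pi."""
    return sum(V(sample_history(policy)) for _ in range(n)) / n

print(value_of_policy(your_welfare, policy))  # about 0.3*0.9*1.0 + (1 - 0.3*0.9)*0.5 = 0.635
```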

In order to apply the theorem, we must check that the convexity property holds. That's easy: If $\pi_1$ and $\pi_2$ are two policies and $p \in [0, 1]$, then the mixed policy $\pi$, where today you randomly choose to follow policy $\pi_1$ with probability $p$ and policy $\pi_2$ with probability $1 - p$, is also a policy; and by linearity of expectation, $V_i(\pi) = p \, V_i(\pi_1) + (1 - p) \, V_i(\pi_2)$ for all $i$, just as the convexity property requires.
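
A sketch of that mixing step with made-up numbers: the value vector of the coin-flip policy is exactly the convex combination of the two value vectors.

```python
import numpy as np

# Hypothetical value vectors (V_1(pi), ..., V_n(pi)) for two policies.
v_pi1 = np.array([3.0, 1.0, 2.5])
v_pi2 = np.array([1.0, 4.0, 2.0])

def mixed_policy_values(v1, v2, p):
    """Value vector of the policy 'follow pi_1 with probability p, else pi_2'.
    By linearity of expectation it is the convex combination of the two."""
    return p * v1 + (1 - p) * v2

print(mixed_policy_values(v_pi1, v_pi2, 0.25))  # [1.5   3.25  2.125]
```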

What the theorem says is that if you really care about the values on that list (and the other assumptions in this post hold), then there are linear aggregations $c_1 V_1 + \cdots + c_n V_n$ that you have reason to start optimizing for. That is, there is a set of linear aggregations such that if you choose one of them and start optimizing for it, you will get *more* expected welfare for yourself, *more* expected welfare for others, *less* risk of the fall of civilization, ....

Adopting one of these decision-theoretic utility functions is a Pareto improvement over your current policy, in the sense that doing so will get you more of the things you value without sacrificing any of them.

What's more, once you've chosen a linear aggregation, optimizing for it is easy. The ratio $c_i / c_j$ is a price at which you should be willing to trade off value $j$ against value $i$. E.g., a particular hour of your time should be worth some number of marginal dollars to you.
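
To spell out the arithmetic behind that price: suppose an action lowers value $j$ by $\Delta_j > 0$ and raises value $i$ by $\Delta_i > 0$, leaving the other values unchanged. The aggregation $c_1 V_1 + \cdots + c_n V_n$ goes up exactly when

$$c_i \, \Delta_i - c_j \, \Delta_j > 0, \qquad \text{i.e.} \qquad \Delta_j < \frac{c_i}{c_j} \, \Delta_i,$$

so you should accept the trade whenever you give up fewer than $c_i / c_j$ units of value $j$ per unit of value $i$ gained.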

*Addendum: Wei_Dai and other commenters point out that the set of decision-theoretic utility functions that will Pareto dominate your current policy very much depends on your beliefs. So a policy that seems Pareto dominant today will not have seemed Pareto dominant yesterday. It's not clear if you should use your current (posterior) beliefs for this purpose or your past (prior) beliefs.*

# More applications

There's a lot more that could be said about the applications of this theorem. Each of the following bullet points could be expanded into a post of its own:

- Philanthropy: There's a good reason to not split your charitable donations among charities.
- Moral uncertainty: There's a good reason to linearly aggregate conflicting desires or moral theories that you endorse.
- Population ethics: There's a good reason to aggregate the welfare or decision-theoretic utility functions of a population, even though there's no canonical way of doing so.
- Population ethics: It's difficult to sidestep Parfit's Repugnant Conclusion if your only desiderata are total welfare and average welfare.

^{ 1}This post evolved out of discussions with Andrew Critch and Julia Galef. They are not responsible for any deficiencies in the content of this post. The theorem appeared previously in Stuart Armstrong's post If you don't know the name of the game, just tell me what I mean to you.

^{ 2}That is, the VNM theorem says that being an optimizer is *necessary* for obeying the axioms. The easier-to-prove converse of the VNM theorem says that being an optimizer is *sufficient*.

^{ 3}Decision-theoretic utility functions are completely unrelated to hedonistic utilitarianism.

^{ 4}More specifically, if you have to choose between a bunch of actions, and for all $j \neq i$ the expected value of $V_j$ is independent of which action you take, then you'll choose an action that maximizes the expected value of $V_i$.

^{ 5}We could formalize this by saying that for each sequence of observations $o_1, \dots, o_t$, the policy determines a distribution over the possible actions at time $t$.