Apr 17, 2010
When interpreted conservatively, the von Neumann-Morgenstern rationality axioms and utility theorem are an indispensable tool for the normative study of rationality, deserving of many thought experiments and attentive decision theory. It's one more reason I'm glad to be born after the 1940s. Yet there is apprehension about its validity, aside from merely confusing it with Bentham utilitarianism (as highlighted by Matt Simpson). I want to describe not only what VNM utility is really meant for, but also a contextual reinterpretation of its meaning, so that it may hopefully be used more frequently, confidently, and appropriately.
The idea of John von Neumann and Oskar Morgenstern is that, if you behave a certain way, then it turns out you're maximizing the expected value of a particular function. Very cool! And their description of "a certain way" is very compelling: a list of four reasonable-seeming axioms. If you haven't already, check out the Von Neumann-Morgenstern utility theorem, a mathematical result which makes their claim rigorous, and true.
VNM utility is a decision utility, in that it aims to characterize the decision-making of a rational agent. One great feature is that it implicitly accounts for risk aversion: not risking $100 for a 10% chance to win $1000 and 90% chance to win $0 just means that for you, utility($100) > 10%·utility($1000) + 90%·utility($0).
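To make the risk-aversion point concrete, here's a minimal Python sketch. The concave square-root utility function is a made-up assumption purely for illustration; the VNM theorem itself doesn't dictate any particular shape:

```python
import math

# Hypothetical concave (risk-averse) utility function, assumed purely for
# illustration; the VNM theorem doesn't dictate its shape.
def utility(dollars):
    return math.sqrt(dollars)

# The gamble: 10% chance of $1000, 90% chance of $0.
eu_gamble = 0.10 * utility(1000) + 0.90 * utility(0)
u_sure = utility(100)

print(u_sure > eu_gamble)  # True: 10.0 vs roughly 3.16, so the sure $100 wins
```

For this agent, declining the gamble is not a failure of rationality; it's exactly what maximizing expected utility of a concave function looks like.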
But as the Wikipedia article explains nicely, VNM utility is:
[ETA] Additionally, in the VNM theorem the probabilities are understood to be known to the agent as they are presented, and to come from a source of randomness whose outcomes are not significant to the agent. Without these assumptions, its proof doesn't work.
Because of (4), one often considers marginal utilities of the form U(X)-U(Y), to cancel the ambiguity in the additive constant b. This is totally legitimate, and faithful to the mathematical conception of VNM utility.
Because of (5), people often "normalize" VNM utility to eliminate ambiguity in both constants, so that utilities are unique numbers that can be added across multiple agents. One way is to declare that every person in some situation values $1 at 1 utilon (a fictional unit of measure of utility), and $0 at 0. I think a more meaningful and applicable normalization is to fix mean and variance with respect to certain outcomes (next section).
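Here is a minimal sketch of that mean-and-variance normalization, assuming a fixed list of outcome utilities (the numbers are invented, and `normalize` is a hypothetical helper, not anything standard). Since VNM utility is only defined up to U' = aU + b, fixing the mean and standard deviation of the rescaled utilities pins down both constants:

```python
# A minimal sketch of the normalization, assuming a fixed list of outcome
# utilities (the numbers are invented). U' = a*U + b is chosen so that the
# rescaled utilities have a given mean and standard deviation, which pins
# down both constants.
def normalize(utilities, mean=0.0, std=1.0):
    n = len(utilities)
    m = sum(utilities) / n
    s = (sum((u - m) ** 2 for u in utilities) / n) ** 0.5
    a = std / s
    b = mean - a * m
    return [a * u + b for u in utilities]

print(normalize([10.0, 20.0, 60.0]))  # rescaled to mean 0, std 1
```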
Because of (6), characterizing the altruism of a VNM-rational agent by how he sacrifices his own VNM utility is the wrong approach. Indeed, such a sacrifice is a contradiction. Kahneman suggests1, and I agree, that something else should be added or subtracted to determine the total, comparative, or average well-being of individuals. I'd call it "welfare", to avoid confusing it with VNM utility. Kahneman calls it E-utility, for "experienced utility", a connotation I'll avoid. Intuitively, this is certainly something you could sacrifice for others, or have more of compared to others. True, a given person's VNM utility is likely highly correlated with her personal "welfare", but I wouldn't consider it an accurate approximation.
So if not collective welfare, then what could cross-agent comparisons or sums of VNM utilities indicate? Well, they're meant to characterize decisions, so one meaningful application is to collective decision-making:
Suppose decisions are to be made by or on behalf of a group. The decision could equally be about the welfare of group members, or something else. E.g.,
Say each member expresses a VNM utility value—a decision utility—for each outcome, and the decision is made to maximize the total. Over time, mandating or adjusting each member's expressed VNM utilities to have a given mean and variance could ensure that no one person dominates all the decisions by shouting giant numbers all the time. Incidentally, this is a way of normalizing their utilities: it will eliminate ambiguity in the constants a and b in (4) of section 1, which is exactly what we need for cross-agent comparisons and sums to make sense.
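The scheme above can be sketched in a few lines of Python. The member names, outcome utilities, and the `collective_choice` helper are all invented for illustration, and members are assumed not to be exactly indifferent between everything (so the standard deviation is nonzero):

```python
# Toy sketch: every member reports utilities for the same outcomes, each
# report is rescaled to mean 0 and standard deviation 1, and the outcome
# with the largest total wins. All numbers are invented.
def normalize(us):
    m = sum(us) / len(us)
    s = (sum((u - m) ** 2 for u in us) / len(us)) ** 0.5
    return [(u - m) / s for u in us]

def collective_choice(reports):
    normed = [normalize(r) for r in reports]
    totals = [sum(col) for col in zip(*normed)]
    return max(range(len(totals)), key=totals.__getitem__)

quiet = [0, 1, 2]      # mild preference for outcome 2
loud = [1000, 0, 0]    # shouted preference for outcome 0
print(collective_choice([quiet, loud]))  # 2: the giant numbers buy no extra weight
```

After normalization, the loud member's thousand-utilon shout carries no more weight than the quiet member's modest ranking, which is the point of fixing mean and variance.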
Without judging whether this is a good system, the two decision examples illustrate how allotment of normalized VNM utility signifies sharing power in a collective decision, rather than sharing well-being. Well-being, in my opinion and in Kahneman's, is better described by other metrics.
As a normative theory, I think VNM utility's biggest shortcoming is in its Archimedean (or "Continuity") axiom, which, as we'll see, actually isn't very limiting. In its harshest interpretation, it says that if you won't sacrifice a small chance at X in order to get Y over Z, then you're not allowed to prefer Y over Z. For example, if you prefer green socks over red socks, then you must be willing to sacrifice some small, real probability of attaining immortality to favor that outcome. I wouldn't say this is necessary to be considered rational. Eliezer has noted implicitly in this post (excerpt below) that he also has a problem with the Archimedean requirement.
I think this can be fixed directly with reinterpretation. For a given context C of possible outcomes, let's intuitively define a "strong preference" in that context to be one which is comparable in some non-zero ratio to the strongest preferences in the context. For example, other things being equal, you might consistently prefer green socks to red socks, but this may be completely undetectable on a scale that includes immortal happiness, making it not a "strong preference" in that context. You might think of the socks as "infinitely less significant", but infinity is confusing. Perhaps less daunting is to think of them as a "strictly secondary concern" (see next section).
I suggest that the four VNM axioms can work more broadly as axioms for strong preference in a given context. That is, we consider VNM-preference and VNM-utility as describing only the strong preferences of an agent in that context.
Then VNM-indifference, which they denote by equality, would simply mean a lack of strong preference in the given context, i.e. not caring enough to sacrifice likelihoods of important things. This is a Contextual Strength (CS) interpretation of VNM utility theory: in bigger contexts, VNM-preference indicates stronger preferences and weaker indifferences.
(CS) Henceforth, I explicitly distinguish the terms VNM-preference and VNM-indifference as those axiomatized by VNM, interpreted as above.
[ETA] To see the broad applicability of VNM utility, let's examine the flexibility of a theory without the Archimedean axiom, and see that the two differ only mildly in their results:
In the socks vs. immortality example, we could suppose that context "Big" includes such possible outcomes as immortal happiness, human extinction, getting socks, and ice-cream, and context "Small" includes only getting socks and ice-cream. You could have two VNM-like utility functions: USmall for evaluating gambles in the Small context, and UBig for the Big context. You could act to maximize EUBig whenever possible (EU=expected utility), and when two gambles have the same EUBig, you could default to choosing between them by their EUSmall values. This is essentially acting to maximize the pair (EUBig, EUSmall), ordered lexicographically, meaning that a difference in the former value EUBig trumps a difference in the latter value. We thus have a sensible numerical way to treat EUBig as "infinitely more valuable" without really involving infinities in the calculations; there is no need for that interpretation if you don't like it, though.
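The lexicographic rule above can be sketched directly, since Python compares tuples lexicographically; the outcomes and utility numbers here are invented for illustration:

```python
def expected(u, gamble):
    """Expected utility of gamble, given as a list of (probability, outcome) pairs."""
    return sum(p * u[o] for p, o in gamble)

def choose(gambles, u_big, u_small):
    # Python compares tuples lexicographically, which is exactly the
    # (EU_Big, EU_Small) ordering described above: EU_Big differences
    # trump EU_Small differences.
    return max(gambles, key=lambda g: (expected(u_big, g), expected(u_small, g)))

# Invented utilities: u_big only "sees" immortality; u_small ranks the socks.
u_big = {"immortality": 1.0, "socks": 0.0, "ice_cream": 0.0}
u_small = {"immortality": 0.0, "socks": 1.0, "ice_cream": 0.5}

g1 = [(0.5, "immortality"), (0.5, "socks")]
g2 = [(0.5, "immortality"), (0.5, "ice_cream")]

# Equal EU_Big (0.5 each), so the tie is broken by EU_Small: g1 wins.
print(choose([g1, g2], u_big, u_small) is g1)  # True
```

No infinities are involved; the "infinitely more valuable" talk is cashed out entirely by the tuple ordering.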
Since we have the VNM axioms to tell us when someone is maximizing a single expected value, you might ask: can we give some nice weaker axioms under which someone is maximizing a lexicographic tuple of expectations?
Hearteningly, this has been taken care of, too. By weakening—indeed, effectively eliminating—the Archimedean axiom, Melvin Hausner2 developed this theory in 1952 for the RAND Corporation, and Peter Fishburn3 provides a nice exposition of Hausner's axioms. So now we have Hausner-rational agents maximizing Hausner utility.
[ETA] But the difference between Hausner and VNM utility comes into effect only in the rare event that you know you can't distinguish EUBig values; otherwise the Hausner-rational behavior is to "keep thinking" to make sure you're not sacrificing EUBig. The most plausible scenario I can imagine where this might actually happen to a human is a decision under a precisely known time limit, like, say, sniping on one of two simultaneous eBay auctions for socks. CronoDAS might say the time limit creates "noise in your expectations". If the time runs out and you have failed to distinguish which sock color results in higher chances of immortality or other EUBig concerns, then I'd say it wouldn't be irrational to make the choice according to some secondary utility EUSmall that any detectable difference in EUBig would otherwise trump.
Moreover, it turns out3 that the primary, i.e. most dominant, function in the Hausner utility tuple behaves almost exactly like VNM utility, and has the same uniqueness property (up to the constants a and b). So except in rare circumstances, you can just think in terms of VNM utility and get the same answer, and even the rare exceptions involve considerations that are necessarily "unimportant" relative to the context.
Thus, a lot of apparent flexibility in Hausner utility theory might simply demonstrate that VNM utility is more applicable to you than it first appeared. This situation favors the (CS) interpretation: even when the Archimedean axiom isn't quite satisfied, we can use VNM utility liberally as indicating "strong" preferences in a given context.
"A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom." (Wikipedia) But I think the independence axiom (which Hausner also assumes) is a non-issue if we're talking about "strong preferences". The following, in various forms, is what seems to be the best argument against it:
Suppose a parent has no VNM preference between S: her son gets a free car, and D: her daughter gets it. In the original VNM formulation, this is written "S=D". She is also presented with a third option, F=.5S+.5D. Descriptively, a fair coin would be flipped, and her son or daughter gets a car accordingly.
By writing S=.5S+.5S and D=.5D+.5D, the original independence axiom says that S=D implies S=F=D, so she must be VNM-indifferent between F and the others. However, a desire for "fair chances" might result in preferring F, which we might want to allow as "rational".
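A tiny numerical check makes the tension explicit: once U(S) = U(D), every mixture of S and D gets the same expected utility, so a strict preference for the fair coin F has nowhere to live inside expected-utility maximization. The utility value 7.0 is arbitrary:

```python
# With U(S) == U(D), every p-mixture of S and D has the same expected
# utility, so expected-utility maximization cannot strictly prefer the
# fair coin F. The value 7.0 is arbitrary.
def eu_mixture(u_s, u_d, p):
    return p * u_s + (1 - p) * u_d

u_s = u_d = 7.0
print(eu_mixture(u_s, u_d, 0.5))   # 7.0, same as S or D alone
print(eu_mixture(u_s, u_d, 0.25))  # 7.0 again, for any other mixture
```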
[ETA] I think the most natural fix within the VNM theory is to just say S' and D' are the events "car is awarded to son/daughter based on a coin toss", which are slightly better than S and D themselves, and that F is really 0.5S' + 0.5D'. Unfortunately, such modifications undermine the applicability of the VNM theorem, which implicitly assumes that the source of probabilities itself is insignificant to the outcomes for the agent. Luckily, Bolker4 has devised an axiomatic theory whose theorems will apply without such assumptions, at the expense of some uniqueness results. I'll have another occasion to post on this later.
Anyway, under the (CS) interpretation, the requirement "S=F=D" just means the parent lacks a VNM-preference, i.e. a strong preference, so it's not too big of a problem. Assuming she's VNM-rational just means that, in the implicit context, she is unwilling to make certain probabilistic sacrifices to favor F over S and D.
You might say VNM tells you to "Be the fairness that you want to see in the world."
This contextual strength interpretation of VNM utility is directly relevant to resolving Eliezer's point linked above:
"... The utility function is not up for grabs. I love life without limit or upper bound: There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.0001% chance of living a googolplex years and an 80% chance of living forever."
This could just indicate that Eliezer ranks immortality on a scale that trumps finite lifespan preferences, à la Hausner utility theory. In a context of differing positive likelihoods of immortality, these other factors are not strong enough to constitute VNM-preferences.
As well, Stuart Armstrong has written a thoughtful article "Extreme risks: when not to use expected utility", and argues against Independence. I'd like to recast his ideas context-relatively, which I think alleviates the difficulty:
In his paragraph 5, he considers various existential disasters. In my view, this is a case for a "Big" context utility function, not a case against independence. If you were gambling only between existential disasters, then you might have an "existential-context utility function", UExistential. For example, would you prefer
If you prefer the latter enough to make some comparable sacrifice in the «nothing» term, contextual VNM just says you assign a higher UExistential to «extinction by asteroids» than to «extinction by nuclear war».5 There's no need to be freaked out by assigning finite numbers here, since for example Hausner would allow the value of UExistential to completely trump the value of UEveryday if you started worrying about socks or ice cream. You could be both extremely risk averse regarding existential outcomes, and absolutely unwilling to gamble with them for more trivial gains.
In his paragraph 6, Stuart talks about giving out (necessarily normalized) VNM utility to people, which I described in section 2 as a model for sharing power rather than well-being. I think he gives a good argument against blindly maximizing the total normalized VNM utility of a collective in a one-shot decision:
"...imagine having to choose between a project that gave one util to each person on the planet, and one that handed slightly over twelve billion utils to a randomly chosen human and took away one util from everyone else. If there were trillions of such projects, then it wouldn’t matter what option you chose. But if you only had one shot, it would be peculiar to argue that there are no rational grounds to prefer one over the other, simply because the trillion-iterated versions are identical."
(Indeed, practically, the mean and variance normalization I described doesn't apply to provide the same "fairness" in a one-shot deal.)
I'd call the latter of Stuart's projects an unfair distribution of power in a collective decision process, something you might personally assign a low VNM utility to, and therefore avoid. Thus I wouldn't consider it an argument not to use expected utility, but an argument not to blindly favor total normalized VNM utility of a population in your own decision utility function. The same argument—Parfit's Repugnant Conclusion—is made against total normalized welfare.
The expected utility model of rationality is alive and normatively kicking, and is highly adaptable to modelling very weak assumptions of rationality. I hope this post can serve to marginally persuade others in that direction.
References, notes, and further reading:
1 Kahneman, Wakker and Sarin, 1997, Back to Bentham? Explorations of experienced utility, The Quarterly Journal of Economics.
4 Bolker, 1967, A simultaneous axiomatization of utility and probability, Philosophy of Science Association.
5 As wedrifid pointed out, you might instead just prefer uncertainty in your impending doom. Just as in section 5, neither VNM nor Hausner can model this usefully (i.e. in a way that allows calculating utilities), though I don't consider this much of a limitation. In fact, I'd consider it a normative step backward to admit "rational" agents who actually prefer uncertainty in itself.