Followup to: The Savage theorem and the Ellsberg paradox
In the previous post, I presented a simple version of Savage's theorem, and I introduced the Ellsberg paradox. At the end of the post, I mentioned a strong Bayesian thesis, which can be summarised: "There is always a price to pay for leaving the Bayesian Way."1 But not always, it turns out. I claimed that there was a method that is Ellsberg-paradoxical, therefore non-Bayesian, but can't be money-pumped (or "Dutch booked"). I will present the method in this post.
I'm afraid this is another long post. There's a short summary of the method at the very end, if you want to skip the jibba jabba and get right to the specification. Before trying to money-pump it, I'd suggest reading at least the two highlighted dialogues.
To recap the Ellsberg paradox: there's an urn with 30 red balls and 60 other balls that are either green or blue, in unknown proportions. Most people, when asked to choose between betting on red or on green, choose red, but, when asked to choose between betting on red-or-blue or on green-or-blue, choose green-or-blue. For some people this behaviour persists even after due calculation and reflection. This behaviour is non-Bayesian, and is the prototypical example of ambiguity aversion.
There were some major themes that came out in the comments on that post. One theme was that I Fail Technical Writing Forever. I'll try to redeem myself.
Another theme was that the setup given may be a bit too symmetrical. The Bayesian answer would be indifference, and really, you can break ties however you want. However, the paradoxical preferences are typically strict, rather than just tie-breaking behaviour. (And when it's not strict, we shouldn't call it ambiguity aversion.) One suggestion was to add or remove a couple of red balls. Speaking for myself, I would still make the paradoxical choices.
A third theme was that ambiguity aversion might be a good heuristic if betting against someone who may know something you don't. Now, no such opponent was specified, and speaking for myself, I'm not inferring one when I make the paradoxical choices. Still, let me admit that it's not contrived to infer a mischievous experimenter from the Ellsberg setup. One commentator puts it better than me:
Betting generally includes an adversary who wants you to lose money so they win it. Possibly in psychology experiments [this might not apply] ... But generally, ignoring the possibility of someone wanting to win money off you when they offer you a bet is a bad idea.
Now betting is supposed to be a metaphor for options with possibly unknown results. In which case sometimes you still need to account for the possibility that the options were made available by an adversary who wants you to choose badly, but less often. And you should also account for the possibility that they were from other people who wanted you to choose well, or that the options were not determined by any intelligent being or process trying to predict your choices, so you don't need to account for an anticorrelation between your choice and the best choice. Except for your own biases.
We can take betting on the Ellsberg urn as a stand-in for various decisions under ambiguous circumstances. Ambiguity aversion can be Bayesian if we assume the right sort of correlation between the options offered and the state of the world, or the right sort of correlation between the choice made and the state of the world. In that case just about anything can be Bayesian. But sometimes the opponent will not have extra information, nor extra power. There might not even be any opponent as such. If we assume there are no such correlations, then ambiguity aversion is non-Bayesian.
The final theme was: so what? Ambiguity aversion is just another cognitive bias. One commentator specifically complained that I spent too much time talking about various abstractions and not enough time talking about how ambiguity aversion could be money-pumped. I will fix that now: I claim that ambiguity aversion cannot be money-pumped, and the rest of this post is about my claim.
I'll start with a bit of name-dropping and some whig history, to make myself sound more credible than I really am2. In the last twenty years or so many models of ambiguity averse reasoning have been constructed. Choquet expected utility3 and maxmin expected utility4 were early proposed models of ambiguity aversion. Later multiplier preferences5 were the result of applying the ideas of robust control to macroeconomic models. This results in ambiguity aversion, though it was not explicitly motivated by the Ellsberg paradox. More recently, variational preferences6 generalises both multiplier preferences and maxmin expected utility. What I'm going to present is a finitary case of variational preferences, with some of my own amateur mathematical fiddling for rhetorical purposes.
The starting idea is simple enough, and may have already occurred to some LW readers. Instead of using a prior probability for events, can we not use an interval of probabilities? What should our betting behaviour be for an event with probability 50%, plus or minus 10%?
There are some different ways of filling in the details. So to be quite clear, I'm not proposing the following as the One True Probability Theory, and I am not claiming that the following is descriptive of many people's behaviour. What follows is just one way of making ambiguity aversion work, and perhaps the simplest way. This makes sense, given my aim: I should just describe a simple method that leaves the Bayesian Way, but does not pay.
Now, sometimes disjoint ambiguous events together make an event with known probability. Or even a certainty, as in an event and its negation. If we want probability intervals to be additive (and let's say that we do) then what we really want are oriented intervals. I'll use +- or -+ (pronounced: plus-or-minus, minus-or-plus) to indicate two opposite orientations. So, if P(X) = 1/2 +- 1/10, then P(not X) = 1/2 -+ 1/10, and these add up to 1 exactly.
Such oriented intervals are equivalent to ordered pairs of numbers. Sometimes it's more helpful to think of them as oriented intervals, but sometimes it's more helpful to think of them as pairs. So 1/2 +- 1/10 is the pair (3/5,2/5). And 1/2 -+ 1/10 is (2/5,3/5), the same numbers in the opposite order. The sum of these is (1,1), which is 1 exactly.
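The pair form is the convenient one to compute with. Here is a minimal sketch in Python, using exact fractions; the helper names (`interval`, `add`) are mine, purely for illustration:

```python
from fractions import Fraction as F

def interval(centre, delta):
    """Oriented interval 'centre +- delta', encoded as the pair (centre + delta, centre - delta)."""
    return (centre + delta, centre - delta)

def add(p, q):
    """Oriented intervals add componentwise."""
    return (p[0] + q[0], p[1] + q[1])

x = interval(F(1, 2), F(1, 10))       # 1/2 +- 1/10  ->  (3/5, 2/5)
not_x = interval(F(1, 2), -F(1, 10))  # 1/2 -+ 1/10  ->  (2/5, 3/5)
assert x == (F(3, 5), F(2, 5))
assert add(x, not_x) == (F(1), F(1))  # the pair (1, 1): 1 exactly
```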
You may wonder, if we can use ordered pairs, can we use triples, or longer lists? Yes, this method can be made to work with those too. And we can still think in terms of centre, length, and orientation. The orientation can go off in all sorts of directions, instead of just two. But for my purposes, I'll just stick with two.
You might also ask, can we set P(X) = 1/2 +- 1/2? No, this method just won't handle it. A restriction of this method is that neither of the pair can be 0 or 1, except when they're both 0 or both 1. The way we will be using these intervals, 1/2 +- 1/2 would be the extreme case of ambiguity aversion. 1/2 +- 1/10 represents a lesser amount of ambiguity aversion, a sort of compromise between worst-case and average-case behaviour.
To decide among bets (having the same two outcomes), compute their probability intervals. Sometimes, the intervals will not overlap. Then it's unambiguous which is more likely, so it's clear what to pick. In general, whether they overlap or not, pick the one with the largest minimum -- though we will see there are three caveats when they do overlap. If P(X) = 1/2 +- 1/10, we would be indifferent between a bet on X and on not X: the minimum is 2/5 in either case. If P(Y) = 1/2 exactly, then we would strictly prefer a bet on Y to a bet on X.
Which leads to the first caveat: sometimes, given two options, it's strictly better to randomise. Let's suppose Y represents a fair coin. So P(Y) = 1/2 exactly, as we said. But also, Y is independent of X. P(X and Y) = 1/4 +- 1/20, and so on. This means that P((X and not Y) or (Y and not X)) = 1/2 exactly also. So we're indifferent between a bet on X and a bet on not X, but we strictly prefer the randomised bet.
In general, randomisation will be strictly better if you have two choices with overlapping intervals of opposite orientations. The best randomisation ratio will be the one that gives a bet with zero-length interval.
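In the pair encoding, a zero-length interval is a pair of equal numbers, and randomising with an independent fair coin averages the two opposite orientations away. A hedged sketch (`scale` and `add` are illustrative helpers, not part of any of the cited models):

```python
from fractions import Fraction as F

def scale(p, c):
    return (c * p[0], c * p[1])

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

x = (F(3, 5), F(2, 5))      # P(X) = 1/2 +- 1/10
not_x = (F(2, 5), F(3, 5))  # P(not X) = 1/2 -+ 1/10

# Y is an independent fair coin, so conjoining with Y (or not Y) halves each component.
randomised = add(scale(x, F(1, 2)), scale(not_x, F(1, 2)))

assert randomised == (F(1, 2), F(1, 2))        # 1/2 exactly: the orientations cancel
assert min(randomised) > min(x) == min(not_x)  # 1/2 > 2/5: strictly prefer randomising
```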
Now let us reconsider the Ellsberg urn. We did say the urn can be a metaphor for various situations. Generally these situations will not be symmetrical. But, even in symmetrical scenarios, we should still re-think how we apply the principle of indifference. I argue that the underlying idea is really this: if our information has a symmetry, then our decisions should have that same symmetry. If we switch green and blue, our information about the Ellsberg urn doesn't change. The situation is indistinguishable, so we should behave the same way. It follows that we should be indifferent between a bet on green and a bet on blue. Then, for the Bayesian, it follows that P(red) = P(green) = P(blue) = 1/3. Period.
But for us, there is a degree of freedom, even in this symmetrical situation. We know what the probability of red is, so of course P(red) = 1/3 exactly. But we can set, say7, P(green) = 1/3 +- 1/9, and P(blue) = 1/3 -+ 1/9. So we get P(red or green) = 2/3 +- 1/9, P(red or blue) = 2/3 -+ 1/9, P(green or blue) = 2/3 exactly, and of course P(red or green or blue) = 1 exactly.
So: red is 1/3 exactly, but the minimum of green is 2/9. (green or blue) is 2/3 exactly, but the minimum of (red or blue) is 5/9. So choose red over green, and (green or blue) over (red or blue). That's the paradoxical behaviour. Note that neither pair of choices offered in the Ellsberg paradox has the type of overlap that favours randomisation.
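Checking the urn arithmetic in Python (a sketch; the pair encoding is as before, so 1/3 +- 1/9 is the pair (4/9, 2/9)):

```python
from fractions import Fraction as F

red = (F(1, 3), F(1, 3))    # 1/3 exactly
green = (F(4, 9), F(2, 9))  # 1/3 +- 1/9
blue = (F(2, 9), F(4, 9))   # 1/3 -+ 1/9

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

# Decision rule: among same-stakes bets, pick the largest minimum.
assert min(red) > min(green)                        # 1/3 > 2/9: choose red
assert min(add(green, blue)) > min(add(red, blue))  # 2/3 > 5/9: choose green-or-blue
```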
Once we have a decision procedure for the two-outcome case, then we can tack on any utility function, as I explained in the previous post. The result here is what you would expect: we get oriented expected utility intervals, obtained by multiplying the oriented probability intervals by the utility. When deciding, we pick the one whose interval has the largest minimum. So for example, a bet which pays 15U on red (using U for "utils", the abstract units of measurement of the utility function) has expected utility 5U exactly. A bet which pays 18U on green has expected utility 6U +- 2U, the minimum is 4U. So pick the bet on red over that.
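With a payoff attached, each component of the probability pair is scaled by the payoff to give the expected utility pair. The example above, sketched in code:

```python
from fractions import Fraction as F

def scale(p, u):
    """Scale an oriented probability interval by a payoff (in utils)."""
    return (u * p[0], u * p[1])

red = (F(1, 3), F(1, 3))    # 1/3 exactly
green = (F(4, 9), F(2, 9))  # 1/3 +- 1/9

eu_red = scale(red, 15)      # 5U exactly
eu_green = scale(green, 18)  # 6U +- 2U

assert eu_red == (F(5), F(5))
assert eu_green == (F(8), F(4))
assert min(eu_red) > min(eu_green)  # 5U > 4U: pick the bet on red
```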
Operationally, probability is associated with the "fair price" at which we are willing to bet. A probability interval indicates that there is no fair price. Instead we have a spread: we buy bets at their low price and sell at their high price. At least, we do that if we have no outstanding bets, or more generally, if the expected utility interval on our outstanding bets has zero-length. The second caveat is that if this interval has length, then it affects our price: we also sell bets of the same orientation at their low price, and buy bets of the opposite orientation at their high price, until the length of this interval is used up. The midpoint of the expected utility interval on our outstanding bets will be irrelevant.
This can be confusing, so it's time for an analogy.
If you are Bayesian and risk-neutral (and if bets pay in "utils" rather than cash, you are risk-neutral by definition) then outstanding bets have no effect on further betting behaviour. However, if you are risk-averse, as is the most common case, then this is no longer true. The more money you've already got on the line, the less willing you will be to bet.
But besides risk attitude, there could also be interference effects from non-monetary payouts. For example, if you are dealing in boots, then you wouldn't buy a single boot for half the price of a pair, and neither would you sell one of your boots for half the price of a pair. Unless you happened to already have unmatched boots, then you would sell those at a lower price, or buy boots of the opposite orientation at a higher price, until you had no more unmatched boots. If you were otherwise risk-neutral with respect to boots, then your behaviour would not depend on the number of pairs you have, just on the number and orientation of your unmatched boots.
This closely resembles the non-Bayesian behaviour above. In fact, for the Ellsberg urn, we could just say that a bet on red is worth a pair of boots, a bet on green is worth two left boots, and a bet on blue is worth two right boots. Without saying anything further, it's clear that we would strictly prefer red (a pair) over green (two lefts), but we would also strictly prefer green-or-blue (two pairs) over red-or-blue (one left and three rights). That's the paradoxical behaviour, but you know you can't money-pump boots.
A: I'll buy that pair of boots for 30 zorkmids.
So much for the static case. But what do we do with new information? How do we handle conditional probabilities?
We still get P(A|B) by dividing P(A and B) by P(B). It will be easier to think in terms of pairs here. So for example P(red) = 1/3 exactly = (1/3,1/3) and P(red or green) = 2/3 +- 1/9 = (7/9,5/9), so P(red|red or green) = (3/7,3/5) = 18/35 -+ 3/35. And similarly P(green|red or green) = (1/3 +- 1/9)/(2/3 +- 1/9) = 17/35 +- 3/35.
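In the pair form, conditioning is componentwise division. A quick check of the numbers above (illustrative Python, exact fractions):

```python
from fractions import Fraction as F

def cond(a_and_b, b):
    """P(A|B) = P(A and B) / P(B), dividing the pairs componentwise."""
    return (a_and_b[0] / b[0], a_and_b[1] / b[1])

red = (F(1, 3), F(1, 3))           # 1/3 exactly
green = (F(4, 9), F(2, 9))         # 1/3 +- 1/9
red_or_green = (F(7, 9), F(5, 9))  # 2/3 +- 1/9

assert cond(red, red_or_green) == (F(3, 7), F(3, 5))    # 18/35 -+ 3/35
assert cond(green, red_or_green) == (F(4, 7), F(2, 5))  # 17/35 +- 3/35
```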
This rule covers the dynamic passive case, where we update probabilities based on what we observe, before betting. The third and final caveat is in the active case, when information comes in between bets. Now, we saw that the length and orientation of the interval on expected utility of outstanding bets affects further betting behaviour. There is actually a separate update rule for this quantity. It is about as simple as it gets: do nothing. The interval can change when we make choices, and its midpoint can shift due to external events, but its length and orientation do not update.
You might expect the update rule for this quantity to follow from the way the expected utility updates, which follows from the way probability updates. But it has a mind of its own. So even if we are keeping track of our bets, we'd still need to keep track of this extra variable separately.
Sometimes it may be easier to think in terms of the total expected utility interval of our outstanding bets, but sometimes it may be easier to think of this in terms of having a "virtual" interval that cancels the change in the length and orientation of the "real" expected utility interval. The midpoint of this virtual interval is irrelevant and can be taken to always be zero. So, on update, compute the prior expected utility interval of outstanding bets, subtract the posterior expected utility interval from it, and add this difference to the virtual interval. Reset its midpoint to zero, keeping only the length and orientation.
That can also be confusing, so let's have another analogy.
Yo' mama's so illogical...
I recently came across this example by Mark Machina:
M: Children, I only have one treat, I can only give it to one of you.
Instead of giving the treat to either child, she strictly prefers to toss a coin and give the treat to the winner. But after the coin is tossed, she strictly prefers to give the treat to the winner rather than toss again.
This cannot be explained in terms of maximising expected utility, in the typical sense of "utility". And of course only known probabilities are involved here, so there's no question as to whether her beliefs are probabilistically sophisticated or not. But it could be said that she is still maximising the expected value of an extended objective function. This extended objective function does not just consider who gets a treat, but also considers who "had a fair chance". She is unfair if she gives the treat to either child outright, but fair if she tosses a coin. That fairness doesn't go away when the result of the coin toss is known.
Or something like that. There are surely other ways of dissecting the mother's behaviour. But no matter what, it's going to have to take the coin toss into account, even though the coin, in and of itself, has no relevance to the situation.
Let's go back to the urn. Green and blue have the type of overlap that favours randomisation: P((green and heads) or (blue and tails)) = 1/3 exactly. A bet paying 9U on this event has expected utility of 3U exactly. Let's say we took this bet. Now say the coin comes up heads. We can update the probabilities as per above. The answer is that P(green) = 1/3 +- 1/9 as it was before. That makes sense because it's an independent event: knowing the result of the coin toss gives no information about the urn. The difference is that we now have an outstanding bet that pays 9U if the ball is green. The expected utility would therefore be 3U +- 1U. Except, the expected utility interval was zero-length before the coin was tossed, so it remains zero-length. Equivalently, the virtual interval becomes -+ 1U, so that the effective total is 3U exactly. (In this example, the midpoint of the expected utility interval didn't change either. That's not generally the case.) A bet randomised on a new coin toss would have expected utility 3U, plus the virtual interval of -+ 1U, for an effective total of 3U -+ 1U. So we would strictly prefer to keep the bet on green rather than re-randomise.
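The bookkeeping in this example can be sketched as code. The function names are mine; the rule implemented is the do-nothing update described earlier: the virtual interval absorbs any change in the length and orientation of the outstanding bets' expected utility interval, and its midpoint is always reset to zero.

```python
from fractions import Fraction as F

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def sub(p, q):
    return (p[0] - q[0], p[1] - q[1])

def drop_midpoint(p):
    """Keep only the length and orientation: recentre the interval on zero."""
    mid = (p[0] + p[1]) / 2
    return (p[0] - mid, p[1] - mid)

def update_virtual(virtual, prior_eu, posterior_eu):
    """On new information, the virtual interval absorbs the change in the
    real expected utility interval, so the effective total doesn't update."""
    return drop_midpoint(add(virtual, sub(prior_eu, posterior_eu)))

prior = (F(3), F(3))      # 3U exactly: the randomised bet, before the toss
posterior = (F(4), F(2))  # 3U +- 1U: after heads, the bet rides on green
virtual = update_virtual((F(0), F(0)), prior, posterior)

assert virtual == (F(-1), F(1))                 # -+ 1U
assert add(posterior, virtual) == (F(3), F(3))  # effective total: 3U exactly
```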
Let's compare this with a trivial example: let's say we took a bet that pays 9U if the ball drawn from the urn is green. The expected utility of this bet is 3U +- 1U. For some unrelated reason, a coin is tossed, and it comes up heads. The coin has nothing to do with the urn or my bet. I still have a bet of 9U on green, and its expected utility is still 3U +- 1U.
But the difference between these two examples is just in the counterfactual: if the coin had come up tails, in the first example I would have had a bet of 9U on blue, and in the second example I would have had a bet of 9U on green. But the coin came up heads, and in both examples I end up with a bet of 9U on green. The virtual interval has some spooky dependency on what could have happened, just like "had a fair chance". It is the ghost of a departed bet.
I expect many on LW are wondering what happened. There was supposed to be a proof that anything that isn't Bayesian can be punished. Actually, this threat comes with some hidden assumptions, which I hope these analogies have helped to illustrate. A boot is an example of something which has no fair price, even if a pair of boots has one. A mother with two children and one treat is an example where some counterfactuals are not forgotten. The hidden assumptions fail in our case, just as they can fail in these other contexts where Bayesianism is not at issue. This can be stated more rigorously8, but that is basically how it's possible. Now We Know. And Knowing is Half the Battle.
Appendix A: method summary