Followup to: A summary of Savage's foundation for probability and utility.
In 1961, Daniel Ellsberg, most famous for leaking the Pentagon Papers, published the decision-theoretic paradox which is now named after him ^{1}. It is a cousin to the Allais paradox. Both involve violations of an independence or separability principle, but they go off in different directions: one is a violation of expected utility, while the other is a violation of subjective probability. The Allais paradox has been discussed on LW before, but when I do a search it seems that the first discussion of the Ellsberg paradox on LW was my comments on the previous post ^{2}. It seems to me that, from a Bayesian point of view, the Ellsberg paradox is the greater evil.
But I should first explain what I mean by a violation of expected utility versus subjective probability, and for that matter, what I mean by Bayesian. I will explain a special case of Savage's representation theorem, which focuses on the subjective probability side only. Then I will describe Ellsberg's paradox. In the next episode, I will give an example of how not to be Bayesian. If I don't get voted off the island at the end of this episode.
Rationality and Bayesianism
Bayesianism is often taken to involve the maximisation of expected utility with respect to a subjective probability distribution. I would argue this label only sticks to the subjective probability side. But mainly, I wish to make a clear division between the two sides, so I can focus on one.
Subjective probability and expected utility are certainly related, but they're still independent. You could be perfectly willing and able to assign belief numbers to all possible events as if they were probabilities. That is, your belief assignment obeys all the laws of probability, including Bayes' rule, which is, after all, what the ism is named for. You could do all that, but still maximise something other than expected utility. In particular, you could combine subjective probabilities with prospect theory, which has also been discussed on LW before. In that case you may display Allais-paradoxical behaviour but, as we will see, not Ellsberg-paradoxical behaviour. The rationalists might excommunicate you, but it seems to me you should keep your Bayesianist card.
On the other hand, your behaviour could be incompatible with any subjective probability distribution. But you could still maximise utility with respect to something other than subjective probability. In particular, when faced with known probabilities, you would be maximising expected utility in the normal sense. So you cannot exhibit any Allais-paradoxical behaviour, because the Allais paradox involves only objective lotteries. But you may exhibit, as we will see, Ellsberg-paradoxical behaviour. I would say you are not Bayesian.
So a non-Bayesian, even the strictest frequentist, can still be an expected utility maximiser, and a perfect Bayesian need not be an expected utility maximiser. What I'm calling Bayesianist is just the idea that we should reason with our subjective beliefs the same way that we reason with objective probabilities. This has also been called having "probabilistically sophisticated" beliefs, if you prefer to avoid the B-word, or don't like the way I'm using it.
In a lot of what follows, I will bypass utility by only considering two outcomes. Utility functions are only unique up to a constant offset and a positive scale factor. With two outcomes, they evaporate entirely. The question of maximising expected utility with respect to a subjective probability distribution reduces to the question of maximising the probability, according to that distribution, of getting the better of the two outcomes. (And if the two outcomes are equal, there is nothing to maximise.)
And on the flip side, if we have a decision method for the twooutcome case, Bayesian or otherwise, then we can always tack on a utility function. The idea of utility is just that any intermediate outcome is equivalent to an objective lottery between better and worse outcomes. So if we want, we can use a utility function to reduce a decision problem with any (finite) number of outcomes to a decision problem over the best and worst outcomes in question.
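This reduction is easy to check numerically. Here is a minimal sketch, with made-up probabilities and utilities: each intermediate outcome is replaced by an objective best/worst lottery whose win probability equals that outcome's normalized utility, and the resulting chance of getting the best outcome matches the original expected utility.

```python
import random

def simulate_reduced(probs, utils, trials=200_000):
    # Draw an outcome from the gamble, then resolve it as an objective
    # best/worst lottery with P(best) equal to its normalized utility.
    wins = 0
    for _ in range(trials):
        u = random.choices(utils, weights=probs)[0]
        if random.random() < u:
            wins += 1
    return wins / trials

probs = [0.5, 0.3, 0.2]   # hypothetical three-outcome gamble
utils = [1.0, 0.4, 0.0]   # utilities normalized so worst = 0, best = 1
eu = sum(p * u for p, u in zip(probs, utils))   # expected utility: 0.62
# the reduced two-outcome bet wins the best outcome at rate ~eu
print(abs(simulate_reduced(probs, utils) - eu) < 0.01)
```

So maximising expected utility over the three outcomes is the same problem as maximising the probability of the best outcome in the reduced bet.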
Savage's representation theorem
Let me recap some of the previous post on Savage's theorem. How might we defend Bayesianism? We could invoke Cox's theorem. This starts by assuming possible events can be assigned real numbers corresponding to some sort of belief level on someone's part, and that there are certain functions over these numbers corresponding to logical operations. It can be proven that, if someone's belief functions obey some simple rules, then that person acts as if they were reasoning with subjective probability. Now, while the rules for belief functions are intuitive, the background assumptions are pretty sketchy. It is not at all clear why these mathematical constructs are requirements of rationality.
One way to justify those constructs is to argue in terms of choices a rational person must make. We imagine someone is presented with choices among various bets on uncertain events. Their level of belief in these events can be gauged by which bets they choose. But if we're going to do that anyway, then, as it turns out, we can just give some simple rules directly about these choices, and bypass the belief functions entirely. This was Leonard Savage's approach ^{3}. To quote a comment on the previous post: "This is important because agents in general don't have to use beliefs or goals, but they do all have to choose actions."
Savage's approach actually covers both subjective probability and expected utility. The previous post discusses both, whereas I am focusing on the former. This lets me give a shorter exposition, and I think a clearer one.
We start by assuming some abstract collection of possible bets. We suppose that when you are offered two bets from this collection, you will choose one over the other, or express indifference.
As discussed, we will only consider two outcomes. So all bets have the same payout; the difference among them is just their winning conditions. It is not specified what it is that you win. But it is assumed that, given the choice between winning unconditionally and losing unconditionally, you would choose to win.
It is assumed that the collection of bets form what is called a boolean algebra. This just means we can consider combinations of bets under boolean operators like "and", "or", or "not". Here I will use brackets to indicate these combinations. (A or B) is a bet that wins under the conditions that make either A win, or B win, or both win. (A but not B) wins whenever A wins but B doesn't. And so on.
If you are rational, your choices must, it is claimed, obey some simple rules. If so, it can be proven that you are choosing as if you had assigned subjective probabilities to bets. Savage's axioms for choosing among bets are ^{4}:
 If you choose A over B, you shall not choose B over A; and, if you do not choose A over B, and do not choose B over C, you shall not choose A over C.
 If you choose A over B, you shall also choose (A but not B) over (B but not A); and conversely, if you choose (A but not B) over (B but not A), you shall also choose A over B.
 You shall not choose A over (A or B).
 If you choose A over B, then you shall be able to specify a finite sequence of bets C_{1}, C_{2}, ..., C_{n}, such that it is guaranteed that one and only one of the C's will win, and such that, for any one of the C's, you shall still choose (A but not C) over (B or C).
Rule 1 is a coherence requirement on rational choice. It requires your preferences to be a total preorder. One objection to Cox's theorem is that levels of belief could be incomparable. This objection does not apply to rule 1 in this context because, as we discussed above, we're talking about choices of bets, not beliefs. Faced with choices, we choose. A rational person's choices must be non-circular.
Rule 2 is an independence requirement. It demands that when you compare two bets, you ignore the possibility that they could both win. In those circumstances you would be indifferent between the two anyway. The only possibilities that are relevant to the comparison are the ones where one bet wins and the other doesn't. So, you ought to compare A to B the same way you compare (A but not B) to (B but not A). Savage called this rule the Sure-thing principle.
Rule 3 is a dominance requirement on rational choice. It demands that you not choose something that cannot do better under any circumstance: whenever A would win, so would (A or B). Note that you might judge (B but not A) to be impossible a priori. So, you might legitimately express indifference between A and (A or B). We can only say it is never legitimate to choose A over (A or B).
Rule 4 is the most complicated. Luckily it's not going to be relevant to the Ellsberg paradox. Call it Mostly Harmless and forget this bit if you want.
What rule 4 says is that if you choose A over B, you must be willing to pay a premium for your choice. Now, we said there are only two outcomes in this context. Here, the premium is paid in terms of other bets. Rule 4 demands that you give a finite list of mutually exclusive and exhaustive events, and still be willing to choose A over B if we take any event on your list, cut it from A, and paste it to B. You can list as many events as you need to, but it must be a finite list.
For example, if you thought A was much more likely than B, you might pull out a die, and list the 6 possible outcomes of one roll. You would also be willing to choose (A but not a roll of 1) over (B or a roll of 1), (A but not a roll of 2) over (B or a roll of 2), and so on. If not, you might list the 36 possible outcomes of two consecutive rolls, and be willing to choose (A but not two rolls of 1) over (B or two rolls of 1), and so on. You could go to any finite number of rolls.
In fact rule 4 is pretty liberal: it doesn't even demand that every event on your list be equiprobable, or even independent of the A and B in question. It just demands that the events be mutually exclusive and exhaustive. If you are not willing to specify some such list of events, then you ought to express indifference between A and B.
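The die example can be made concrete. This sketch (the belief numbers are invented for illustration, and the die rolls are assumed independent of A and B) searches for the smallest number of rolls such that any single roll sequence is a cheap enough premium to cut from A and paste to B:

```python
def min_die_rolls(pA, pB):
    """Smallest n such that, for C any one sequence of n fair-die rolls
    (probability 1/6**n, assumed independent of A and B), we still have
    P(A but not C) > P(B or C)."""
    assert pA > pB
    n = 1
    while True:
        pC = 1 / 6**n
        # P(A and not C) = pA*(1 - pC);  P(B or C) = pB + pC*(1 - pB)
        if pA * (1 - pC) > pB + pC * (1 - pB):
            return n
        n += 1

print(min_die_rolls(0.5, 0.4))    # hypothetical beliefs: 2 rolls suffice
print(min_die_rolls(0.34, 0.33))  # a narrower gap needs 3 rolls
```

The narrower the gap between your two beliefs, the finer the partition you need, but for any strict preference some finite number of rolls will do.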
If you obey rules 1-3, then that is sufficient for us to construct a sort of qualitative subjective probability out of your choices. It might not be quantitative: for one thing, there could be infinitesimally likely beliefs. For another, there might be more than one way to assign numbers to beliefs. Rule 4 takes care of these things. If you obey rule 4 also, then we can assign a subjective probability to every possible bet, prove that you choose among bets as if you were using those probabilities, and also prove that it is the only probability assignment that matches your choices. And, on the flip side, if you are choosing among bets based on a subjective probability assignment, then it is easy to prove you obey rules 1-3, as well as rule 4 if the collection of bets is suitably infinite, such as when a fair die is available to bet on.
Savage's theorem is impressive. The background assumptions involve just the concept of choice, and no numbers at all. There are only a few simple rules. Even rule 4 isn't really all that hard to understand and accept. A subjective probability distribution appears seemingly out of nowhere. In the full version, a utility function appears out of nowhere too. This theorem has been called the crowning glory of decision theory.
The Ellsberg paradox
Let's imagine there is an urn containing 90 balls. 30 of them are red, and the other 60 are either green or blue, in unknown proportion. We will draw a ball from the urn at random. Let us bet on the colour of this ball. As above, all bets have the same payout. To be specific, let's say you get pie if you win, and a boot to the head if you lose. The first question is: do you prefer to bet that the colour will be red, or that it will be green? The second question is: do you prefer to bet that it will be (red or blue), or that it will be (green or blue)?
The most common response^{5} is to choose red over green, and (green or blue) over (red or blue). And that's all there is to it. Paradox! ^{6}
| Bet | Red (30) | Green | Blue |                     |
|-----|----------|-------|------|---------------------|
| A   | pie      | BOOT  | BOOT | A is preferred to B |
| B   | BOOT     | pie   | BOOT |                     |
| C   | pie      | BOOT  | pie  | D is preferred to C |
| D   | BOOT     | pie   | pie  |                     |

(Green and blue together number 60.)

Paradox!
If choices were based solely on an assignment of subjective probability, then, because the three colours are mutually exclusive, P(red or blue) = P(red) + P(blue) and P(green or blue) = P(green) + P(blue). Choosing red over green means P(red) > P(green), which would force P(red or blue) > P(green or blue); but the second choice says P(red or blue) < P(green or blue).
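The impossibility can be checked mechanically. This brute-force sketch scans a grid of candidate probability assignments over the three colours and confirms that none is compatible with both expressed choices:

```python
def consistent_assignments(steps=100):
    # search P(red) = r, P(green) = g, P(blue) = 1 - r - g on a grid
    found = []
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            r, g = i / steps, j / steps
            b = 1 - r - g
            # the choices require P(red) > P(green) and, by additivity,
            # P(green) + P(blue) > P(red) + P(blue)
            if r > g and g + b > r + b:
                found.append((r, g, b))
    return found

print(consistent_assignments())   # -> []
```

The empty result is no surprise: the two conditions reduce to r > g and g > r, which no assignment satisfies.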
Knowing Savage's representation theorem, we expect to get a formal contradiction from the 4 rules above plus the 2 expressed choices. Something has to give, so we'd like to know which rules are really involved. You can see that we are talking only about rule 2, the Sure-thing principle. It says we shall compare (red or blue) to (green or blue) the same way as we compare red to green.
This behaviour has been called ambiguity aversion. Now, perhaps this is just a cognitive bias. It wouldn't be the first time that people behave a certain way, but the analysis of their decisions shows a clear error. And indeed, when explained, some people do repent of their sins against Bayes. They change their choices to obey rule 2. But others don't. To quote Ellsberg:
...after rethinking all their 'offending' decisions in light of [Savage's] axioms, a number of people who are not only sophisticated but reasonable decide that they wish to persist in their choices. This includes people who previously felt a 'first order commitment' to the axioms, many of them surprised and some dismayed to find that they wished, in these situations, to violate the Sure-thing Principle. Since this group included L.J. Savage, when last tested by me (I have been reluctant to try him again), it seems to deserve respectful consideration.
I include myself in the group that thinks rule 2 is what should be dropped. But I don't have any dramatic (de)conversion story to tell. I was somewhat surprised, but not at all dismayed, and I can't say I felt much if any prior commitment to the rules. And as to whether I'm sophisticated or reasonable, well never mind! Even if there are a number of other people who are all of the above, and even if Savage himself may have been one of them for a while, I do realise that smart people can be Just Plain Wrong. So I'd better have something more to say for myself.
Well, red obviously has a probability of 1/3. Our best guess is to apply the principle of indifference and also assign probability 1/3 to green, and to blue. But our best guess is not necessarily a good guess. The probabilities we assign to red, and to (green or blue), are objective. We're guessing the probability of green, and of (red or blue). It seems wise to take this difference into account when choosing what to bet on, doesn't it? And surely it will be all the more wise when dealing with real-life, non-symmetrical situations where we can't even appeal to the principle of indifference.
Or maybe I'm just some fool talking jibba jabba. Against this sort of talk, the LW post on the Allais paradox presents a version of Howard Raiffa's dynamic inconsistency argument. This makes no reference to internal thought processes; it is a purely external argument about the decisions themselves. As stated in that post, "There is always a price to pay for leaving the Bayesian Way." ^{7} This is expanded upon in an earlier post:
Sometimes you must seek an approximation; often, indeed. This doesn't mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation, and fails to the extent that it departs.
Bayesianism's coherence and uniqueness proofs cut both ways ... anything that is not Bayesian must fail one of the coherency tests. This, in turn, opens you to punishments like Dutch-booking (accepting combinations of bets that are sure losses, or rejecting combinations of bets that are sure gains).
Now even if you believe this about the Allais paradox, I've argued that this doesn't really have much to do with Bayesianism one way or the other. The Ellsberg paradox is what actually strays from the Path. So, does God also punish ambiguity aversion?
Tune in next time^{8}, when I present a two-outcome decision method that obeys rules 1, 3, and 4, and even a weaker form of rule 2. But it exhibits ambiguity aversion, in gross violation of the original rule 2, so it is not even approximately Bayesian. I will try to present it in a way that advocates for its internal cognitive merit. But the main thing ^{9} is that, externally, it is dynamically consistent. We do not get booked, by the Dutch or any other nationality.
Notes
 Ellsberg's original paper is: Risk, ambiguity, and the Savage axioms, Quarterly Journal of Economics 75 (1961), pp. 643-669.
 Some discussion followed, in which I did rather poorly. Actually I had to admit defeat. Twice. But, as they say: fool me once, shame on me; fool me twice, won't get fooled again!
 Savage presents his theorem in his book: The Foundations of Statistics, Wiley, New York, 1954.
 To compare to Savage's setup: for the two-outcome case, we deal directly with "actions", or equivalently "events", here called "bets". We can dispense with "states"; in particular, we don't have to demand that the collection of bets be countably complete, or even a powerset algebra of states, just that it be some boolean algebra. Savage's axioms of course have a descriptive interpretation, but it is their normativity that is at issue here, so I state them as "you shall". Rules 1-3 are his P1-P3, and rule 4 is his P6. P4 and P7 are irrelevant in the two-outcome case. P5 is included in the background assumption that you would choose to win. I do not call this normative, because the payoff wasn't specified.
 Ellsberg originally proposed this just as a thought experiment, and canvassed various victims for their thoughts under what he called "absolutely non-experimental conditions". He used $100 and $0 instead of pie and a boot to the head. Which is dull, of course, but it shouldn't make a difference^{10}. The experiment has since been repeated under more experimental conditions. The experimenters also invariably opt for the more boring cash payouts.
 Some people will say this isn't "really" a paradox. Meh.
 Actually, I inserted "to pay". It wasn't in the original post. But it should have been.
 Sneak preview
 As a great decision theorist once said, "Stupid is as stupid does."
 ...or should it? Savage's rule P4 demands that it shall not. And the method I have in mind obeys this rule. But it turns out this is another rule that God won't enforce. And that's yet another post, if I get to it at all.
A style note: the beginning of your post is really long and boring, I ended up skipping most of it so I could get to the damned paradox already.
... and, on reading the problem description, my first reaction was "both bets are obviously worth the same to me", so  as your footnote notes  I don't see any paradox here, just an anecdote that most people are bad at Maths (or rather, bad at abstraction, it's a very understandable "mistake").
Ambiguity aversion makes sense as a heuristic in more realistic situations: in real life, it's stupid to bet when someone else may know more than you. So you should be cautious when there's information someone might know (like how many of each balls are in the jar). Real life betting situations are often a question of who really has the most information (on horses, or stock, or trivia).
There are many thought experiments like this that are just built to break a heuristic that works in real life (trolley problem, I'm looking at you); and since most people don't investigate the deep reasons for all their heuristics (which can be hard to figure out), they apply them incorrectly in the thought experiment. Nothing mysterious about that.
(edit: reworded a bit)
Heh :) I'm okay with people being more interested in the Ellsberg paradox than the Savage theorem. Section headers are there for skipping ahead. There's even colour :)
I think it would be unfair to ask me to make the Savage theorem as readable as the Ellsberg paradox. For starters, the Ellsberg paradox can be described really quickly. The Savage theorem, even redux, can't. Second, just about everyone here agrees with the conclusion of the Savage theorem, and disagrees with the Ellsbergparadoxical behaviour.
My goal was just to make it clearer than the previous post, and this is not an insult against the previous author: he presented the full theorem and I presented a redux version covering only the relevant part, as I explained in the boring rationality section before the boring representation theorem. I'd be happy if some people who did not understand the previous set of axioms understood the four rules here.
As for the rest, yes, consensus here so far (only a few hours in, of course, but still impressively unanimous) seems to be that it's a bias. Of course, in that case, it's a very famous bias, and it hasn't been covered on LW before. I can still claim to have accomplished something I think, no? And if it turns out it's not so irrational after all, well!
This is not a reply to this comment. I wanted to comment on the article itself, but I can't find the comment box under the article itself.
According to Robin Pope's article, "Attractions to and Repulsions from Chance," section VII, Savage's sure-thing principle is not his P2, although it is easily confused with it. The sure-thing principle says that if you do prefer (A but not B) over (B but not A), then you ought to prefer A over B. That is, in case of a violation of P2, you should resolve it by revising the latter preference (the one between bets with overlapping outcomes), not the former. This is apparently how Savage revised his preferences on the Allais paradox to align them with EU theory.
The article is in the book "Game Theory, Experience, and Rationality, Foundations of Social Sciences, Economics and Ethics, in honor of J.C. Harsanyi," pp. 102-103.
I wonder if people's responses will change if they can verify that the unknown proportion of green/blue was chosen using "fair" randomness. When I imagine a bastard experimenter in the loop, I lean toward Nash equilibrium considerations like "choosing red is less exploitable than choosing green" and "choosing green+blue is less exploitable than choosing red+blue".
If you know the experimenter is trying to exploit you, then the fact that they posed the question as "red or green" decreases the expected number of green balls. On the other hand if they posed the question "green+blue or red+blue", it increases your expected number of green balls. So this is entirely consistent with Bayesian probability, conditioned on which question the evil experimenter asked you. This is the same reason why you shouldn't necessarily want to bet either way on a proposition if the person offering the bet might have information about it that you don't have.
If the experimenter knows what distribution you expect, they may decide not to use that distribution. And unless you're the first person in the experiment, they have in fact been learning what distributions people expect, though not you in particular.
What you could do is run your own similar experiment on regular subjects first so you know what the experimenter is likely to expect, and then impersonate a regular subject when you are called into the experiment, up until the point you get offered a bet. I don't think they would have accounted for that possibility, and even if they did it would be rare enough to still be unexpected.
But make sure the financial incentives to do this aren't enough that other people do the same thing or it will ruin your plan. You have to be satisfied with outwitting the experimenter.
And no matter how small a probability someone assigns to "the randomness is unfair 'cause the experimenter is a dick", picking red will yield epsilon more expected money than picking green. You need a probability of exactly zero for the choices to be equivalent.
Of course, in an abstract thought experiment as described, a probability of zero is indeed implied, but people don't pay attention to instructions, as anybody doing tech support will tell you  they invent stuff that was never said, and ignore other bits (I'm guilty of that myself  we all are, I think).
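The epsilon argument can be sketched in a couple of lines. Here q is a made-up credence that the experimenter adversarially stocked zero green balls; otherwise green and blue are treated symmetrically:

```python
def p_green(q):
    # with credence q: adversarial urn, zero green balls;
    # with credence 1 - q: green/blue symmetric, so P(green) = 1/3
    return q * 0.0 + (1 - q) / 3

assert p_green(0.0) == 1 / 3    # only a prior of exactly zero ties red
assert p_green(1e-9) < 1 / 3    # any doubt at all favours betting red
print("red beats green unless q == 0")
```

So any nonzero credence in an adversarial urn, however tiny, makes red the strictly better one-colour bet.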
If the experimenter is a dick, then both boxes contain a dagger; or, as it may be, a boot.
The obvious type of fair randomness is a symmetrical distribution (equally likely to give N more blue than green, as to give N more green than blue), and this gives equal chances of blue or green to come out of the bag. If I knew it was a double blind experiment, so the experimenter doesn't know the contents of the bag, I would treat red, blue and green as each having known probability of one third. If the offers might depend on the experimenter's knowledge of the bag contents I would not.
This is exactly my reaction too: when faced with any situation where another agent might influence outcomes, people naturally think more in terms of game theory and minimaxing than probabilities. (Of course, here minimaxing is applied to probabilistic gambles, but the volunteer presumes the "30 red balls" rule to be less subject to manipulation than the balance of green vs. blue.)
Under the obvious assumptions, P(red) equals P(blue), and P(green or blue) equals P(red or blue), and I can break ties in whichever way the hell I want and it doesn't bloody matter, so this is not much of a paradox. On the other hand, if there were 29 (or 31) red balls and people still chose red over blue and green-or-blue over red-or-blue...
Hmm! I don't know if that's been tried. Speaking for myself, 31 red balls wouldn't reverse my preferences.
But you could also have said, "On the other hand, if people were willing to pay a premium to choose red over green and green-or-blue over red-or-blue..." I'm quite sure things along those lines have been tried.
As a general rule, all long posts should start with short abstract explaining what the post is about.
This post's introduction does not say that at all.
I'll wait until the followup post to see if this is going anywhere, but at the moment this looks like a bog-standard bias. Indifference is correct for each choice.
The article uses the terminology "subjective probability" and "objective probability". I'm aware that the boundary between the subjective and the objective can be drawn in two different places and some problems in probability theory are due to flipflopping on which boundary one uses. So I'm reading the article, trying to work out which subjective/objective boundary is being used and whether it is being used consistently.
Provisionally I think the article flipflops. Early on probabilities are Bayesian, that is situational, but at the end the "paradox" arises due to treating probabilities as individual.
What cost, exactly, do I pay for "leaving the path" here? It seems like I am not losing anything  all these bets have equal expected payout. It seems like the "paradox" is more of a sin against Savage than against Bayes, and who the hell is Savage?
If ambiguity aversion is a paradox and not just a cognitive bias, does this mean that all irrational things people systematically do are also paradoxes?
What particular definition of "paradox" are you using? E.g, which one of the definitions in the Paradox wikipedia article?
Meh. It should not really affect what I've said or what I intend to say later if you substitute "Violation of the rules of probability" or "of utility" for "paradox" (Ellsberg and Allais resp.) However paradox is what they're generally called. And it's shorter.
This post is way too long. It spends too much time on vague generalities; at least the "Rationality and Bayesianism" section is useless. It also spends too little time explaining how one can be money-pumped (one cannot, in this exact setup, but small variations would allow it) by this paradox.
re: Ellsberg paradox.
Sequence of events is important:
person is presented with first bet question
person calculates answer to first question (red, very straightforward: you assume the experimenter himself is an expected utility maximizer, so he didn't put any green balls into the urn. It's so straightforward and obvious and automatic that it's a single step of reasoning and you can't quite see how you make steps)
person is presented with second bet question.
The second bet question removes the reason to prefer red over green (edit: I.e. the experimenter's act of proposing second bet tells you that the utility maximizing experimenter didn't have any reason to do nogreen). Now it is a simple matter of breaking a tie.
It can be immediately seen that (green or blue) is 2/3 of the win (which is clearly the most win you can get if experimenter did not manage to trick you). But it is going to take a fair bit of reasoning to see that the choices red,(red or blue) do not allow the experimenter to screw you over somehow (and any reasoning has probability of error so the reasoning would have to be repeated to make that probability negligible).
You have cake as reward. The reason you like cake is that it provides glucose for the brain to burn. Glucose maximization is fundamentally the same reason why you'd rather choose straightforwardly correct option rather than the option correctness of which has to be shown with quite a bit of glucoseburning reasoning (leaving you with a: less glucose, and b: with less time for other thoughts).
edit: paragraphs, clarity
It requires more than that, I think.
(I substituted "%" for their symbol, since markdown doesn't translate their symbol.) Let "A % B" represent "I am indifferent between, or prefer, A to B". Then I can have A % B and B % C and A % C while violating rule 1 as you've written it.
ETA: To wit, I am indifferent between A and B, and between B and C, but I prefer A to C. This satisfies the total preorder, but violates Rule 1.
I don't think Rule 1 is a requirement of rationality, for basically this very reason. That is: a semiorder may be sufficient for rational preferences.
Your definition of total preorder:
Looks to me like it's equivalent to what I wrote for rule 1. In particular, you say:
No, this violates total preorder, as you've written it.
Since you are indifferent between A and B, and between B and C: A%B, B%A, B%C, C%B. By transitivity, A%C and C%A. Therefore, you are indifferent between A and C.
With the "other" type of indifference, you have neither A % B nor B % A (I called this incomparability). But it violates totality.
Hope you'll forgive me if I set this aside. I want to grant absolutely every hypothesis to the Bayesian, except the specific thing I intend to challenge.
Oops, good catch. My formulation of "A % B" as "I am indifferent between or prefer A to B" won't work. I think my doubts center on the totality requirement.
If the ball-picking was repeated a number of times after you chose your colour scheme, then risk-aversion (or the declining marginal utility of not being kicked in the head) would make the "intuitive" choice the correct one. I wonder if people's choices in the one-shot version are what they are because of this.
As kim0 notes, the choices make sense if you are risk-averse; the standard choices are the ones with the least variance. (This seems to be identified as "ambiguity aversion" rather than risk aversion, but I think it is clearly a sort of risk aversion.) However, risk aversion isn't even necessary to support choices similar to these.
Let me propose a different game that should give rise to the same choices.
Let's imagine there is an urn containing 90 balls. 30 of them are red, and the other 60 are either green or blue, in unknown proportion. Whatever color you specify, you get a dollar for each ball of that color that is in the urn. The first question is: A) do you prefer to pick red, or B) do you prefer to pick green? The second question is: C) do you prefer to pick (red or blue), or D) do you prefer to pick (green or blue)?
Here, it is clear that the expected values work out the same as above. Picking any one color has an expected payout of $30, and picking 2 colors has an expected payout of $60. However, option A is a sure $30, while option B is between $0 and $60. Option C is between $30 and $90, while option D is a sure $60.
If you have the usual diminishing marginal utility over money, then A and D is actually the rational choice for expected utility maximization, without even taking into account risk/ambiguity aversion.
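A quick sketch of that Jensen's-inequality point. The uniform prior over the green count and the square-root utility are illustrative assumptions on my part, not part of the game's statement:

```python
import math

# Hypothetical symmetric prior: the number of green balls g among the 60
# non-red balls is uniform on 0..60 (an assumption for illustration).
prior = {g: 1.0 / 61 for g in range(61)}

def u(dollars):
    # A concave utility function, i.e. diminishing marginal utility of money.
    return math.sqrt(dollars)

# Option A: pick red -> a sure $30.
eu_A = u(30)
# Option B: pick green -> $g, where g is the unknown number of green balls.
eu_B = sum(p * u(g) for g, p in prior.items())
# Option C: pick red-or-blue -> $(30 + (60 - g)), i.e. between $30 and $90.
eu_C = sum(p * u(30 + (60 - g)) for g, p in prior.items())
# Option D: pick green-or-blue -> a sure $60.
eu_D = u(60)

# Jensen's inequality: the sure payouts beat the spread-out ones.
print(eu_A > eu_B, eu_D > eu_C)  # True True
```

So in this dollars-per-ball variant, A and D really do maximize expected utility for any strictly concave utility function, with no separate aversion term needed.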
But you're extracting one ball. AFAICT, for any probability distribution such that P(there are n blue balls and 60 − n green balls) = P(there are 60 − n blue balls and n green balls) for all n, and for any utility function, E(utility | I bet "red") equals E(utility | I bet "green"), and E(utility | I bet "red or blue") equals E(utility | I bet "green or blue"). (The only values of your utility function that matter are those for "you win the bet" and "you lose the bet", and given that utility functions are only defined up to positive-scale affine transformations, you can normalize them to 1 and 0 respectively.)
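Checking that claim exactly for one symmetric prior (the particular three-point prior below is an arbitrary example; any green/blue-symmetric prior gives the same equalities):

```python
from fractions import Fraction

# A symmetric prior over the green count g: P(g green) = P(60 - g green).
# This particular three-point distribution is just an example.
prior = {10: Fraction(1, 4), 50: Fraction(1, 4), 30: Fraction(1, 2)}

# Normalize utility so u(win) = 1, u(lose) = 0; then E[u] = P(win).
p_red = Fraction(30, 90)
p_green = sum(p * Fraction(g, 90) for g, p in prior.items())
p_red_blue = sum(p * Fraction(30 + (60 - g), 90) for g, p in prior.items())
p_green_blue = Fraction(60, 90)

print(p_red == p_green)            # True
print(p_red_blue == p_green_blue)  # True
```

With one draw, symmetry makes the green bet worth exactly 30/90 and the red-or-blue bet exactly 60/90, matching their "sure" counterparts.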
I'm confused.
Is this supposed to refer to my game, or the game in the OP? In my game, you examine all the balls.
The rest seems rather rushed; I'm having trouble parsing it. If it helps, I was not claiming that in the original game, an expected utility maximizer would strictly prefer A and D without taking into account risk/ambiguity aversion.
I was talking about the OP's game. (The "the choices make sense if you are risk-averse" in the grandparent seems to be about it. Are you using risk aversion with a meaning other than "downward-concave utility function" by any chance?)
Sort of. I was referring to ambiguity aversion, as you can see in the clarification in that very sentence. But I would argue that ambiguity aversion is just the same thing as risk aversion, at a different meta-level.
Though it might take an insane person to prefer a "certain 1/3 chance" over an "uncertain 1/3 chance".
P(red) is not necessarily greater than P(green). There are exactly 30 red balls, but the number of green balls can be anything from 0 to 60.
I would choose red over blue. I would also choose blue and green over red and blue. However, my choices do not evidence a true preference. This is because of axiom 4, which is crucial. Axiom 4 actually defines what it means to prefer one bet over another. Preferences must be quantifiable, and I would not be able to quantify my preference. I could not say "I will take a boot to the head rather than cake if red is not chosen or three dice all roll 1s." By the framework of axioms presented, I am actually indifferent among the bets presented.
This is actually an important distinction, as the objections to inconsistent ordering of values are based on someone making multiple choices/bets, each of which seems to make them better off, but which in aggregate make them worse off. No money pumps, no Dutch books.
It seems just a classic case of people being naturally unwilling to fully dive into the abstraction.
Firstly, the fully abstracted problem provides only indifference between the choices. So, as much as we are able to dive in and cut out all external influences in our thinking, even when we do, the problem at best tells us that we should be indifferent. So we're indifferent, and then we're asked to make a choice. If you're the sort of person who, when faced with an indifferent choice, will say "I am provably indifferent, and thus refuse to choose", you will generally be less functional in life than somebody who just picks one. So we all make a choice, and I would say when it's indifferent then any choice is as good as any other. My point is, since the abstract problem gives us nothing, even an infinitesimal hole in our ability to accept the rules of the problem makes all the difference.
If you don't accept the abstraction fully, then there's plenty of reasons to make the choices people make, as other comments already mention: assuming the game is more likely rigged against you than for you, and risk aversion in the case where there at least might be multiple plays.
Of course, I do think there still is a thinking failure people are making here. People see there being two levels and don't realise they can't really be separated. The problem people think they are solving is something like the following: "I will give you $X, where X is the number of balls of the colour(s) you choose in the urn". Normally that would seem to be an equivalent problem, and in that problem risk aversion trumps indifference and people's choices are, well, as rational as risk aversion anyway. Strangely, a decreasing marginal utility of larger amounts of money justifies risk aversion while at the same time causing my money version of the problem to no longer be equivalent.
Overall, the only thing I think is ridiculous is the suggestion that rule 2 should be dropped because of this. The justification for rule 2 is not in any way damaged by this paradox. Rationality should not be defined as what reasonable people choose to do anyway, as people have been wrong before and will be wrong again. At best it shows that perfectly reasonable people can be infinitesimally irrational in a contrived corner case; at worst it shows nothing.
A more interesting setup would be with either 29 or 31 balls. In that case there's a finite cost to the wrong decision. How many "reasonable" people still stick to their suboptimal choice in that case though?
You need to think harder about your audience when you're writing. This post was a bunch of great ideas buried deep within an endless Ugh field of paragraphs that didn't appear to be heading anywhere.
Preferring red is rational, because it is a known amount of risk, while each of the other two colours have unknown risks.
This is according to Kelly's criterion and Darwinian evolution. Negative outcomes outweigh positive ones because negative ones lead to sickness and death through starvation, poverty, and kicks in the head.
This is only valid in the beginning, because when the experiment is repeated, the probabilities of blue and green become clearer.
I think what you're saying is just that humans are risk-averse, and so a gamble with lower variance is preferable to one with higher variance (and the same mean)... but if the number of green vs. blue is randomly determined with expected value 30 to 30, then it has the same variance. You need to involve something more (like the intentional stance) to explain the paradox.
No, because expected value is not the same thing as variance.
Betting on red gives 1/3 winnings, exactly.
Betting on green gives 1/3 ± x winnings, and this is a variance, which is bad.
You don't get exactly 1/3 of a win with no variance in either case. You get exactly 1 win, 1/3 of the time, and no win 2/3 of the time.
As an example when betting on green, suppose there's a 1/3 chance of 30 blue and 30 green balls, 1/3 chance of 60 green, 1/3 chance of 60 blue. And there's always 30 red balls.
There is a 1/3 of 1/3 chance that there are 30 green balls and you pick one. There is a 2/3 of 1/3 chance that there are 60 green balls and you pick one. There is no chance that there are no green balls and you still pick one. There is no other way to get a green ball. The total chance of picking a green ball is therefore 1/3, that is, 1/3 of 1/3 plus 2/3 of 1/3. That means that 1/3 of the time you win and 2/3 of the time you lose, just as in the case of betting on the red ball.
A distribution of 1 one third of the time and 0 two thirds of the time has some computable variance. Whatever it is, that's the variance in your number of wins when you bet on green, and it's also the variance in your number of wins when you bet on red.
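Spelling that out with the example prior above (equal chances of 30 green / 60 green / no green, always 30 red among 90 balls):

```python
from fractions import Fraction

# Scenarios for the number of green balls, each with probability 1/3.
green_counts = [30, 60, 0]

# Betting green: P(win) = average over scenarios of (#green / 90).
p_green = sum(Fraction(g, 90) for g in green_counts) / 3
p_red = Fraction(30, 90)

# A win is 1-or-0, so the variance of the win count is p * (1 - p) either way.
var_green = p_green * (1 - p_green)
var_red = p_red * (1 - p_red)
print(p_green == p_red, var_green == var_red)  # True True
```

Both bets are a Bernoulli(1/3) draw, with variance (1/3)(2/3) = 2/9.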
Like I said below, write out the actual random variables you use as a Bayesian: they have identical distributions if the mean of your green:blue prior is 30 to 30.
There is literally no sane justification for the "paradox" other than updating on the problem statement to have an unbalanced posterior estimate of green vs. blue.
Bayesian reasoning is for maximizing the probability of being right. Kelly's criterion is for maximizing aggregated value.
And yet again, the distributions of the probabilities are different, because they have different variance, and differences in variance give different aggregated value, which is what people tend to try to optimize.
Aggregating value in this case is to get more pies, and fewer boots to the head. Pies are of no value to you when you are dead from boots to the head, and this is the root cause for preferring lower variance.
This isn't much of a discussion when you just ignore and deny my argument instead of trying to understand it.
If I decide whether you win or lose by drawing a random number from 1 to 60 in a symmetric fashion, then rolling a 60-sided die and comparing the result to the number I drew, this is the same random variable as a single fair coin flip. Unless you are playing multiple times (in which case you'll experience higher variance from the correlation) or you have a reason to suspect an asymmetric probability distribution of green vs. blue, the two gambles will have the exact same effect in your utility function.
The above paragraph is mathematically rigorous. You should not disagree unless you find a mathematical error.
And yet again I am reminded why I do not frequent this supposedly rational forum more. Rationality swishes by over most people's heads here, except for a few really smart ones. You people make it too complicated. You write too much. Lots of these supposedly deep intellectual problems have quite simple answers, such as this Ellsberg paradox. You just have to look and think a little outside their boxes to solve them, or see that they are unsolvable, or that they are wrong questions.
I will yet again go away, to solve more useful and interesting problems on my own.
Oh, and Orthonormal, here is my correct final answer to you: You do not understand me, and this is your fault.
Nobody is choosing between green vs. blue based on variance.
Option one: a sure 1/3 or an expected 1/3 with variance
Option two: an expected 2/3 with variance or a sure 2/3.
Red by itself is certain, blue with green is certain. Green by itself is uncertain, red with blue is uncertain.
Write out the random variables. They have the same distribution as each other. I know that it "feels" like one has more variance than the other, but that's a cognitive illusion.
There's variance in the frequency, which results in variance in your meta-uncertainty. The 1/3 chance of red derives from a certain frequency of 1/3. The 1/3 chance of blue derives from uncertainty about the frequency, which is between 0 and 2/3.
It seems like the sort of person who would prefer to pick A and D in my game due to risk aversion would also prefer A and D in this one, for the same reason.
The effect of the meta-uncertainty on your utility function is the same as the effect of regular old uncertainty, unless you're planning to play the game multiple times. I am speaking rigorously here; do not keep disagreeing unless you can find a mathematical error.
ETA: Explained more thoroughly here.
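To make the rigorous claim concrete, here is the law of total variance applied to a hypothetical symmetric prior over the blue frequency:

```python
from fractions import Fraction

# Hypothetical symmetric prior over the frequency of blue among 90 balls:
# equal chances of 0, 30, or 60 blue balls.
freqs = {Fraction(0, 90): Fraction(1, 3),
         Fraction(30, 90): Fraction(1, 3),
         Fraction(60, 90): Fraction(1, 3)}

p = sum(w * f for f, w in freqs.items())                  # marginal P(win) = 1/3
ev_var = sum(w * f * (1 - f) for f, w in freqs.items())   # E[Var(win | freq)]
var_ev = sum(w * (f - p) ** 2 for f, w in freqs.items())  # Var(E[win | freq])

# Law of total variance: the win indicator has total variance
# p * (1 - p) = 2/9, exactly as for a plain 1/3 chance. The meta-uncertainty
# only changes how that variance decomposes, not its total.
print(p, ev_var + var_ev, p * (1 - p))
```

For a single play, only the marginal distribution of the win indicator enters your expected utility, and it is identical for red and blue.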
It does not have the same effect on your utility function, if your utility function has a term for your meta-uncertainty. Much as I might pay $3 in insurance to turn an expected, variable loss of $10 into a certain loss of $10, I might also pay $3 to switch from B to A and from C to D, on the grounds that I favor situations with less meta-uncertainty.
Consider a horse race in which three horses will run. A and B each have a 1/4 probability of winning, and whichever of C and D runs has a 1/2 probability of winning; C and D flip a fair coin to see who gets to run, after bets are placed. Then a bet on A has the same probability of winning as a bet on C. But some people might still prefer to bet on A rather than C, since they don't want to have bet on a horse that didn't even run the race.
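The arithmetic behind "a bet on A has the same probability of winning as a bet on C":

```python
from fractions import Fraction

# A and B each win with probability 1/4.
p_A_bet_wins = Fraction(1, 4)

# A bet on C pays off only if C runs (fair coin: 1/2) and then wins (1/2).
p_C_bet_wins = Fraction(1, 2) * Fraction(1, 2)

print(p_A_bet_wins == p_C_bet_wins)  # True: both bets win with probability 1/4
```

So the two bets are identical as gambles; any preference between them has to come from caring about something beyond the win probability, such as having backed a horse that never ran.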
If you endorse this reasoning, you should also accept inconsistency in the Allais Paradox. From the relevant post:
The only reason that I personally would prefer the red bet to the green bet is that it's less exploitable by a malicious experimenter: in other words, given that the experimenter gave me those options, my estimate of the green:blue distribution becomes asymmetric. All other objections in this thread are unsound.
There is a possible state of the world where I have picked "green" and it turns out that there were never any green balls in the world. It is possible to have a very strong preference to not be in that state of the world. There is nothing irrational about having a particular preference. Preferences (and utility functions) cannot be irrational.
That does not necessarily follow. The Allais Paradox is not about metauncertainty; it is about putting a special premium on "absolute certainty" that does not translate to relative certainty. Someone who values certainty could consistently choose 1A and 2A.
How many boots to the head is that preference worth? I doubt it's worth very many to you personally, and thus your personal reluctance is due to something else.
I'm done arguing this. I usually find you pretty levelheaded, but your objections in this thread are baffling.
edit: can't see how to delete this; accidental double-post because the site lagged
Formatting note: You're suffering from formatting deleting spaces, so italicised or linked text runs directly into the words on either side of it.
Thanks... Where do you see it? I can't see any. I tried logging in and out and all that, it doesn't seem to change anything (except the vote count is hidden when I logout?)
It appears to be gone. It was there earlier, I swear! :P