**Continuation of**: The Allais Paradox, Zut Allais!

Judging by the comments on Zut Allais, I failed to emphasize the points that needed emphasis.

**The problem with the Allais Paradox is the incoherent pattern 1A > 1B, 2B > 2A.** If you need $24,000 for a lifesaving operation and an extra $3,000 won't help that much, then you choose 1A > 1B **and** 2A > 2B. If you have a million dollars in the bank account and your utility curve doesn't change much with an extra $25,000 or so, then you should choose 1B > 1A **and** 2B > 2A.

**Neither the individual choice 1A > 1B, nor the individual choice 2B > 2A, is of itself irrational.** It's the **combination** that's the problem.

**Expected utility is not expected dollars.** In the case above, the utility-distance from $24,000 to $27,000 is a tiny fraction of the distance from $21,000 to $24,000. So, as stated, you should choose 1A > 1B and 2A > 2B, a quite coherent combination. **The Allais Paradox has nothing to do with believing that every added dollar is equally useful.** That idea has been rejected since the dawn of decision theory.

**If satisfying your intuitions is more important to you than money, do whatever the heck you want.** Drop the money over Niagara Falls. Blow it all on expensive champagne. Set fire to your hair. Whatever. **If the largest utility you care about is the utility of feeling good about your decision, then any decision that feels good is the right one.** If you say that different trajectories to the same outcome "matter emotionally", then you're attaching an inherent utility to conforming to the brain's native method of optimization, whether or not it actually optimizes. Heck, **running around in circles from preference reversals** could feel really good too. **But if you care enough about the stakes that winning is more important than your brain's good feelings about an intuition-conforming strategy, then use decision theory.**

**If you suppose the problem is different from the one presented** - that the gambles are untrustworthy and that, after this mistrust is taken into account, the payoff probabilities are not as described - then, obviously, **you can make the answer anything you want.**

Let's say you're dying of thirst, you only have $1.00, and you have
to choose between a vending machine that dispenses a drink with
certainty for $0.90, versus spending $0.75 on a vending machine that
dispenses a drink with 99% probability. Here, the 1% chance of dying
is worth more to you than $0.15, so you would pay the extra fifteen
cents. You would also pay the extra fifteen cents if the two vending
machines dispensed drinks with 75% probability and 74% probability
respectively. **The 1% probability is worth the same amount whether or
not it's the last increment towards certainty.** This pattern of decisions is perfectly coherent. **Don't confuse being rational with being shortsighted or greedy.**
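The vending-machine arithmetic can be sketched directly. This is a minimal illustration with hypothetical utility numbers (surviving is worth 1 util, and leftover money is worth a tiny linear amount, `UTIL_PER_DOLLAR`, which is my own stand-in, not a value from the post): the value of the extra 1% of survival comes out identical whether the comparison is 100% vs. 99% or 75% vs. 74%.

```python
# Hypothetical utilities: 1.0 for surviving, 0.0 for dying, plus a small
# (illustrative) linear utility for whatever money is left over.
U_LIVE = 1.0
UTIL_PER_DOLLAR = 0.001  # leftover cents matter far less than survival here

def expected_utility(p_drink, price, budget=1.00):
    """Expected utility of a machine: survive with prob p_drink, keep the change."""
    leftover = budget - price
    return p_drink * U_LIVE + leftover * UTIL_PER_DOLLAR

# Certain machine at $0.90 vs. 99% machine at $0.75
eu_sure = expected_utility(1.00, 0.90)
eu_risky = expected_utility(0.99, 0.75)

# 75% machine at $0.90 vs. 74% machine at $0.75
eu_75 = expected_utility(0.75, 0.90)
eu_74 = expected_utility(0.74, 0.75)

# The extra 1% buys the same utility increment in both comparisons,
# so if the fifteen cents is worth paying in one case, it is in both.
print(eu_sure - eu_risky)
print(eu_75 - eu_74)
```

The point of the sketch is that the 1% increment contributes 0.01·U_LIVE to expected utility regardless of the baseline probability, so there is no special premium for "reaching certainty."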

*Added:* A 50% probability of $30K and a 50% probability of $20K, is not the same as a 50% probability of $26K and a 50% probability of $24K. If your utility is logarithmic in money (the standard assumption) then you will definitely prefer the latter to the former: 0.5 log(30) + 0.5 log(20) < 0.5 log(26) + 0.5 log(24). **You take the expectation of the utility of the money, not the utility of the expectation of the money.**
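That last inequality is easy to check numerically (money measured in thousands of dollars, with the logarithmic utility the paragraph assumes):

```python
import math

# 50/50 gamble on $30K / $20K vs. 50/50 gamble on $26K / $24K,
# with utility logarithmic in money (amounts in thousands).
eu_spread = 0.5 * math.log(30) + 0.5 * math.log(20)
eu_tight = 0.5 * math.log(26) + 0.5 * math.log(24)

# Same expected dollars ($25K) either way, but the tighter gamble
# has strictly higher expected utility.
print(eu_spread < eu_tight)  # True
```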

I have a few questions about utility (hopefully this will clear up my confusion). Someone please answer. Also, the following post contains math; viewer discretion is advised (the math is very simple, however).

Suppose you have a choice between two games...

A: 1 game with a 100% chance to win $1'000'000
B: 2 games, each with a 50% chance to win $1'000'000 and a 50% chance to win nothing

Which is better: A, B, or are they equivalent? Which game would you pick? Please answer before reading the rest of my rambling. Let's try to calculate utility.

For A:

A: Utotal = 100%·U[$1'000'000] + 0%·U[$0]

For B, I see two possible ways to calculate it.

1) Calculate the utility for one game and multiply it by two:

B-1: U1game = 50%·U[$1'000'000] + 50%·U[$0]
B-1: Utotal = U2games = 2·U1game = 2·{50%·U[$1'000'000] + 50%·U[$0]}

2) Calculate all possible outcomes of money possession after 2 games. The possibilities are:

$0, $0
$0, $1'000'000
$1'000'000, $0
$1'000'000, $1'000'000

B-2: Utotal = 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000]

If we assume utility is linear:

U[$0] = 0
U[$1'000'000] = 1
U[$2'000'000] = 2

A: Utotal = 100%·U[$1'000'000] + 0%·U[$0] = 100%·1 + 0%·0 = 1
B-1: Utotal = 2·{50%·U[$1'000'000] + 50%·U[$0]} = 2·{50%·1 + 50%·0} = 1
B-2: Utotal = 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000] = 25%·0 + 25%·1 + 25%·1 + 25%·2 = 1

The math is so neat! The weirdness begins when the utility of money is non-linear. $2'000'000 isn't twice as useful as $1'000'000 (unless we split that $2'000'000 between 2 people, but let's deal with one weirdness at a time). With the first million one can buy a house, a car, quit their crappy job and pursue their own interests. The second million won't change the person's life as much, and the third even less.

Let's invent more realistic utilities (it has also been suggested that the utility of money is logarithmic, but I'm having some trouble taking the log of 0):

U[$0] = 0
U[$1'000'000] = 1
U[$2'000'000] = 1.1 (reduced from 2 to 1.1)

A: Utotal = 100%·U[$1'000'000] + 0%·U[$0] = 100%·1 + 0%·0 = 1
B-1: Utotal = 2·{50%·U[$1'000'000] + 50%·U[$0]} = 2·{50%·1 + 50%·0} = 1
B-2: Utotal = 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000] = 25%·0 + 25%·1 + 25%·1 + 25%·1.1 = 0.775

Hmmmm... B-1 is not equal to B-2. Either I have to change around the utility function values, or discard one of them as the wrong calculation, or there's some other mistake I didn't think of. Maybe U[$0] != 0.
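To make the discrepancy concrete, here is the same arithmetic in Python, using the commenter's invented (purely illustrative) utility values:

```python
# The commenter's invented utility values (not a standard utility function)
U = {0: 0.0, 1_000_000: 1.0, 2_000_000: 1.1}

# B-1: utility of one 50/50 game, doubled
u_one_game = 0.5 * U[1_000_000] + 0.5 * U[0]
b1 = 2 * u_one_game

# B-2: expected utility over the four equally likely two-game outcomes
b2 = 0.25 * U[0] + 0.25 * U[1_000_000] + 0.25 * U[1_000_000] + 0.25 * U[2_000_000]

print(b1)  # 1.0
print(b2)  # 0.775 -- the two methods disagree once utility is non-linear
```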

Starting with the assumption that B-1 = B-2 (with U[$1'000'000] = 1 and U[$2'000'000] = 1.1):

2·{50%·U[$1'000'000] + 50%·U[$0]} = 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000]

Solving for U[$0]:

2·{50%·1 + 50%·U[$0]} = 25%·U[$0] + 25%·1 + 25%·1 + 25%·1.1
1 + U[$0] = 0.25·U[$0] + 0.775
0.75·U[$0] = -0.225
U[$0] = -0.3

B-1 = B-2 = 0.7

Intuitively this kind of makes sense. Comparing:

A: 100%·U[$1'000'000] = 50%·U[$1'000'000] + 50%·U[$1'000'000]
B: 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000] = 50%·U[$1'000'000] + 25%·U[$0] + 25%·U[$2'000'000]

A (=/>/<)? B
50%·U[$1'000'000] + 50%·U[$1'000'000] (=/>/<)? 50%·U[$1'000'000] + 25%·U[$0] + 25%·U[$2'000'000]

The first 50%·U[$1'000'000] is the same on both sides, so it cancels out:

50%·U[$1'000'000] (=/>/<)? 25%·U[$0] + 25%·U[$2'000'000]
0.5 > 0.2

The chance to win 2 million doesn't outweigh how much it would suck to win nothing, so the certainty of 1 million is preferable. The negative utility of U[$0] is absorbed by its 0% probability coefficient in A. Or maybe calculation B-1 is just plain wrong, but that would mean we cannot calculate the utility of discrete events and add the utilities up.
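The algebra above can be verified numerically (same invented utility values as before; all results up to floating-point rounding):

```python
# Solve 2*(0.5*1 + 0.5*x) = 0.25*x + 0.25*1 + 0.25*1 + 0.25*1.1 for x = U[$0],
# i.e. 1 + x = 0.25*x + 0.775  ->  0.75*x = -0.225
x = -0.225 / 0.75
print(x)  # -0.3 (up to float rounding)

# With U[$0] = -0.3 the two calculations agree:
b1 = 2 * (0.5 * 1 + 0.5 * x)
b2 = 0.25 * x + 0.25 * 1 + 0.25 * 1 + 0.25 * 1.1
print(b1, b2)  # both approximately 0.7
```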

Is any of this correct? What kind of calculations would you do? A bird in the hand is indeed worth 2 in the bush.