Sorry for the typo in my example. Of course I meant to say that 3A was 100% at $24K, and 3B was 50% @ $26K and 50% @ $22K. The whole point was for the math to come out with the same expected value of $24K; 3B just has more volatility. But I think everyone got my intent despite the typo.

Eliezer of course jumped right to the key point, which is the (unrealistic) assumption of linear utility. I was going to log in this morning and suggest that the financial advice of "always get paid for accepting volatility" and/or "whenever you can reduce volatility while maintaining expected value, do so" is really a rule-of-thumb summary of common human utility functions. Which is basically what Eliezer suggested in the addendum: that log utility + Bayes yields the same financial advice.

The example I was going to suggest this morning, from investment theory, is diversification. If you invest in a single stock that historically returns 10% annually, but sometimes -20% and sometimes +40%, it is "better" to instead invest 1/10 of your assets in each of 10 such (uncorrelated) stocks. The expected return doesn't change: it's still 10% annually. But the volatility drops way down. The basket bunches all the probability mass up around the expected return, whereas with a single stock the probabilities are far more spread out.
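The volatility drop can be sketched with toy numbers (assumptions: each stock returns -20% or +40% with equal probability, so the mean is 10%; the stocks are uncorrelated, so the standard deviation of the basket's average return shrinks by a factor of sqrt(n)):

```python
import math

# Toy single-stock return distribution (assumption): -20% or +40%, equally likely
outcomes = [-0.20, 0.40]
mean = sum(outcomes) / len(outcomes)                              # expected return: 0.10
variance = sum((r - mean) ** 2 for r in outcomes) / len(outcomes)
sd_single = math.sqrt(variance)                                   # volatility of one stock: 0.30

# Averaging 10 uncorrelated such stocks keeps the mean but shrinks the spread
n = 10
sd_basket = sd_single / math.sqrt(n)                              # ~0.095

print(mean, sd_single, sd_basket)
```

Same 10% expected return either way; the basket's volatility is roughly a third of the single stock's.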

But probably you can get to this same conclusion with log utilities and Bayes.

My final example this morning was going to be about how you can use confidence to make further decisions in between the time you accept the bet and the time you get the payoff. This is true, for example, for tech managers trying to ship a software project. How reliable a programmer's estimates are matters far more than what their average productivity is. The overall business can only plan (marketing, sales, retail, etc.) around the reliable parts, so the utility the business derives from volatile productivity is vastly lower.

But again, Eliezer anticipates my objection with his point #3 in the comments, about taking out a loan today and being confident that you can pay it back in five years.

My only remaining question, then, is: aren't "the opportunities to take advance preparations" sufficient to resolve the original Allais Paradox, even for the naive bettors who choose the "irrational" 1A/2B combination?

Allais Malaise

by Eliezer Yudkowsky · 1 min read · 21st Jan 2008 · 38 comments


Continuation of: The Allais Paradox, Zut Allais!

Judging by the comments on Zut Allais, I failed to emphasize the points that needed emphasis.

The problem with the Allais Paradox is the incoherent pattern 1A > 1B, 2B > 2A.  If you need $24,000 for a lifesaving operation and an extra $3,000 won't help that much, then you choose 1A > 1B and 2A > 2B.  If you have a million dollars in your bank account and your utility curve doesn't change much with an extra $25,000 or so, then you should choose 1B > 1A and 2B > 2A.  Neither the individual choice 1A > 1B, nor the individual choice 2B > 2A, is of itself irrational.  It's the combination that's the problem.

Expected utility is not expected dollars.  In the case above, the utility-distance from $24,000 to $27,000 is a tiny fraction of the distance from $21,000 to $24,000.  So, as stated, you should choose 1A > 1B and 2A > 2B, a quite coherent combination.  The Allais Paradox has nothing to do with believing that every added dollar is equally useful.  That idea has been rejected since the dawn of decision theory.
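The lifesaving-operation case can be made concrete with a toy step utility (an assumption for illustration: $24,000 covers the operation, and money beyond that adds nothing), applied to the gambles from the earlier Allais Paradox post:

```python
# Toy utility (assumption): $24,000 pays for the lifesaving operation;
# money beyond that adds nothing.
def u(dollars):
    return 1.0 if dollars >= 24000 else 0.0

def expected_utility(gamble):
    # gamble is a list of (probability, payoff) pairs
    return sum(p * u(x) for p, x in gamble)

# Gambles from the original Allais setup
g1a = [(1.0, 24000)]                  # certainty of $24K
g1b = [(33/34, 27000), (1/34, 0)]     # 33/34 chance of $27K
g2a = [(0.34, 24000), (0.66, 0)]      # 34% chance of $24K
g2b = [(0.33, 27000), (0.67, 0)]      # 33% chance of $27K

# This agent coherently prefers 1A > 1B and 2A > 2B
assert expected_utility(g1a) > expected_utility(g1b)
assert expected_utility(g2a) > expected_utility(g2b)
```

The same concave utility drives both choices, so the pattern 1A > 1B, 2A > 2B is coherent; no utility function produces 1A > 1B together with 2B > 2A.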

If satisfying your intuitions is more important to you than money, do whatever the heck you want.  Drop the money over Niagara Falls.  Blow it all on expensive champagne.  Set fire to your hair.  Whatever.  If the largest utility you care about is the utility of feeling good about your decision, then any decision that feels good is the right one.  If you say that different trajectories to the same outcome "matter emotionally", then you're attaching an inherent utility to conforming to the brain's native method of optimization, whether or not it actually optimizes.  Heck, running around in circles from preference reversals could feel really good too.  But if you care enough about the stakes that winning is more important than your brain's good feelings about an intuition-conforming strategy, then use decision theory.

If you suppose the problem is different from the one presented - that the gambles are untrustworthy and that, after this mistrust is taken into account, the payoff probabilities are not as described - then, obviously, you can make the answer anything you want.

Let's say you're dying of thirst, you only have $1.00, and you have to choose between a vending machine that dispenses a drink with certainty for $0.90, versus spending $0.75 on a vending machine that dispenses a drink with 99% probability.  Here, the 1% chance of dying is worth more to you than $0.15, so you would pay the extra fifteen cents.  You would also pay the extra fifteen cents if the two vending machines dispensed drinks with 75% probability and 74% probability respectively.  The 1% probability is worth the same amount whether or not it's the last increment towards certainty.  This pattern of decisions is perfectly coherent.  Don't confuse being rational with being shortsighted or greedy.
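The claim that a 1% increment is worth the same everywhere follows from expected utility being linear in probability. A minimal sketch, with toy utilities assumed (surviving = 1, dying = 0):

```python
# Toy utilities (assumptions): surviving = 1, dying = 0.
u_live, u_die = 1.0, 0.0

def eu(p_drink):
    # Expected utility is linear in the probability of getting the drink
    return p_drink * u_live + (1 - p_drink) * u_die

gain_at_certainty = eu(1.00) - eu(0.99)   # the last increment towards certainty
gain_in_middle    = eu(0.75) - eu(0.74)   # the same-sized increment elsewhere

# The +1% of survival probability buys the same utility in both cases
assert abs(gain_at_certainty - gain_in_middle) < 1e-9
```

So an agent who pays $0.15 for the 99%-to-100% jump should also pay $0.15 for the 74%-to-75% jump; valuing "certainty" specially is where the incoherence creeps in.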

Added:  A 50% probability of $30K and a 50% probability of $20K, is not the same as a 50% probability of $26K and a 50% probability of $24K.  If your utility is logarithmic in money (the standard assumption) then you will definitely prefer the latter to the former:  0.5 log(30) + 0.5 log(20)  <  0.5 log(26) + 0.5 log(24).  You take the expectation of the utility of the money, not the utility of the expectation of the money.
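The inequality in the addendum checks out numerically (working in units of $1,000, as the post does):

```python
import math

# Expected log-utility of each gamble, in units of $1,000
eu_spread  = 0.5 * math.log(30) + 0.5 * math.log(20)   # 50% $30K / 50% $20K
eu_tighter = 0.5 * math.log(26) + 0.5 * math.log(24)   # 50% $26K / 50% $24K

# Equivalently: 0.5*log(30*20) < 0.5*log(26*24), i.e. log(600) < log(624)
assert eu_spread < eu_tighter
```

Both gambles have the same expected dollars ($25K), but the log-utility agent prefers the tighter one, since 30 × 20 = 600 < 624 = 26 × 24.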