Yes, philosophers and others often accept the advice of strong intuitions too easily, forgetting that strong intuitions often conflict in non-obvious ways.
The idea is that the dollar amount equals your utility, while in reality the history of how you got that amount also matters (regret, emotions, etc.).
There's no paradox here, as your utility expressed in dollars just doesn't match the utility of the subjects. As for the money pump: you have a win-win situation; you earn money, and the subjects earn good feelings.
If I knew the offer wouldn't be repeated, I might take 1A because I'd really rather not have to explain to people how I lost $24,000 on a gamble.
Actually, that makes me think of another explanation besides overreaction to small probabilities: if a person takes 1B and loses, they know they would have won if they'd chosen differently. If they take 2B and lose, they can tell themselves (and others) they probably would have lost anyway.
Reread the post; that's not the paradox.
The paradox is that, if you need the 20k to survive, then you should prefer 2A to 2B, because the extra 3k 33% of the time doesn't outweigh an additional 1% chance of dying.
If someone prefers A in both cases, or B in both cases, they can have a consistent utility function. When someone prefers A in one case and B in the other, they cannot have a consistent utility function.
Risk and cost of capital introduce very strange twists on expected utility.
Assume that living has a greater expected utility to me than any monetary value. If I need a $20,000 operation within the next 3 hours to live, I have no other funding, and you make me offer 1, it is completely rational and unbiased to take option 1A. It is the difference between a 100% chance of living and a 97% chance of living.
If I have $1,000,000,000 in the bank and command of legal or otherwise armed forces, I may just have you killed - for I would not tolerate such frivolous philosophizing.
I think defending the subjects' choices by recourse to nonmonetary values misses the point. Anything can be rational with a sufficiently weird utility function. The question is, if subjects understood the decision theory behind the problem, would they still make the same choice? After seeing a valid argument that your preferences make you a money pump, you certainly could persist in your original judgment, by insisting that your feelings make your first judgment the right one.
But seriously?---why?
Since people only make a finite number of decisions in their lifetime, couldn't their utility function specify every decision independently? (You could have a utility function that is normal except that it says that anything you hear called 1A is preferable to anything called 1B, and anything you hear called 2B is preferable to anything called 2A. If this contradicts your normal utility function, this rule always wins. Even if 2B leads to death, you still choose 2B.)
The utility function would be impossible to come up with in advance, but it exists.
My intuitions match the stated naive intuitions, but I reject your assertion that the pair of preferences are inconsistent with Bayesian probability theory.
You really underestimate the utility of certainty. "Nainodelac and Tarleton Nick"'s example in these comments about the operation is a perfect counter.
With a 33% vs. 34% chance, the impact on your life is about the same, so you just do the straightforward probability calculation for expected value and take the maximum.
But when offered 100% of some positive outcome, vs. a probability of nothin...
Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B.
I don't understand why I would pay you a penny to throw the switch before 12:00.
Since I know myself, I know what I will do after 12:00 (pay to switch it to A), and so I resign myself to doing it immediately (i.e., leaving the switch at A) so as to save either one cent or two, depending on what happens. I will do this even if I share Don's intuition about certainty. Why pay before 12:00 to switch it to B if I know that after 12:00 I will pay to switch it back to A?
*[if the first die comes up 1 to 34]
I think I missed something on the algebraic inconsistency part...
If there is some rational independent utility to certainty, the algebraic claims should be more like this:

U($24,000) + U(Certainty) > 33/34 U($27,000) + 1/34 U($0)
0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)
This seems consistent so long as U(Certainty) > 1/34 U($27,000).
I'm not committed to the notion there is a rational independent value to certainty, I'm just not seeing how it can be dismissed with quick algebra. Maybe that wasn't your goal. Forgive me if this is my oversight.
This reminds me of the foolish decisions on "Deal or No Deal". People would fail to follow their own announced utility.
When we speak of an inherent utility of certainty, what do we mean by certainty? An actual probability of unity, or, more reasonably, something which is merely very much certain, like probability .999? If the latter, then there should exist a function expressing the "utility bonus for certainty" as a function of how certain we are. It's not immediately obvious to me how such a function should behave. If probability 0.9999 is very much more preferable to probability 0.8999 than probability 0.5 is preferable to probability 0.4, then is 0.5 very much more preferable to 0.4 than 0.2 is to 0.1?
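One concrete candidate for such a function is the inverse-S probability weighting curve from the prospect theory literature (Kahneman and Tversky, referenced below); the one-parameter form and the gamma = 0.61 estimate in this sketch come from Tversky and Kahneman's later work, and are assumptions for illustration only:

    # Sketch: an inverse-S probability weighting function. The functional form
    # and gamma = 0.61 are standard estimates from the prospect theory
    # literature, assumed here purely to see how a "certainty bonus" behaves.
    def w(p, gamma=0.61):
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    for hi, lo in [(0.9999, 0.8999), (0.5, 0.4), (0.2, 0.1)]:
        print(f"w({hi}) - w({lo}) = {w(hi) - w(lo):.4f}")
    # ~0.28, ~0.05, ~0.07: the premium is huge near certainty, shrinks in the
    # mid-range, then grows again at small probabilities -- not monotone.

On this family of curves the answer to the question above is "no": the jump from 0.4 to 0.5 is worth less than the jump from 0.1 to 0.2, because small probabilities are overweighted too.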
It's rational to take the certain outcome if gambling causes psychological stress. Besides being intrinsically unpleasant, stress increases your risk of peptic ulcers and stroke, which could easily cancel out the expected gain.
If you crunch the numbers differently, you can come to different conclusions. For example, if I choose 1B over 1A, I have a 1 in 34 chance of getting burned. If I choose 2B over 2A, my chance of getting burned is only 1 in 100.
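A quick sketch of that arithmetic (assuming, for the comparison, that both gambles in each choice are resolved by the same underlying die, so "getting burned" means the choice you passed up would have paid when yours didn't):

    # "Getting burned" = winning nothing when the other choice would have
    # paid, under a shared-die coupling of each pair of gambles.
    p_burn_1 = 1 / 34        # 1B fails exactly where 1A was a sure thing
    p_burn_2 = 0.34 - 0.33   # 2B misses only the one tick where 2A hits
    print(f"burned by taking 1B over 1A: {p_burn_1:.4f}")  # ~0.0294
    print(f"burned by taking 2B over 2A: {p_burn_2:.4f}")  # 0.0100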
James D. Miller has a proposal for Lottery Tickets that Usually Pay Off.
Robin, were you thinking of a certain colleague of yours when you mentioned accepting intuition too readily?
Risk aversion, and the degree to which it is felt, is a personality trait with high variance between individuals and over the lifespan. To ignore it in a utility calculation would be absurd. Maurice Allais should have listened to his namesake Alphonse Allais (no apparent relation), humorist and theoretician of the absurd, who famously remarked "La logique mène à tout, à condition d'en sortir": logic leads to everything, provided you can get out of it.
I confess, the money pump thing sometimes strikes me as ... well... contrived. Yes, in theory, if one's preferences violate various rules of rationality (acyclicity being the easiest), one could conceivably be money-pumped. But, uh, it never actually happens in the real world. Our preferences, once they violate idealized axioms, lead to messes in highly unrealistic situations. Big deal.
I am intuitively certain that I'm being money-pumped all the time. And I'm very, very certain that transaction costs of many forms money-pump people left and right.
As long as it was only one occasion, I wouldn't make the effort to cross the room for two pennies. If I'm playing the game just once, and a one-off payment of 2p feels like nothing, I'll play with you, sure. £1 for a lottery ticket crosses the threshold of palpability, even playing once. I can get a newspaper for a pound. Is this irrational? I hope not.
When I made the (predictable, wrong) choice, I wasn't using probability at all. I was using intuitive rules of thumb like: "don't gamble", "treat small differences in probability as unimportant", and "if you have to gamble against similar odds, go for the larger win".
How do you find time to use authentic probability math for all your chance-taking decisions?
The large sums of money make a big difference here. If it were for dollars, rather than thousands of dollars, I'd do what utility theory told me to do, and if that meant I missed out on $27 due to a very unlucky chance then so be it. But I don't think I could bring myself to do the same for life-changing amounts like those set out above; I would kick myself so hard if I took the very slightly riskier bet and didn't get the money.
My experience of watching game shows such as 'Deal or No Deal' suggests that people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it, as if it would make their life worse than before they were selected to appear on the show. It seems this fear is in some sense inversely proportional to the 'socially expected' probability of the bad event - so if the player is aware that very few players win less than £1 on the show, they start getting very uncomfortable if there is a high chance of this happening to them...
People don't maximize expectations. Expectation-maximizing organisms -- if they ever existed -- died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection.
If people's behavior doesn't agree with the axiom system, the fault may not be with them; perhaps they know something the mathematician doesn't.
Finally, the 'money pump' argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is if you asked people to make the original choices a million times, they would, correctly, maximize expectations. Because when you are talking about a million tries, expectations are the appropriate framework. When you are talking about 1 try, they are not.
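The one-shot/iterated gap is easy to quantify; here is a minimal sketch using a normal approximation (the million-play figure is from the comment above, the rest is my arithmetic):

    import math

    # Over n independent plays, how likely is repeated 1B to fall behind
    # repeated 1A? Normal approximation to the total of n draws of 1B.
    n = 1_000_000
    p, prize = 33 / 34, 27_000
    edge = n * (p * prize - 24_000)           # expected advantage of 1B
    sd = math.sqrt(n * p * (1 - p)) * prize   # sd of 1B's total payout
    print(f"edge ${edge:,.0f}, sd ${sd:,.0f}, z = {edge / sd:.0f}")
    # z is in the hundreds of standard deviations: over a million tries,
    # maximizing expectation is effectively a sure thing.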
tcpkac: no one is assuming away risk aversion. Choosing 1A and 2B is irrational regardless of your level of risk aversion.
Constant's response implies that if someone prefers 1A to 1B and 2B to 2A, when confronted with the money pump situation, the person will decide that after all, 1A is preferable to 1B and 2A is preferable to 2B. This is very strange but at least consistent.
"Nainodelac and Tarleton Nick", why are you using my (reversed) name?
steven: not if you're nonlinearly risk averse. As many have suggested, what if you take a large one-time utility hit for taking any risk, but you're not averse beyond that?
Choosing 1A and 2B is irrational regardless of your level of risk aversion.
No, only if the utility of avoiding risk is worth less than the money at risk. Duh.
Your description is not a money pump. A money pump occurs when you prefer A > B and B > C and C > A. Then someone can trade you in a round robin taking a little out for themselves each cycle. I don't feel like typing in an illustration, so see Robyn Dawes, Rational Choice in an Uncertain World.
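For readers without Dawes to hand, a minimal sketch of the round-robin (the one-cent fee per trade is my invented illustration):

    # Money pump on cyclic preferences A > B > C > A: the agent pays a small
    # fee for each "upgrade" and ends up back where it started, poorer.
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, over)

    def pump(holding, trades=9, fee=0.01):
        paid = 0.0
        for _ in range(trades):
            holding = next(x for (x, y) in prefers if y == holding)
            paid += fee
        return holding, paid

    item, total = pump("A")
    print(f"agent holds {item} again, down ${total:.2f}")  # A again, $0.09 poorer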
There is a significant difference between single and iterative situations. For a single play I would prefer 1A to 1B and 2B to 2A. If it were repeated, especially open-endedly, I would prefer 1B to 1A for its slightly greater expected payoff. This is analogous, I think, to the iterated versus one-time prisoner's dilemma; see Axelrod's Evolution of Cooperation for an interesting discussion of how they differ.
How trustworthy is the randomizer?
I'd pick B in both situations if it seemed likely that the offer were trustworthy. But in many cases, I'd give some chance of foul play, and it's FAR easier for an opponent to weasel out of paying if there's an apparently-random part of the wager. Someone says "I'll pay you $24k", it's reasonably clear. They say "I'll pay you $27k unless these dice roll snake eyes" and I'm going to expect much worse odds than 35/36 that I'll actually get paid.
So for 1A > 1B, this may be based on expectation of cheating. For 2A < 2B, both choices are roughly equally amenable to cheating, so you may as well maximize your expectation.
It seems likely that this kind of thinking is unconscious in most people, and therefore gets applied in situations where it's not relevant (like where you CAN actually trust the probabilities). But it's not automatically irrational.
It seems to me that your argument relies on the utility of having a probability p of gaining x being equal to p times the utility of gaining x. It's not clear to me that this should be true.
The trouble with the "money pump" argument is that the choice one makes may well depend on how one got into the situation of having the choice in the first place. For example, let's assume someone prefer 2B over 2A. It could be that if he were offered choice 1 "out of the blue" he would prefer 1A over 1B, yet if it were announced in advance that he w...
No, only if the utility of avoiding risk is worth less than the money at risk. Duh.
Someone did not read the OP carefully enough.
Hint: re-read the definition of the Axiom of Independence.
Someone isn't thinking carefully enough.
Hint: I did not assert that X is strictly preferred to Y.
Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility, and utility is a function of money that increases slower than linearly. When an agent doesn't maximize expected utility at all, that's something different.
Do you really want to say that it can be rational to accept a 1/3 chance of participating in a lottery, already knowing that if you got to participate you would change your mind? Risk aversion is (or at least can be) a matter of taste; this is just a matter of not being stupid.
Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility
Oh, I agree.
I just measure utility differently than you do.
Caledonian, if utility is any function defined on amounts of money, then if you are maximizing expected utility, you cannot fall prey to the Allais paradox. You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.
you're violating rationality axioms like the one Eliezer gave in the OP
No. Those axioms are "if => then" statements. I'm violating the "if" part.
Nainodelac, if you prefer 1A to 1B and 2A to 2B, as you should if you need exactly $24,000 to save your life, that is a perfectly consistent preference pattern.
You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.
Having a utility function determined by anything other than amounts of money is irrational? WTF?
Upon rereading the thread and all of its comments, I suspect the person I originally quoted meant something along the lines of "preferring 1A to 1B but 2B to 2A is irrational", which seems more defensible.
There is nothing irrational about preferring 1A and 2B by themselves, it's choosing the first option in the first scenario and the second in the second that's dodgy.
Nick is right to object, but removing the phrase "on amounts of money" makes the statement unobjectionable -- and relevant and true.
This may be related to the phenomenon of overconfident probability estimates. I would not be surprised to find that people who claim a 97% certainty have a real 90% probability of being right. Maybe someone who hears there's 1 chance in 34 of winning nothing interprets that as coming from an overconfident estimator whereas the 34% and 33% probabilities are taken at face value.
On the other hand, the overconfidence detector seems to stop working when faced with asserted certainty.
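That model is easy to make concrete. A minimal sketch, deflating every non-certain stated probability by a uniform factor while exempting asserted certainty (the 0.9 factor and the uniform rule are my assumptions, a simpler variant of the comment's):

    # Treat non-certain stated probabilities as overconfident; take certainty
    # at face value (per the "detector stops working" observation above).
    def perceived(p, deflate=0.9):
        return p if p == 1.0 else deflate * p

    def ev(p, payout):
        return perceived(p) * payout

    print(round(ev(1.0, 24_000)), round(ev(33 / 34, 27_000)))  # 24000 23585 -> 1A
    print(round(ev(0.34, 24_000)), round(ev(0.33, 27_000)))    # 7344 8019   -> 2B
    # A single deflation factor reproduces the Allais pattern exactly.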
"Nainodelac and Tarleton Nick": This is not about risk aversion. I agree that if it is vital to gain at least $20,000, 1A is a superior choice to 1B. However, in that case, 2A is also a superior choice to 2B. The error is not in preferring 1A, but in simultaneously preferring 1A and 2B.
Is Pascal's Mugging the reductio ad absurdum of expected value?
No. I thought it might be! But Robin gave an excellent reason why we should genuinely penalize the probability by a proportional amount, dragging the expected value back down to negligibility.
(This may be the first time that I have presented an FAI question that stumped me, and it was solved by an economist. Which is actually a very encouraging sign.)
This discussion reminded me of the Torture vs. Dust Specks discussion; i.e., in that discussion, many comments, perhaps a majority, amounted to "I feel like choosing Dust Specks, so that's what I choose, and I don't care about anything else." In the same way, there is a perfectly consistent utility function that can prefer 1A to 1B and 2B to 2A, namely one that sets utility on "feeling that I have made the right choice", and which does not set utility on money or anything else. Both in this case and in the case of the Torture and Dust Sp...
Not sure if anyone pointed this out, but in a situation where you don't trust the organizer, the proper execution of 1A is a lot easier to verify than the proper execution of 1B, 2A and 2B.
1A minimizes your risk of being fooled by some hidden cleverness or violation of the contract. In 1B, 2A and 2B, if you lose, you have to verify that the random number generator is truly random. This can be extremely costly.
In option 1A, verification consists of checking your bank account and seeing that you gained $24,000. Straightforward and simple. Hardly any risk of being deceived.
I hate to discuss this again, but...
Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans.
It's simple to show that no rational person would actually give money to a Pascal mugger, as the next mugger might threaten 4^^^4 people. I'm not sure whether this solves the problem or just sweeps it under the rug, though.
Well, if Pascal's Mugging doesn't do it, how about the St. Petersburg paradox? ;)
Oh wait... infinite set atheist... never mind.
I'm afraid I don't follow the maths involved, but I'd like to know whether the equations work out differently if you take this premise:
- Since 1A offers a certainty of $24,000, it is deemed to be immediately in your possession. 1B then becomes a 33/34 chance of winning $3,000 and 1/34 chance of losing $24,000.
Can someone tell me how this works out mathematically, and how it then compares to 2B?
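A sketch of the arithmetic being asked for (my numbers; the key point is that shifting the baseline changes the framing but not the comparison):

    # Premise: the $24,000 from 1A is treated as already yours, so 1B becomes
    # 33/34 chance of +$3,000 and 1/34 chance of -$24,000 from that baseline.
    ev_1b_shifted = (33 / 34) * 3_000 + (1 / 34) * (-24_000)
    print(f"1B relative to pocketing 1A: {ev_1b_shifted:+,.2f}")  # +2,205.88

    # Identical to the unshifted difference EV(1B) - EV(1A):
    print(f"{(33 / 34) * 27_000 - 24_000:+,.2f}")                 # +2,205.88

    # 2B vs 2A for comparison (no certain option to treat as owned):
    print(f"{0.33 * 27_000 - 0.34 * 24_000:+,.2f}")               # +750.00

So the premise doesn't change the expected values at all; what it changes is that the 1/34 branch now reads as a loss of money already "owned", which is where loss aversion plausibly does its work.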
The Allais Paradox is indeed quite puzzling. Here are my thoughts:
0. Some commenters simply dismiss Bayesian reasoning. This doesn't solve the problem; it just strips us of any mathematical way to analyze it. On the other hand, the fact that the inconsistent choice seems OK does mean that the Bayesian way is missing something. Simply dismissing the inconsistent choice doesn't solve the problem either.
1. If I understand correctly, you argue that situation 1 can be turned into situation 2 by randomization. In other words, if you sell me situation 1,...
Nick,
"Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans."
The Porcine Mugging doesn't bypass the objection. Your estimates of the frequency of simulated people and pigs should be commensurably vast, and it is vastly unlikely that your simulation (out of many with intelligent beings) will be selected for an actual Porcine Mugging that will consume vast resources (enough to simulate vast numbers of humans). These things offset to get you workable calculations.
I would have chosen 1A and 2B, for the following reasons: Any sum of the order of $20,000 would revolutionize my personal circumstances. The likely payoff is enormous. Therefore, I'd pick 1A because I'd get such a sum guaranteed, rather than run the 3% risk (1B) of getting nothing at all. Whereas choice 2 is a gamble either way, so I am led to treat both options as qualitatively the same. But that's a mistake: if the value of getting either nonzero payoff at all is so great, then I should have favored the 34% chance of winning something over the 33% chance, just as I favored the 100% chance over the ~97% chance in choice 1. Interesting.
Surely the answer is dependent on the goal criterion. If the goal is to get 'some' money, then the 100% option and the 34% option are better. If your goal is to get 'the most' money, then the 97% and the 33% options are better. However, the goal might be socially constructed. This reminded me of John Nash, who offered one of his secretaries $15 if she shared it equally with a co-worker, but $10 if she kept it for herself. She took the $15 and split it with her co-worker. She chose an option that maximised her social capital but was a weaker one economically.
I agree with Dagon.
This experiment assumes that the subjective probabilities of participants were identical to the stated probabilities. In reality, I feel like people are probably wary of stated probabilities due to experiences with, or fears of, shysters and conmen. That is, if asked to choose between 1A and 1B, 1B offers the possibility that the 'randomising mechanism' the experimenter is offering is in fact rigged.
Even if the experimenter is completely honest in their statement of their own subjective probabilities, they may simply disagree with t...
Eliezer, I see from this example that the Axiom of Independence is related to the notion of dynamic consistency. But the logical implication goes only one way. That is, the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with Dutch book/money pump arguments: there are many ways to avoid them besides being an expected utility maximizer.)
I'm afraid that the Axiom of Independence cannot really be justified as a basic principle of rationality. Von Neumann and Morgenstern probably came up with it because it was mathematically necessary to derive Expected Utility Theory, then they and others tried to justify it afterward because Expected Utility turned out to be such an elegant and useful idea. Has anyone seen Independence proposed as a principle of rationality prior to the invention of Expected Utility Theory?
Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational.
Choose between the following two options:

1A. $24,000, with certainty.
1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

Which seems more intuitively appealing? And which one would you choose in real life?

Next, choose between these two options:

2A. 34% chance of winning $24,000, and 66% chance of winning nothing.
2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

Now which of these two options would you intuitively prefer, and which would you choose in real life?
The Allais Paradox - as Allais called it, though it's not really a paradox - was one of the first conflicts between decision theory and human reasoning to be experimentally exposed, in 1953. I've modified it slightly for ease of math, but the essential problem is the same: Most people prefer 1A > 1B, and most people prefer 2B > 2A. Indeed, in within-subject comparisons, a majority of subjects express both preferences simultaneously.
This is a problem because the 2s are equal to a one-third chance of playing the 1s. That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.
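(The equivalence is just multiplication of probabilities; a minimal check:)

    from fractions import Fraction

    # A 34% chance of playing gamble 1 reduces exactly to gamble 2.
    p_play = Fraction(34, 100)
    print(p_play * 1)                 # P($24,000 via 1A) = 17/50 = 0.34 -> 2A
    print(p_play * Fraction(33, 34))  # P($27,000 via 1B) = 33/100 = 0.33 -> 2B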
Among the axioms used to prove that "consistent" decisionmakers can be viewed as maximizing expected utility, is the Axiom of Independence: If X is strictly preferred to Y, then a probability P of X and (1 - P) of Z should be strictly preferred to P chance of Y and (1 - P) chance of Z.
All the axioms are consequences, as well as antecedents, of a consistent utility function. So it must be possible to prove that the experimental subjects above can't have a consistent utility function over outcomes. And indeed, you can't simultaneously have:

U($24,000) > 33/34 U($27,000) + 1/34 U($0)
0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)
These two equations are algebraically inconsistent, regardless of U, so the Allais Paradox has nothing to do with the diminishing marginal utility of money.
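To make the inconsistency concrete, here is a brute-force sketch (sampling ranges are arbitrary; dividing the second inequality by 0.34 gives U($24,000) < 33/34 U($27,000) + 1/34 U($0), the exact negation of the first, so the search must come up empty whatever shape U takes):

    import random

    # Search for any utilities (u0, u24, u27) satisfying both Allais
    # preferences at once; the algebra above says none can exist.
    found = False
    for _ in range(1_000_000):
        u0, u24, u27 = (random.uniform(-1e6, 1e6) for _ in range(3))
        chose_1a = u24 > (33 / 34) * u27 + (1 / 34) * u0
        chose_2b = 0.34 * u24 + 0.66 * u0 < 0.33 * u27 + 0.67 * u0
        if chose_1a and chose_2b:
            found = True
            break
    print("consistent utility assignment found?", found)  # prints False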
Maurice Allais initially defended the revealed preferences of the experimental subjects - he saw the experiment as exposing a flaw in the conventional ideas of utility, rather than exposing a flaw in human psychology. This was 1953, after all, and the heuristics-and-biases movement wouldn't really get started for another two decades. Allais thought his experiment just showed that the Axiom of Independence clearly wasn't a good idea in real life.
(How naive, how foolish, how simplistic is Bayesian decision theory...)
Surely, the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance?
(I'm starting to think of this as "naive philosophical realism" - supposing that our intuitions directly expose truths about which strategies are wiser, as though it was a directly perceived fact that "1A is superior to 1B". Intuitions directly expose truths about human cognitive functions, and only indirectly expose (after we reflect on the cognitive functions themselves) truths about rationality.)
"But come now," you say, "is it really such a terrible thing, to depart from Bayesian beauty?" Okay, so the subjects didn't follow the neat little "independence axiom" espoused by the likes of von Neumann and Morgenstern. Yet who says that things must be neat and tidy?
Why fret about elegance, if it makes us take risks we don't want? Expected utility tells us that we ought to assign some kind of number to an outcome, and then multiply that value by the outcome's probability, add them up, etc. Okay, but why do we have to do that? Why not make up more palatable rules instead?
There is always a price for leaving the Bayesian Way. That's what coherence and uniqueness theorems are all about.
In this case, if an agent prefers 1A > 1B, and 2B > 2A, it introduces a form of preference reversal - a dynamic inconsistency in the agent's planning. You become a money pump.
Suppose that at 12:00PM I roll a hundred-sided die. If the die shows a number greater than 34, the game terminates. Otherwise, at 12:05PM I consult a switch with two settings, A and B. If the setting is A, I pay you $24,000. If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows "34", in which case I pay you nothing.
Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B. The die comes up 12. After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.
I have taken your two cents on the subject.
If you indulge your intuitions, and dismiss mere elegance as a pointless obsession with neatness, then don't be surprised when your pennies get taken from you...
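A minimal simulation of the pump just described (a sketch; the agent's two penny payments are hard-coded to match the stated preferences):

    import random

    # The switch game: pay a penny before noon to set the switch to B
    # (indulging 2B > 2A), then, if the game survives the first die roll,
    # pay another penny to set it back to A (indulging 1A > 1B).
    def pennies_paid_in_one_game():
        paid = 1                          # before 12:00PM: switch A -> B
        if random.randint(1, 100) <= 34:  # game continues
            paid += 1                     # before 12:05PM: switch B -> A
        return paid

    games = 100_000
    avg = sum(pennies_paid_in_one_game() for _ in range(games)) / games
    print(f"average pennies paid per game: {avg:.2f}")  # ~1.34, for nothing

The switching buys nothing: whenever the game is live, the switch ends up exactly where it started, two cents later.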
(I think the same failure to proportionally devalue the emotional impact of small probabilities is responsible for the lottery.)
Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine. Econometrica, 21, 503-46.
Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica, 47, 263-92.