Yes, philosophers, and others, do often too easily accept the advice of strong intuitions, forgetting that strong intuitions often conflict in non-obvious ways.
The idea is that the dollar amount equals your utility, while in reality the history of how you got that amount also matters (regret, emotions, etc.).
There's no paradox here: your utility expressed in dollars just doesn't match the utility of the subjects. As for the money pump, you have a win-win situation: you earn money, and the subjects earn good feelings.
If I knew the offer wouldn't be repeated, I might take 1A because I'd really rather not have to explain to people how I lost $24,000 on a gamble.
Actually, that makes me think of another explanation besides overreaction to small probabilities: if a person takes 1B and loses, they know they would have won if they'd chosen differently. If they take 2B and lose, they can tell themselves (and others) they probably would have lost anyway.
Reread the post; that's not the paradox.
The paradox is that, if you need the 20k to survive, then you should prefer 2A to 2B, because the extra 3k 33% of the time doesn't outweigh an additional 1% chance of dying.
If someone prefers A in both cases, and B in both cases, they can have a consistent utility function. When someone prefers A in one case, and B in another, then they cannot have a consistent utility function.
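Spelling out the algebra behind this (standard expected-utility reasoning, matching the inequalities in the original post): writing U for the utility function,

```latex
% Preferring 1A over 1B requires:
U(\$24{,}000) \;>\; \tfrac{33}{34}\,U(\$27{,}000) + \tfrac{1}{34}\,U(\$0)

% Preferring 2B over 2A requires:
0.33\,U(\$27{,}000) + 0.67\,U(\$0) \;>\; 0.34\,U(\$24{,}000) + 0.66\,U(\$0)

% Dividing the second inequality through by 0.34:
\tfrac{33}{34}\,U(\$27{,}000) + \tfrac{1}{34}\,U(\$0) \;>\; U(\$24{,}000)
```

The third line directly contradicts the first, so no single utility function over outcomes can rationalize both preferences at once.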
Risk and cost of capital introduce very strange twists on expected utility.
Assume that living has a greater expected utility to me than any monetary value. If I need a $20,000 operation within the next 3 hours to live, I have no other funding, and you make me offer 1, it is completely rational and unbiased to take option 1A. It is the difference between a 100% chance of living and a 97% chance of living.
If I have $1,000,000,000 in the bank and command of legal or otherwise armed forces, I may just have you killed - for I would not tolerate such frivolous philosophizing.
I think defenses of the subject's choices by recourse to nonmonetary values is missing the point. Anything can be rational with a sufficiently weird utility function. The question is, if subjects understood the decision theory behind the problem, would they still make the same choice? After seeing a valid argument that your preferences make you a money pump, you certainly could persist in your original judgment, by insisting that your feelings make your first judgment the right one.
But seriously: why?
Since people only make a finite number of decisions in their lifetime, couldn't their utility function specify every decision independently? (You could have a utility function that is normal except that it says that everything you hear being called 1A is preferable to 1B, and anything you hear being called 2B is preferable to 2A. If this contradicts your normal utility function, this rule is always more important. Even if 2B leads to death, you still choose 2B.)
The utility function would be impossible to come up with in advance, but it exists.
My intuitions match the stated naive intuitions, but I reject your assertion that the pair of preferences are inconsistent with Bayesian probability theory.
You really underestimate the utility of certainty. "Nainodelac and Tarleton Nick"'s example in these comments about the operation is a perfect counter.
With a 33% vs. 34% chance, the impact on your life is about the same, so you just do the straightforward probability calculation for expected value and take the maximum.
But when offered 100% of some positive outcome, vs. a probability of nothin...
Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B.
I don't understand: why would I pay you a penny to throw the switch before 12:00?
Since I know myself, I know what I will do after 12:00 PM (pay to switch it to A), and so I resign myself to doing it immediately (i.e., leaving the switch at A) so as to save either one cent or two, depending on what happens. I will do this even if I share Don's intuition about certainty. Why pay before 12:00 PM to switch it to B if I know that afterward I will pay to switch it back to A?
*[if the first die comes up 1 to 34]
I think I missed something on the algebraic inconsistency part...
If there is some rational independent utility to certainty, the algebraic claims should be more like this:
This seems consistent so long as U(Certainty) > 1/34 U($27,000).
I'm not committed to the notion there is a rational independent value to certainty, I'm just not seeing how it can be dismissed with quick algebra. Maybe that wasn't your goal. Forgive me if this is my oversight.
This reminds me of the foolish decisions on "deal or no deal". People would fail to follow their own announced utility.
When we speak of an inherent utility of certainty, what do we mean by certainty? An actual probability of unity, or, more reasonably, something which is merely very much certain, like probability .999? If the latter, then there should exist a function expressing the "utility bonus for certainty" as a function of how certain we are. It's not immediately obvious to me how such a function should behave. If probability 0.9999 is very much more preferable to probability 0.8999 than probability 0.5 is preferable to probability 0.4, then is 0.5 very much more preferable to 0.4 than 0.2 is to 0.1?
It's rational to take the certain outcome if gambling causes psychological stress. Notwithstanding that stress is intrinsically unpleasant, it increases your risk of peptic ulcers and stroke, which could easily cancel out the expected gain.
If you crunch the numbers differently, you can come to different conclusions. For example, if I choose 1B over 1A, I have a 1 in 34 chance of getting burned. If I choose 2B over 2A, my chance of getting burned is only 1 in 100.
James D. Miller has a proposal for Lottery Tickets that Usually Pay Off.
Robin, were you thinking of a certain colleague of yours when you mentioned accepting intuition too readily?
Risk aversion, and the degree to which it is felt, is a personality trait with high variance between individuals and over the lifespan. To ignore it in a utility calculation would be absurd. Maurice Allais should have listened to his homonym Alphonse Allais (no apparent relation), humorist and theoretician of the absurd, who famously remarked "La logique mène à tout, à condition d'en sortir": logic leads to everything, provided you know when to step outside it.
I confess, the money pump thing sometimes strikes me as ... well... contrived. Yes, in theory, if one's preferences violate various rules of rationality (acyclicity being the easiest), one could conceivably be money-pumped. But, uh, it never actually happens in the real world. Our preferences, once they violate idealized axioms, lead to messes in highly unrealistic situations. Big deal.
I am intuitively certain that I'm being money-pumped all the time. And I'm very, very certain that transaction costs of many forms money-pump people left and right.
As long as it was only one occasion, I wouldn't make the effort to cross the room for two pennies. If I'm playing the game just once, and I feel a one-off payment of 2p tends to zero, I'll play with you, sure. £1 for a lottery ticket crosses the threshold of palpability, even playing once. I can get a newspaper for a pound. Is this irrational? I hope not.
When I made the (predictable, wrong) choice, I wasn't using probability at all. I was using intuitive rules of thumb like: "don't gamble", "treat small differences in probability as unimportant", and "if you have to gamble against similar odds, go for the larger win".
How do you find time to use authentic probability math for all your chance-taking decisions?
The large sums of money make a big difference here. If it were for dollars, rather than thousands of dollars, I'd do what utility theory told me to do, and if that meant I missed out on $27 due to a very unlucky chance then so be it. But I don't think I could bring myself to do the same for life-changing amounts like those set out above; I would kick myself so hard if I took the very slightly riskier bet and didn't get the money.
My experience of watching game shows such as 'Deal or No Deal' suggests that people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it, as if it would make their life worse than before they were selected to appear on the show. It seems this fear is in some sense inversely proportional to the 'socially expected' probability of the bad event - so if the player is aware that very few players win less than £1 on the show, they start getting very uncomfortable if there is a high chance of this happening to them...
People don't maximize expectations. Expectation-maximizing organisms -- if they ever existed -- died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection.
If people's behavior doesn't agree with the axiom system, the fault may not be with them, perhaps they know something the mathematician doesn't.
Finally, the 'money pump' argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is if you asked people to make the original choices a million times, they would, correctly, maximize expectations. Because when you are talking about a million tries, expectations are the appropriate framework. When you are talking about 1 try, they are not.
tcpkac: no one is assuming away risk aversion. Choosing 1A and 2B is irrational regardless of your level of risk aversion.
Constant's response implies that if someone prefers 1A to 1B and 2B to 2A, when confronted with the money pump situation, the person will decide that after all, 1A is preferable to 1B and 2A is preferable to 2B. This is very strange but at least consistent.
"Nainodelac and Tarleton Nick", why are you using my (reversed) name?
steven: not if you're nonlinearly risk averse. As many have suggested, what if you take a large one-time utility hit for taking any risk, but you're not averse beyond that?
Choosing 1A and 2B is irrational regardless of your level of risk aversion.
No, only if the utility of avoiding risk is worth less than the money at risk. Duh.
Your description is not a money pump. A money pump occurs when you prefer A > B and B > C and C > A. Then someone can trade you in a round robin taking a little out for themselves each cycle. I don't feel like typing in an illustration, so see Robyn Dawes, Rational Choice in an Uncertain World.
There is a significant difference between single and iterative situations. For a single play I would prefer 1A to 1B and 2B to 2A. If it were repeated, especially open-endedly, I would prefer 1B to 1A for its slightly greater expected payoff. This is analogous, I think, to the iterated versus one-time prisoner's dilemma, see Axelrod's Evolution of Cooperation for an interesting discussion of how they differ.
How trustworthy is the randomizer?
I'd pick B in both situations if it seemed likely that the offer were trustworthy. But in many cases, I'd give some chance of foul play, and it's FAR easier for an opponent to weasel out of paying if there's an apparently-random part of the wager. Someone says "I'll pay you $24k", it's reasonably clear. They say "I'll pay you $27k unless these dice roll snake eyes" and I'm going to expect much worse odds than 35/36 that I'll actually get paid.
So for 1A > 1B, this may be based on expectation of cheating. For 2A < 2B, both choices are roughly equally amenable to cheating, so you may as well maximize your expectation.
It seems likely that this kind of thinking is unconscious in most people, and therefore gets applied in situations where it's not relevant (like where you CAN actually trust the probabilities). But it's not automatically irrational.
It seems to me that your argument relies on the utility of having a probability p of gaining x being equal to p times the utility of gaining x. It's not clear to me that this should be true.
The trouble with the "money pump" argument is that the choice one makes may well depend on how one got into the situation of having the choice in the first place. For example, let's assume someone prefers 2B over 2A. It could be that if he were offered choice 1 "out of the blue" he would prefer 1A over 1B, yet if it were announced in advance that he w...
No, only if the utility of avoiding risk is worth less than the money at risk. Duh.
Someone did not read the OP carefully enough.
Hint: re-read the definition of the Axiom of Independence.
Someone isn't thinking carefully enough.
Hint: I did not assert that X is strictly preferred to Y.
Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility, and utility is a function of money that increases slower than linearly. When an agent doesn't maximize expected utility at all, that's something different.
Do you really want to say that it can be rational to accept a 1/3 chance of participating in a lottery, already knowing that if you got to participate you would change your mind? Risk aversion is (or at least, can be) a matter of taste, this is just a matter of not being stupid.
Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility
Oh, I agree.
I just measure utility differently than you do.
Caledonian, if utility is any function defined on amounts of money, then if you are maximizing expected utility, you cannot fall prey to the Allais paradox. You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.
you're violating rationality axioms like the one Eliezer gave in the OP
No. Those axioms are "if => then" statements. I'm violating the "if" part.
Nainodelac, if you prefer 1A to 1B and 2A to 2B, as you should if you need exactly $24,000 to save your life, that is a perfectly consistent preference pattern.
You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.
Having a utility function determined by anything other than amounts of money is irrational? WTF?
Upon rereading the thread and all of its comments, I suspect the person I originally quoted meant something along the lines of "preferring 1A to 1B but 2B to 2A is irrational", which seems more defensible.
There is nothing irrational about preferring 1A and 2B by themselves, it's choosing the first option in the first scenario and the second in the second that's dodgy.
Nick is right to object, but removing the phrase "on amounts of money" makes the statement unobjectionable -- and relevant and true.
This may be related to the phenomenon of overconfident probability estimates. I would not be surprised to find that people who claim a 97% certainty have a real 90% probability of being right. Maybe someone who hears there's 1 chance in 34 of winning nothing interprets that as coming from an overconfident estimator whereas the 34% and 33% probabilities are taken at face value.
On the other hand, the overconfidence detector seems to stop working when faced with asserted certainty.
"Nainodelac and Tarleton Nick": This is not about risk aversion. I agree that if it is vital to gain at least $20,000, 1A is a superior choice to 1B. However, in that case, 2A is also a superior choice to 2B. The error is not in preferring 1A, but in simultaneously preferring 1A and 2B.
Is Pascal's Mugging the reductio ad absurdum of expected value?
No. I thought it might be! But Robin gave an excellent reason why we should genuinely penalize the probability by a proportional amount, dragging the expected value back down to negligibility.
(This may be the first time that I have presented an FAI question that stumped me, and it was solved by an economist. Which is actually a very encouraging sign.)
This discussion reminded me of the Torture vs. Dust Specks discussion; i.e. in that discussion, many comments, perhaps a majority, amounted to "I feel like choosing Dust Specks, so that's what I choose, and I don't care about anything else." In the same way, there is a perfectly consistent utility function that can prefer 1A to 1B and 2B to 2A, namely one that sets utility on "feeling that I have made the right choice", and which does not set utility on money or anything else. Both in this case and in the case of the Torture and Dust Sp...
Not sure if anyone pointed this out, but in a situation where you don't trust the organizer, the proper execution of 1A is a lot easier to verify than the proper execution of 1B, 2A and 2B.
1A minimizes your risk of being fooled by some hidden cleverness or violation of the contract. In 1B, 2A and 2B, if you lose, you have to verify that the random number generator is truly random. This can be extremely costly.
In option 1A, verification consists of checking your bank account and seeing that you gained $24,000. Straightforward and simple. Hardly any risk of being deceived.
I hate to discuss this again, but...
Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans.
It's simple to show that no rational person would actually give money to a Pascal mugger, as the next mugger might threaten 4^^^4 people. I'm not sure whether this solves the problem or just sweeps it under the rug, though.
Well, if Pascal's Mugging doesn't do it, how about the St. Petersburg paradox? ;)
Oh wait... infinite set atheist... never mind.
I'm afraid I don't follow the maths involved, but I'd like to know whether the equations work out differently if you take this premise:
- Since 1A offers a certainty of $24,000, it is deemed to be immediately in your possession. 1B then becomes a 33/34 chance of winning $3,000 and 1/34 chance of losing $24,000.
Can someone tell me how this works out mathematically, and how it then compares to 2B?
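One way to work out the reframed numbers (my arithmetic, not the original commenter's):

```python
# Reframe 1B relative to 1A by treating the certain $24,000 as already owned.
# Then 1B becomes: 33/34 chance of gaining $3,000, 1/34 chance of losing $24,000.
gain = (33 / 34) * 3_000      # expected gain  ~ $2,911.76
loss = (1 / 34) * 24_000      # expected loss  ~ $705.88
delta_1 = gain - loss         # ~ $2,205.88, still in favor of 1B

# The analogous margin for choice 2, with 2A's expected value as the baseline:
delta_2 = 0.33 * 27_000 - 0.34 * 24_000   # $8,910 - $8,160 = $750

print(delta_1, delta_2)
```

On expected value alone, the reframing changes nothing: 1B still comes out ahead by exactly the same margin as in the unreframed calculation ($26,205.88 - $24,000). Deeming the $24,000 "already in your possession" changes how a loss feels, not the arithmetic.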
The Allais Paradox is indeed quite puzzling. Here are my thoughts:
0. Some commenters simply dismiss Bayesian reasoning. This doesn't solve the problem, it just strips us of any mathematical way to analyze the problem. On the other hand, the fact that the inconsistent choice seems ok does mean that the Bayesian way is missing something. Simply dismissing the inconsistent choice doesn't solve the problem either.
1. If I understand correctly, you argue that situation 1 can be turned into situation 2 by randomization. In other words, if you sell me situation 1,...
Nick,
"Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans."
The Porcine Mugging doesn't bypass the objection. Your estimates of the frequency of simulated people and pigs should be commensurably vast, and it is vastly unlikely that your simulation (out of many with intelligent beings) will be selected for an actual Porcine Mugging that will consume vast resources (enough to simulate vast numbers of humans). These things offset to get you workable calculations.
I would have chosen 1A and 2B, for the following reasons: Any sum of the order of $20,000 would revolutionize my personal circumstances. The likely payoff is enormous. Therefore, I'd pick 1A because I'd get such a sum guaranteed, rather than run the 3% risk (1B) of getting nothing at all. Whereas choice 2 is a gamble either way, so I am led to treat both options as qualitatively the same. But that's a mistake: if the value of getting either nonzero payoff at all is so great, then I should have favored the 34% chance of winning something over the 33% chance, just as I favored the 100% chance over the ~97% chance in choice 1. Interesting.
Surely the answer is dependent on the goal criterion. If the goal is to get 'some' money, then the 100% option and the 34% option are better. If your goal is to get 'the most' money, then the 97% and the 33% options are better. However, the goal might be socially constructed. This reminded me of John Nash, who offered one of his secretaries $15 if she shared it equally with a co-worker, but $10 if she kept it for herself. She took the $15 and split it with her co-worker. She chose the option that maximised her social capital but was the weaker one economically.
I agree with Dagon.
This experiment assumes that the subjective probabilities of participants were identical to the stated probabilities. In reality, I feel like people are probably wary of stated probabilities due to experiences with or fears of shysters and conmen. That is, if asked to choose between 1A and 1B, 1B offers the possibility that the 'randomising mechanism' the experimenter is offering is in fact rigged.
Even if the experimenter is completely honest in their statement of their own subjective probabilities, they may simply disagree with t...
Eliezer, I see from this example that the Axiom of Independence is related to the notion of dynamic consistency. But, the logical implication goes only one way. That is, the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with dutch book/money pump arguments, there are many ways to avoid them besides being an expected utility maximizer.)
I'm afraid that the Axiom of Independence cannot really be justified as a basic principle of rationality. Von Neumann and Morgenstern probably came up with it because it was mathematically necessary to derive Expected Utility Theory, then they and others tried to justify it afterward because Expected Utility turned out to be such an elegant and useful idea. Has anyone seen Independence proposed as a principle of rationality prior to the invention of Expected Utility Theory?
Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational.
"That is, the Axiom of Independence implies dynamic consistency, but not vice versa."
Really? A hyperbolic discounter can conform to the Axiom of Independence at any particular time and be dynamically inconsistent.
I would love to know if the results are different if you repeatedly expose people to the situation rather than communicate it in a formal way. They are likely to observe the outcomes of their strategy and adapt. Perhaps what is being measured is simply the numeracy of the subjects and not their practical inability to determine optimal strategies.
The lottery is another interesting example, what is being bought is the probability of a big win, not a statistically optimal investment. Playing the lottery genuinely increases the chance of you suddenly gaining a life changing amount of money. This is a perfectly rational choice.
Ummm, no. The money pump fails because of the REASON for the preference difference.
The reason is, as some have already stated, that in scenario 1B if you lose you know it's your fault you got nothing. In scenario 2B if you lose, you can rationalise it easily as "Would have lost anyway"
In your money pump scenario, we have a 1/3rd chance of playing 1. If we get to play 1, we know we're playing 1. So your money pump fails, because a standard player would prefer that the switch be on A at all times.
How do I alleviate feeling pleased with myself for having read the statement of the paradox - that people preferred 1A>1B but 2B>2A - and immediately going "WHAT?" and boggling at the screen and pulling confused faces for about thirty seconds, so flabbergasted I had to reread that this choice pattern was common?
(Personally I'm really strongly biased these days toward a bird in the hand and would have chosen 1A and 2A every time. I occasionally do bits of sysadmin for dodgy dot-coms that friends are working for. There are people who offer equi...
It took me 30 minutes of sitting down and doing math before I could finally accept that 1A+2B was an irrational preference. I finally realized that a lot of it came down to: with a 66% vs 67% chance of losing, I could take the riskier option and not feel as bad, because I could sweep it under the rug with "oh, I probably would have lost anyways."
Once I ran a scenario where I'd KNOW whether it was that 1% that I controlled, or the 66% that I didn't control, that comfort evaporated.
I learned a lot about myself by working through this exercise, so thank you very much :)
The problem as stated is hypothetical: there is next to no context, and it is assumed that the utility scales with the monetary reward. Once you confront real people with this offer, the context expands, and the analysis of the hypothetical situation falls short of being an adequate representation of reality, not necessarily because of a fault of the real people.
Many real people use a strategy of "don't gamble with money you cannot afford to lose"; this is overall a pretty successful strategy (and if I was looking to make some money, my mark woul...
I wonder how the results would change if the experiment changes so that the outcomes of 2B are, "You have a 33% chance of receiving $27k, a 66% chance of not getting anything, and a 1% chance of having someone laugh in your face for not picking 2A"
If you'd ask any person capable of doing the math whether they would want to play 1A or 1B a thousand times you'd probably get a different answer, but not an answer that's more correct.
Also, the utility value of money is not directly proportional to the amount of money. Imagine that you would need $1,000 to save your dying relative with certainty by paying for his/her treatment. Good enough for explaining 1A > 1B, but it doesn't resolve the contradiction with 2B > 2A.
But even a more revealing edit is based exactly onto the certainty. If yo...
Please correct me if any of my assumptions are inaccurate, and I apologize if this comment comes off as completely tautological.
Expected utility is explicitly defined as the statistic
E[U] = Σ_{x ∈ X} p(x) U(x)
where X is the set of all possible outcomes associated with a particular gamble, p(x) is the proportion of times that outcome x occurs within the gamble, and U(x) is the utility of outcome x, a function that must be strictly increasing with respect to the monetary value of outcome x.
To reduce ambiguity:
1A, 1B, 2A, and 2B are instances of gambles.
For 1B, the possible o
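The definition above can be implemented directly; a minimal sketch, with a placeholder linear U (the comment leaves the actual form of U(x) unspecified):

```python
def expected_utility(gamble, U):
    """gamble: list of (probability, monetary outcome) pairs; returns sum of p(x)*U(x)."""
    return sum(p * U(x) for p, x in gamble)

U = lambda x: x  # placeholder: risk-neutral (linear) utility

# The four gambles from the original post
gambles = {
    "1A": [(1.0, 24_000)],
    "1B": [(33 / 34, 27_000), (1 / 34, 0)],
    "2A": [(0.34, 24_000), (0.66, 0)],
    "2B": [(0.33, 27_000), (0.67, 0)],
}
for name, g in gambles.items():
    print(name, round(expected_utility(g, U), 2))
```

Any strictly increasing U can be swapped in; the paradox is that no such U simultaneously ranks 1A above 1B and 2B above 2A.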
I initially chose 1A and 2B, but after reading the analysis of those decisions, I agree that they are inconsistent in a way that implies that one choice was irrational (in the context of this silly little game). So I did some introspection to figure out where I went wrong. Here's what I found:
1) I may have misjudged how small 1/34 is, and this only became apparent when the question was phrased as it is in example 2.
2) I think I assumed implicit costs in these gambles. The first cost is a delay in learning the outcome of these gambles; the second is the i...
While Eliezer's argument is still correct (that you should multiply to make decisions based on probabilistic knowledge), I see a perfectly rational and utilitarian explanation for choosing 1A and 2B in the stated problem.
The clue lies in Colin Reid's comment: "people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it". This fear is explained by Kingreaper: "in scenario 1B if you lose you know it's your fault you got nothing".
That makes the two cases, stated as they are, different. In gam...
I know this was posted 4 years ago, but I had a thought. If I was offered a certainty of $24,000 vs a 33/34 chance of $27,000, my preference would depend on whether this was a once-off. If this was a once-off, my primary concern would be securing the money and being able to put food on the table tonight. Option 1 will put food on the table with 100% certainty, while Option 2 will not.
If, however, the option was to be offered many times, I would optimise for greatest return - Option 2. If I miss out this month, I'll just scrape for food until next month, wh...
This is an old post, but I guess one resolution is that:
U($24,000) > 33/34 U($27,000) + 1/34 U($0 & Regret that I didn't take the $24000)
Which is consistent with:
0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)
It's an interesting psychological fact that the regret is triggered in one case, but not the other.
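This resolution can be checked with concrete numbers (illustrative values of my own choosing, not the commenter's): take U($24,000) = 24, U($27,000) = 27, U($0) = 0, and a large negative utility for the regret-laden zero, say U($0 & Regret) = -100.

```python
U24, U27, U0, U0_regret = 24.0, 27.0, 0.0, -100.0

# Choice 1: certainty vs. a gamble whose failure triggers regret
ev_1A = U24
ev_1B = (33 / 34) * U27 + (1 / 34) * U0_regret   # ~26.21 - 2.94 = ~23.26

# Choice 2: both options are gambles; losing triggers no special regret
ev_2A = 0.34 * U24 + 0.66 * U0   # 8.16
ev_2B = 0.33 * U27 + 0.67 * U0   # 8.91

print(ev_1A > ev_1B, ev_2B > ev_2A)   # both True: 1A > 1B and 2B > 2A
```

So the pattern is consistent once regret enters the outcome description; note the regret penalty has to be fairly large (here, worse than -75 in these units) for the first inequality to hold.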
I wonder if this bias is somehow trying to compensate for some other bias. Suppose you think the experimenter is overconfident, i.e., their log-odds are twice as much as they should; so, when they say 100% they do mean 100%, but when they say 97.1% they actually mean 85.2% (and when they say 34% they mean 41.8%, and when they say 33% they mean 41.2%). Now, Option 1B suddenly looks much uglier, doesn't it? (I'm not claiming this happens consciously.)
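The numbers in this comment check out under a simple model; a sketch (the halve-the-log-odds rule is the commenter's hypothesis, not an established fact):

```python
import math

def deflate(p_stated):
    """Treat the speaker's stated log-odds as double the true log-odds."""
    if p_stated in (0.0, 1.0):
        return p_stated  # asserted certainty passes through unchanged
    odds = p_stated / (1 - p_stated)
    true_odds = math.exp(math.log(odds) / 2)   # i.e., sqrt of the stated odds
    return true_odds / (1 + true_odds)

for p in (33 / 34, 0.34, 0.33):
    print(f"{p:.3f} -> {deflate(p):.3f}")   # 0.971 -> 0.852, 0.340 -> 0.418, 0.330 -> 0.412
```

Under this deflation, 1B's expected payout drops to roughly 0.852 * $27,000 = $23,004, below 1A's certain $24,000, while 2A and 2B deflate almost identically, so the 1A-plus-2B pattern falls out of the model.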
If flipping the switch before 12:00 PM has no effect on the amount of money one acquires, why would one pay anything to do it? Why not just flip the switch only once, after 12:00 PM and before 12:05 PM?
Question: do the rest of you actually find the choice of 1A clearly intuitive?
I think my intuition for examples like this has been safely killed off, so my replacement intuition instead says: "hm, clearly 34*(27-24) > 27, so 1B!" (without actually evaluating 27-24, just noting it's ≥1). Which mainly suggests that I've grown accustomed to calculating expectations out explicitly where they're obvious, not that I'm necessarily good at avoiding real life analogues of the problem.
1A. $24,000, with certainty.
1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.
2A. 34% chance of winning $24,000, and 66% chance of winning nothing.
2B. 33% chance of winning $27,000, and 67% chance of winning nothing.
I would choose 1A over 1B, and 2B over 2A, despite the 9.2% better expected payout of 1B and the small increased risk in 2B. If the option was repeatable several times, I'd choose 1B over 1A as well (but switch back to 1A if I lost too many times).
This does not make me susceptible to a money pump or a Dutch book (you'...
I don't like many of the standard arguments against capital punishment. In particular, I'm tired of the argument "if you just put an innocent person in jail, they might be exonerated later. If you execute an innocent person, and they are exonerated later, it's too late."
Of course, I then point out that people can be exonerated in the time between being convicted and being executed (which can be quite long sometimes), and the response is generally that in the life sentence there's always some chance of being freed due to exoneration while in the...
My resolution to this, without changing my intuitions to pick things that I currently perceive as 'simply wrong', would be that I value certainty. A 9/10 chance of winning x dollars is worth much less to me than a 10/10 chance of winning 9x/10 dollars. However, a 2/10 chance of winning x dollars is worth only barely less than a 4/10 chance of winning x/2 dollars, because as far as I can tell the added utility of the lack of worrying increases massively as the more certain option approaches 100%. Now, this becomes less powerful the closer the odds are, but...
I don't really see how my choosing 1A > 1B and 2B > 2A is a flaw of mine. First of all, my utility function, which I have inherited from millions of years of evolution, tells me to SOMETIMES take risks IF I CAN AFFORD IT, especially when the increase in stakes outweighs the increase in risk.
This is how I see it: if it were my life at stake, I would of course try to raise the odds. But this is extra money. I don't even starve if I don't get the money.
If I am not certain I can get the money in case 2, I think that lowering my win-chance by 1/100 is worth...
The Allais "Paradox" and Scam Vulnerability by Karl Hammer is a much-needed update for anyone who reads the OP.
Would I pay $24k to play a game where I had a 33/34 probability of winning an extra $3k? Let's consult our good friend the Kelly Criterion.
We have a bet that pays 1/8:1 with a 33/34 probability of winning, so Kelly suggests staking ~73.5% of my bankroll on the bet. This means I'd have to have an extra ~$8.7k I'm willing to gamble with in order to choose 1B. If I'm risk-averse and prefer a fractional Kelly scheme, I'd need to start with ~$20k for a three-fourths Kelly bet and ~$41k for a one-half Kelly bet. Since I don't have that kind of money lying aroun...
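The Kelly figures above can be checked with a short script, framing 1B as the comment does: staking $24,000 at net odds b = 3000/24000 = 1/8 with win probability 33/34:

```python
# Kelly check for framing 1B as a bet: stake $24,000 at net odds
# b = 3000/24000 = 1/8, with win probability p = 33/34.
p = 33 / 34
b = 3_000 / 24_000
q = 1 - p

f_full = p - q / b  # full-Kelly fraction: 25/34, about 0.735
stake = 24_000

for label, frac in [("full", 1.0), ("3/4", 0.75), ("1/2", 0.5)]:
    # Bankroll at which a $24,000 stake is exactly the suggested bet size.
    bankroll = stake / (f_full * frac)
    print(f"{label} Kelly: bankroll ${bankroll:,.0f}, "
          f"extra ${bankroll - stake:,.0f}")
```

The required "extra" bankrolls come out to about $8.6k, $19.5k, and $41.3k, reproducing the comment's ~$8.7k, ~$20k, and ~$41k up to rounding.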
Forgive me if I'm misunderstanding something, but the way I see it, if I choose 1A, it means that I am willing to forgo (i.e. pay) $3000 for an additional 1/34 ~ 3% chance of getting money. Then if I choose 2B, it means I am unwilling to forgo an additional $3000 in exchange for an additional 1% chance of getting money. So what I learn from this is that the value I assign an extra percentage point of chance of getting money is somewhere between $1000 and $3000.
So here's why I prefer 1A and 2B after doing the math, and what that math is.
1A = 24000
1B = 26206 (rounded)
2A = 8160
2B = 8910
Now, if you take (iB-iA)/iA, which represents the percent increase in the expected value of iB over iA, you get the same number, as you stated.
(iB-iA)/iA = .0919 (rounded)
This number's reciprocal represents the number of times greater the expected value of iA is than the marginal expected value of iB.
iA/(iB-iA) = 10.88 (not rounded)
Now, take this number and divide it by the quantity p(iA wins)-p(iB wins). This represents how much y...
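The arithmetic in this comment (as far as it survives before the cutoff) checks out; a quick numeric verification:

```python
# Expected values, the relative gain of iB over iA, and its reciprocal.
EV = {"1A": 24_000, "1B": 33 / 34 * 27_000,      # 26,205.88...
      "2A": 0.34 * 24_000, "2B": 0.33 * 27_000}  # 8,160 and 8,910

for a, b in [("1A", "1B"), ("2A", "2B")]:
    rel_gain = (EV[b] - EV[a]) / EV[a]  # percent increase of iB over iA
    print(f"{b} vs {a}: gain {rel_gain:.4f}, "
          f"reciprocal {EV[a] / (EV[b] - EV[a]):.2f}")
```

Both pairs give a relative gain of 0.0919 and a reciprocal of 10.88, as the comment states.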
Assuming this is a one-off and not a repeated iteration:
I'd take 1A because I'd be *really* upset if I lost out on $27k due to being greedy and not taking the sure $24k. That 1/34 is a small risk but to me it isn't worth taking - the $24k is too important for me to lose out on.
I'd take 2B instead of 2A because the difference in odds is basically negligible so why not go for the extra $3k? I have ~2/3rds chance to walk away with nothing either way.
I don't really see the paradox there. The point is to win, yes? If I play game 1 and p...
Oh, here I come again; I've already commented in similar fashion elsewhere, and several people have said the same here: treating nothing vs. non-nothing as a binary switch may work better if the situation is played only once, rather than repeated enough to "add up to normality". One can argue that each repeat may be treated as being played once, but, being creatures gifted with memory, we can notice that we encounter such situations often and modify our behaviour.
I would set up an insurance company that pays people $24,500 to pick 1B and keeps their winnings if they win. They get slightly more risk-free money and I profit massively. Isn't that the whole point of insurance?
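Taking the scheme at face value, the expected profit per customer is easy to work out (ignoring variance and the insurer's own risk of ruin):

```python
# Expected profit of paying a customer $24,500 to take 1B and keeping the winnings.
premium_paid = 24_500
expected_winnings = 33 / 34 * 27_000  # about $26,205.88
profit = expected_winnings - premium_paid
print(f"expected profit per customer: ${profit:,.2f}")  # about $1,705.88
```

About $1,706 per customer in expectation, although 1 time in 34 the insurer is out the full $24,500.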
I think this might just be a rephrasal of what several other commenters have said, but I found this conception somewhat helpful.
Based on intuitive modeling of this scenario and several others like it, I found that I ran into the expected “paradox” in the original statement of the problem, but not in the statement where you roll one die to determine the 1/3 chance of me being offered the wager, and then the original wager. I suspect that the reason why is something like this:
Losing 1B is a uniquely bad outcome, worse than its monetary utility would imply.
...It seems that the mistake that people commit is imagining that the second scenario is a choice between 0.34*24000 = 8160 and 0.33*27000 = 8910. Yes, if that were the case, then you could imagine a utility function that is approximately linear in the region 8160 to 8910, but sufficiently concave in the region 24000 to 27000 s.t. the difference between 8160 and 8910 feels greater than between 24000 and 27000... But that's not the actual scenario with which we are presented. We don't actually get to see 8160 or 8910. The slopes of the ...
"These two equations are algebraically inconsistent". Yes, combining them results in "0 < 0", which is false.
It seems that the axiom of independence doesn't always hold for instrumental goals when you are playing a game.
Suppose you are playing a zero-sum game against Omega who can predict your move - either it has read your source code, or played enough games with you to predict you, including any pseudorandom number generator you have. You can make moves a or b, Omega can make moves c or d, and your payoff matrix is:
        c   d
    a   0   4
    b   4   1
U(a) = 0, U(b) = 1.
Now suppose we got a fair coin that Omega cannot predict, and can add a 0.5 probabili...
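The comment is cut off, but the setup given is enough to compute what a coin-flip mixture does. Assuming Omega best-responds to whatever strategy it predicts (minimizing your payoff, since the game is zero-sum), a sketch:

```python
# Payoff matrix from the comment: your moves a/b (rows), Omega's c/d (columns).
payoff = {("a", "c"): 0, ("a", "d"): 4,
          ("b", "c"): 4, ("b", "d"): 1}

def value_against_predictor(mix):
    """Expected payoff of a (possibly mixed) strategy when Omega knows
    the mixture (but not the coin) and picks the column worst for you."""
    return min(sum(prob * payoff[(move, col)] for move, prob in mix.items())
               for col in ("c", "d"))

print(value_against_predictor({"a": 1.0}))            # U(a) = 0
print(value_against_predictor({"b": 1.0}))            # U(b) = 1
print(value_against_predictor({"a": 0.5, "b": 0.5}))  # the coin mixture beats both
```

The 50/50 mixture is worth 2, strictly better than either pure move, which is the kind of case where independence over instrumental choices breaks down against a predictor.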
Choose between the following two options:
Which seems more intuitively appealing? And which one would you choose in real life?
Now which of these two options would you intuitively prefer, and which would you choose in real life?
The Allais Paradox - as Allais called it, though it's not really a paradox - was one of the first conflicts between decision theory and human reasoning to be experimentally exposed, in 1953. I've modified it slightly for ease of math, but the essential problem is the same: Most people prefer 1A > 1B, and most people prefer 2B > 2A. Indeed, in within-subject comparisons, a majority of subjects express both preferences simultaneously.
This is a problem because the 2s are equal to a one-third chance of playing the 1s. That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.
Among the axioms used to prove that "consistent" decisionmakers can be viewed as maximizing expected utility, is the Axiom of Independence: If X is strictly preferred to Y, then a probability P of X and (1 - P) of Z should be strictly preferred to P chance of Y and (1 - P) chance of Z.
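Concretely, the axiom ties the two pairs of gambles in this post together: with Z = "win nothing" and P = 0.34, option 2A is the compound lottery "P chance of playing 1A, else Z", and 2B is "P chance of playing 1B, else Z". A quick numeric check:

```python
# Independence in action: with Z = "win nothing" and P = 0.34,
# gamble 2A is "P chance of playing 1A", and 2B is "P chance of playing 1B".
P = 0.34

p_2A_wins_24k = P * 1.0        # 1A pays $24,000 with certainty
p_2B_wins_27k = P * (33 / 34)  # 1B pays $27,000 with probability 33/34

print(round(p_2A_wins_24k, 10))  # 0.34, exactly 2A as stated
print(round(p_2B_wins_27k, 10))  # 0.33, exactly 2B as stated
```

So anyone who prefers 1A to 1B but 2B to 2A is reversing a preference merely because a 0.66 chance of nothing was mixed into both sides.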
All the axioms are consequences, as well as antecedents, of a consistent utility function. So it must be possible to prove that the experimental subjects above can't have a consistent utility function over outcomes. And indeed, you can't simultaneously have:

U($24,000) > (33/34) U($27,000)
0.34 U($24,000) < 0.33 U($27,000)
These two equations are algebraically inconsistent, regardless of U, so the Allais Paradox has nothing to do with the diminishing marginal utility of money.
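Spelling out the algebra: the preference 1A > 1B and the preference 2B > 2A translate into two inequalities over the utility function U, and a single multiplication shows they cannot both hold:

```latex
\begin{aligned}
\text{(from 1A} \succ \text{1B)}:\quad & U(24000) > \tfrac{33}{34}\,U(27000) \\
\text{(from 2B} \succ \text{2A)}:\quad & 0.34\,U(24000) < 0.33\,U(27000) \\
\text{multiply the first by } 0.34:\quad & 0.34\,U(24000) > 0.34 \cdot \tfrac{33}{34}\,U(27000) = 0.33\,U(27000)
\end{aligned}
```

The last line contradicts the second for every possible U, confirming that the inconsistency does not depend on the shape of the utility function.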
Maurice Allais initially defended the revealed preferences of the experimental subjects - he saw the experiment as exposing a flaw in the conventional ideas of utility, rather than exposing a flaw in human psychology. This was 1953, after all, and the heuristics-and-biases movement wouldn't really get started for another two decades. Allais thought his experiment just showed that the Axiom of Independence clearly wasn't a good idea in real life.
(How naive, how foolish, how simplistic is Bayesian decision theory...)
Surely, the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance?
(I'm starting to think of this as "naive philosophical realism" - supposing that our intuitions directly expose truths about which strategies are wiser, as though it was a directly perceived fact that "1A is superior to 1B". Intuitions directly expose truths about human cognitive functions, and only indirectly expose (after we reflect on the cognitive functions themselves) truths about rationality.)
"But come now," you say, "is it really such a terrible thing, to depart from Bayesian beauty?" Okay, so the subjects didn't follow the neat little "independence axiom" espoused by the likes of von Neumann and Morgenstern. Yet who says that things must be neat and tidy?
Why fret about elegance, if it makes us take risks we don't want? Expected utility tells us that we ought to assign some kind of number to an outcome, and then multiply that value by the outcome's probability, add them up, etc. Okay, but why do we have to do that? Why not make up more palatable rules instead?
There is always a price for leaving the Bayesian Way. That's what coherence and uniqueness theorems are all about.
In this case, if an agent prefers 1A > 1B, and 2B > 2A, it introduces a form of preference reversal - a dynamic inconsistency in the agent's planning. You become a money pump.
Suppose that at 12:00PM I roll a hundred-sided die. If the die shows a number greater than 34, the game terminates. Otherwise, at 12:05PM I consult a switch with two settings, A and B. If the setting is A, I pay you $24,000. If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows "34", in which case I pay you nothing.
Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B. The die comes up 12. After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.
I have taken your two cents on the subject.
If you indulge your intuitions, and dismiss mere elegance as a pointless obsession with neatness, then don't be surprised when your pennies get taken from you...
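The two-penny pump above can be simulated directly; a sketch assuming an agent who pays one cent to indulge each stated preference (the gamble payouts themselves are ignored, since only the pennies matter for the pump):

```python
import random

def play_round(rng):
    """One round of the 12:00/12:05 game against an agent who pays a
    penny to indulge each preference (2B > 2A, then 1A > 1B).
    Returns the pennies collected; gamble payouts are ignored."""
    # Before 12:00 the whole game looks like 2A vs 2B, so the agent
    # pays a penny to flip the switch from A to B.
    pennies = 1
    if rng.randint(1, 100) <= 34:
        # The die let the game continue: it now looks like 1A vs 1B,
        # so the agent pays another penny to flip the switch back to A.
        pennies += 1
    return pennies

rng = random.Random(0)
total = sum(play_round(rng) for _ in range(10_000))
print(f"pennies collected over 10,000 rounds: {total}")  # about 13,400 on average
```

One penny per round, plus a second whenever the hundred-sided die comes up 34 or less: roughly 1.34 cents extracted per round, indefinitely.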
(I think the same failure to proportionally devalue the emotional impact of small probabilities is responsible for the lottery.)
Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine. Econometrica, 21, 503-46.
Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica, 47, 263-92.