
Choose between the following two options:

1A. $24,000, with certainty.
1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

Which seems more intuitively appealing?  And which one would you choose in real life?

Now which of these two options would you intuitively prefer, and which would you choose in real life?

2A. 34% chance of winning $24,000, and 66% chance of winning nothing.
2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

The Allais Paradox - as Allais called it, though it's not really a paradox - was one of the first conflicts between decision theory and human reasoning to be experimentally exposed, in 1953.  I've modified it slightly for ease of math, but the essential problem is the same:  Most people prefer 1A > 1B, and most people prefer 2B > 2A.  Indeed, in within-subject comparisons, a majority of subjects express both preferences simultaneously.

This is a problem because the 2s are equal to a 34% chance of playing the 1s.  That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing gamble 1B with 34% probability.
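The equivalence is easy to check with exact arithmetic (a small sketch using Python's fractions module; the variable names are just for illustration):

```python
from fractions import Fraction

# Chance of ending up with the $24,000 payoff:
direct_2a = Fraction(34, 100)                      # 2A as stated
compound_2a = Fraction(34, 100) * Fraction(1, 1)   # 34% chance of playing 1A (a sure thing)

# Chance of ending up with the $27,000 payoff:
direct_2b = Fraction(33, 100)                      # 2B as stated
compound_2b = Fraction(34, 100) * Fraction(33, 34) # 34% chance of playing 1B

print(direct_2a == compound_2a)  # True
print(direct_2b == compound_2b)  # True
```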

Among the axioms used to prove that "consistent" decisionmakers can be viewed as maximizing expected utility, is the Axiom of Independence:  If X is strictly preferred to Y, then a probability P of X and (1 - P) of Z should be strictly preferred to P chance of Y and (1 - P) chance of Z.

All the axioms are consequences, as well as antecedents, of a consistent utility function.  So it must be possible to prove that the experimental subjects above can't have a consistent utility function over outcomes.  And indeed, you can't simultaneously have:

• U($24,000) > 33/34 U($27,000) + 1/34 U($0)
• 0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

These two equations are algebraically inconsistent, regardless of U, so the Allais Paradox has nothing to do with the diminishing marginal utility of money.

Maurice Allais initially defended the revealed preferences of the experimental subjects - he saw the experiment as exposing a flaw in the conventional ideas of utility, rather than exposing a flaw in human psychology.  This was 1953, after all, and the heuristics-and-biases movement wouldn't really get started for another two decades.  Allais thought his experiment just showed that the Axiom of Independence clearly wasn't a good idea in real life.  (How naive, how foolish, how simplistic is Bayesian decision theory...)

Surely, the certainty of having $24,000 should count for something.  You can feel the difference, right?  The solid reassurance?
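The inconsistency can be illustrated by brute force (a sketch, not a proof: it draws hypothetical utility values at random and checks that the two preferences never hold together). The second inequality is just the first one multiplied by 0.34 and shifted by 0.66 U($0), with the direction reversed, so no assignment of utilities can satisfy both.

```python
import random

rng = random.Random(1)
for _ in range(100_000):
    # Any monotone utility assignment at all, including ones with
    # strongly diminishing marginal utility of money.
    u0, u24, u27 = sorted(rng.uniform(-100, 100) for _ in range(3))
    prefers_1a = u24 > (33 / 34) * u27 + (1 / 34) * u0
    prefers_2b = 0.34 * u24 + 0.66 * u0 < 0.33 * u27 + 0.67 * u0
    # Scaling the first inequality by 0.34 and adding 0.66*u0 to both
    # sides yields the negation of the second, so both can never hold.
    assert not (prefers_1a and prefers_2b)
print("no utility function over money exhibits both preferences")
```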

(I'm starting to think of this as "naive philosophical realism" - supposing that our intuitions directly expose truths about which strategies are wiser, as though it was a directly perceived fact that "1A is superior to 1B".  Intuitions directly expose truths about human cognitive functions, and only indirectly expose (after we reflect on the cognitive functions themselves) truths about rationality.)

"But come now," you say, "is it really such a terrible thing, to depart from Bayesian beauty?"  Okay, so the subjects didn't follow the neat little "independence axiom" espoused by the likes of von Neumann and Morgenstern.  Yet who says that things must be neat and tidy?

Why fret about elegance, if it makes us take risks we don't want?  Expected utility tells us that we ought to assign some kind of number to an outcome, and then multiply that value by the outcome's probability, add them up, etc.  Okay, but why do we have to do that?  Why not make up more palatable rules instead?

There is always a price for leaving the Bayesian Way.  That's what coherence and uniqueness theorems are all about.

In this case, if an agent prefers 1A > 1B, and 2B > 2A, it introduces a form of preference reversal - a dynamic inconsistency in the agent's planning.  You become a money pump.

Suppose that at 12:00PM I roll a hundred-sided die.  If the die shows a number greater than 34, the game terminates.  Otherwise, at 12:05PM I consult a switch with two settings, A and B.  If the setting is A, I pay you $24,000.  If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows "34", in which case I pay you nothing.

Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference.  The switch starts in state A.  Before 12:00PM, you pay me a penny to throw the switch to B.  The die comes up 12.  After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.

I have taken your two cents on the subject.
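The extraction above can be sketched as a short simulation (a sketch, assuming exactly the agent described: one who prefers 1A > 1B and 2B > 2A and pays a penny to indulge each preference):

```python
import random

def money_pump_once(rng):
    """One run of the two-stage game against the penny-paying agent."""
    pennies_paid = 0
    switch = "A"
    # Before 12:00PM the whole game looks like choice 2, and the agent
    # prefers 2B: it pays a penny to throw the switch to B.
    switch = "B"
    pennies_paid += 1
    if rng.randint(1, 100) <= 34:
        # The game continues, and now looks like choice 1; the agent
        # prefers 1A, so it pays another penny to switch back to A.
        switch = "A"
        pennies_paid += 1
    return pennies_paid

rng = random.Random(0)
runs = [money_pump_once(rng) for _ in range(10_000)]
# The agent always loses at least one penny, and loses two whenever the
# hundred-sided die comes up 1-34 (about 34% of the time).
print(min(runs), max(runs), sum(runs) / len(runs))
```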

If you indulge your intuitions, and dismiss mere elegance as a pointless obsession with neatness, then don't be surprised when your pennies get taken from you...

(I think the same failure to proportionally devalue the emotional impact of small probabilities is responsible for the lottery.)

Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine.  Econometrica, 21, 503-46.

Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica, 47, 263-92.


140 comments

For $24,000, you can have my two cents. ;) Yes, philosophers, and others, do often too easily accept the advice of strong intuitions, forgetting that strong intuitions often conflict in non-obvious ways.

Pablo: Yes, exactly. For instance, many philosophers invoke Parfit's "repugnant conclusion [http://plato.stanford.edu/entries/repugnant-conclusion/]" as a decisive objection to certain forms of consequentialism, overlooking the fact that all moral theories, when applied to scenarios involving different numbers of people, have implications that are arguably similarly repugnant [http://people.su.se/~guarr/Texter/Future%20Generations%20for%20homepage.pdf].

The idea is that the $ amount equals your utility, while in reality the history of how you got this amount also matters (regret, emotions, etc.).

There's no paradox here - your utility expressed in $ just doesn't match the utility of the subjects. As for the money pump - you just have a win-win situation: you earn money, and the subjects earn good feelings.

If I knew the offer wouldn't be repeated, I might take 1A because I'd really rather not have to explain to people how I lost $24,000 on a gamble.

faul_sname: This was my thought exactly. If I was given the option to keep the result private if I lost, 1A would be a distinctly preferable choice. If I had a 1/34 chance of having to explain how I "lost" $24,000 vs. an average loss of $2,200, I might well take choice 1B (at a later time in my life, when I could afford to lose $2,200 and had significant financial risk from being perceived as a risk-taker with money).

Gunnar_Zarncke: I think these kinds of 'side channel' loss information are what make your intuition value 1A > 1B. In a way, the implicit assumptions in the offer are what cause the trouble. Naive subjects are naive only to pure math, not to real life.

DPiepgrass: I would further predict that if someone is wealthy enough, or if the winning amount is small, e.g. $24 and $27, they are much more likely to choose 1B over 1A - because of how much less emotionally devastating it would be to lose, or rather, how much less devastating the participant imagines losing to be. I decided to Google for literature on this and found this analysis [http://research.economics.unsw.edu.au/vpanchenko/papers/Allais_paper.pdf]. It takes some effort to decode, but if I understand Table 1 correctly, (1) experiments testing the Allais Paradox have results that often seem inconsistent with each other, and strange at first glance (roughly speaking, more people choose 1B & 2A than you'd think), which reflects a bunch of underlying complexity described in section 3; (2) to the extent there is a pattern, I was right about the smaller bets; and (3) the decision to maximize expected financial gain (1B & 2B ≃ RR in Table 1) is the most popular choice in 43% of experiments.

Actually, that makes me think of another explanation besides overreaction to small probabilities: if a person takes 1B and loses, they know they would have won if they'd chosen differently. If they take 2B and lose, they can tell themselves (and others) they probably would have lost anyway.
ThisDan: Ok, that is exactly my line of thinking, and why I can't understand the broader point of this argument. Yes, I can see the statistical similarity that makes it "the same" - but the situation is totally different in that one offers "certain win or risk" and the other is "risk vs. risk" with a barely noticeable difference between them. So my decision on both questions goes like this: 1A > 1B because even if I was offered MUCH less, I'd still likely take it, deciding that I'm not greedy; free money always feels good, but giving away free money (by trying to get a bit more) always feels foolish and greedy. 2B > 2A because if the gamble played out over 100 times, the average person would think the two were of equal value - unless they logged the statistics to find the slight difference. If it takes that much attention to feel the difference, it's easy to pretend they are the same risk, but one is 11.12% more money - which is a lot easier to notice without logging statistics. I don't see how these decisions conflict with each other.

[anonymous]: I seem to agree with you, but I think how you arrived at 11.12% is wrong. Did you divide 3000/27000? You can't do that, since you won't have 27000 unless you get those 3000 dollars extra. Shouldn't you do 3000/24000 = 12.5%?

A bird in the hand... Certainty is a form of utility, too.

buybuydandavis: That goes hand in hand with his comments about complexity. The straightforward expected utility analysis doesn't include the cost of the analysis in the analysis, nor the increased cost to all subsequent analyses from the uncertainty. We have limited computational power for executive functions. No doubt we have utility built into us to conserve those limited resources. Most people hate uncertainty and thinking, and they hate it much more than we do. I doubt I'm the only one here who has noticed that.
Bugmaster: For me, the choice between 1A and 1B would depend on how badly I needed the money, which is why I disagree with Eliezer when he says that "marginal utility of the money doesn't count". For example, let's say I needed $20,000 in order to keep a roof over my head, food on my plate, and to generally survive. In this case, my penalty for failure is quite high, and IMO it would be more rational for me to take 1A. Sure, I could win more money if I picked 1B, but I could also die in that case. Thus, my utility in case of 1B would include a term U($anything, dead), and U($anything, dead) is a very negative number. On the other hand, if I was a billionaire who makes $20,000 per second just by existing, then I would either pick 1B, or refuse to play the game altogether, because my time could be better spent on other things.

Reread the post; that's not the paradox.

The paradox is that, if you need the 20k to survive, then you should prefer 2A to 2B, because the extra 3k 33% of the time doesn't outweigh an additional 1% chance of dying.

If someone prefers A in both cases, and B in both cases, they can have a consistent utility function. When someone prefers A in one case, and B in another, then they cannot have a consistent utility function.

Bugmaster: Right, I didn't mean to imply that it was. But Eliezer seemed to be saying that picking 1A is irrational in general, in addition to the paradox, which is the notion that I was disputing. It's possible that I misinterpreted him, however.
Vaniver: He makes it clearer in comments [http://lesswrong.com/lw/my/the_allais_paradox/hrw]. What Caledonian is discussing is the certainty effect [http://en.wikipedia.org/wiki/Certainty_effect] - essentially, having a term in your utility function for not having to multiply probabilities to get an expected value. That's different from risk aversion, which is just a statement that the utility function is concave.

Risk and cost of capital introduce very strange twists on expected utility.

Assume that living has a greater expected utility to me than any monetary value. If I need a $20,000 operation within the next 3 hours to live, I have no other funding, and you make me offer 1, it is completely rational and unbiased to take option 1A. It is the difference between a 100% chance of living and a 97% chance of living. If I have $1,000,000,000 in the bank and command of legal or otherwise armed forces, I may just have you killed - for I would not tolerate such frivolous philosophizing.

I think defenses of the subject's choices by recourse to nonmonetary values is missing the point. Anything can be rational with a sufficiently weird utility function. The question is, if subjects understood the decision theory behind the problem, would they still make the same choice? After seeing a valid argument that your preferences make you a money pump, you certainly could persist in your original judgment, by insisting that your feelings make your first judgment the right one.

But seriously?---why?

Since people only make a finite number of decisions in their lifetime, couldn't their utility function specify every decision independently? (You could have a utility function that is normal except that it says that everything you hear being called 1A is preferable to 1B, and anything you hear being called 2B is preferable to 2A. If this contradicts your normal utility function, this rule is always more important. Even if 2B leads to death, you still choose 2B.)

The utility function would be impossible to come up with in advance, but it exists.

My intuitions match the stated naive intuitions, but I reject your assertion that the pair of preferences are inconsistent with Bayesian probability theory.

You really underestimate the utility of certainty. "Nainodelac and Tarleton Nick"'s example in these comments about the operation is a perfect counter.

With a 33% vs. 34% chance, the impact on your life is about the same, so you just do the straightforward probability calculation for expected value and take the maximum.

But when offered 100% of some positive outcome, vs. a probability of nothing...

Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B.

I don't understand why I would pay you a penny to throw the switch before 12:00?

Since I know myself, I know what I will do after midnight (pay to switch it to A), and so I resign myself to doing it immediately (i.e., leaving the switch at A) so as to save either one cent or two, depending on what happens. I will do this even if I share Don's intuition about certainty. Why pay before midnight to switch it to B if I know that after midnight I will pay to switch it back to A?

*[if the first die comes up 1 to 34]

I think I missed something on the algebraic inconsistency part...

If there is some rational independent utility to certainty, the algebraic claims should be more like this:

• U($24,000) + U(Certainty) > 33/34 U($27,000) + 1/34 U($0)
• 0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

This seems consistent so long as U(Certainty) > 1/34 U($27,000).

I'm not committed to the notion there is a rational independent value to certainty, I'm just not seeing how it can be dismissed with quick algebra. Maybe that wasn't your goal. Forgive me if this is my oversight.
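The modified inequalities can be checked with concrete numbers (the utility values below are hypothetical, chosen only to illustrate that a certainty bonus makes both preferences jointly satisfiable):

```python
# Hypothetical utilities: money utilities show diminishing returns,
# plus a separate bonus term for certainty.
u0, u24k, u27k, u_certainty = 0.0, 24.0, 26.0, 3.0

prefers_1a = u24k + u_certainty > (33 / 34) * u27k + (1 / 34) * u0
prefers_2b = 0.34 * u24k + 0.66 * u0 < 0.33 * u27k + 0.67 * u0

print(prefers_1a, prefers_2b)  # both hold at once
```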

This reminds me of the foolish decisions on "deal or no deal". People would fail to follow their own announced utility.

When we speak of an inherent utility of certainty, what do we mean by certainty? An actual probability of unity, or, more reasonably, something which is merely very much certain, like probability .999? If the latter, then there should exist a function expressing the "utility bonus for certainty" as a function of how certain we are. It's not immediately obvious to me how such a function should behave. If probability 0.9999 is very much more preferable to probability 0.8999 than probability 0.5 is preferable to probability 0.4, then is 0.5 very much more preferable to 0.4 than 0.2 is to 0.1?

It's rational to take the certain outcome if gambling causes psychological stress. Notwithstanding that stress is intrinsically unpleasant, it increases your risk of peptic ulcers and stroke, which could easily cancel out the expected gain.

ricketson: But such psychological stress arises from your perception of reality. If it is caused by an erroneous perception of reality, then the rational thing to do is correct your perception, not take the error for granted. If you are certain that you made the right decision, then you shouldn't feel stressed when you "lose".

If you crunch the numbers differently, you can come to different conclusions. For example, if I choose 1B over 1A, I have a 1 in 34 chance of getting burned. If I choose 2B over 2A, my chance of getting burned is only 1 in 100.

James D. Miller has a proposal for Lottery Tickets that Usually Pay Off.

Robin, were you thinking of a certain colleague of yours when you mentioned accepting intuition too readily?

Risk aversion, and the degree to which it is felt, is a personality trait with high variance between individuals and over the lifespan. To ignore it in a utility calculation would be absurd. Maurice Allais should have listened to his homonym Alphonse Allais (no apparent relation), humorist and theoretician of the absurd, who famously remarked "La logique mène à tout à condition d'en sortir". Logic leads to everything, on condition it don't box you in.

I confess, the money pump thing sometimes strikes me as ... well... contrived. Yes, in theory, if one's preferences violate various rules of rationality (acyclicity being the easiest), one could conceivably be money-pumped. But, uh, it never actually happens in the real world. Our preferences, once they violate idealized axioms, lead to messes in highly unrealistic situations. Big deal.

I am intuitively certain that I'm being money-pumped all the time. And I'm very, very certain that transaction costs of many forms money-pump people left and right.

wuthefwasthat: http://en.wikipedia.org/wiki/Arbitrage :)

As long as it was only one occasion, I wouldn't make the effort to cross the room for two pennies. If I'm playing the game just once, and I feel a one-off payment of 2p tends to zero, I'll play with you, sure. £1 for a lottery ticket crosses the threshold of palpability, even playing once. I can get a newspaper for a pound. Is this irrational? I hope not.

When I made the (predictable, wrong) choice, I wasn't using probability at all. I was using intuitive rules of thumb like: "don't gamble", "treat small differences in probability as unimportant", and "if you have to gamble against similar odds, go for the larger win".

How do you find time to use authentic probability math for all your chance-taking decisions?

ThisDan: That's exactly how I felt too. "Don't gamble" is the key. 1A allowed me to indulge that even if I was boxed into being in the game. So in question 2 I want to follow "don't gamble", but both are gambling. Additionally, both gambles would feel like the same risk to most humans who didn't record statistics (other than subconscious and normal memory-affected observations), so they could be cheaply rounded off as the same. If they are "the same" but one pays more money... Oh, one more point: "easy come, easy go". If you can lose in question 2 either way, you won't feel like you ever had anything. However, even before you pick 1A and they physically hand you the money, it's already yours (by virtue of the ability to choose 1A), until you choose 1B and introduce the probability that you won't be paid. I say already yours because if you are guaranteed the choice of 1A forever and unconditionally unless and until you choose 1B - that's no less "having money" than when you "have money" but it's in your pocket or in your wallet in the other room. It might not be your money anymore if you fling your wallet out the window hoping it will boomerang back (1B), but it was until you introduced that gamble rather than just choosing to clutch the wallet (1A). I feel like I must be missing the point or something, because this seems so obviously right...

The large sums of money make a big difference here. If it were for dollars, rather than thousands of dollars, I'd do what utility theory told me to do, and if that meant I missed out on $27 due to a very unlucky chance then so be it. But I don't think I could bring myself to do the same for life-changing amounts like those set out above; I would kick myself so hard if I took the very slightly riskier bet and didn't get the money.

My experience of watching game shows such as 'Deal or No Deal' suggests that people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it, as if it would make their life worse than before they were selected to appear on the show. It seems this fear is in some sense inversely proportional to the 'socially expected' probability of the bad event - so if the player is aware that very few players win less than £1 on the show, they start getting very uncomfortable if there is a high chance of this happening to them...

People don't maximize expectations. Expectation-maximizing organisms - if they ever existed - died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection. If people's behavior doesn't agree with the axiom system, the fault may not be with them; perhaps they know something the mathematician doesn't. Finally, the 'money pump' argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is if you asked people to make the original choices a million times, they would, correctly, maximize expectations.
Because when you are talking about a million tries, expectations are the appropriate framework. When you are talking about 1 try, they are not.

ThisDan: I was really confused about what point EY made that went over my head, but I think I get it now. It totally changes the game to play it an infinite number of times rather than one go to win or lose. I made my choices based on 1 game and not a hybrid between the two of them played multiple times. If I play once, choosing 1A is just taking money that's already mine. If I play infinite times, 1B earns money faster because failing can be evened out.

tcpkac: no one is assuming away risk aversion. Choosing 1A and 2B is irrational regardless of your level of risk aversion.

Constant's response implies that if someone prefers 1A to 1B and 2B to 2A, when confronted with the money pump situation, the person will decide that after all, 1A is preferable to 1B and 2A is preferable to 2B. This is very strange but at least consistent.

"Nainodelac and Tarleton Nick", why are you using my (reversed) name?

steven: not if you're nonlinearly risk averse. As many have suggested, what if you take a large one-time utility hit for taking any risk, but you're not averse beyond that?

"Choosing 1A and 2B is irrational regardless of your level of risk aversion."

No, only if the utility of avoiding risk is worth less than the money at risk. Duh.

Your description is not a money pump. A money pump occurs when you prefer A > B and B > C and C > A. Then someone can trade you in a round robin, taking a little out for themselves each cycle. I don't feel like typing in an illustration, so see Robyn Dawes, Rational Choice in an Uncertain World.

There is a significant difference between single and iterative situations. For a single play I would prefer 1A to 1B and 2B to 2A. If it were repeated, especially open-endedly, I would prefer 1B to 1A for its slightly greater expected payoff.
This is analogous, I think, to the iterated versus one-time prisoner's dilemma; see Axelrod's Evolution of Cooperation for an interesting discussion of how they differ.

How trustworthy is the randomizer? I'd pick B in both situations if it seemed likely that the offer were trustworthy. But in many cases, I'd give some chance of foul play, and it's FAR easier for an opponent to weasel out of paying if there's an apparently-random part of the wager. Someone says "I'll pay you $24k", and it's reasonably clear. They say "I'll pay you $27k unless these dice roll snake eyes", and I'm going to expect much worse odds than 35/36 that I'll actually get paid. So for 1A > 1B, this may be based on expectation of che...

It seems to me that your argument relies on the utility of having a probability p of gaining x being equal to p times the utility of gaining x. It's not clear to me that this should be true.

The trouble with the "money pump" argument is that the choice one makes may well depend on how one got into the situation of having the choice in the first place. For example, let's assume someone prefers 2B over 2A. It could be that if he were offered choice 1 "out of the blue" he would prefer 1A over 1B, yet if it were announced in advance that he w...

"No, only if the utility of avoiding risk is worth less than the money at risk. Duh."

Someone did not read the OP carefully enough. Hint: re-read the definition of the Axiom of Independence.

Someone isn't thinking carefully enough. Hint: I did not assert that X is strictly preferred to Y.

Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility, and utility is a function of money that increases slower than linearly. When an agent doesn't maximize expected utility at all, that's something different.
Do you really want to say that it can be rational to accept a 1/3 chance of participating in a lottery, already knowing that if you got to participate you would change your mind? Risk aversion is (or at least, can be) a matter of taste; this is just a matter of not being stupid.

Dawes gives a very similar 2-gamble example of a money pump on pg. 105 of Rational Choice.

"Caledonian, Nick T: 'Risk aversion' in the standard meaning is when an agent maximizes the expectation value of utility"

Oh, I agree. I just measure utility differently than you do.

Caledonian, if utility is any function defined on amounts of money, then if you are maximizing expected utility, you cannot fall prey to the Allais paradox. You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.

"you're violating rationality axioms like the one Eliezer gave in the OP"

No. Those axioms are "if => then" statements. I'm violating the "if" part.

Nainodelac, if you prefer 1A to 1B and 2A to 2B, as you should if you need exactly $24,000 to save your life, that is a perfectly consistent preference pattern.

You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.

Having a utility function determined by anything other than amounts of money is irrational? WTF?

Upon rereading the thread and all of its comments, I suspect the person I originally quoted meant something along the lines of "preferring 1A to 1B but 2B to 2A is irrational", which seems more defensible.

There is nothing irrational about preferring 1A and 2B by themselves, it's choosing the first option in the first scenario and the second in the second that's dodgy.

Nick is right to object, but removing the phrase "on amounts of money" makes the statement unobjectionable -- and relevant and true.

Is Pascal's Mugging the reductio ad absurdum of expected value?

This may be related to the phenomenon of overconfident probability estimates. I would not be surprised to find that people who claim a 97% certainty have a real 90% probability of being right. Maybe someone who hears there's 1 chance in 34 of winning nothing interprets that as coming from an overconfident estimator whereas the 34% and 33% probabilities are taken at face value.

On the other hand, the overconfidence detector seems to stop working when faced with asserted certainty.

"Nainodelac and Tarleton Nick": This is not about risk aversion. I agree that if it is vital to gain at least $20,000, 1A is a superior choice to 1B. However, in that case, 2A is also a superior choice to 2B. The error is not in preferring 1A, but in simultaneously preferring 1A and 2B.

"Is Pascal's Mugging the reductio ad absurdum of expected value?"

No. I thought it might be! But Robin gave an excellent reason why we should genuinely penalize the probability by a proportional amount, dragging the expected value back down to negligibility. (This may be the first time that I have presented an FAI question that stumped me, and it was solved by an economist. Which is actually a very encouraging sign.)

This discussion reminded me of the Torture vs. Dust Specks discussion; i.e., in that discussion, many comments, perhaps a majority, amounted to "I feel like choosing Dust Specks, so that's what I choose, and I don't care about anything else." In the same way, there is a perfectly consistent utility function that can prefer 1A to 1B and 2B to 2A, namely one that sets utility on "feeling that I have made the right choice", and which does not set utility on money or anything else. Both in this case and in the case of the Torture and Dust Specks...

Not sure if anyone pointed this out, but in a situation where you don't trust the organizer, the proper execution of 1A is a lot easier to verify than the proper execution of 1B, 2A and 2B. 1A minimizes your risk of being fooled by some hidden cleverness or violation of the contract. In 1B, 2A and 2B, if you lose, you have to verify that the random number generator is truly random. This can be extremely costly. In option 1A, verification consists of checking your bank account and seeing that you gained $24,000. Straightforward and simple. Hardly any risk of being deceived.

I hate to discuss this again, but...

Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans.

It's simple to show that no rational person would actually give money to a Pascal mugger, as the next mugger might threaten 4^^^4 people. I'm not sure whether this solves the problem or just sweeps it under the rug, though.

Well, if Pascal's Mugging doesn't do it, how about the St. Petersburg paradox? ;)

Oh wait... infinite set atheist... never mind.

I'm afraid I don't follow the maths involved, but I'd like to know whether the equations work out differently if you take this premise:

- Since 1A offers a certainty of $24,000, it is deemed to be immediately in your possession. 1B then becomes a 33/34 chance of winning $3,000 and a 1/34 chance of losing $24,000.

Can someone tell me how this works out mathematically, and how it then compares to 2B?

The Allais Paradox is indeed quite puzzling. Here are my thoughts:

0. Some commenters simply dismiss Bayesian reasoning. This doesn't solve the problem, it just strips us of any mathematical way to analyze the problem. On the other hand, the fact that the inconsistent choice seems ok does mean that the Bayesian way is missing something. Simply dismissing the inconsistent choice doesn't solve the problem either.

1. If I understand correctly, you argue that situation 1 can be turned into situation 2 by randomization. In other words, if you sell me situation 1,...

Nick, "Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans."

The Porcine Mugging doesn't bypass the objection. Your estimates of the frequency of simulated people and pigs should be commensurably vast, and it is vastly unlikely that your simulation (out of many with intelligent beings) will be selected for an actual Porcine Mugging that will consume vast resources (enough to simulate vast numbers of humans). These things offset to get you workable calculations.

I would have chosen 1A and 2B, for the following reasons: Any sum of the order of $20,000 would revolutionize my personal circumstances. The likely payoff is enormous. Therefore, I'd pick 1A because I'd get such a sum guaranteed, rather than run the 3% risk (1B) of getting nothing at all. Whereas choice 2 is a gamble either way, so I am led to treat both options as qualitatively the same.
But that's a mistake: if the value of getting either nonzero payoff at all is so great, then I should have favored the 34% chance of winning something over the 33% chance, just as I favored the 100% chance over the ~97% chance in choice 1. Interesting.
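The equivalence driving the paradox — that the 2s are just the 1s played with 34% probability — can be checked mechanically. A small sketch (the helper names here are mine, not from the thread):

```python
# Sketch: check that 2A and 2B are just 1A and 1B "diluted" to a 34% chance
# of being played at all. Gambles are maps from dollar outcome -> probability.
G1A = {24_000: 1.0}
G1B = {27_000: 33 / 34, 0: 1 / 34}

def dilute(gamble, p):
    """Play `gamble` with probability p; otherwise win nothing."""
    out = {0: 1 - p}
    for prize, q in gamble.items():
        out[prize] = out.get(prize, 0.0) + p * q
    return out

G2A = dilute(G1A, 0.34)   # 34% chance of $24,000, 66% of nothing
G2B = dilute(G1B, 0.34)   # 33% chance of $27,000, 67% of nothing

def expected_value(gamble):
    return sum(prize * q for prize, q in gamble.items())

print(expected_value(G1A), expected_value(G1B))  # 24000 vs ~26206: 1B is higher
print(expected_value(G2A), expected_value(G2B))  # 8160 vs ~8910: 2B is higher too
```

Whatever ranks 1B above 1A by expectation also ranks 2B above 2A, because the second pair is the first pair scaled by the same 34% factor.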

Surely the answer is dependent on the goal criterion. If the goal is to get 'some' money, then the 100% option and the 34% option are better. If your goal is to get 'the most' money, then the 97% and the 33% options are better. However, the goal might be socially constructed. This reminded me of John Nash, who offered one of his secretaries $15 if she shared it equally with a co-worker, but $10 if she kept it for herself. She took the $15 and split it with her co-worker. She chose an option that maximised her social capital but was a weaker one economically.

I agree with Dagon. This experiment assumes that the subjective probabilities of participants were identical to the stated probabilities. In reality, I feel like people are probably wary of stated probabilities due to experiences with or fears of shysters and conmen. That is, if asked to choose between 1A and 1B, 1B offers the possibility that the 'randomising mechanism' that the experimenter is offering is in fact rigged. Even if the experimenter is completely honest in their statement of their own subjective probabilities, they may simply disagree with t... (read more)

Eliezer, I see from this example that the Axiom of Independence is related to the notion of dynamic consistency. But the logical implication goes only one way. That is, the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with dutch book/money pump arguments: there are many ways to avoid them besides being an expected utility maximizer.) I'm afraid that the Axiom of Independence cannot really be justified as a basic principle of rationality.
Von Neumann and Morgenstern probably came up with it because it was mathematically necessary to derive Expected Utility Theory; then they and others tried to justify it afterward because Expected Utility turned out to be such an elegant and useful idea. Has anyone seen Independence proposed as a principle of rationality prior to the invention of Expected Utility Theory?

torekp: I'm equally afraid ;). The Axiom of Independence is intuitively appealing to me, but I don't posit it to be a basic principle of rationality, because that smells like a mind projection fallacy. I suspect you're right, also, about dutch book/money pump arguments. I tentatively conclude that a rational agent need not evince preferences that can be represented as an attempt to maximize such a utility function. That doesn't mean Expected Utility Theory can't be useful in many circumstances or for many agents, but this still seems like important news, which merits more discussion on Less Wrong.

Wei_Dai: Have you read these posts?

* indexical uncertainty and the Axiom of Independence [http://lesswrong.com/lw/102/indexical_uncertainty_and_the_axiom_of/]
* Towards a New Decision Theory [http://lesswrong.com/lw/15m/towards_a_new_decision_theory/]

Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational.

"That is, the Axiom of Independence implies dynamic consistency, but not vice versa." Really? A hyperbolic discounter can conform to the Axiom of Independence at any particular time and be dynamically inconsistent.

I would love to know if the results are different if you repeatedly expose people to the situation rather than communicate it in a formal way. They are likely to observe the outcomes of their strategy and adapt.
Perhaps what is being measured is simply the numeracy of the subjects and not their practical inability to determine optimal strategies. The lottery is another interesting example: what is being bought is the probability of a big win, not a statistically optimal investment. Playing the lottery genuinely increases the chance of you suddenly gaining a life-changing amount of money. This is a perfectly rational choice.

AlephNeil: What about the Allais paradox? Imagine someone who is happy to play the lottery but would refuse to play an alternative version where the ticket merely confers a slight increase on a significant pre-existing probability of winning 'life-changing money'. (As I understand it, most/all lottery players would in fact refuse the 'alternative' gamble.) Do you want to say that such a person is 'perfectly rational'? Would you call them perfectly rational if they accepted both gambles (despite both of them having negative EV)? To be fair, it is possible to tell a consistent story about a person for whom either gamble would be rational: perhaps the Earth is going to be destroyed soon and the cost of entry into the new self-sustaining Mars colony equals the lottery jackpot. But needless to say, most people aren't in situations remotely resembling this one.

JohnDavidBustard: Thank you for your comments. I think the Allais paradox is fascinating; however, although it is very revealing about our likely motives for playing the lottery, it doesn't change the potential rationality of actually playing it. I.e. money and value don't necessarily have a linear relationship, and so optimising for EV is not rational. Although, I feel that the likely answer is that the brain is optimised for rapid responses to survival problems, and these solutions may well be an optimal response given constraints on both processing and expected outcome. Another perspective is that in general specifications are not accurate but instead a communication of experience.
Suppose the problem specification is viewed instead as a measurement of a system where the placing of bets is an input and the output is not random but the outcome of an unknown set of interactions. Systems encountered in the past will form a probability distribution over their behaviour; the frequency of observed consequences then acts as a measurement of the likelihood that the system in question is equivalent to one of these types. This would explain the feeling of switching between the two examples (they constitute the likely outcomes of two types of system) and thus represent situations where distinct behaviours were appropriate. I.e. as one starts to understand an existing system one gets diminishing returns for optimising interaction with it (a good example is AI programming itself); however, systems may be unknown to the user. These unknown systems may demonstrate rare but highly beneficial or unexpected events, like noticing an anomaly in a physics experiment. In this case it is rational to play/interact, as doing so provides more information which may be used to identify the system and thus lead to understanding, and thus an expected benefit in the future.

Sniffnoy: Of course, that just means you maximise expected utility rather than expected money. (I was almost going to write "expected value" instead of "expected utility" as you used the word "value", but obviously that would be confusing in this context...)

JohnDavidBustard: Yes, absolutely; apologies for my unfamiliarity with the terms. The point I'm trying to make is that lottery playing optimises utility (assuming utility means what is considered valuable to the person). Saying that lottery playing is irrational makes a statement about what is valuable more than it does about what is reasonable.

Kingreaper: This is likely because playing the lottery gives you "hope" of a life-changing event. It means that you KNOW there is a possible life-changing event available.
If you already have that knowledge, then paying for the lottery becomes just about the money, which isn't worthwhile. If you don't, paying for the lottery is buying that knowledge, and the knowledge has value to you.

Ummm, no. The money pump fails because of the REASON for the preference difference. The reason is, as some have already stated, that in scenario 1B if you lose you know it's your fault you got nothing. In scenario 2B if you lose, you can rationalise it easily as "Would have lost anyway". In your money pump scenario, we have a 1/3rd chance of playing 1. If we get to play 1, we know we're playing 1. So your money pump fails, because a standard player would prefer that the switch be on A at all times.

How do I alleviate feeling pleased at myself for having read the statement of the paradox - that people preferred 1A>1B but 2B>2A - and immediately going "WHAT?" and boggling at the screen and pulling confused faces for about thirty seconds, so flabbergasted I had to reread that this choice pattern was common? (Personally I'm really strongly biased these days toward a bird in the hand and would have chosen 1A and 2A every time. I occasionally do bits of sysadmin for dodgy dot-coms that friends are working for. There are people who offer equi... (read more)

shokwave: Penalise expected value of equity because probability is lower than I have been led to believe - an incredibly useful heuristic. In 33/34ths of the worlds where you make choice A in 1, you are mercilessly teased and mocked by your inferiors, a la this [http://www.youtube.com/watch?v=-Vw2CrY9Igs], thirty seconds in, for not picking B. Assuming counterfactual outcomes are revealed.

David_Gerard: I'll just have to cry myself to sleep on a big bed made of $24,000!

It took me 30 minutes of sitting down and doing math before I could finally accept that 1A+2B was an irrational preference. I finally realized that a lot of it came down to: with a 66% vs 67% chance of losing, I could take the riskier option and not feel as bad, because I could sweep it under the rug with "oh, I probably would have lost anyways."

Once I ran a scenario where I'd KNOW whether it was that 1% that I controlled, or the 66% that I didn't control, that comfort evaporated.

I learned a lot about myself by working through this exercise, so thank you very much :)

The problem as stated is hypothetical: there is next to no context, and it is assumed that the utility scales with the monetary reward. Once you confront real people with this offer, the context expands, and the analysis of the hypothetical situation falls short of being an adequate representation of reality, not necessarily because of a fault of the real people.

Many real people use a strategy of "don't gamble with money you cannot afford to lose"; this is overall a pretty successful strategy (and if I was looking to make some money, my mark woul... (read more)

wedrifid: The problem is not with the hypothetical. It is with the intuition. Intuitions which really do prompt bad decisions in real-life circumstances along these lines.

mendel: You seem to have examples in mind?

Pavitra: The lottery comes immediately to mind. You can't be absolutely sure that you'll lose.
[anonymous]: Not necessarily. It is assumed that receiving $24,000 is equally good in either situation. Your utility function can ignore money entirely (in which case preferring 1A to 1B, or 2B to 2A, is irrational because you should be indifferent in both cases). You can use the utility function which prefers not to receive monetary rewards divisible by 9: in this case, 1A>1B and 2A>2B is your best bet, giving you 100% and 34% chances to avoid 9s, rather than 0% chances. In general, your utility function can have arbitrary preferences on A and B separately; but no matter what, it will prefer 1A to 1B if and only if it prefers 2A to 2B.

As for the rest of your reply -- yes, it is true that real people use strategies ("heuristic" is the word used in the original post) that lead them to choose 1A and 2B. That's sort of why it's a paradox, after all. However, these strategies, which work well in most cases, aren't necessarily the best in all cases. The math shows that. What the math doesn't tell us is which case is wrong. My own judgment, for this particular sum of money (which is high relative to my current income), is that choice 1A is correctly better than choice 1B, in order to avoid risk. However, choice 2A is also better than choice 2B, upon reflection, even though my intuitions tell me to go with 2B. This is because my intuitions aren't distinguishing 33% and 34% correctly. In reality, faced with the opportunity to earn amounts on the order of $20K, I should maximize my chances to walk away with something. In the first case, I can maximize them fully, to 100%, which triggers my "success!" instinct or whatever: I know I've done everything I can because I'm certain to get lots of money. In the second case, I don't get any satisfaction from the correct decision, because all I've done is improve my chances by 1%.

In general, the heuristic that 1% chances are nearly worthless is correct, no matter what's at stake: I can usually do better by working on something that will give me a 10% or 25% chance. In
mendel: The utility function has as its input only the monetary reward in this particular instance. Your idea that risk-avoidance can have utility (or that 1% chances are useless) cannot be modelled with the set of equations given to analyse the situation (the percentage is not an input to the U() function) - the model falls short because the utility attaches only to the money and nothing else. (Another example of a group of individuals for whom the risk might out-utilize the reward are gambling addicts.) Security is, all other things being equal, preferred over insecurity, and we could probably devise some experimental setup to translate this into a utility money equivalent (i.e. how much is the test subject prepared to pay for security and predictability? That is the margin of insurance companies, btw). :-P I wanted to suggest that a real-life utility function ought to consider even more: not just the single case, but the strategies used in this case - do these strategies or heuristics have better utility in my life than trying to figure out the best possible action for each problem? In that case, an optimal strategy may well be suboptimal in some cases, but work well re: a realistic lifetime filled with probable events, even if you don't contrive a $24,000 life-or-death operation. (Should I spend two years of my life studying more statistics, or work on my father's farm? The farm might profit me more in the long run, even if I would miss out if somebody made me the 1A/1B offer, which is very unlikely, making that strategy the rational one in the larger context, though it appears irrational in the smaller one.)

[anonymous]: Risk-avoidance is captured in the assignment of U($X). If the risk of not getting any money worries you disproportionately, that means that the difference U($24K) - U($0) is higher than 33 times the difference U($27K) - U($24K).
mendel: That's a neat trick; however, I am not sure I understand you correctly. You seem to be saying that risk-avoidance does not explain the 1A/2B preference, because you say your assignment captures risk-avoidance, and it doesn't lead to that. (It does lead to your take of the term, though - your preference isn't 1A/2B.) Your assignment looks like "diminishing utility", i.e. a utility function where the utility scales up subproportionally with money (e.g. twice the money must have less than twice the utility). Do you think diminishing utility is equivalent to risk-avoidance? And if yes, can you explain why?

[anonymous]: I think so, but your question forces me to think about it harder. When I thought about it initially, I did come to that conclusion -- for myself, at least. [I realized that the math I wrote here was wrong. I'm going to try to revise it. In the meantime, another question: do you think that risk avoidance can be modeled by assigning an additional utility to certainty, and if so, what would that utility depend on?] Also, thinking about the paradox more, I've realized that my intuition about probabilities relies significantly on my experience playing the board game Settlers of Catan. Are you familiar with it?

mendel: One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability of getting it), and define U(x,p) = 2x if p = 1, and U(x,p) = x otherwise. I doubt that this is a useful model of reality, but mathematically, it would do the trick. My stated opinion is that this special case should be looked at in the light of more general strategies/heuristics applied over a variety of situations, and this approach would still fall short of that. I know Settlers of Catan, and own it. It's been a while since I last played it, though. Your point about games made me aware of a crucial difference between real life and games, or other abstract problems of chance: in the latter, chances are always known without error, because we set the game (or problem) up to have certain chances. In real life, we predict events either via causality (100% chance, no guesswork involved, unless things come into play we forgot to consider), or via experience/statistics, and that involves guesswork and margins of error. If there's a prediction with a 100% chance, there is usually a causal relationship at the bottom of it; with a chance less than 100%, there is no such causal chain; there must be some factor that can thwart the favorable outcome; and there is a chance that this factor has been assessed wrong, and that there may be other factors that were overlooked. Worst case, a 33/34 chance might actually only be 30/34 or less, and then I'd be worse off taking the chance. Comparing a .33 with a .34 chance makes me think that there's gotta be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there's usually a sizeable chance that the underlying probabilities might be equal or reversed, so going for the higher reward makes sense. [rewritten] Imagine you are a mathematical advisor to a king who asks you to advise him of a course of action and to predict the outcome. In situation, you can pretty much advise wh

[anonymous]: The problem with this is that dealing with p=1 is iffy. Ideally, our certainty response would be triggered, if not as strongly, when dealing with 99.99% certainty - for one thing, because we can only ever be, say, 99.99% certain that we read p=1 correctly and it wasn't actually p=.1 or something! Ideally, we'd have a decaying factor of some sort that depends on the probabilities being close to 1 or 0. The reason I asked is that it's very possible that a correct model of "attaching a utility to certainty" would be equivalent to a model with diminishing utility of money. If that were the case, we would be arguing over nothing. If not, we'd at least stand a chance of formulating gambles that clarify our intuitions if we knew what the alternatives are. If the 33% and 34% chances are in the middle of their error margins, which they should be, our uncertainty about the chances cancels out and the expected utility is still the same. Going for the higher expected value makes sense. I brought up Settlers of Catan because, if I imagine a tile on the board with $24K and 34 dots under it, and another tile with $27K and 33 dots, suddenly I feel a lot better about comparing the probabilities. :) Does this help you, or am I atypical in this way? Obviously with the advisor situation, you have to take your advisee's biases into account. The one most relevant to risk avoidance is, I think, the status quo bias: rather than taking into account the utility of the outcomes in general, the king might be angry at you if the utility becomes worse, and not as picky if the utility becomes better (than it is now). You have to take your own utility into account, which depends not on the outcome but on your king's satisfaction with it.
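mendel's U(x,p) trick above really does reproduce the paradoxical preference pattern; a quick sketch verifying it (the valuation rule is mendel's, the code and names are mine):

```python
# Sketch of the certainty-premium valuation proposed above:
# U(x, p) = 2x when the outcome is certain, and just x otherwise.
# This is deliberately NOT an expected-utility agent -- the utility
# of an outcome depends on its probability.
def U(x, p):
    return 2 * x if p == 1 else x

def value(gamble):  # gamble maps outcome -> probability
    return sum(p * U(x, p) for x, p in gamble.items())

G1A = {24_000: 1.0}
G1B = {27_000: 33 / 34, 0: 1 / 34}
G2A = {24_000: 0.34, 0: 0.66}
G2B = {27_000: 0.33, 0: 0.67}

print(value(G1A) > value(G1B))  # True: 48000 vs ~26206, so it picks 1A
print(value(G2B) > value(G2A))  # True: 8910 vs 8160, so it picks 2B
```

As mendel says, this is a formal trick rather than a realistic model; it also collapses if the certainty bonus is supposed to apply at 99.99% as well, which is the p=1 problem raised in the reply.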

I wonder how the results would change if the experiment changes so that the outcomes of 2B are: "You have a 33% chance of receiving $27k, a 66% chance of not getting anything, and a 1% chance of having someone laugh in your face for not picking 2A."

If you'd ask any person capable of doing the math whether they would want to play 1A or 1B a thousand times, you'd probably get a different answer, but not an answer that's more correct. Also, the utility value of money is not directly relative to the amount of money. Imagine that you would need $1,000 to save your dying relative with certainty by paying for his/her treatment. Good enough for explaining 1A > 1B, but doesn't resolve the contradiction with 2B > 2A.

But an even more revealing edit is based exactly on the certainty. If yo... (read more)

Vaniver: You're right that certainty helps out with planning, and so certainty can be valuable sometimes. It's still a bias to unconsciously add in a value for certainty if you don't need it in this case, even if it sometimes pays off, and so it's worth thinking through the 'paradox.'

Surunveri: I wanted to point out that this flaw is not a foolish flaw. That's how we create plans: we project and create expectations, and the anticipated feeling of loss is frustrating to plan for. In a theoretical example you might make a bad decision, but isn't it also that this flaw causes you to make good decisions in actual real-world situations? Since they don't tend to occur in such theoretical forms where you have all the required information available and which lack context. If you'd actually encounter this problem in a real-world situation, you might end up making a bad decision because of handling it with a too theoretical approach - what if I told you you get to play both games and actually get to choose between both, when you come to visit me? But you didn't have money to pay for the ticket to fly over? What if you took a loan? And without the certainty of 1A you might end up in a bad situation where you'll lack the means to pay back your loan - in other words, a decision-making agent with this flaw handles the situation well. But of course you can take all that into account. And as it's a problem dealing with rationality, I think it's pretty important to note these things. Anyway, I agree with you, Vaniver =)

Please correct me if any of my assumptions are inaccurate, and I apologize if this comment comes off as completely tautological.

Expected utility is explicitly defined as the statistic

$\sum_{x \in X} p(x)\, U(x)$

where X is the set of all possible outcomes associated with a particular gamble, p(x) is the proportion of times that outcome x occurs within the gamble, and U(x) is the utility of outcome x, a function that must be strictly increasing with respect to the monetary value of outcome x.

To reduce ambiguity:

• 1A, 1B, 2A, and 2B are instances of gambles.

• For 1B, the possible o

thomblake: That all seems pretty uncontroversial.
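The definition of expected utility quoted above can be turned directly into code, together with a brute-force check of the post's claim that no utility function can rank 1A above 1B while also ranking 2B above 2A. The helper names and the grid of candidate utilities are illustrative, not from the comment:

```python
# Sketch: the expected-utility statistic from the comment above, plus a
# brute-force check that no increasing utility assignment over the three
# relevant outcomes ($0, $24,000, $27,000) yields both 1A > 1B and 2B > 2A.
import itertools
from fractions import Fraction as F  # exact arithmetic avoids float-boundary artifacts

def expected_utility(gamble, U):
    """gamble maps outcome -> probability; U maps outcome -> utility."""
    return sum(p * U(x) for x, p in gamble.items())

G1A = {24_000: F(1)}
G1B = {27_000: F(33, 34), 0: F(1, 34)}
G2A = {24_000: F(34, 100), 0: F(66, 100)}
G2B = {27_000: F(33, 100), 0: F(67, 100)}

found_inconsistent = False
# Scan all strictly increasing utility triples (U(0), U(24k), U(27k)) on a grid.
for u0, u24, u27 in itertools.combinations(range(51), 3):
    U = {0: u0, 24_000: u24, 27_000: u27}.get
    if (expected_utility(G1A, U) > expected_utility(G1B, U)
            and expected_utility(G2B, U) > expected_utility(G2A, U)):
        found_inconsistent = True

print(found_inconsistent)  # False: no such utility assignment exists
```

The scan is redundant given the algebra (both preferences reduce to contradictory inequalities on 34·U($24K) versus 33·U($27K) + U($0)), but it makes the "regardless of U" claim concrete.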

I initially chose 1A and 2B, but after reading the analysis of those decisions, I agree that they are inconsistent in a way that implies that one choice was irrational (in the context of this silly little game). So I did some introspection to figure out where I went wrong. Here's what I found:

1) I may have misjudged how small 1/34 is, and this only became apparent when the question was phrased as it is in example 2.

2) I think I assumed implicit costs in these gambles. The first cost is a delay in learning the outcome of these gambles; the second is the i... (read more)

While Eliezer's argument is still correct (that you should multiply to make decisions based on probabilistic knowledge), I see a perfectly rational and utilitarian explanation for choosing 1A and 2B in the stated problem.

The clue lies in Colin Reid's comment: "people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it". This fear is explained by Kingreaper: "in scenario 1B if you lose you know it's your fault you got nothing".

That makes the two cases, stated as they are, different. In gam... (read more)

Vaniver: If you could choose whether or not to have this guilt, would you choose to have it? Does it make you better off?

I know this was posted 4 years ago, but I had a thought. If I was offered a certainty of $24,000 vs a 33/34 chance of$27,000, my preference would depend on whether this was a once-off. If this was a once-off, my primary concern would be securing the money and being able to put food on the table tonight. Option 1 will put food on the table with 100% certainty, while Option 2 will not.

If, however, the option was to be offered many times, I would optimise for greatest return - Option 2. If I miss out this month, I'll just scrape for food until next month, wh... (read more)

Paul Crowley: It absolutely can make sense to prefer option 1A over option 1B (which I think is what you mean). What does not make sense is to prefer option 1A over 1B, AND prefer 2B over 2A. It's worth reading the two followup articles before you get into this further: Zut Allais [http://lesswrong.com/lw/mz/zut_allais/] and Allais Malaise [http://lesswrong.com/lw/n1/allais_malaise]. Welcome to Less Wrong!

This is an old post, but I guess one resolution is that:

U($24,000) > 33/34 U($27,000) + 1/34 U($0 & Regret that I didn't take the $24,000)

Which is consistent with:

0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

It's an interesting psychological fact that the regret is triggered in one case, but not the other.
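A quick numeric check shows that this resolution is internally consistent. The utility numbers below are made up purely for illustration; what matters is that a sufficiently painful regret-laden outcome satisfies both displayed inequalities at once:

```python
# Sketch: illustrative utility numbers showing that scoring the outcome
# "won $0 after declining a sure $24,000" separately (and painfully) makes
# both stated preferences consistent. All values here are made up.
u24, u27 = 24.0, 27.0
u0 = 0.0            # winning nothing when it was a gamble either way
u0_regret = -100.0  # winning nothing after turning down a sure thing

prefers_1A = u24 > (33 / 34) * u27 + (1 / 34) * u0_regret
prefers_2B = 0.33 * u27 + 0.67 * u0 > 0.34 * u24 + 0.66 * u0

print(prefers_1A, prefers_2B)  # True True: no contradiction once regret
                               # is a distinct outcome
```

This doesn't contradict the post's algebra; it changes the outcome space, so the two gambles no longer share the same U($0) term.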

[anonymous]:

I wonder if this bias is somehow trying to compensate for some other bias. Suppose you think the experimenter is overconfident, i.e., their log-odds are twice as much as they should; so, when they say 100% they do mean 100%, but when they say 97.1% they actually mean 85.2% (and when they say 34% they mean 41.8%, and when they say 33% they mean 41.2%). Now, Option 1B suddenly looks much uglier, doesn't it? (I'm not claiming this happens consciously.)
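The deflation described here is easy to sketch: halve the speaker's stated log-odds, then convert back to a probability. The function name and scaling factor are mine; the quoted figures check out:

```python
# Sketch of the overconfidence correction described above: shrink a stated
# probability toward 50% by scaling its log-odds (factor 0.5 = "their
# log-odds are twice what they should be").
import math

def deflate(p, factor=0.5):
    """Defined for 0 < p < 1; scales the log-odds of p by `factor`."""
    log_odds = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-factor * log_odds))

print(round(deflate(33 / 34), 3))  # ~0.852: a stated 97.1% is heard as 85.2%
print(round(deflate(0.34), 3))     # ~0.418
print(round(deflate(0.33), 3))     # ~0.412
```

A stated p = 1 sits off the log-odds scale entirely (infinite log-odds, and the function above would divide by zero), which matches the comment's "when they say 100% they do mean 100%" and gives the certain option an extra edge under this correction.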

If flipping the switch before 12:00 pm has no effect on the amount of money one acquires, why would one pay anything to do it? Why not just flip the switch only once, after 12:00 pm and before 12:05 pm?

Question: do the rest of you actually find the choice of 1A clearly intuitive?

I think my intuition for examples like this has been safely killed off, so my replacement intuition instead says: "hm, clearly 34*(27-24) > 27, so 1B!" (without actually evaluating 27-24, just noting it's ≥1). Which mainly suggests that I've grown accustomed to calculating expectations out explicitly where they're obvious, not that I'm necessarily good at avoiding real life analogues of the problem.

Martin-2: I chose 1B. I seem to be an outlier in that I chose 1B and 2B and did no arithmetic.

[anonymous]: Me too! We're just two greedy people! :)

1A. $24,000, with certainty. 1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

2A. 34% chance of winning $24,000, and 66% chance of winning nothing. 2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

I would choose 1A over 1B, and 2B over 2A, despite the 9.2% better expected payout of 1B and the small increased risk in 2B. If the option was repeatable several times, I'd choose 1B over 1A as well (but switch back to 1A if I lost too many times).

This does not make me susceptible to a money pump or a Dutch book (you'... (read more)

Vaniver: This... means you're vulnerable to the Dutch Book described in the post. Why do you think otherwise? Basically, this. The point of utility is that it's linear in probability, which disallows a premium for certainty. If I know your utility for $27,000, your utility for $24,000, and your utility for $0, then I can calculate your preferences over any gamble containing those three outcomes. If your decision procedure is not equivalent to a utility function, then there are cases where you can be made worse off even though it looks to you like you're being made better off.

Isn't certainty impossible in a world of overconfident people, accidents, and cheaters?

christopherj: I'm really not. You mean, "This means that according to my theory you're vulnerable to the Dutch Book described in the post." Like I said, though, I'm not accepting trades with negative utility, and being money pumped and Dutch Booked both have negative utility. As for the "money pump" described in the post, I gain $23,999.98 if it happens as described. Also, there would have been no need to pay the first penny, as the state of the switch was not relevant at that time. Also the game was switched from "34% for 24,000 and 33% for 27,000" to "34% chance to play game 1, at which time you may choose". I agree that if you take the probability out of my utility function, then I am directly altering my preference in the exact same situation. Even so, there is in reality at least one difference: if someone is cheating or made a miscalculation, option 1A is cheat-proof and error-proof but none of the other options are. And I've definitely attached utility to that. This aspect would disappear if probabilities were removed from my utility function.

christopherj: Note that it becomes a different problem this way than my stated preferences (and note again that my stated choices (not preferences) were context-dependent) -- there is the additional information that the dealmaker had a good chance to cheat and didn't take it. This information will reduce my disutility calculation for the uncertainty in the offer, as it increases my odds of winning 1B from [33/34 - good chance of cheating] to [33/34 - small chance of cheating].

Or $23,999.98 richer. If I did hold those preferences, I would not be vulnerable to Dutch booking, nor money pumping. Money pumping is infinite, whereas by giving me two pairs of different choices you can make me choose twice (and it's not a preference reversal, though it would be exactly a preference reversal if you multiply the first choice's odds by 0.34 and pretend that changes nothing). For me to be vulnerable to Dutch booking, you'd have to somehow get money out of me as well. But how? I can't buy game 1 for less than 24,000 minus the cost of various witnesses if I intend to choose 1A, and you can't sell game 1 for less than 26,200. You'd have an even worse time convincing me to buy game 2. You can't convince me to bid against either of the theoretically superior choices 1B and 2B. If you change my situation I might change my choice, as I already stated several conditions that would cause me to abandon 1A.

Option 1A has a 0% chance of undetected cheating. Options 1B, 2A, and 2B all have a 100% chance of undetected cheating. In Game 3, you can pay to change your default choice twice, and the dealmaker shows a willingness to eliminate his ability to cheat before your second choice.

Not currently. There would be a lot of factors determining how likely I think a miscalculation or cheating might be, and there is no way to determine this in the abstract.
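For concreteness, here is the expected cost of the penny-switching pump the thread is arguing about, under my reading of the post's setup (the parameters and framing below are assumptions, not quotes): the game happens with 34% probability, the agent pays $0.01 per flip, prefers the switch on B before the draw (2B over 2A) and on A after it (1A over 1B).

```python
# Sketch of the penny-switching money pump, under the assumptions stated
# in the lead-in (my reading of the post's setup, not a quote from it).
P_PLAY, PENNY = 0.34, 0.01

# The pumped agent flips to B before the draw (one penny, always), then --
# if the 34% event occurs -- now faces situation 1, prefers 1A, and pays
# another penny to flip back. It ends with the switch exactly where a
# stay-put agent leaves it, minus the flip fees.
expected_flip_cost = PENNY + P_PLAY * PENNY
print(round(expected_flip_cost, 4))  # 0.0134 expected dollars burned per round
```

The per-round loss is tiny, but it is a pure loss: in every possible outcome the pumped agent holds the same ticket as the stay-put agent and is strictly poorer, which is the sense in which the preference pattern is exploitable.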

I don't like many of the standard arguments against capital punishment. In particular, I'm tired of the argument "if you just put an innocent person in jail, they might be exonerated later. If you execute an innocent person, and they are exonerated later, it's too late."

Of course, I then point out that people can be exonerated in the time between being convicted and being executed (which can be quite long sometimes), and the response is generally that in the life sentence there's always some chance of being freed due to exoneration while in the... (read more)

0hyporational8yI agree. Have you considered that life in prison has more value than being dead? Also, why compare capital punishment to life sentences? What if there were no life sentences? Of course you can still die in prison for whatever that's worth, but the chance is significantly smaller.
1Jiro8y I didn't post that because it was about capital punishment; I posted it because I thought this particular anti-capital-punishment argument was relevant to the Allais problem. I don't see how life in prison being more valuable than being dead is relevant to the Allais problem. Insofar as it's relevant, it just changes the values of X and Y; the absolutist "we can't do it because an innocent may be exonerated only after he is killed" position still has the same flaw.
0hyporational8y Ok, good to know you weren't trying to sneak in politics. I agree it's not relevant. Yes, if we're being strictly logical, this is true.

My resolution to this, without changing my intuitions to pick things that I currently perceive as 'simply wrong', would be that I value certainty. A 9/10 chance of winning x dollars is worth much less to me than a 10/10 chance of winning 9x/10 dollars. However, a 2/10 chance of winning x dollars is worth only barely less than a 4/10 chance of winning x/2 dollars, because as far as I can tell the added utility of the lack of worrying increases massively as the more certain option approaches 100%. Now, this becomes less powerful the closer the odds are, but …
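The "I value certainty" resolution above can be sketched as a toy value function. To be clear, the functional form below (expected value plus a bonus that only matters as the win probability nears 1) is invented purely for illustration; nothing in the comment commits its author to it.

```python
# Toy "certainty premium" value function: expected value plus a bonus
# that is ~0 except when the win probability p is near 1.
# The premium size and exponent are arbitrary illustrative choices.
def subjective_value(p, prize, premium=3_000):
    return p * prize + premium * p ** 20

# A 10/10 chance of 0.9x beats a 9/10 chance of x (here x = $10,000):
print(subjective_value(1.0, 9_000) > subjective_value(0.9, 10_000))   # True

# ...but a 2/10 chance of x and a 4/10 chance of x/2 come out nearly
# equal, since the certainty bonus is negligible for both:
print(round(subjective_value(0.2, 10_000)), round(subjective_value(0.4, 5_000)))   # 2000 2000
```

This reproduces the commenter's pattern: the premium dominates only when one option is a sure thing.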

[-][anonymous]6y 0

I don't really see how my choosing 1A > 1B and 2B > 2A is a flaw of mine. First of all, my utility function, which I have inherited from millions of years of evolution, tells me to SOMETIMES take risks IF I CAN AFFORD IT, especially when the increased stake outweighs the increased risk.

This is how I see it: if it were my life at stake, I would of course try to raise the odds. But this is extra money. I don't even starve if I don't get the money.

If I am not certain I can get the money in case 2, I think that lowering my win-chance by 1/100 is worth …

[-][anonymous]6y 6

The Allais "Paradox" and Scam Vulnerability by Karl Hammer is a much-needed update for anyone who reads the OP.

Would I pay $24k to play a game where I had a 33/34 probability of winning an extra $3k? Let's consult our good friend the Kelly criterion.

We have a bet that pays 1/8:1 with a 33/34 probability of winning, so Kelly suggests staking ~73.5% of my bankroll on the bet. This means I'd have to have an extra ~$8.7k I'm willing to gamble with in order to choose 1B. If I'm risk-averse and prefer a fractional Kelly scheme, I'd need to start with ~$20k for a three-fourths Kelly bet and ~$41k for a one-half Kelly bet. Since I don't have that kind of money lying aroun…

Forgive me if I'm misunderstanding something, but the way I see it, if I choose 1A, it means that I am willing to forgo (i.e. pay) $3,000 for an additional 1/34 ≈ 3% chance of getting money. Then if I choose 2B, it means I am unwilling to forgo an additional $3,000 in exchange for an additional 1% chance of getting money. So what I learn from this is that the value I assign to an extra percentage point of chance of getting money is somewhere between $1,000 and $3,000.

So here's why I prefer 1A and 2B after doing the math, and what that math is. The expected values are:

1A = 24,000
1B = 26,206 (rounded)
2A = 8,160
2B = 8,910

Now, if you take (iB − iA)/iA, which represents the percent increase in the expected value of iB over iA, you get the same number for both games, as you stated: (iB − iA)/iA = 0.0919 (rounded). This number's reciprocal represents the number of times greater the expected value of iA is than the marginal expected value of iB: iA/(iB − iA) = 10.88 (rounded). Now, take this number and divide it by the quantity p(iA wins) − p(iB wins). This represents how much y…

Assuming this is a one-off and not a repeated iteration, I'd take 1A because I'd be *really* upset if I lost out on $27k due to being greedy and not taking the sure $24k. That 1/34 is a small risk, but to me it isn't worth taking; the $24k is too important for me to lose out on.

I'd take 2B instead of 2A because the difference in odds is basically negligible, so why not go for the extra $3k? I have a ~2/3 chance to walk away with nothing either way. I don't really see the paradox there. The point is to win, yes? If I play game 1 and p…

Oh, here I come again; I've already commented in similar fashion elsewhere, and several people said the same here: treating nothing vs. non-nothing as a binary switch may work better if the situation is not repeated enough to "add up to normality" but is only played once. One can argue that each repeat may seem like a one-off at the time, but, being creatures gifted with memory, we can notice that we encounter such situations often and modify our behaviour.

I would set up an insurance company that pays people $24,500 to pick 1B and keeps their winnings if they win. They get slightly more risk-free money and I profit massively. Isn't that the whole point of insurance?
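The arithmetic running through the last few comments (the expected values, the Kelly stake, and the insurance scheme's margin) is easy to check. A quick sketch in Python; the $24,500 premium is the insurance commenter's own figure, not from the post.

```python
# Expected values of the four Allais gambles from the post.
ev_1a = 1.00 * 24_000
ev_1b = (33 / 34) * 27_000
ev_2a = 0.34 * 24_000
ev_2b = 0.33 * 27_000
print(round(ev_1b), round(ev_2a), round(ev_2b))   # 26206 8160 8910

# Kelly criterion for choice 1B viewed as a bet: stake $24k to win an
# extra $3k (odds b = 1/8 : 1) with win probability p = 33/34.
b, p = 3_000 / 24_000, 33 / 34
f_star = (b * p - (1 - p)) / b                    # f* = (bp - q) / b
print(round(f_star, 3))                           # 0.735 of bankroll
# Bankroll needed so the $24k stake is exactly f* of it, beyond the $24k:
print(round(24_000 / f_star - 24_000))            # 8640

# The insurance idea: pay a 1A-chooser a guaranteed $24,500 to pick 1B
# and keep the prize. Expected margin per customer:
print(round(ev_1b - 24_500, 2))                   # 1705.88
```

The ~$8.7k figure in the Kelly comment is this ~$8.6k rounded up; the insurer's expected margin is positive but hardly "massive" per customer.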

I think this might just be a rephrasing of what several other commenters have said, but I found this conception somewhat helpful.

Based on intuitive modeling of this scenario and several others like it, I found that I ran into the expected "paradox" in the original statement of the problem, but not in the statement where you roll one die to determine the 1/3 chance of my being offered the wager, followed by the original wager. I suspect that the reason why is something like this:

Losing 1B is a uniquely bad outcome, worse than its monetary utility would imply …
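The die-roll framing in the comment above rests on the equivalence stated in the post: gamble 2B is just gamble 1B played with 34% probability (and nothing otherwise). A quick Monte Carlo sketch, with an arbitrary seed and sample size, checks this:

```python
import random

random.seed(0)
N = 200_000

def play_2b():
    # Direct form: 33% chance of $27,000.
    return 27_000 if random.random() < 0.33 else 0

def play_staged():
    # Two-stage form: 34% chance of being offered gamble 1B at all,
    # then 33/34 chance of winning it. Note 0.34 * 33/34 = 0.33.
    if random.random() < 0.34:
        return 27_000 if random.random() < 33 / 34 else 0
    return 0

ev_direct = sum(play_2b() for _ in range(N)) / N
ev_staged = sum(play_staged() for _ in range(N)) / N
# Both estimates should sit near 0.33 * 27,000 = 8,910.
print(round(ev_direct), round(ev_staged))
```

The distributions are identical, so any preference gap between the two framings is psychological rather than probabilistic.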

I think an essential part of why people make such an irrational decision can be explained by thinking of the probabilities as frequencies. In problem one, 33 out of 34 possible versions of you will receive money, and you're willing to pay $3,000 to make sure that the 34th can as well. But in problem two, 33 out of 100 will receive money, and yet you're not willing to pay $3,000 to make sure that the 34th can. The bias here is essentially that people care more about certainty than about the actual probabilities.
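The frequency framing above can be tabulated directly. A minimal sketch (the "versions of you" counts are taken straight from the comment):

```python
# Winners per batch of plays for each option: (winners, total plays).
winners = {
    "1A": (34, 34),    # 34 of 34 "versions of you" win $24,000
    "1B": (33, 34),    # 33 of 34 win $27,000
    "2A": (34, 100),   # 34 of 100 win $24,000
    "2B": (33, 100),   # 33 of 100 win $27,000
}
for option, (wins, total) in winners.items():
    print(f"{option}: {wins}/{total} win")

# In both games the safer option adds exactly one winner, at a cost of
# $3,000 in prize money; only in game 1 does that extra winner turn the
# outcome into a certainty.
print(winners["1A"][0] - winners["1B"][0], winners["2A"][0] - winners["2B"][0])   # 1 1
```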