I'm confused. Could someone help?

Imagine that I'm offering a bet that costs 1 dollar to accept. The prize is X + 5 dollars, and the odds of winning are 1 in X. Accepting this bet, therefore, has an expected value of 5 dollars (a positive expected value), and offering it has an expected value of -5 dollars. It seems like a good idea to accept the bet, and a bad idea for me to offer it, for any reasonably sized value of X.

Does this still hold for unreasonably sized values of X? Specifically, what if I make X really, really big? If X is big enough, I can reasonably assume that, basically, nobody's ever going to win. I could offer a bet with odds of 1 in 10^100 once every second until the Sun goes out, and still expect, with near certainty, that I'll never have to make good on my promise to pay. So I can offer the bet without caring about its negative expected value, and take free money from all the expected value maximizers out there.
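
To put rough numbers on that (the Sun has about 5 billion years left, which is on the order of 10^17 seconds; a minimal sketch in Python):

```python
# Rough odds of ever having to pay out, one bet per second until the Sun dies.
seconds_remaining = 5e9 * 365.25 * 24 * 3600  # ~5 billion years ~ 1.6e17 seconds
p_win = 1e-100                                # odds of winning: 1 in 10^100
n_bets = seconds_remaining                    # one bet offered per second

# P(at least one payout) = 1 - (1 - p)^n, which for tiny p is ~ n * p.
p_ever_pay = n_bets * p_win
print(f"{p_ever_pay:.1e}")  # ~1.6e-83: "near certainty" of never paying
```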

What's wrong with this picture?

See also: Taleb Distribution, Nick Bostrom's version of Pascal's Mugging

(Now, in the real world, I obviously don't have 10^100 + 5 dollars to cover my end of the bet, but does that really matter?)


Edit: I should have actually done the math. :(
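
For the record, a minimal sketch of that math (assuming the $1 stake is forfeited on a loss, so the expected value of accepting works out to 5/X rather than 5, as a commenter below points out):

```python
from fractions import Fraction

def ev_accept(X):
    """EV of accepting: win a prize of X + 5 with probability 1/X, stake $1."""
    return Fraction(1, X) * (X + 5) - 1  # simplifies to 5/X, not 5

for X in (10, 1000, 10**100):
    print(X, ev_accept(X))  # 1/2, 1/200, then a vanishingly small 5/10^100
```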

10 comments

Treating money as a linear measure of value breaks down when the amounts get sufficiently large. For one thing, the marginal utility of $10,000,000 is not simply 10x the marginal utility of $1,000,000 (at least for someone who is not already wealthy). And for amounts so large that they represent a significant fraction of the total money supply, the linear relationship does not hold even ignoring marginal utility: owning all the money in the world is not simply 100x more valuable than owning 1% of it.
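
A toy illustration of the first point, using a log utility function (the function and the starting wealth are assumptions, purely for the example):

```python
import math

def utility(wealth):
    """Toy diminishing-marginal-utility function: U(w) = ln(w)."""
    return math.log(wealth)

base = 50_000  # hypothetical starting wealth
gain_1m = utility(base + 1_000_000) - utility(base)
gain_10m = utility(base + 10_000_000) - utility(base)
print(gain_10m / gain_1m)  # ~1.7 -- nowhere near 10x
```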

Then, of course, there is the problem that nobody would take the bet with you, since they would know you couldn't possibly pay if they won. Unless it's Goldman Sachs taking the bet, and they know the government will print the money and bail you out if they win.

I think I can help. You have set up a game of chance so that the expected value for the house (yourself) is negative. That means that on average you would have to pay out more than you would receive. However, while the payout is very big, the chances of winning are very tiny, so you wonder whether this changes the game. In some sense, you are asking about the expected value of the game when you know the law of large numbers is not going to apply, because you are not going to play enough times for the ratio of wins to losses to average out.

This is a problem about sampling. The number of times you play the game will be much smaller than the number of games needed to yield the expected average. Suppose you conduct the game (only!) a million times. How reasonable is it to expect that you would collect a million dollars and not have to pay anything? In other words, we just need to calculate the probability of not having any "win" in a sample size of a million. The probability of a win in such a small sample is tiny (epsilon), so you wonder whether you could consider it effectively zero, and whether it would then be worthwhile to play the game.
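
That probability is straightforward to compute; a sketch (the value of X here is arbitrary, just for illustration):

```python
import math

X = 10**9  # odds of a win: 1 in X (arbitrary illustrative value)
n = 10**6  # number of games conducted

# P(no wins in n independent games) = (1 - 1/X)^n ~= exp(-n/X)
print((1 - 1 / X) ** n)  # ~0.9990
print(math.exp(-n / X))  # same to many digits
```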

The answer is that the chances are extremely high that you will not have to pay out anything (1-epsilon), so in almost every case it is lucrative to play the game. However, when you do lose, you lose so big that it (really does) cancel out the winnings you would be making in all the other cases. So the expected value still holds -- it's not profitable to play the game.

My brain -- and your brain too, probably -- keeps buzzing that it is profitable to play the game because in almost every conceivable scenario, we can expect to make a million dollars. Human beings can't think intuitively about very small and very large numbers. Every time your brain buzzes on this problem, remind yourself that it's because you're not really weighing the enormity of the payout you would owe. Your brain keeps saying the probability is small, but the product of the probability and the payout is a finite, non-zero number.
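
A quick simulation of this point (a sketch; X and the game counts are arbitrary, chosen small enough that losses actually show up in the simulation):

```python
import random

X = 10_000       # a win pays X + 5 and has probability 1/X
n_games = 1_000  # games offered per simulated "lifetime" as the house

def lifetime_profit():
    wins = sum(random.random() < 1 / X for _ in range(n_games))
    return n_games - wins * (X + 5)  # collect $1 per game, pay X + 5 per win

profits = [lifetime_profit() for _ in range(10_000)]
print(sum(p > 0 for p in profits) / len(profits))  # ~0.90 of lifetimes profit
print(n_games * (1 - (X + 5) / X))                 # exact EV: -0.5 per lifetime
```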

As several comments below have alluded to, perhaps the impracticality of such a pay-off is detracting from the abstract understanding of the problem. However, this is a fascinating question, and it should be addressed squarely. (I'm pretty certain you didn't mean that you would just declare bankruptcy if you lost. Then your game would really be a scam, though I suppose we could argue about whether it is a scam in a sample where no one wins.)

Imagine this post with the problems other commenters have pointed out fixed. In effect, you're saying: Suppose I multiply something that's REALLY small (the probability of having to pay out) by something that's REALLY big (the amount that you would have to pay out). Further, suppose that the product (the expected payout) is 5 dollars. Can I just claim that the small probability is "practically zero" and get a different answer for the payout (that is, 0 dollars)?

There's nothing in your problem to prefer "small is approximately zero" over "big is approximately infinite". By making the other approximation, it seems just as reasonable for someone to pay a small amount for a small but finite chance of an infinite payout.
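
Concretely, with the prize scaled so the expected payout is pinned at 5 dollars (p = 1/X, payout = 5X), neither approximation is privileged:

```python
for X in (10**3, 10**6, 10**100):
    p, payout = 1 / X, 5 * X  # product pinned at $5 for any X
    print(p * payout)         # ~5.0 every time, however extreme X gets

# "Small is approximately zero" gives $0; "big is approximately infinite"
# gives infinity. Both throw away the only number that matters: the product.
print(0 * (5 * 10**100), (1 / 10**100) * float("inf"))  # 0 vs inf
```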

This question reminds me of the "unstoppable force and immovable object" problem that some of us encountered in middle school. One's intuition is destroyed by the extremes involved, and you can easily get your thinking into a circular rut focused on one half of the problem without noticing that your debate partner is going in a symmetrical circle on the other side.

There are several problems. You're really looking to take free money from expected utility maximizers, not "expected value maximizers", and the equation from an expected utility maximizer's point of view is:

Expected change in utility, given that I have N dollars = [U(N-1) * P(no pay|X) + U(N+X+5-1) * P(pay|X)] - U(N)

Key points here are the transformation of dollar winnings to utility (diminishing marginal utility of money), the fact that the expected value looks more like 5/X (not 5) dollars, and the fact that the expected utility maximizer cares about P(pay|X), not P(win the bet|X): its estimation of your ability to pay cannot be swept under the rug, so p quickly becomes much smaller than 1/X when X is 10^100.
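
A sketch of that equation evaluated with a toy log-utility function; the wealth level, X, and especially P(pay|X) are all made-up numbers for illustration:

```python
import math

def U(wealth):
    return math.log(wealth)  # toy diminishing-marginal-utility function

N = 100_000    # bettor's current wealth (assumed)
X = 10**6      # the bet's stated odds: 1 in X
p_paid = 1e-9  # P(pay|X): must win AND be paid; guessed to be far below 1/X

delta_EU = U(N - 1) * (1 - p_paid) + U(N + X + 5 - 1) * p_paid - U(N)
print(delta_EU)  # negative: handing over the $1 stake lowers expected utility
```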

I think this is one intuitive leap that could help:

You're supposing it's reasonable to assume that you'll never have to pay out in your lifetime. Anyone taking your bet, then, can just as much assume that they'll never win in your lifetime.

So it's as balanced as one should expect: if you obviously should offer the bet, then anyone else just as obviously shouldn't take it.

One way to approach the derivation of expected utility is to say that any outcome is equivalent (in the preference order) to some combination of the worst possible and the best possible outcomes. So, you pick your outcome, like [eating a pie], and say that there exists a probability P such that the lottery P [life in eutopia] + (1-P) [torture and death] is equally preferable. As it turns out, you can use that P as the utility of your outcome.

So, yes, if there is no problem with the ability to pay up on the promise, playing an incredibly risky lottery is the right thing to do. Just make sure that you have calculated your odds and utility correctly, which becomes trickier and trickier as the values get extreme.
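
A toy version of that construction (the outcomes and all the probabilities are made up for illustration):

```python
# Normalized utilities from indifference points: U(worst) = 0, U(best) = 1,
# and U(pie) = the P at which [pie] ~ P [eutopia] + (1-P) [torture and death].
U = {"torture and death": 0.0, "eating a pie": 0.999, "life in eutopia": 1.0}

def expected_utility(lottery):
    return sum(p * U[outcome] for outcome, p in lottery.items())

risky = {"life in eutopia": 0.9995, "torture and death": 0.0005}
sure_pie = {"eating a pie": 1.0}
print(expected_utility(risky), expected_utility(sure_pie))  # 0.9995 > 0.999
# The incredibly risky lottery beats the sure pie -- if the numbers are right.
```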