The Kelly Criterion in 3D


18 comments

OK, so when b is high, we basically want to bet p of our bankroll. This is easy to remember. I would like to think of the Kelly bet as a kind of adjustment on that.

Given fixed b, the Kelly bet is a line from 0% at the break-even point to 100% on sure things. So where is the break-even point?

The break-even point is where the top of the formula equals zero, ie p(b+1) − 1 = 0, so p = 1/(b+1). I prefer to think in terms of r = b+1, the "returns". (If there's an opportunity to "*double*" your money, r = 2 while b = 1; so I think r is more how people intuitively think about things.) So the break-even is p = 1/r. That's also pretty easy to remember.

So we can just check whether p is above 1/r, which is a little easier than calculating the expected value of the bet to check that it's above zero.

As noted earlier: when b is really high, we bet p, sliding from 0 to 1 as p slides from 0 to 1. We can now state the adjustment: when b is not so high, the Kelly formula adjusts things by making the slide start at 1/r instead of at zero.

So one way to calculate Kelly is to ask how far along the way between 1/r and 1 we are.

For example, if r = 2, then things start to be profitable after p = 1/2. If p = 0.8, we're 3/5ths of the way from 1/2 to 1, so would invest that part of our bankroll.

If r = 4, then things start to be profitable after p = 1/4. If p = 5/8, then it's half way to 1, so we'd spend half of our bankroll.

This seems way easier than trying to mentally calculate "expected net winnings over net winnings if you win", even though it's the same formula.
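The interpolation rule is easy to sanity-check numerically. Here is a minimal sketch (plain Python, no dependencies; the helper names are mine) comparing it against the standard closed form, using the two worked examples above:

```python
def kelly_closed_form(p, b):
    """Standard Kelly fraction: f* = (p(b+1) - 1) / b."""
    return (p * (b + 1) - 1) / b

def kelly_interpolated(p, r):
    """Kelly as 'how far p is along the way from break-even 1/r to 1'."""
    break_even = 1 / r
    return (p - break_even) / (1 - break_even)

# r = 2 (even money), p = 0.8: 3/5ths of the way from 1/2 to 1
assert abs(kelly_interpolated(0.8, 2) - 0.6) < 1e-9
# r = 4, p = 5/8: half way from 1/4 to 1
assert abs(kelly_interpolated(5/8, 4) - 0.5) < 1e-9

# The two formulas agree everywhere (with r = b + 1)
for p in (0.3, 0.5, 0.9):
    for b in (1, 3, 100):
        assert abs(kelly_interpolated(p, b + 1) - kelly_closed_form(p, b)) < 1e-9
```

Algebraically, (p − 1/r)/(1 − 1/r) = (pr − 1)/(r − 1) = (p(b+1) − 1)/b, so the agreement is exact.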

Rather than talking about p and b (your probability and net fractional odds*), doing things in terms of p and q = 1/(b+1) (your probability and the implied-odds probability) makes calculations easier and lots of the things you didn't expect more intuitive. Specifically:

f∗ = (p − q) / (1 − q)

This means:

1. It is easy to see the fraction is piecewise linear

2. The answer to Q5 is obvious.

* aside: who talks in net fractional odds? None of the 3 major odds styles use it

I agree, and I'd put it in slightly different terms again.

Your *edge* is p − q, how much bigger your own probability estimate is than the one implied by the odds you're getting. The *maximum possible edge* is 1 − q, what your edge would be if you *knew* you would win. (This also equals your counterparty's probability that you lose.) Kelly says: **the fraction of your funds to bet equals the fraction of the maximum possible edge that you've got**.

Equivalently: you bet nothing if you've got no edge, bet all your funds if you know you're going to win, and interpolate linearly in between. (That's exactly lsusr's observation, of course.)
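The edge/max-edge framing is a quick one-liner to verify against the standard formula (a sketch in plain Python; the helper name is mine):

```python
def kelly_edge_form(p, b):
    """Fraction of the maximum possible edge: (p - q) / (1 - q),
    where q = 1/(b+1) is the probability implied by the odds."""
    q = 1 / (b + 1)
    return (p - q) / (1 - q)

# Matches the standard Kelly fraction (p(b+1) - 1) / b everywhere
for p in (0.51, 0.65, 0.9):
    for b in (1, 2, 100):
        assert abs(kelly_edge_form(p, b) - (p * (b + 1) - 1) / b) < 1e-9
```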

I am interested in a couple of points around this:

What makes this worse is that the region of 0.05% to 1% of my net worth is full of long tails. The wagers I'm skipping could easily repay themselves a thousandfold. If I take a wager like this every day for 2 years and just a single one of them repays itself a thousandfold then I win bigtime.

If I take b = 1000 and f* at 0.0005 I get p ~= 0.0015

If I take b = 1000 and f* at 0.01 I get p ~= 0.011

You draw from this the lesson that you need to bet more. What do you see as a source of potential wagers, coming along daily, where there's a potential 1000x payout and your probability of a win is between around 0.15% and 1.1%?

Also, I'd urge caution over the "[if] just a single one of them repays itself a thousandfold then I win bigtime" framing.

At the p=1.1% / bet=1% of wealth situation, one win in 730 (daily for 2 years) would take a starting balance of $100,000 down to $718. Now, treating this as a binomial distribution, that'll only happen 0.25% of the time, but 2 wins leaving you with $7,990 will happen 1% of the time and 3 wins, leaving $88,871, will happen 2.8% of the time (assuming I've not mangled my calculations).

(Yes, in ~96% of cases you'd be left with $988,451 or greater, typically much greater, in the $millions, $billions or $trillions. But this is a caution against thinking 'only one of these 1% chances in 2 years has to come off to be winning bigtime'.)

At the other end of the spectrum you mention, i.e. the bets are all at the 0.05% of wealth (p=0.15%) situation, the results are much less dramatic.

wins=0, closing balance = $69,337 (prob = 0.33)

wins=1, closing balance = $104,162 (prob = 0.37)

wins=2, closing balance = $156,478 (prob = 0.20)

wins=3, closing balance = $235,070 (prob = 0.07)

wins=4, closing balance = $353,134 (prob = 0.02)
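These figures can be reproduced, up to small rounding differences, with a few lines of stdlib Python. The closing balance after exactly k wins in n fixed-fraction bets is W0·(1+b·f)^k·(1−f)^(n−k), and the win counts follow a binomial distribution (the helper name is mine):

```python
from math import comb

def outcome(w0, f, b, p, n, k):
    """Closing balance and binomial probability of exactly k wins
    out of n bets, staking fraction f of the current bankroll at net odds b."""
    balance = w0 * (1 + b * f) ** k * (1 - f) ** (n - k)
    prob = comb(n, k) * p ** k * (1 - p) ** (n - k)
    return balance, prob

# Low-stakes scenario: 0.05% of wealth per bet, p = 0.15%, b = 1000, daily for 2 years
for k in range(5):
    bal, pr = outcome(100_000, 0.0005, 1000, 0.0015, 730, k)
    print(f"wins={k}, closing balance = ${bal:,.0f} (prob = {pr:.2f})")
```

This gives roughly $69,400 and probability 0.33 for zero wins; the small differences from the figures quoted above look like rounding in the intermediate steps.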

What do you see as a source of potential wagers, coming along daily, where there's a potential 1000x payout and your probability of a win is between around 0.15% and 1.1%?

There's two big sources of wagers with small downside and large upside: networking events and buying tools.

- I don't pay enough for serendipitous opportunities to meet people. If I meet the right people, I think the payout of social serendipity really is 1000× and the probability is between 0.15% and 1.1%. And that's just counting the most extreme wins. 10× and 100× payouts come along much more frequently.
- I'm too conservative when it comes to buying tools. I'll usually only buy tools when the tools are very cheap or I am at least 75% confident I will use them. I treat tools I don't use as *mistakes* when I should think of them as wagers. I expect thinking of tools as wagers will also make it easier to throw away the ones I don't use.

Note that the formula listed in the article is the Kelly formula for when you lose 100% of your stake if you lose the bet, which isn't always the case.

The Kelly formula is derived from the starting point of:

Wn = W0 · (1 + b·wager)^(pn) · (1 − a·wager)^((1−p)n)

Essentially, after a sufficiently large number n of wagers, you expect to have won pn times and lost (1−p)n times. Each time, your previous bankroll is multiplied, either by (1 + b·wager) if you won, or by (1 − a·wager) if you lost.

Often, a = 1. Sports betting, poker tournament, etc - if you lose your bet, you lose your entire wager.

Sometimes, though, it isn't: for something like an investment with a stop-loss, for example, the downside risk could be something like 20% instead of 100%.

If you leave it as "a" instead of assuming a = 1, you end up dividing that first term by it:

f∗ = p/a − (1 − p)/b
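Taking the generalized fraction to be f∗ = p/a − (1−p)/b (the standard formula with the win term divided by the loss fraction a), a grid search over the expected-log-growth objective confirms it numerically. A sketch with no dependencies, using made-up parameters:

```python
from math import log

def growth(f, p, b, a):
    """Expected log growth per bet: win fraction b*f w.p. p, lose a*f otherwise."""
    return p * log(1 + b * f) + (1 - p) * log(1 - a * f)

p, b, a = 0.6, 1.0, 0.2               # e.g. a 20% stop-loss instead of a total loss
kelly_general = p / a - (1 - p) / b   # = 2.6: more than the whole bankroll,
                                      # sensible since only 20% of the stake is at risk

# Grid search over the feasible region f in [0, 1/a)
best_f = max((i / 100_000 * (1 / a) for i in range(99_999)),
             key=lambda f: growth(f, p, b, a))
assert abs(best_f - kelly_general) < 1e-3
```

Note that with a partial loss the optimal stake can exceed 100% of the bankroll, which is why the feasible region runs up to 1/a rather than 1.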

The Kelly Criterion maximizes the growth of your bankroll over time. This is probably not actually the goal that you personally have for wealth, because of the nonlinearity of money. You (if you're like everyone else) care much more about preserving wealth, once you have some, than you do about growing it.

Some of this might be loss aversion, but mostly this is right -- going from $1M to $2M is nice but far from a doubling in your happiness or ability to do things; going from $1M to zero is a disaster. Kelly doesn't take that into account, except in the purely mathematical way that if you literally go to zero you can't make any more bets (which never happens).

For this reason, professional gamblers I know tend to bet half-Kelly to balance out bankroll preservation with growth. (Source: used to be a pro poker player.)

On the flipside, if you have another source of income, you can bet more aggressively. For instance, if you have a job that generates positive savings, you can count unearned savings as part of your bankroll for Kelly purposes. This is a huge advantage pure pro gamblers don't have. You probably don't want to be too too aggressive there, and how much to count will depend on the stability and/or fungibility of your income. A year or two of savings could be appropriate.

None of this should change your bottom line that you should take +EV longshot bets if you've been passing on them, just how much you should bet.
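The half-Kelly tradeoff mentioned above is easy to quantify: the expected log growth g(f) = p·log(1+bf) + (1−p)·log(1−f) is flat near its peak, so betting half Kelly keeps a large share of the growth rate at half the stake. A quick illustration in Python (the parameters are made up):

```python
from math import log

def growth(f, p, b):
    """Expected log growth per bet at betting fraction f."""
    return p * log(1 + b * f) + (1 - p) * log(1 - f)

p, b = 0.55, 1.0                 # a slight edge at even money
kelly = (p * (b + 1) - 1) / b    # = 0.10
ratio = growth(kelly / 2, p, b) / growth(kelly, p, b)
print(f"half-Kelly keeps {ratio:.0%} of the full-Kelly growth rate")  # ~75%
```

Roughly three quarters of the growth for half the exposure is the usual argument for fractional Kelly.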

Kelly does *kinda* take nonlinearity of money into account, in the following sense.

Suppose your utility increases logarithmically with bankroll. (I think it's widely thought that actually it grows a bit slower than that, but logarithmically will do.) Suppose you make a bet that with probability *p* wins you a fraction *bx* of your bankroll and with probability 1−*p* loses you the fraction *x* you staked. You get to choose *x* but not *p* or *b*. Then your expected utility on making the bet is p·log(1 + bx) + (1−p)·log(1 − x), whose derivative w.r.t. *x* is pb/(1 + bx) − (1−p)/(1 − x). You get max expected utility when pb(1 − x) = (1−p)(1 + bx), or equivalently when x = (p(b+1) − 1)/b, which is exactly the Kelly bet. So betting Kelly maximizes your (short-term) expected utility, if your utility grows logarithmically with bankroll.
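The maximization can also be checked without the calculus, by grid-searching the expected log utility directly (a sketch; the numbers are arbitrary):

```python
from math import log

p, b = 0.6, 1.0

def expected_log_utility(x):
    """Expected log wealth after betting fraction x at net odds b."""
    return p * log(1 + b * x) + (1 - p) * log(1 - x)

# Grid-search the argmax over x in [0, 1)
best_x = max((i / 100_000 for i in range(100_000)),
             key=expected_log_utility)
kelly = (p * (b + 1) - 1) / b    # = 0.2
assert abs(best_x - kelly) < 1e-4
```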

Well, near zero utility ~ log(wealth) would mean *infinite negative utility* for zero wealth. That seems obviously false -- I would hate to lose all my possessions but I wouldn't consider it *infinitely bad* and I can think of other things that I would hate more. (So near zero I think reality is *less* nonlinear than the log(bankroll) assumption treats it as being.) In reality, of course, your real wealth basically never goes all the way to zero because pretty much everyone has nonzero earning power or benevolence-of-friends or national-safety-net, and in any case when you're contemplating Kelly-style bets I think it's common to use something smaller than the *total value of all your possessions* as the bankroll in the calculation.

Does considering b to be a kind of "misfortune reduction factor" help your intuition at all?

In an even-money bet (b = 1), f∗ is the edge: f∗ = p − (1 − p). No reduction of misfortune here.

For general b, f∗ is the edge with misfortune reduced by a factor of b: f∗ = p − (1 − p)/b.

The same idea works for bets in which you forfeit a fraction a (not necessarily 1) of your bet upon losing: in that case, your good fortune is effectively increased by a factor of 1/a: f∗ = p/a − (1 − p)/b.

The Kelly Criterion is a gambling strategy which maximizes the expected logarithm of your wealth. The Kelly Criterion tells you what fraction f∗ of your bankroll to wager. It is a function of the net fractional odds[1] received, b > 0, and the probability of a win, p ∈ (0,1):

f∗ = (p(b+1) − 1) / b

Some properties are intuitively easy to understand.

What surprised me is that if you fix b and restrict p to the region of positive f∗, then f∗ is a linear function of p. This was not intuitive to me.

I expected asymptotic behavior, with the greatest df∗/dp in a neighborhood of p=1. In other words, I expected the fractional wager to increase slowly at first and then increase faster as p approached 1. Actually, f∗ is linear in p.

Kelly wagers tend to be more aggressive than human intuitions. I knew this and I *still* underestimated the Kelly wager. I didn't mess this up in a high-stakes situation where fear throws off my calculations. I didn't even mess this up in a real-world situation where uncertainty complicates things. I underestimated the Kelly wager on a purely conceptual level.

I have written before about the utility of my fear heuristic. My fear heuristic might be helping to compensate for my Kelly miscalibration.

## Recalibrating

I'm good at tolerating risk when it comes to the small number of gigantic risky bets guiding my professional career. I'm also good at tolerating risk in the domain of painlessly small bets. (Not that there is much risk to tolerate in this latter case.) Judging by this post's analysis, I am *awful* at calibrating my risk tolerance for wagers between 0.05% and 1% of my net worth. Specifically, I am insufficiently risk tolerant.

What makes this worse is that the region of 0.05% to 1% of my net worth is full of long tails. The wagers I'm skipping could easily repay themselves a thousandfold. If I take a wager like this every day for 2 years and just a single one of them repays itself a thousandfold then I win bigtime.

I need to gamble more.

## Optional Practice

I used these problems to help develop my intuitive grasp of the Kelly criterion.

Q1: If p=0.01 and b=1000 then what is the corresponding f∗?

0.9%

The above number is way higher than what my intuition tells me is appropriate.

Q2: If p=0.1 and b=20 then what is the corresponding f∗?

5.5%

The above number is *higher* than the answer to Q1. This result was, again, unintuitive to me. I expected it to be smaller because b is smaller in absolute terms. But I didn't pay sufficient attention to bp=2. The average return is 2× your initial investment.

Q3: If p=0.51 and b=1 then what is the corresponding f∗?

2%

Q4: If p=0.51 and b=2 then what is the corresponding f∗? (Note that b=2 means you get back your original wager plus double your wager for a total of 3× your wager.)

26.5%

Q5: If p=0.65 and b=100 then what is the corresponding f∗? (In practice, opportunities like this are so rare you will usually not get to wager a full Kelly.)

65%

I was a little surprised; I had expected a higher result. The logarithmic value function is doing the work of keeping Kelly down.

Q6: If p=0.05 and b=100 then what is the corresponding f∗?

4%
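All six answers follow from the one formula; a quick check in plain Python:

```python
def kelly(p, b):
    """Kelly fraction f* = (p(b+1) - 1) / b."""
    return (p * (b + 1) - 1) / b

for q, p, b in [("Q1", 0.01, 1000), ("Q2", 0.1, 20), ("Q3", 0.51, 1),
                ("Q4", 0.51, 2), ("Q5", 0.65, 100), ("Q6", 0.05, 100)]:
    print(f"{q}: f* = {kelly(p, b):.1%}")
```

The exact values for Q5 and Q6 are 64.65% and 4.05%, which the answers above round to 65% and 4%.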

The "net fractional odds" b indicate how much you win in the case of a win. If you wager x and lose then you lose x. If you wager x and win then you get your x back plus an additional xb. ↩︎