This is a dialogue from the online dialogues party. Phil, River, and I (Garrett) talked about the Kelly criterion. I was under the mistaken impression that it would come up in finite games, even without discount factors. This turned out to be wrong! In such betting games, you always want to bet the maximal amount (assuming linear utility in money).

We then talked about how maybe you could save the criterion without bringing in log utility in money or discount rates in infinite games. The two conclusions were that you could either have a bounded utility function, or care intrinsically about not being broke / about maximizing the probability of getting more money than other agents in your world.

The highlight, in my opinion, is me changing my mind about the finite games thing after using dynamic programming to try to show Phil was wrong. The proof we came up with was both surprising and elegant to me.

Possible justifications for Kelly

Garrett Baker

So, I've seen some previous explanations of the kelly criterion on LessWrong, and they seem to fall into 3 clusters:

  1. You do the kelly criterion because you have log-utility in money.
  2. You do the kelly criterion because you have linear utility in money, but you can use current money to gain even more in the future.
  3. You do the kelly criterion because you partially update on the market's position.
philh

Some background here for me is I've previously written https://www.lesswrong.com/posts/XnnfYrqaxqvirpxFX/on-kelly-and-altruism and https://www.lesswrong.com/posts/JAmyTWoukk8xzhE9n/ruining-an-expected-log-money-maximizer. They didn't get much karma, and I don't know if that's "there's something wrong with them" or "they didn't get seen much" or what

philh

You do the kelly criterion because you have log-utility in money.

This one I think is complicated; I'd say "you do a thing that turns out equivalent to the kelly criterion, but for simpler reasons than why the kelly criterion is derived"

philh

You do the kelly criterion because you have linear utility in money, but you can use current money to gain even more in the future.

In this case I'd say no, if you actually have linear utility in money you should just bet everything every time in most/all game structures I've seen; this results in behavior that is obviously wrong, but that's because linear utility in money is obviously wrong

philh

You do the kelly criterion because you partially update on the market's position.

I don't think I've seen this one, though I guess fractional kelly would be this? In my head pure Kelly is "the market thinks this, I think that, and the difference between the two is how much I can win over the market"

philh

I guess one thing I'd be explicit about here (partially rehashing the above) is that I think if you have a utility function at all, you don't need to bring Kelly into things. If your utility function is log, then the thing you do turns out equivalent to kelly but derived more simply. The way kelly is derived, it seems to me that the thing it gives you is just a different thing than optimizing a utility function

Garrett Baker

In this case I'd say no, if you actually have linear utility in money you should just bet everything every time in most/all game structures I've seen; this results in behavior that is obviously wrong, but that's because linear utility in money is obviously wrong

In most real world situations this is false? Can you give some concrete examples here? Like, in poker you have this dynamic, in investing you have this dynamic, in betting you have this dynamic, when making life decisions you have this dynamic, etc.

philh

Hm, I admittedly haven't thought much about realistic scenarios, my thinking was I wanted to figure out simple examples first. So the simple example in my head is the classic "you can bet any amount of money on a 60/40 chance to double your stake", and then I claim that with linear utility you should bet everything every time. Do you disagree with that?
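Phil's claim about this simple game can be sketched in a few lines of Python (the function name and default parameters here are my own choices, not from the dialogue): with linear utility, expected final wealth is increasing in the fraction staked, so it is maximized by betting everything.

```python
def expected_final_wealth(fraction, rounds=10, p=0.6):
    """Exact expected wealth after `rounds` i.i.d. double-or-nothing
    bets at win probability p, staking `fraction` of current wealth
    each time, starting from 1. Rounds are independent, so
    E[w_n] = (1 + (2p - 1) * fraction) ** rounds."""
    return (1 + (2 * p - 1) * fraction) ** rounds

# Expected wealth is increasing in the fraction staked, so a
# linear-utility bettor maximizes it by betting everything.
print(expected_final_wealth(0.2))  # Kelly fraction for this game
print(expected_final_wealth(1.0))  # bet everything: 1.2**10
```

Maximizing this expression round by round never penalizes aggression, which is exactly why linear utility pushes you to the maximal stake.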

Is Phil using words in a weird way?

Garrett Baker

I guess one thing I'd be explicit about here (partially rehashing the above) is that I think if you have a utility function at all, you don't need to bring Kelly into things. If your utility function is log, then the thing you do turns out equivalent to kelly but derived more simply. The way kelly is derived, it seems to me that the thing it gives you is just a different thing than optimizing a utility function

This seems like a weird claim to me? Like, I could tell you about the simplex algorithm for optimizing linear programming problems, and using this same logic you could come back to me, and tell me the simplex algorithm isn't so interesting because the true thing you're doing is optimizing a linear function under linear constraints, and the linear function you're optimizing need not be a utility function, it could be anything. And the constraints need not be physical, they could be social constraints as well, or even part of your so-called utility function. 

This feels like a weird thing to claim to me, despite being true. To me, it seems like there's a bunch of circumstances where you want to use the simplex algorithm, and a bunch of circumstances where you want to use a kelly bet formulation

Garrett Baker

Hm, I admittedly haven't thought much about realistic scenarios, my thinking was I wanted to figure out simple examples first. So the simple example in my head is the classic "you can bet any amount of money on a 60/40 chance to double your stake", and then I claim that with linear utility you should bet everything every time. Do you disagree with that?

Yeah, I do disagree with this. 

Garrett Baker

Well, actually, if you're offered lots of bets of this form, then it becomes smart to use kelly, and not bet everything all the time.

philh

I'm not sure I see the connection you're drawing, so I might try being a bit clearer about why I'm making the claim. So if I have a log utility function, I can do "what is the bet amount that maximizes my expected utility", and that's a relatively simple calculation, and it simplifies to a formula that's equal to the Kelly formula. Whereas the way the Kelly criterion is derived is a much more complicated way of getting to the same formula, that doesn't involve maximizing log utility. Or like, probably under the hood it's doing that through deep mathematical equivalence or something. But it's at any rate more complicated than "maximize my expected log utility"

Garrett Baker

Yeah, the way it's derived is via number 2, right? You have a situation, and you want to maximize your utility in that situation, and the way you do this is Kelly

philh

#2 being "You do the kelly criterion because you have linear utility in money, but you can use current money to gain even more in the future."? I wouldn't say so; the way it's originally derived if I recall the paper correctly is by defining a thing it calls "growth rate" and then trying to maximize that

philh

Where growth rate is something like, lim (n->∞) of 1/n log(wealth at time n / wealth at time 0). Which is a limit of random variables, but it's fine because in the limit you get a random variable which takes on a particular value with probability 1
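For a fixed bet fraction, Phil's growth rate has a closed form. A minimal sketch, assuming the 60/40 double-or-nothing game, and using the law of large numbers to replace the limit of random variables with its almost-sure value (the function name is mine):

```python
import math

def growth_rate(f, p=0.6):
    """Almost-sure growth rate lim (1/n) * log(w_n / w_0) when you
    always stake fraction f in the double-or-nothing game: by the law
    of large numbers it equals p*log(1+f) + (1-p)*log(1-f)."""
    if f >= 1:
        return float("-inf")  # one loss zeroes you out, which happens a.s.
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Scan fractions; the maximizer is the Kelly fraction 2p - 1 = 0.2.
best = max((i / 100 for i in range(100)), key=growth_rate)
print(best)
```

Maximizing this growth rate over `f` recovers the Kelly fraction, which is the sense in which Kelly's original derivation doesn't mention utility at all.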

Garrett Baker

Looking at the wikipedia page, are you talking about Bernoulli's proof?

Garrett Baker

In a 1738 article, Daniel Bernoulli suggested that, when one has a choice of bets or investments, one should choose that with the highest geometric mean of outcomes. This is mathematically equivalent to the Kelly criterion, although the motivation is different (Bernoulli wanted to resolve the St. Petersburg paradox).

philh

I don't think so - I took this from what I think was the original paper (which was motivated in terms of information theory)

Should you bet everything every time in a finite game?

philh

Um, taking a step back - I think this thread is "am I saying something in a weird way", right? Happy to continue it if you want, but we might prefer to switch to the "should you bet everything every time" thread

River

My understanding of the reason for using Kelly, which is perhaps related to #2 but maybe distinct, is that if you bet more aggressively than Kelly, you get more and more expected dollars concentrated in smaller and smaller slivers of the possible worlds. Taken to the extreme, you get infinite dollars in an infinitesimally small sliver of the possible worlds, which means you get 0 dollars, and therefore 0 utility. And that is true whether your utility function is logarithmic or linear or anything else. Unless 0 dollars is somehow a positive utility state for you, in which case I guess bet as aggressively as you want.
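River's "slivers" point shows up clearly in a quick Monte Carlo sketch (the round counts and fractions below are arbitrary choices of mine): betting far above the Kelly fraction gives a distribution whose mean is propped up by rare jackpot runs while the median collapses toward zero.

```python
import random
import statistics

def simulate_final_wealths(f, rounds=200, trials=2000, p=0.6, seed=0):
    """Monte Carlo: final wealth (starting from 1) after `rounds` bets
    staking fraction f each time, repeated over `trials` runs."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        w = 1.0
        for _ in range(rounds):
            w *= (1 + f) if rng.random() < p else (1 - f)
        finals.append(w)
    return finals

kelly = simulate_final_wealths(0.2)
over_kelly = simulate_final_wealths(0.9)
# The aggressive bettor's mean is carried by rare runs, while nearly
# every individual run ends close to broke.
print(statistics.median(kelly), statistics.median(over_kelly))
```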

Garrett Baker

I mean, obviously you shouldn't bet everything every time

philh

Taken to the extreme, you get infinite dollars in an infinitesimally small sliver of the possible worlds, which means you get 0 dollars, and therefore 0 utility.

So I claim this is doing infinity in a way that's not allowed :p "probability 0 of infinite dollars" is more "undefined" than "0 utility", in this case

River

I mean, obviously you shouldn't bet everything every time

You don't have to bet everything any time to get this conclusion, you just have to bet more than Kelly tells you to.

philh

I mean, obviously you shouldn't bet everything every time

So if we have a finite game, I think you'd agree you should? Like, there's 100 rounds, you end up with a 0.6^100 chance of 2^100 times your original stake, and the rest of the time you get zero, and that's the best you can do according to a linear utility function. Agree?

Garrett Baker

...finite game...

I'd guess you do something more complicated, since earlier bets influence how much money you can earn in later bets, but later bets don't have this property. You should bet lots in later bets, but not lots in earlier bets.

philh

You don't have to bet everything any time to get this conclusion, you just have to bet more than Kelly tells you to.

(Is it really "any amount at all more"? That surprises me if so, but I don't think it's a crux for me on anything)

philh

I'd guess you do something more complicated, since earlier bets influence how much money you can earn in later bets, but later bets don't have this property. You should bet lots in later bets, but not lots in earlier bets.

I don't think so; I think the math (in this toy example) really does work out that to optimize expected money at the end, you just bet the full amount every time

Garrett Baker

So I claim this is doing infinity in a way that's not allowed :p "probability 0 of infinite dollars" is more "undefined" than "0 utility", in this case

In general, infinite utilities are super fucked in lots of different ways. So when I think of an "infinite utility" I think of just an astronomically large utility, and no matter how astronomically large you make your utility, we still get River's conclusion.

philh

I agree that if utility is bounded at merely astronomically large, things are different

philh

Um, so I think one thing going on here is that you're modelling it as "you play this game, and you can keep playing it for as long as you want, possibly forever, and then you finish the game with some probability distribution over money and it doesn't matter how long you played for". Or something? But like, if you can keep playing forever, kelly gives you infinite money but so does "bet min(kelly, $1) every time"

Garrett Baker

That's a good point. I was modelling things that way. I don't know if modelling things differently, with a discount rate for example, gives so different results though?

philh

So the thing I'd want to add to the model isn't a discount rate, it's a "how much utility do I have at this point in time, while playing the game?" Like, at timestep 100, what does my utility look like?

philh

Because if the game goes on forever, and then stops... I'm not really sure what to do with the result of that. There are a bunch of ways to get infinite money in that limit, and there's a way in which the kelly function gives you a higher infinity but it's weird

Garrett Baker

Yeah, in that case I will point to my answer here

I'd guess you do something more complicated, since earlier bets influence how much money you can earn in later bets, but later bets don't have this property. You should bet lots in later bets, but not lots in earlier bets.

And as you push the stopping time later and later, you will approach kelly.

Garrett Baker

Maybe I'm just wrong

philh

Yeah, I still disagree with that. Um, we can try to work through the math but I dunno how conducive dialog format is for that. I guess we can at least do latex

Garrett Baker

It's not very conducive. I've tried similar things before, and they were hard even though I already knew the proof I was trying to make going in; with a whiteboard it would have been done in 5 minutes.

philh

Nod

philh

I guess, rather than coming up with a proof, we could do it for small n and see what happens?

Garrett Baker

Yeah

philh

Okay, so for one coin flip you just have one value to choose. Assume you start with £1 and you bet £$x$: you end up with 60% chance of $(1 + x)$ and 40% chance of $(1 - x)$, so expected wealth $0.6(1+x) + 0.4(1-x) = 1 + 0.2x$, which is maximized by maximizing $x$

philh

For two you get two values, and it's possible that when I've worked through this in the past I've assumed they have to be the same fraction but they don't. And actually they don't even have to be the same fraction depending if you win or not, so that makes three values you can bet

Garrett Baker

Well, taking a dynamic programming approach, we can assume that the last time you bet, you bet all your money, and you end up with $w_{t+1} = w_t \pm b_t$, where $w_t$ is your wealth at time $t$, and $b_t$ is how much you bet at time $t$. Your bet must be a fraction of your wealth, so we can rewrite this as $b_t = f_t w_t$, so that $w_{t+1} = w_t(1 \pm f_t)$ and $\mathbb{E}[w_{t+1}] = w_t(1 + 0.2 f_t)$. Hm... this would in fact mean we want to choose maximal $f_t$ for everything, because multiplication is order independent...
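This observation is easy to check numerically. A sketch (my own framing of the argument): since rounds are independent, expected final wealth factorizes into per-round factors, so brute-forcing over per-round fractions finds the maximum at betting everything each time.

```python
from itertools import product

def expected_wealth(fractions, p=0.6):
    """Exact E[final wealth] when round t stakes fractions[t] of
    current wealth. Independence lets the expectation factorize:
    E[w_n] = prod_t (1 + (2p - 1) * fractions[t])."""
    e = 1.0
    for f in fractions:
        e *= 1 + (2 * p - 1) * f
    return e

# Brute-force all per-round fraction combinations on a coarse grid.
grid = [i / 10 for i in range(11)]
best = max(product(grid, repeat=3), key=expected_wealth)
print(best)  # (1.0, 1.0, 1.0): bet everything, every round
```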

philh

Ah, yes! Yeah, that argument seems correct to me (and gives the conclusion I already expected so confirmation bias :p)

Garrett Baker

Wild, ok. I guess you've convinced me of this

philh

I guess a similar argument is, suppose we're about to make the final bet. We've established that no matter how much money we currently have (let's say it's $w_{n-1}$), we want to bet everything, and expected $w_n$ is then proportional to $w_{n-1}$. So we maximize expected $w_n$ by maximizing expected $w_{n-1}$, and then by induction we maximize $\mathbb{E}[w_{n-1}]$ by betting all our wealth on that bet too

philh

Okay, cool. Um, so I guess we take a step back now and figure out what threads were hanging off of this?

Garrett Baker

I think you were asserting that I was assuming there was a discount rate, or at least drawing lots of my intuitions off that fact. 

philh

I don't think I was thinking of it it in those terms, but something like that sounds right, yeah

What about infinite games?

philh

Okay, so at this point we agree that in a finite game a linear utility player should bet everything every time. I guess, we can talk about infinite games if we want? But it sounds like we also both agree those are fucked up, so that might not be where we want to go

Garrett Baker

Infinite utilities are fucked, but infinite games I'm fine with

philh

Okay, so infinite game. So one way to model this is "if you play this game forever, the kelly bettor will have infinite money with probability 1, and the bet-everything bettor will have zero money with probability 1", but that feels like a bad way to model it to me

philh

Like, if we take the limit of probability distributions over wealth at each timestep, I think the bet-everything limit is indeed a probability distribution that has 1 at 0 and 0 everywhere else. But I think the Kelly limit is a function that's just zero everywhere, not a probability distribution

Garrett Baker

Yeah, this sounds correct. You have succeeded in getting me confused, which I think is where you want me

Garrett Baker

Ok, I think I found my confusion. The problem is, if there's a bound to the utility function, then the bet everything guy just gets 0 utility, but the kelly girl gets maximal utility

River

I do not understand what you mean by that Phil. What does it mean for the limit of the kelly function to be something at a particular place? The limit is the place we are talking about.

philh

Ok, I think I found my confusion. The problem is, if there's a bound to the utility function, then the bet everything guy just gets 0 utility, but the kelly girl gets maximal utility

Yeah, that sounds right. But then the person betting with a linear-up-to-some-bound utility function would act different somehow, it's not obvious to me how. (Wouldn't completely shock me if they just end up betting Kelly, actually...)

philh

What does it mean for the limit of the kelly function to be something at a particular place?

So at some fixed timestep we have some probability distribution over wealth, and a probability distribution is a function $p_t$ in this case, that integrates to 1. So in the limit as time $\to \infty$, we can take the limit of these probability distributions. And the pointwise limit of those, i.e. the function that comes from taking each previous function at a fixed point and taking the limit of that sequence of numbers, is a function that's constantly 0

River

Why would we care about the limit of a probability distribution at a fixed point in time? And wouldn't that limit always have to be zero?

River

we can take the limit of these probability distributions.

Sure sounds like you are taking the limit of a probability distribution at a fixed point in time?

River

Like, I don't know how else to parse that statement.

philh

So at time t we have a probability distribution $p_t$. And we take $\lim_{t \to \infty} p_t$, which since each $p_t$ is a function, the limit is also a function. Um, kinda. I think it might actually be "depending what we think of as the limit blah blah blah". But in this case, there's a limit that we call the pointwise limit, and it does exist and it's a function. But even though each $p_t$ is a function that's also a probability distribution, the limit is a function that's not a probability distribution
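One way to see the pointwise limit going to zero (a sketch of my own construction, not from the dialogue): under fractional betting, wealth after $t$ rounds is determined by the number of wins, which is binomial, and the largest point mass of a binomial distribution shrinks roughly like $1/\sqrt{t}$.

```python
import math

def max_point_mass(t, p=0.6, f=0.2):
    """Largest probability of any single wealth value after t rounds of
    fractional betting: wealth is (1+f)**k * (1-f)**(t-k) after k wins,
    so the point masses are exactly the binomial probabilities."""
    return max(
        math.comb(t, k) * p**k * (1 - p) ** (t - k) for k in range(t + 1)
    )

for t in [10, 100, 1000]:
    print(t, max_point_mass(t))  # shrinks roughly like 1/sqrt(t)
```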

philh

Um, but part of what's going on here is I'm saying "this is a silly thing to be doing", but it's also a thing that people do when they compare the outcomes of kelly versus bet-everything in the infinite game

River

I agree with Garrett that trying to do math on here is annoying. And I agree that the bet-everything strategy has a limit that is 1 at 0 and 0 everywhere else. I do not think I agree that Kelly has a limit that is 0 everywhere.

Garrett Baker

I think by "0 everywhere" it's meant that p(utility=Utility|kelly) = 0 regardless of what utility is.

River

Kelly will never tell you to bet everything, no matter how certain you are. So even if you take a big loss betting Kelly, you will eventually recover. So in the limit, you should still come out rich.

Garrett Baker

whereas p(0 = Utility | bet everything) = 1

Garrett Baker

And the point is to show that this is actually a really dumb way to analyze things

philh

I think by "0 everywhere" it's meant that p(utility=Utility|kelly) = 0 regardless of what utility is.

Yes, this. Any specific outcome has zero probability (which means it's not a probability distribution)

philh

(Um, that's imprecise, but I claim it's at least not a standard probability distribution. I think there are weird things people have come up with that let you handle things like this maybe)

Garrett Baker

I guess someone could retort that having a probability 0 everywhere is better than having probability 0 everywhere except at 0.

philh

Yeah. But my main reply is "at that point we're not really doing statistics" I guess

River

ok, I think I agree then. Intuitively, I think there must be a relevant sense in which summing all the non-zero outcomes for kelly gives 1, and the non-zero outcomes for bet-everything gives 0, but I don't know enough analysis to put anything more concrete on that.

Garrett Baker

I guess we could take the limit of the integral over non-zero outcomes

philh

Hm, I think I see two things that could mean, and one of them is "limit of expected value" and the other is "limit of a function that's constantly 1"

Garrett Baker

I mean $\lim_{t \to \infty} \int_{0^+}^{\infty} p_t(w)\,dw$

where $p_t$ is the probability distribution over wealth after $t$ timesteps

River

Maybe the way to think about it is that at each point in time, we can integrate the probability function for each strategy over the range (0, inf), and take the limit of that. For bet-everything, this is 0. For Kelly, this is 1. And this seems like something we should care about.
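River's quantity has a closed form in this game. A sketch, assuming you either bet everything or a fixed fraction strictly below 1 (the function name is mine): the bet-everything bettor survives $t$ rounds only by winning all of them, while a fractional bettor can never hit exactly zero.

```python
def survival_probability(f, t, p=0.6):
    """P(wealth > 0 after t rounds) when staking fraction f each round.
    Betting everything (f = 1) survives only by winning every round;
    any f < 1 can never reach exactly zero wealth."""
    return p**t if f >= 1 else 1.0

print(survival_probability(1.0, 100))  # 0.6**100: vanishingly small
print(survival_probability(0.2, 100))  # 1.0 at every finite time
```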

Garrett Baker

Translating to my thing, $\lim_{t \to \infty} \int_{0^+}^{\infty} p_t^{\text{kelly}}(w)\,dw = 1$, and $\lim_{t \to \infty} \int_{0^+}^{\infty} p_t^{\text{bet-everything}}(w)\,dw = 0$.

Garrett Baker

I think maybe phil can make the point he tried to make in On kelly and altruism's TLDR:

One-sentence summary: Kelly is not about optimizing a utility function; in general I recommend you either stop pretending you have one of those, or stop talking about Kelly.

philh

at each point in time, we can integrate the probability function for each strategy over the range (0, inf), and take the limit of that.

So if the thing we're integrating is just the probability function itself, the integral is always 1 (that's what a probability function is). If the thing we're doing is the integral of $w \cdot p_t(w)$, then that's taking expected value. And both of them grow to infinity in the limit

philh

(Hm, I don't currently know what's going on with this)

Garrett Baker

this is basically taking the limit as $t \to \infty$ of the probability you don't have 0 money after $t$ timesteps

(assuming you can't be in debt)

River

So if the thing we're integrating is just the probability function itself, the intgegral is always 1 (that's what a probability function is)

No, a probability function integrated over all possible outcomes has to be one. My point was to exclude an outcome (being broke), which means that the integral can be less than 1.

philh

No, a probability function integrated over all possible outcomes has to be one. My point was to exclude an outcome (being broke), which means that the integral can be less than 1.

Ah, so if the probability distribution is continuous (which I think it has to be for this infinite game, but... I dunno, maybe not), then excluding a point value actually doesn't change the integral. But we can consider "probability of being broke". But then my reply would be "but the linear utility person doesn't care about their probability of being broke, they care about their expected money"

River

We have discrete timesteps, and discrete numbers of dollars at each timestep, so I don't think any of these probability distributions will be continuous.

philh

Okay. I'm not sure, but I don't think it's a crux

Garrett Baker

We can consider the case of a continuous number of dollars, and be fine

River

Yea, this was not a particularly productive direction, sorry about that.

Returning to Phil's previous tl;dr

philh

I think maybe phil can make the point he tried to make in On kelly and altruism's TLDR:

Sure. So some of this depends on the thread we dropped about whether I was saying something in a weird way. But like, if we accept that weird way of saying things, then I claim that if you have a log-utility function, the thing you do is equivalent to betting Kelly but I (admittedly mildly) disapprove of calling it Kelly betting. And if you have a different utility function, then the thing you do is something different.

But (this is another point we didn't manage to get to) there is a thing that Kelly gives you that it doesn't really make sense to think of as being "I'm maximizing my utility function by getting this", but does seem to me like a good thing (because I don't have a utility function and if I did it would not be denominated purely in dollars and so on).

That thing is that betting Kelly means that with probability 1, over time you'll be richer than someone who isn't betting Kelly. So if you want to achieve that, Kelly is great.
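That "with probability 1, over time" claim can be eyeballed by simulation (a sketch with parameters of my choosing): run a Kelly bettor and a half-Kelly bettor against the same coin flips and count how often Kelly ends up richer.

```python
import random

def kelly_wins_fraction(f_other=0.1, rounds=2000, trials=500, p=0.6, seed=1):
    """Fraction of trials in which a Kelly bettor (f = 0.2) ends richer
    than a bettor staking f_other, both facing the same coin flips."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        w_kelly = w_other = 1.0
        for _ in range(rounds):
            if rng.random() < p:
                w_kelly *= 1.2          # Kelly stake won
                w_other *= 1 + f_other  # other stake won
            else:
                w_kelly *= 0.8
                w_other *= 1 - f_other
        wins += w_kelly > w_other
    return wins / trials

print(kelly_wins_fraction())  # close to 1, and -> 1 as rounds grows
```

At any finite horizon the fraction stays below 1, which matches Phil's later caveat in the comments that Kelly does not dominate at finite timesteps.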

Garrett Baker

I will claim I am confused here. I do think if you make the argument River made, and find it convincing, it really does seem like you're trying to just maximize the probability you're the entity with the most money at the end of the day. However, I think you can also get kelly betting if you have some discount rate, which seems a more reasonable adjustment than saying I just want to maximize the probability I have more money than everyone else. I also claim that wanting more money than everyone else is in fact a reasonable utility function to have.

philh

So I think a discount rate with linear utility might just turn out to be equivalent to having log utility?

Garrett Baker

Yeah, seems likely

philh

I agree wanting more money than everyone else is reasonable, I'm not sure it makes sense to think of as a utility function. At any rate it's awkward as one. Like, it's certainly not a utility function that can be expressed in terms of your current wealth. I had an appendix in I think "on kelly and altruism" that looked into this a bit but I don't remember how closely

Garrett Baker

I don't think it's so awkward, you just max the probability you have more money than everyone else. But I do agree you need to incorporate info on the distribution of agents in your world in order to implement it, so it's not solely a function of your wealth. 

Garrett Baker

Maybe what you meant to say was that people don't have utility functions simply representable in terms of their wealth.

philh

Well, I think people, at least human people, don't have utility functions at all. It's sometimes reasonable to talk about them as a shorthand, but I think it breaks down. But like, I also think that's fine in this case, because we don't need to think of "I want to be the richest person in the room" as a utility function

philh

We can just say "I want to maximize the probability of that", and then Kelly gives you it

Garrett Baker

There's a sense in which you're right, but I do think there's a stronger sense in which you're wrong. Like, we don't literally have utility functions, but you do get useful results by analyzing people as if they have utility functions. See the field of economics for lots of examples of this.

Garrett Baker

And I also think it's a useful way of looking at life choices, and looking for areas where you may be making mistakes (keeping in mind your utility function need not be simple, and if a change feels wrong, you probably should find yourself siding with your gut more often than your simple analysis of the situation)

philh

Seems reasonable - I think any disagreement we have here is probably about line-drawing inside grey areas, rather than being substantive

Garrett Baker

Sounds right. This was a good dialogue! I think we probably don't have anything else to discuss. Anything come to your mind?

philh

I kinda expect we could come up with something, but this does feel like a natural conclusion

philh

Thank you! I enjoyed this

Garrett Baker

Same.

Comments

I have supported myself for almost a decade now via speculation / gambling / arbitrage. I almost never find the Kelly criterion all that useful in my own life. If a bet is really juicy go as hard as you can while finding the downside tolerable. If a bet isn't QUITE JUICY I usually pass.

Yeah, I'd expect that for that strategy you would not want to use the Kelly criterion, and it seems more useful when you're relatively uncertain about the quality of your bet.

The part about the Kelly criterion that has most attracted me is this:


That thing is that betting Kelly means that with probability 1, over time you'll be richer than someone who isn't betting Kelly. So if you want to achieve that, Kelly is great.

So with more notation, P(money(Kelly) > money(other)) tends to 1 as time goes to infinity (where money(policy) is the random score given by a policy).

This sounds kinda like strategic dominance - and you shouldn't use a dominated strategy, right? So you should Kelly bet!

The error in this reasoning is the "sounds kinda like" part. "Policy A dominates policy B" is not the same claim as P(money(A) >= money(B)) = 1. These are equivalent in "nice" finite, discrete games (I think), but not in infinite settings! Modulo issues with defining infinite games, the Kelly policy does not strategically dominate all other policies. So one shouldn't be too attracted to this property of the Kelly bet. 

(Realizing this made me think "oh yeah, one shouldn't privilege the Kelly bet as a normatively correct way of doing bets".)

Yes, but there's an additional thing I'd point out here, which is that at any finite timestep, Kelly does not dominate. There's always a non-zero probability that you've lost every bet so far.

When you extend the limit to infinity, you run into the problem "probability zero events can't necessarily be discounted" (though in some situations it's fine to), which is the one you point out; but you also run into the problem "the limit of the probability distributions given by Kelly betting is not itself a probability distribution".

The expected value of the product of two independent random variables is the product of the expected values of each; this concludes my proof that betting everything on each round is expected value maximizing in a finite game (and infinite too, if you adopt the common ways to make "infinite" precise). I'm surprised the dialogue got that far without this being brought up!
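The commenter's one-line proof rests on $\mathbb{E}[XY] = \mathbb{E}[X]\,\mathbb{E}[Y]$ for independent $X$ and $Y$. A small sketch verifying the factorization by exhaustive enumeration (function and variable names are mine):

```python
from itertools import product

def brute_force_expected_wealth(fractions, p=0.6):
    """E[final wealth] by enumerating every win/loss sequence."""
    total = 0.0
    for outcome in product([True, False], repeat=len(fractions)):
        prob, w = 1.0, 1.0
        for won, f in zip(outcome, fractions):
            prob *= p if won else 1 - p
            w *= (1 + f) if won else (1 - f)
        total += prob * w
    return total

fs = [0.3, 0.7, 1.0]
lhs = brute_force_expected_wealth(fs)
rhs = 1.0
for f in fs:
    rhs *= 1 + 0.2 * f  # per-round E[growth factor] = p(1+f) + (1-p)(1-f)
print(lhs, rhs)  # the two agree: independence factorizes the expectation
```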

Oh, right! I would have been able to give that proof ten years ago, but I've forgotten a lot since leaving university.

The Kelly criterion can be thought of in terms of maximizing a utility function that depends on your wealth after many rounds of betting (under some mild assumptions about that utility function that rule out linear utility). See https://www.lesswrong.com/posts/NPzGfDi3zMJfM2SYe/why-bet-kelly

So I claim that Kelly won't maximize , or more generally  for any , or , or , or , or even  but it'll get asymptotically close when . Do you disagree?

Your "When to act like your utility is logarithmic" section sounds reasonable to me. Like, it sounds like the sort of thing one could end up with if one takes a formal proof of something and then tries to explain in English the intuitions behind the proof. Nothing in it jumps out at me as a mistake. Nevertheless, I think it must be mistaken somewhere, and it's hard to say where without any equations.

Correct. This utility function grows fast enough that it is possible for the expected utility after many bets to be dominated by negligible-probability favorable tail events, so you'd want to bet super-Kelly.

If you expect to end up with lots of money at the end, then you're right; marginal utility of money becomes negligible, so expected utility is greatly affected by negligible-probability unfavorable tail events, and you'd want to bet sub-Kelly. But if you start out with very little money, so that at the end of whatever large number of rounds of betting you only expect to end up with relatively little money in most cases if you bet Kelly, then I think the Kelly criterion should be close to optimal.

(The thing you actually wrote is the same as log utility, so I substituted what you may have meant). The Kelly criterion should optimize this, and more generally $\mathbb{E}[(\log w)^k]$ for any $k$, if the number of bets is large. At least if $k$ is an integer, then, if $\log w$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, then $\mathbb{E}[(\log w)^k]$ is some polynomial in $\mu$ and $\sigma$ that's homogeneous of degree $k$. After a large number $n$ of bets, $\mu$ scales proportionally to $n$ and $\sigma$ scales proportionally to $\sqrt{n}$, so the value of this polynomial approaches its $\mu^k$ term, and maximizing it becomes equivalent to maximizing $\mu$, which the Kelly criterion does. I'm pretty sure you get something similar when $k$ is noninteger.
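This scaling argument can be checked numerically (a sketch; the per-round mean and standard deviation of log growth below are illustrative values I picked): as the number of rounds $n$ grows, the $\mu^k$ term dominates the polynomial.

```python
import math

def normal_moment(j):
    """E[Z**j] for standard normal Z: 0 for odd j, (j-1)!! for even j."""
    if j % 2:
        return 0
    return math.prod(range(j - 1, 0, -2))  # empty product is 1 when j = 0

def moment_of_log_wealth(mu, sigma, k):
    """E[(mu + sigma*Z)**k]: a polynomial homogeneous of degree k in
    mu and sigma, expanded via the binomial theorem."""
    return sum(
        math.comb(k, j) * mu ** (k - j) * sigma**j * normal_moment(j)
        for j in range(k + 1)
    )

k, m, s = 4, 0.02, 0.2  # illustrative per-round mean / sd of log growth
for n in [10, 1000, 100_000]:
    mu, sigma = n * m, math.sqrt(n) * s
    print(n, moment_of_log_wealth(mu, sigma, k) / mu**k)  # ratio -> 1
```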

It depends how much money you could end up with compared to the bound. If Kelly betting usually gets you more than the bound at the end, then you'll bet sub-Kelly to reduce tail risk. If it's literally impossible to exceed the bound even if you go all-in every time and always win, then this is linear, and you'll bet super-Kelly. But if Kelly betting will usually get you less than the bound, though not by too many orders of magnitude, at the end after a large number of rounds of betting, then I think it should be near-optimal.

If there's many rounds of betting, and Kelly betting will get you  as a typical outcome, then I think Kelly betting is near-optimal. But you might be right if .

Okay, "Kelly is close to optimal for lots of utility functions" seems entirely plausible to me. I do want to note though that this is different from "actually optimal", which is what I took you to be saying.

(The thing you actually wrote is the same as log utility, so I substituted what you may have meant)

Oops! I actually was just writing things without thinking much and didn't realize it was the same.

I do want to note though that this is different from "actually optimal"

By "near-optimal", I meant converges to optimal as the number of rounds of betting approaches infinity, provided initial conditions are adjusted in the limit such that whatever conditions I mentioned remain true in the limit. (e.g. if you want Kelly betting to get you some fixed amount as a typical outcome in the end, then when taking the limit as the number n of bets goes to infinity, you better have starting money proportional to g^(-n), where g is the geometric growth rate you get from bets, rather than having a fixed starting money while taking the limit n → ∞). This is different from actually optimal because in practice, you get some finite amount of betting opportunities, but I do mean something more precise than just that Kelly betting tends to get decent outcomes.

Thanks for clarifying! Um, but to clarify a bit further, here are three claims one could make about these examples:

  1. As n → ∞, the utility maximizing bet at given wealth will converge to the Kelly bet at that wealth. I basically buy this.
  2. As n → ∞, the expected utility from utility-maximizing bets at timestep n converges to that from Kelly bets at timestep n. I'm unsure about this.
  3. For some finite n, the expected utility at timestep n from utility-maximizing bets is no higher than that from Kelly bets. I think this is false. (In the positive: I think that for all finite n, the expected utility at timestep n from utility-maximizing bets is higher than that from Kelly bets. I think this is the case even if the difference converges to 0, which I'm not sure it does.)

I think you're saying (2)? But the difference between that and (3) seems important to me. Like, it still seems that to a (non-log-money) utility maximizer, the Kelly bet is strictly worse than the bet which maximizes their utility at any given timestep. So why would they bet Kelly?


Here's why I'm unsure about 2. Suppose we both have log-money utility, I start with $2 and you start with $1, and we place the same number of bets, always utility-maximizing. After any number of bets, my expected wealth will always be 2x yours, so my expected utility will always be log 2 more than yours. So it seems to me that "starting with more money" leads to "having more log-money in expectation forever".

Then it similarly seems to me that if I get to place a bet before you enter the game, and from then on our number of bets is equal, my expected utility will be forever higher than yours by the expected utility gain of that one bet.

Or, if we get the same number of bets, but my first bet is utility maximizing and yours is not, but after that we both place the utility-maximizing bet; then I think my expected utility will still be forever higher than yours. And the same for if you make bets that aren't utility-maximizing, but which converge to the utility-maximizing bet.

And if this is the case for log-money utility, I'd expect it to also be the case for many other utility functions.

...but something about this feels weird, especially with the n → ∞ limit, so I'm not sure. I think I'd need to actually work this out.
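The constant-gap part of the argument above can be made concrete. A minimal sketch, assuming even-money bets with win probability p = 0.6 (the dialogue doesn't fix the bet): with log utility both players bet the Kelly fraction each round, and the expected-log-wealth gap between the $2 and $1 starts stays exactly log 2:

```python
import math

# Sketch of the $2-vs-$1 example (assumptions: even-money bets, p = 0.6,
# so the log-utility-optimal fraction is the Kelly fraction f* = 2p - 1).
# E[log W_n] = log(W_0) + n * g(f*), where g is per-bet expected log growth,
# so a head start in log wealth persists unchanged forever.
p = 0.6
f_star = 2 * p - 1

def expected_log_wealth(w0, n):
    g = p * math.log(1 + f_star) + (1 - p) * math.log(1 - f_star)
    return math.log(w0) + n * g

for n in (0, 10, 1000):
    gap = expected_log_wealth(2, n) - expected_log_wealth(1, n)
    print(n, gap)  # always log(2) ~= 0.6931
```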


Here's a separate thing I'm now unsure about. (Thanks for helping bring it to light!) In my terminology from "On Kelly and altruism", making a finite number of suboptimal bets doesn't change how rank-optimal your strategy is. In Kelly's terminology from his original paper, I think it won't change your growth rate.

And I less-confidently think the same is true of "making suboptimal bets all the time, but the bets converge to the optimal bet".

But if that's true... what actually makes those bets suboptimal, in those two frameworks? If Kelly's justification for the Kelly bet is that it maximizes your growth rate, but there are other bet sizes that do the same, why prefer the Kelly bet over them? If my justification for the Kelly bet (when I endorse using it) is that it's impossible to be more rank-optimal than it, why prefer the Kelly bet over other things that are equally rank-optimal?

Yeah, I was still being sloppy about what I meant by near-optimal, sorry. I mean the optimal bet size will converge to the Kelly bet size, not that the expected utility from Kelly betting and the expected utility from optimal betting converge to each other. You could argue that the latter is more important, since getting high expected utility in the end is the whole point. But on the other hand, when trying to decide on a bet size in practice, there's a limit to the precision with which it is possible to measure your edge, so the difference between optimal bet and Kelly bet could be small compared to errors in your ability to determine the Kelly bet size, in which case thinking about how optimal betting differs from Kelly betting might not be useful compared to trying to better estimate the Kelly bet.

Even in the limit as the number of rounds goes to infinity, by the time you get to the last round of betting (or last few rounds), you've left the n → ∞ limit, since you have some amount of wealth and some small number of rounds of betting ahead of you, and it doesn't matter how you got there, so the arguments for Kelly betting don't apply. So I suspect that Kelly betting until near the end, when you start slightly adjusting away from Kelly betting based on some crude heuristics, and then doing an explicit expected value calculation for the last couple rounds, might be a good strategy to get close to optimal expected utility.

Incidentally, I think it's also possible to take a limit where Kelly betting gets you optimal utility in the end by making the favorability of the bets go to zero simultaneously with the number of rounds going to infinity, so that improving your strategy on a single bet no longer makes a difference.

I think that for all finite n, the expected utility at timestep n from utility-maximizing bets is higher than that from Kelly bets. I think this is the case even if the difference converges to 0, which I'm not sure it does.

Why specifically higher? You must be making some assumptions on the utility function that you haven't mentioned.

You could argue that the latter is more important, since getting high expected utility in the end is the whole point. But on the other hand, when trying to decide on a bet size in practice, there's a limit to the precision with which it is possible to measure your edge, so the difference between optimal bet and Kelly bet could be small compared to errors in your ability to determine the Kelly bet size, in which case thinking about how optimal betting differs from Kelly betting might not be useful compared to trying to better estimate the Kelly bet.

So like, this seems plausible to me, but... yeah, I really do want to distinguish between

  • This maximizes expected utility
  • This doesn't maximize expected utility, but here are some heuristics that suggest maybe that doesn't matter so much in practice

If it doesn't seem important to you to distinguish these, then that's a different kind of conversation than us disagreeing about the math, but here are some reasons I want to distinguish them:

  • I think lots of people are confused about Kelly, and speaking precisely seems more likely to help than hurt.
  • I think "get the exact answer in spherical cow cases" is good practice, even if spherical cow cases never come up. "Here's the exact answer in the simple case, and here are some considerations that mean it won't be right in practice" seems better than "here's an approximate answer in the simple case, and here are some considerations that mean it won't be right in practice".
    • Sometimes it's not worth figuring out the exact answer, but like. I haven't yet tried to calculate the utility-maximizing bet for those other utility functions. I haven't checked how much Kelly loses relative to them under what conditions. Have you? It seems like this is something we should at least try to calculate before going "eh, Kelly is probably fine".
  • I've spent parts of this conversation confused about whether we disagree about the math or not. If you had reliably been making the distinction I want to make, I think that would have helped. If I had reliably not made that distinction, I think we just wouldn't have talked about the math and we still wouldn't know if we agreed or not. That seems like a worse outcome to me.

Why specifically higher? You must be making some assumptions on the utility function that you haven't mentioned.

Well, we've established the utility-maximizing bet gives different expected utility from the Kelly bet, right? So it must give higher expected utility or it wouldn't be utility-maximizing.

Yeah, I wasn't trying to claim that the Kelly bet size optimizes a nonlogarithmic utility function exactly, just that, when the number of rounds of betting left is very large, the Kelly bet size sacrifices a very small amount of utility relative to optimal betting under some reasonable assumptions about the utility function. I don't know of any precise mathematical statement that we seem to disagree on.

Well, we've established the utility-maximizing bet gives different expected utility from the Kelly bet, right? So it must give higher expected utility or it wouldn't be utility-maximizing.

Right, sorry. I can't read, apparently, because I thought you had said the utility-maximizing bet size would be higher than the Kelly bet size, even though you did not.

I wonder if you can recover Kelly from linear utility in money, plus a number of rounds unknown to you and chosen probabilistically from a distribution.

No, it's fairly straightforward to see this won't work

Let N be the random variable denoting the number of rounds, and let x = p*w + (1-p)*l, where p is the probability of winning, and w = 1-f+o*f, l = 1-f are the factors our wealth is multiplied by when we win or lose betting a fraction f of our wealth at odds o.

Then the value we care about is E[x^N], which is the moment generating function of N evaluated at log(x). Since the mgf is increasing as a function of its argument, we want to maximise x, i.e. the linear-utility recommendation doesn't change.

The simple reason to use Kelly is this.

With probability 1, any other strategy will lose to Kelly in the long run.

This can be shown by applying the strong law of large numbers to the random walk that is the log of your net worth.
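The growth-rate claim behind this can be stated as a one-line check. Assuming even-money bets with win probability p (not specified in the comment), per-bet expected log growth is g(f) = p*log(1+f) + (1-p)*log(1-f), uniquely maximized at the Kelly fraction; by the strong law, the fraction with the larger g eventually wins with probability 1:

```python
import math

# Sketch: log wealth after n bets is a random walk with drift n * g(f).
# For even-money bets with win probability p (assumed), the Kelly fraction
# f* = 2p - 1 uniquely maximizes the per-bet drift g.
p = 0.6

def g(f):
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

f_star = 2 * p - 1
for f in (0.05, 0.1, 0.3, 0.5):
    assert g(f) < g(f_star)
print(f_star, g(f_star))  # 0.2, ~0.0201 nats of growth per bet
```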

Now what about a finite game? It takes surprisingly few rounds before Kelly, with median performance, pulls ahead of alternate strategies. It takes rather more rounds before, say, you have a 90% chance of beating another strategy. So in the short to medium run, Kelly offers the top of a plateau for median returns. You can deviate fairly far from it and still do well on average.

So should you still bet Kelly? Well, if you bet less than Kelly, you'll experience lower average returns and lower variance. If you bet more than Kelly, you'll experience lower average returns and higher variance. Variance in the real world tends to translate into, "I don't have enough left over for expenses and I'm broke." Reducing variance is generally good. That's why people buy insurance. It is a losing money bet that reduces variance. (And in a complex portfolio, can increase expected returns!) So it makes sense to bet something less than Kelly in practice.

There is a second reason to bet less than Kelly in practice. When we're betting, we estimate the odds. We're betting against someone else who is also estimating the odds. The average of many people betting is usually more accurate than individual bettors. We believe that we're well-informed and have a better estimate than others. But we're still likely biased towards overconfidence in our chances. That means that betting Kelly based on what we think the odds are means we're likely betting too much.

Ideally you would have enough betting history tracked to draw a regression line to figure out the true odds based on the combination of what you think and what the market thinks. But most of us don't have enough carefully tracked history to accurately make such judgments.

If you bet more than Kelly, you'll experience lower average returns and higher variance.

No. As they discovered in the dialog, average return is maximized by going all-in on every bet with positive EV. It is typical returns that will be lower if you don't bet Kelly.

Dang it. I meant to write that as,

If you bet more than Kelly, you'll experience lower returns on average and higher variance.

That said, both median and mode are valid averages, and Kelly wins both.

The reason I brought this up, which may have seemed nitpicky, is that I think this undercuts your argument for sub-Kelly betting. When people say that variance is bad, they mean that because of diminishing marginal returns, lower variance is better when the mean stays the same. Geometric mean is already the expectation of a function that gets diminishing marginal returns, and when it's geometric mean that stays fixed, lower variance is better if your marginal returns diminish even more than that. Do they? Perhaps, but it's not obvious. And if your marginal returns diminish but less than for log, then higher variance is better. I don't think any of median, mode, or looking at which thing more often gets a higher value are the sorts of things that it makes sense to talk about trading off against lowering variance either. You really want mean for that.

The reason why variance matters is that high variance increases your odds of going broke. In reality, gamblers don't simply get to reinvest all of their money. They have to take money out for expenses. That process means that you can go broke in the short run, despite having a great long-term strategy.

Therefore instead of just looking at long-term returns you should also look at things like, "What are my returns after 100 trials if I'm unlucky enough to be at the 20th percentile?" There are a number of ways to calculate that. The simplest is to say that if p is your probability of winning, the expected number of times you'll win is 100p. The variance in a single trial is p(1-p). And therefore the variance of 100 trials is 100p(1-p). Your standard deviation in wins is the square root, or 10sqrt(p(1-p)). From the central limit theorem, at the 20th percentile you'll therefore win roughly 100p - 8.4sqrt(p(1-p)) times. Divide this by 100 to get the proportion q that you won. Your ideal strategy on this metric will be Kelly with p replaced by that q. This will always be less than Kelly. Then you can apply that to figure out what rate of return you'd be worrying about if you were that unlucky.
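The recipe above, as a short sketch (even-money bets are an assumption, so the Kelly fraction is 2p - 1; z ≈ 0.8416 is the exact 20th-percentile z-score):

```python
import math

def pessimistic_kelly(p, n=100, z=0.8416):
    """Kelly fraction recomputed from the ~20th-percentile win rate.

    wins at the 20th percentile ~= n*p - z*sqrt(n*p*(1-p)) by the CLT;
    divide by n to get the pessimistic win proportion q, then bet Kelly
    as if q were the true win probability (even-money bets assumed).
    """
    wins = n * p - z * math.sqrt(n * p * (1 - p))
    q = wins / n
    return 2 * q - 1

p = 0.6
print(2 * p - 1, pessimistic_kelly(p))  # the adjusted fraction is smaller
```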

Any individual gambler should play around with these numbers. Base it on your bankroll, what you're comfortable with losing, how frequent and risky your bets are, and so on. It takes work to figure out your risk profile. Most will decide on something less than Kelly.

Of course if your risk profile is dominated by the pleasure of the adrenaline from knowing that you could go broke, then you might think differently. But professional gamblers who think that way generally don't remain professional gamblers over the long haul.

(Variance is "expected squared difference between observation and its prior expected value", i.e. variance as a concept is closely linked to the mean and not so closely linked to the median or mode. So if you're talking about "average" and "variance" and the average you're talking about isn't the mean, I think at best you're being very confusing, and possibly you're doing something mathematically wrong.)

I'm sorry that you are confused. I promise that I really do understand the math.

In repeated addition of random variables, all of these have a close relationship. The sum is approximately normal. The normal distribution has identical mean, median, and mode. Therefore all three are the same.

What makes Kelly tick is that the log of net worth gives you repeated addition. So with high likelihood the log of your net worth is near the mean of an approximately normal distribution, and both median and mode are very close to that. But your net worth is the exponential of its log. That creates an asymmetry that moves the mean away from the median and mode. With high probability, you will do worse than the mean.

The comment about variance is separate. You actually have to work out the distribution of returns after, say, 100 trials, and then calculate a variance from that. And it turns out that for any finite n, variance monotonically increases as you increase the proportion that you bet: from 0 if you bet nothing, to being dominated by the small chance of winning every bet if you bet everything.
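Both points can be computed exactly for even-money bets with win probability p (an assumption; the comment doesn't fix the bet): mean wealth after n rounds has a closed form, the typical (median-ish) outcome is exp(n*g(f)), and the variance is monotone in the betting fraction:

```python
import math

# Sketch: after n even-money bets at fraction f with win probability p,
# mean wealth, typical wealth (exp of mean log wealth), and variance
# all have exact closed forms.  Mean >> typical, and variance rises with f.
p, n = 0.6, 100

def mean_wealth(f):
    return (1 + f * (2 * p - 1)) ** n

def typical_wealth(f):  # exp of the mean of log wealth
    return math.exp(n * (p * math.log(1 + f) + (1 - p) * math.log(1 - f)))

def variance(f):
    ew2 = (p * (1 + f) ** 2 + (1 - p) * (1 - f) ** 2) ** n
    return ew2 - mean_wealth(f) ** 2

f_kelly = 2 * p - 1
print(mean_wealth(f_kelly), typical_wealth(f_kelly))  # mean far above typical
fs = [i / 20 for i in range(20)]
assert all(variance(a) < variance(b) for a, b in zip(fs, fs[1:]))
```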

average returns

I think the disagreement here is on what "average" means. All-in maximises the arithmetic average return. Kelly maximises the geometric average. Which average is more relevant is equivalent to the Kelly debate though, so it's hard to say much more.