# 37

I suggest augmenting the classic "bet or update" with "Kelly bet or update".

Epistemic status: feels like maybe going too far, but worth considering?

On the one hand, we have a line of thinking in rationalist discourse which says probability is willingness to bet. This line of thinking suggests that many bad thinking patterns and misapplications of probability theory can broadly be discouraged by a culture of betting.

On the other hand, there's the longstanding discussion of Aumann Agreement and modest epistemology. According to this line of thinking, beliefs should be contagious among honest and rational folk, so long as they believe each other to be honest and rational. One should never agree to disagree; where two disagree, at least one is wrong (irrationally wrong -- or dishonest). This line of thinking has been developed extensively by Robin Hanson and others, and criticized heavily in Eliezer's Inadequate Equilibria, as well as some earlier essays.

These two perspectives are reconciled in the credo bet or update, which has been proposed as a norm for rationalists: in any significant disagreement, you should either come to an agreement (at least of the Aumann sort, where you may not understand each other's exact reasons, but assign enough outside-view credibility to each other that your final probabilities align), or, failing that, you should bet. The Hansons of the world can take the outside view and update to agree with each other, while the Yudkowskys of the world put their money where their mouth is.

But how much should you bet?

# Risk Aversion

A common norm in the rationalist circles I've frequented is to bet small amounts. I think there are some arguments in favor of this:

• This makes winning feel good and losing hurt, without putting too much on the line. We want opportunities to learn, we want some useful social pressure against overconfidence and other biases, but we're not out to bankrupt anyone.
• Often, bets have to do with things we have control over, such as placing a bet about when you'll finish an important project. Bets which are small in relationship to the project's importance help guarantee that no one gains a perverse financial incentive to delay or sabotage an important project. (On the other hand, people usually avoid perverse bets anyway; and, sometimes it is useful to use bets in the opposite way -- setting up extra incentives in virtuous directions. So I'm not clear on how big this advantage is.)

However, small bets might also be the result of irrational risk aversion. This seems like the more likely causal explanation, at least.

I was recently reminded that I tend to act like I'm much more risk-averse than Kelly betting would advise, with no justification I can think of.

At this point, I would suggest the reader play around with the Kelly formula a little, and imagine betting that way with your savings as the bankroll. The formula frequently recommends betting a significant fraction of savings, on bets with a real chance of not paying out. If you're like me, this feels reckless.

Most investors seem to be similar. Even though Kelly betting itself captures fairly extreme risk-aversion (equating an empty bank account with death), "fractional Kelly", where one bets some percentage of what a true Kelly bettor would, appears to be much more popular than true Kelly betting.

One interpretation of this is not that investors are more risk-averse than true Kelly, but that they are not so confident of their own probability assessments. This seems sensible in the abstract, but doesn't line up with the math of fractional Kelly.

Let's say I'm betting on horse races and I use a mathematical model which I feel 90% confident in (that is, I think there's a 10% chance I made a dumb mistake in my math; otherwise, I think the model is a good accounting of my subjective uncertainty). I don't know what the other 10% of my beliefs look like, so I do a worst-case analysis on all my bets, acting like my probabilities of winning are 90% of whatever the model tells me.

My Kelly bets would normally be $p - q\frac{P}{Q}$ of my bankroll, where $p, q$ are my probabilities for winning and losing respectively, and $P, Q$ are the house's calculation of those probabilities. Adjusting for my 10% uncertainty in my math, this becomes $0.9p - (q + 0.1p)\frac{P}{Q}$. This can take me over the line from betting to not betting at all, for example if $p = .55$ and $P = .5$; fractional Kelly, on the other hand, never changes willingness to bet, only quantity.
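In code, the worst-case adjustment looks like this (a quick sketch of the calculation, using the 90%-confidence figure from the example):

```python
# The Kelly fraction: p, q = my win/loss probabilities,
# P, Q = the house's implied probabilities (payout odds are Q/P).
def kelly_fraction(p, P):
    q, Q = 1 - p, 1 - P
    return p - q * P / Q

# Worst-case adjustment: with confidence c in my model, act as if the
# win probability were c*p and all of the remaining mass were a loss.
def adjusted_kelly_fraction(p, P, c=0.9):
    Q = 1 - P
    return c * p - (1 - c * p) * P / Q

print(round(kelly_fraction(0.55, 0.5), 3))           # 0.1: bet 10% of bankroll
print(round(adjusted_kelly_fraction(0.55, 0.5), 3))  # -0.01: don't bet at all
```

A 10% discount on the model flips the recommendation from "bet 10% of your bankroll" to "don't bet", which is the qualitative difference from fractional Kelly.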

EDIT: Liam Donovan points out that fractional Kelly can be justified by averaging between the market's beliefs and your own. This makes a lot of sense. But I still want to point out that this implies updating toward the market belief, which is different from having a true best-estimate belief which differs from the market and fractional-Kelly betting without updating. I'm fine with tracking an inside-view position plus an outside-view adjustment. But, for example, 25%-Kelly betting implies putting only 25% credence on my inside view. It's worth asking myself the question, is my credence on my inside view really 25%? Or am I experiencing irrational loss aversion, focusing on the potential losses more than the potential gains? Am I tracking the probability that the market knows better in a sensible fashion?
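Spelling out that equivalence (a quick sketch in the post's notation): betting full Kelly on the mixture belief $xp + (1-x)P$ gives exactly $x$ times the full Kelly bet.

```python
def kelly_fraction(p, P):
    # Full Kelly for win probability p against market-implied probability P.
    return p - (1 - p) * P / (1 - P)

def fractional_kelly(p, P, x):
    # Bet x (credence in my own model) times the full Kelly fraction.
    return x * kelly_fraction(p, P)

def kelly_on_mixture(p, P, x):
    # Full Kelly on the posterior mixing my model (weight x) with the market.
    return kelly_fraction(x * p + (1 - x) * P, P)

# The two agree for any p, P, x -- e.g.:
print(round(fractional_kelly(0.55, 0.5, 0.25), 4))   # 0.025
print(round(kelly_on_mixture(0.55, 0.5, 0.25), 4))   # 0.025
```

So "25%-Kelly" and "update 75% of the way to the market, then bet full Kelly" are the same policy; the question is only which description honestly reflects your credences.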

# The Strong Position

From an expected value standpoint, small bets are almost like no bets at all. To put it severely: it's as if rationalists noticed that VNM implies willingness to bet, and proceeded to bet as little as possible as a symbolic act, not further noticing that VNM also implies willingness to bet substantially.

Mirroring the "probability is willingness to bet" argument I mentioned at the beginning, the purist argument for Kelly bet or update is that "probability is willingness to Kelly bet". Within a VNM expected value framework, this argument becomes: if you don't Kelly bet, then you have to either admit that your probability is not what you said it was, or you have to give a good reason why your utility function is not approximately logarithmic.

Now, there could be lots of reasons why your utility function is not approximately logarithmic. For example, the ability to actually buy particular things within a fixed time period creates lots of discrete step-functions in your actual utility.

However, utility logarithmic in money does seem like a pretty good approximation of most people's values, and I doubt that more detailed analysis will reveal a justification of the extreme risk aversion which accompanies most bets. (If you think I'm wrong here, I'm very curious to hear the reason!)
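One way to make the "approximately logarithmic" point concrete (a sketch, not from the original post): Kelly betting is exactly what a log-utility maximizer does, so numerically maximizing expected log wealth recovers the Kelly fraction.

```python
import math

# Grid-search expected log wealth over bet fractions f,
# at net odds b with win probability p.
def expected_log_wealth(f, p, b):
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

def best_fraction(p, b, steps=10000):
    return max((i / steps for i in range(steps)),
               key=lambda f: expected_log_wealth(f, p, b))

print(round(best_fraction(0.55, 1.0), 3))  # 0.1, matching (b*p - q)/b
```

If your utility really is close to logarithmic in money, refusing bets of this size needs some other explanation.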

# A Weaker Case

OK, but all said and done, Kelly-betting with my savings still seems too risky.

Here's an alternative proposal. Make a special account in which you put some amount of money, say, $100. Call this your betting fund. Kelly bet with this. You might have rules such as "I can take money out if I accumulate a lot from betting, but I can never add more" -- this means that if you lose the majority of your seed money, you have no choice but to crawl back up past $100 by betting well. (I'm not sure about this, just throwing it out there.)
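To get a feel for how such a fund behaves, here's a toy simulation (my own sketch; the 55% even-odds bet is a stand-in for a stream of real betting opportunities):

```python
import random

# Kelly-bet a dedicated $100 fund on repeated even-odds bets
# with a true 55% win rate.
def simulate_fund(seed_money=100.0, p=0.55, b=1.0, rounds=200, seed=0):
    rng = random.Random(seed)
    f = (b * p - (1 - p)) / b        # Kelly fraction: 10% of the fund per bet
    bankroll = seed_money
    for _ in range(rounds):
        stake = f * bankroll         # always a fraction, so the fund never busts
        bankroll += b * stake if rng.random() < p else -stake
    return bankroll

print(round(simulate_fund(), 2))     # grows on average, but swings a lot
```

Because every stake is a fraction of the current fund, the account can shrink dramatically but never actually hits zero, which is the property that makes the "never add more" rule survivable.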

The size of your betting account compared to its starting seed could be a mark of shame/honor among those who chose to engage in this kind of practice.

This is sort of like fractional Kelly, which I've already argued is irrational... but ah well.

Jacobian recently argued that we should Kelly bet more. One thing I noticed after reading that article is that I'm pretty bad at the Kelly formula. I barely ever approach Kelly-like calculations because I don't really know how; I have to look up the formula every time, and I'm always using different versions of it, and have to double-check the meaning of the variables.

So, it seems like a good exercise for me to at least do the math more often.

# Comments (21)

I'm not sure of the math OTTOMH but isn't fractional Kelly equivalent to having some credence that your model is correct and some credence that the market is correct?

In your example, it seems like you're assuming that the event has a 0% chance of occurring in the worlds where your model is wrong, but that doesn't make a lot of sense -- the worst case from a betting perspective is that the true odds are equal to the odds that the market assigns, because that means that any bet you make will be -EV.

Here's an interesting case study for fractional Kelly in real world betting scenarios

I'm not sure of the math OTTOMH but isn't fractional Kelly equivalent to having some credence that your model is correct and some credence that the market is correct?

Ah, seems promising!

So, with my credence in my own model being x, and my credence in the market being y, $p - q\frac{P}{Q}$ becomes:

$(xp + yP) - (xq + yQ)\frac{P}{Q} = x\left(p - q\frac{P}{Q}\right)$

So, yeah, makes sense.

I barely ever approach Kelly-like calculations because I don't really know how; I have to look up the formula every time

FWIW the version that I think I'll manage to remember is that the optimal fraction of your bankroll to bet is the expected net winnings divided by the net winnings if you win.
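That mnemonic does agree with the standard formula $f = (bp - q)/b$, where $b$ is the net payout per unit staked; a quick sketch to check:

```python
def kelly_standard(p, b):
    # Classic Kelly: f = (b*p - q) / b, at net odds b.
    return (b * p - (1 - p)) / b

def kelly_mnemonic(p, b):
    # "Expected net winnings divided by net winnings if you win":
    # a 1-unit stake nets +b on a win, -1 on a loss.
    expected_net = p * b + (1 - p) * (-1)
    net_if_win = b
    return expected_net / net_if_win

for p in (0.2, 0.55, 0.9):
    for b in (0.5, 1.0, 3.0):
        assert abs(kelly_standard(p, b) - kelly_mnemonic(p, b)) < 1e-12
```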

Kelly betting is the optimal strategy to maximise log wealth, given a fixed number of betting opportunities. The number of betting opportunities is often not fixed. If all bets take time to pay off, and you have a limited amount of starting capital, the optimal strategy is to take many tiny bets.

Another reason to avoid large bets more strongly than kelly is correlated failure. Kelly betting implicitly assumes your bets are independent. (at least bets that run at the same time.)

Then the probability of winning the bet is dependent on the amount of money involved. People are more likely to go to substantial effort to ensure they win (or to cheat) if substantial amounts of money are involved.

Plus, I suspect there is a sense in which separating fools and their money doesn't feel good. If you found a whole load of flat earthers who were quite willing to bankrupt themselves with stupid bets (about say where in the sky the moon would be next week), would you grab all their money and feel good about it? Or would you feel you were using people's ignorance against them -- that it wasn't their fault they're stupid?

IMO the fact Kelly betting is so aggressive compared to what intuitively seems reasonable is probably just another symptom of Bayesianism being insufficiently risk-averse.

I like the idea of having a fixed betting pool, as a way to give betting more stakes while accounting for "man, I'm not sure I'm actually good enough at this rationality thing to go around kelly betting my savings."

I'm not sure this is the right next-step-in-the-dance for me, but it feels like it's maybe pointing in a useful direction, and I'm curious what the right-next-step-in-the-dance-for-me is.

One problem is I'm not fluent enough in probability and betting, generally, to feel like I wouldn't be screwing up in basic ways. (Forget Kelly Betting – I still struggle enough with basic betting market terminology that I don't feel comfortable having a conversation about it. I also haven't yet gotten to a point where I feel well calibrated in the first place on my probabilities.)

I could get better at that. But, also...

...I just don't find myself having interesting enough beliefs that a) I'm confident in and b) other people are confident in-other-directions in.

...I just don't find myself having interesting enough beliefs that a) I'm confident in and b) other people are confident in-other-directions in.

I could be wrong, but this sounds similar to the way nobody seems to have any disagreements when CFAR tries to get them to practice Double Crux, but then it's lunch time and people get into all kinds of arguments while eating.

Surely you have opinions about, EG, the design of LW, which come up during LW meetings? Some of these can be turned into bets about user behavior or such.

I myself don't bet much, but every so often there will be a big discussion at MIRI about some mathematical question or some silly Fermi-estimation example, and people will place bets. People might hypothetically place (fractional?) Kelly bets in these situations. Maybe I'll try it.

The LessWrong team does indeed disagree on all sorts of things. But... once you actually operationalize things, bets often turn out to be pretty boring. (The heated disagreement tends to come more from how people are emotionally relating to something).

We did recently have a Prediction Day where we made lots of predictions about various LessWrong outcomes (i.e. how many people would purchase the upcoming book, how many people would participate in the 2019 Review, etc).

There were disagreements, but few disagreements where people had strong enough opinions that survived operationalization to make bets. But, I do think $100 Reserve Fractional Kelly Betting would be appropriate.

A fun subtlety about "bet or update" that just came to my mind. If you refuse to bet that X is true, you're supposed to update enough that the bet becomes unprofitable. But that doesn't always mean updating away from X -- sometimes you update toward X.

Imagine there's a million doors, with a pot of gold behind one of them. The host indicates one door and asks "would you like to bet a dollar at even odds that the gold is behind this door?" You know the host's algorithm was as follows: he selected the true door with the gold and two other random doors, then selected randomly between the three. Then you would refuse to bet (because you lose a dollar with probability 2/3) and also refuse to update away (in fact you'd update strongly toward the door being the one with the gold, as it now has probability 1/3 instead of 1/million).

"Bet or update" assumes the possibility of taking either side of the bet. In this case I would happily take the other side of the offered bet.

"Bet or update" assumes the possibility of taking either side of the bet.

It doesn't. I wrote the "bet or update" post, so I'd know =)

Ha, so you did! Reading it a bit more carefully, I guess for one-sided bets there's a chance that you are already in the position where the bet is not profitable, so you already don't need to update. I guess the title threw me off a bit -- with two-sided bets you have to do one or the other (or both); with one-sided bets you don't.

Isn't the point of "bet or update" that you should be either updating on your counterparty's credences or taking a bet that your counterparty thinks is +EV? Here, the player is updating upon observing the host point to the door, not on the bet itself.
After the player has updated on the host pointing to the door, you can require the player to take the offered bet or update as normal. Assuming the host is offering the bet as a function of his credences*, the player should update from P(gold) = 1/3 to P(gold) ~= 0, because the player knows that the host knows where the gold is.

*as opposed to, e.g., offering the bet to provide entertainment to the audience. If the host doesn't vary the terms of the bet based on his private knowledge of where the gold is, then the player should bet rather than update, because the bet offer doesn't transmit any info about the host's credences or the actual state of the world.

Betting substantially under full Kelly seems to me like an adaptation to an environment with adversarial information processes occurring. Consider that most of the bets that a person considers would be bets offered by someone who, as evidenced by the fact that they are offering it, has put more thought into the bet than they have. By making a bunch of small bets I am also gathering other information about my environment.

Yeah, maybe the best framing is that fractional Kelly is doing 2 things -- updating your beliefs based on the fact that the market is offering you these odds, then betting full Kelly based on your new posterior belief.

I think it's reasonable that your unwillingness to Kelly bet more, say with your savings, reflects some conditions not being fully accounted for in your bets, like a lack of true risk neutrality due to a reasonable expectation of negative outcomes if you lost your bankroll. That's not to say you aren't still too conservative; only that I also expect the average person to be making what we could reasonably consider a rational choice to avoid going bust, given the consequences of doing so, which are otherwise not being factored into these calculations.
reasonable expectation of negative outcomes if you lost your bankroll

One of the necessary conditions for the Kelly Criterion to be optimal is that losing your bankroll is infinitely bad, so a reasonable expectation of negative outcomes isn't enough to deviate from Kelly.

I don't really understand the conclusion. If your betting fund is $100, we are back to small-stakes betting. For many rationalists, a 'real deal' Kelly fund would need to be five or six figures. I do think you should consider fractional Kelly betting in some situations, and am in favor of your worldview strongly influencing your investment portfolio. But even 25%-Kelly betting your savings is a huge amount of work; you need to think you have a pretty good edge.

I am honestly unsure what the right norms are for 'doing this for real'.

The final section is supposed to be for people who consider the argument and still don't feel up to large-stakes bets.

But yeah, I'm still very unsure what the right norms would be, too.

But even 25% Kelly betting your savings is a huge amount of work, you need to think you have a pretty good edge.

The kind of bet I encounter when hanging out with rationalists is like:

• does [mathematical object] have [property]?
• did [x] happen before or after 1950?
• is the number of smokers in America above or below [number]?

Etc.

IE, these are just factual questions which can either be looked up, or convincingly estimated from lookup-able info, or proven/disproven by thinking about it enough.

The disagreements are often substantial enough that, if taken at face value, they'd imply significant Kelly bets.

I think I have even more opportunities like this when talking to non-rationalists (IE, larger factual disagreements), but don't personally try to turn these into bets (but some people take this approach).

Obviously even if you personally decide to use Kelly in such situations, other people might not be willing; but then the Kelly prescription is just to get as large a bet out of them as you can.

Thanks for giving details of the disagreements you are considering. The only community with norms even remotely conducive to large bets of this type is probably the poker community? There is a lot of big betting on 'props'. I don't know how often they bet on factual disagreements. But there have been a lot of big bets on random stuff like 'can X person do Y pushups' or 'does Scott think God exists'. For a certain well-known player doing pushups and a certain well-known Scott, both bets are real.

I liked your example of being uncertain of your probabilities. I note that if you are trying to make an even money bet with a friend (as this is a simple Schelling point), you should never Kelly bet if your discount rate $c$ on your naïve probabilities is $\frac{1}{2}$ or less.

The maximum bet for a given $c$ is when $p$ is 1, which is $c - (1 - c) = 2c - 1$, which crosses below 0 at $c = \frac{1}{2}$.