Kelly and the Beast

by sen · 28th Jun 2017



Poor models

Consider two alternative explanations to each of the following questions:

  • Why do some birds have brightly-colored feathers? Because (a) evolution has found that they are better able to attract mates with such feathers or (b) that's just how it is.
  • Why do some moths, after a few generations, change color to that of surrounding man-made structures? Because (a) evolution has found that the change in color helps the moths hide from predators or (b) that's just how it is.
  • Why do some cells communicate primarily via mechanical impulses rather than electrochemical impulses? Because: (a) for such cells, evolution has found a trade-off between energy consumption, information transfer, and information processing such that mechanical impulses are preferred or (b) that's just how it is.
  • Why do some cells communicate primarily via electrochemical impulses rather than mechanical impulses? Because: (a) for such cells, evolution has found a trade-off between energy consumption, information transfer, and information processing such that electrochemical impulses are preferred or (b) that's just how it is.

Clearly the first set of explanations is better, but I'd like to say a few things in defense of the second.

  • Evolution's preference for one trait over another could very well have nothing to do with mating, predators, energy consumption, information transfer, or information processing. Those are the best theoretical guesses we have, and they have no experimental backing.
  • Evolution works as a justification for contradictory phenomena.
  • The second set of explanations is simpler.
  • The second set of explanations has perfect sensitivity, specificity, precision, etc.

If that’s not enough to convince you, then as a middle ground I propose another alternative explanation for any situation where evolution alone might be used as one: "I don't have a clue." It's more honest, more informative, and it does more to get people to actually investigate open questions, as opposed to pretending those questions have been addressed in any meaningful way.

Less poor models

When people use evolution as a justification for a phenomenon, what they tend to imagine is this:

  • Changes are gradual.
  • Changes occur on the time scale of generations, not individuals.
  • Duplication and termination are, respectively, positively and negatively correlated with the changing thing.

If you agree, then I’m sure the following questions regarding standard evopsych explanations of social signaling phenomenon X should be easy to answer:

  • What indication is there that the change in adoption of X was gradual?
  • What indication is there that change in adoption of X happens on the time scale of generations and not individuals (i.e., that individuals have little influence in their own local adoption of X)?
  • What constitutes duplication and termination? Is the hypothesized chain of correlation short enough or reliable enough to be convincing? 

If you agreed with the decomposition of “evolution” but took issue with any of the subsequent questions, then your model of evolution might not be consistent, or you may have a preference for unjustified explanations. In conversation this isn’t really an issue, but there are perhaps some downsides to using inconsistent models in your personal worldview.

Optimal models

In March 1956, John Kelly described an equation for betting optimally on a coin-toss game weighted in your favor. If you were expected to gain money on average, and if you could place bets repeatedly, then the Kelly bet let you grow your principal at the greatest possible rate.

You can read the paper here: http://www.herrold.com/brokerage/kelly.pdf. It’s important for you to be able to read papers like this… 

The argument goes like this. Given a coin that lands on heads with probability p and tails with probability q=1-p, I let you bet k (a fraction of your total) that the coin will land heads. If you win, I give you b*k. If you lose, I take your k. After n rounds, you will have won on average p*n times, and you will have lost q*n times. Your new total will look like this:
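(new total) = (starting total) * (1 + b*k)^(p*n) * (1 - k)^(q*n)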

Your bet is optimized when the derivative of this value with respect to k is zero and the value falls off in both directions away from that point.
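Taking logs, the growth per round is p*log(1 + b*k) + q*log(1 - k). Setting its derivative to zero, p*b/(1 + b*k) - q/(1 - k) = 0, gives k = (b*p - q)/b = p - q/b. That fraction is the Kelly bet.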

You can easily check that the equation is always concave down when the odds are in your favor and k is between 0 and 1. Note that there is always exactly one local maximum: the value of k found above. There is also one undefined point, k=1 (all-in every time), where the log growth blows up; plug it into the original equation and you end up broke.
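If you'd rather check this numerically than by hand, here's a minimal sketch. The values p = 0.5 and b = 3 are only illustrative (they happen to match the game offered at the end of the post):

```python
import numpy as np

# Illustrative favorable game: win probability p, payout b per unit staked.
p, b = 0.5, 3.0
q = 1.0 - p
k_kelly = p - q / b  # the optimum derived above (1/3 here)

# Expected log growth per round as a function of the bet fraction k.
def log_growth(k):
    return p * np.log(1 + b * k) + q * np.log(1 - k)

ks = np.linspace(0.0, 0.99, 10_000)
gs = log_growth(ks)

print("Kelly fraction:         ", k_kelly)
print("argmax over the grid:   ", ks[np.argmax(gs)])          # ~ k_kelly
print("concave down everywhere:", bool(np.all(np.diff(gs, 2) < 0)))
print("growth near k = 1:      ", float(log_growth(0.999)))   # large and negative
```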

The Kelly bet makes one key assumption: chance is neither with you nor against you. If you play n games, you will win exactly n*p of them and lose exactly n*q of them. With this assumption, which often aligns closely with reality, your principal will grow fairly reliably, and it will grow exponentially. Moreover, you will never go broke with a Kelly bet.

There is, though, a second answer that doesn’t make this assumption: go all-in every time. Your expected winnings, summed over all possible coin configurations, will be:
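(expected winnings) = (starting total) * p^n * (1 + b)^n, since only the all-heads sequence, which occurs with probability p^n, leaves you with anything.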

If you run the numbers, you’ll see that this second strategy beats the Kelly bet on average whenever the game is in your favor, even though the vast majority of its outcomes leave you broke.
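To actually run those numbers, here is a minimal simulation sketch of the game described in the next paragraph (fair coin, 3U payout per 1U staked). The round and trial counts are deliberately tiny compared to the setup below, so that the rare all-heads runs actually show up in the sample:

```python
import random

p, b = 0.5, 3.0               # fair coin, 3-to-1 payout
q = 1.0 - p
k_kelly = p - q / b           # = 1/3 for this game
n_rounds, n_trials = 10, 200_000

def play(bet_fraction, rng):
    """Play n_rounds, betting a fixed fraction of the current total each round."""
    total = 1.0
    for _ in range(n_rounds):
        stake = bet_fraction * total
        if rng.random() < p:
            total += b * stake
        else:
            total -= stake
    return total

rng = random.Random(0)
kelly  = [play(k_kelly, rng) for _ in range(n_trials)]
all_in = [play(1.0, rng) for _ in range(n_trials)]

def report(name, outcomes):
    outcomes = sorted(outcomes)
    mean = sum(outcomes) / len(outcomes)
    median = outcomes[len(outcomes) // 2]
    broke = sum(o == 0.0 for o in outcomes) / len(outcomes)
    print(f"{name}: mean ~{mean:.1f}, median ~{median:.2f}, fraction broke {broke:.4f}")

report("Kelly (k = 1/3)", kelly)
report("All-in (k = 1) ", all_in)

# Theoretical means for comparison:
print("expected value, Kelly :", (1 + k_kelly * (b * p - q)) ** n_rounds)  # ~17.8
print("expected value, all-in:", (p * (1 + b)) ** n_rounds)                # 1024
```

With these parameters the all-in strategy’s expected value is 2^10 = 1024 times your starting utility, roughly 58 times Kelly’s (4/3)^10 ≈ 17.8, yet about 99.9% of the all-in trials end with nothing.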

So I’ll offer you a choice. We’ll play the coin game with a fair coin. You get 3U (utility) for every 1U you bet if you win, and you lose your 1U otherwise. You can play the game with any amount of utility for up to, say, a trillion rounds. Would you use Kelly’s strategy, with which your utility would almost certainly grow exponentially to be far larger than your initial sum? Or would you use the second strategy, which performs far, far better on average, but with which you’ll almost certainly end up turning the world into a permanent hell?

This assumes nothing about the utility function other than that utility can reliably be increased and decreased by specific quantities. If you prefer the Kelly bet, then you’re not optimizing for any utility function on average, and so you’re not optimizing for any utility function at all.
