Oscar_Cunningham

Kelly *is* (just) about logarithmic utility

One other argument I've seen for Kelly is that it's optimal if you start with $a and want to get to $b as quickly as possible, in the limit b >> a. (And your utility function is linear in time, i.e. -t.)

You can see why this would lead to Kelly. All good strategies in this game grow the money roughly exponentially, so the time taken will be proportional to the logarithm of b/a.
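That proportionality can be sketched numerically (the win probability and even-money odds below are illustrative assumptions, not from the post):

```python
import math

p = 0.6                  # win probability of each bet (made-up number)
f = 2 * p - 1            # Kelly fraction for an even-money bet
# Expected growth rate of log-wealth per bet under Kelly staking.
g = p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Typical number of bets needed to grow the bankroll from $a to $b
# scales with log(b/a): each tenfold increase costs the same time.
for ratio in (10, 100, 1000):
    print(ratio, math.log(ratio) / g)
```

Squaring the target ratio doubles the time, as the exponential-growth argument predicts.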

So this is a way in which a logarithmic utility might arise as an instrumental value while optimising for some other goal, albeit not a particularly realistic one.

Kelly isn't (just) about logarithmic utility

Why teach about these concepts in terms of the Kelly criterion, if the Kelly criterion isn't optimal? You could just teach about repeated bets directly.

Are index funds still a good investment?

Passive investors hold the same proportion of each stock (to a first approximation). The remaining shares, held by active investors, therefore also comprise the same proportion of every stock, so in aggregate the active investors hold the market portfolio too. If stocks go down across the market, this reduces the value of the average active investor's holdings by the same amount as those of the passive investors.
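A toy check of that aggregation argument (the market caps and passive share are made-up numbers):

```python
# Toy market: three stocks; passive investors hold the same fraction p
# of every stock, active investors hold the rest. All numbers are
# illustrative assumptions.
market = {"A": 100.0, "B": 50.0, "C": 25.0}   # market cap per stock
p = 0.4                                        # passive share of each stock

passive = {s: p * v for s, v in market.items()}
active = {s: (1 - p) * v for s, v in market.items()}

def loss(portfolio, factor=0.8):
    """Fractional loss when every stock falls to `factor` of its value."""
    before = sum(portfolio.values())
    after = sum(factor * v for v in portfolio.values())
    return 1 - after / before

# A broad 20% decline hits both portfolios by exactly the same fraction.
print(loss(passive), loss(active))
```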

If you think stocks will go down across the market then the only way to avoid your investments going down is to not own stocks.

Scoring 2020 U.S. Presidential Election Predictions

I just did the calculations. Using the interactive forecast from 538 gives them a score of -9.027; using the electoral_college_simulations.csv data from The Economist gives them a score of -7.841. So The Economist still wins!
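For reference, the mechanics of such a calculation can be sketched as follows, assuming (as suggested in the parent comment) that the score is the summed log of the probabilities assigned to the realised outcomes; the state forecasts below are made-up illustrations, not the models' real numbers:

```python
import math

def log_score(forecasts):
    """Sum of log-probabilities assigned to what actually happened.
    forecasts: list of (p, happened) pairs, one per state."""
    return sum(math.log(p if happened else 1 - p) for p, happened in forecasts)

# Illustrative forecasts: (P(Dem win), whether the Dem actually won).
print(log_score([(0.9, True), (0.3, False), (0.6, True)]))
```

Higher (less negative) is better, and a model that puts more probability on what occurs scores higher.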

Scoring 2020 U.S. Presidential Election Predictions

Does it make sense to calculate the score like this for events that aren't independent? You no longer have the cool property that it doesn't matter how you chop up your observations.

I think the correct thing to do would be to score the single probability that each model gave to this exact outcome. Equivalently you could add the scores for each state, but for each use the probability conditional on the states you've already scored. For 538 these probabilities are available via their interactive forecast.

Otherwise you're counting the correlated part of the outcomes multiple times. So it's not surprising that The Economist does best overall, because they had the highest probability for a Biden win and that did in fact occur.

My suggested method has the nice property that if you score two perfectly correlated events then the second one always gives exactly 0 points.
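Both claims can be checked on a toy pair of correlated binary events (the joint probabilities are made up):

```python
import math

# Joint probabilities for the outcomes (first event, second event).
joint = {(1, 1): 0.5, (1, 0): 0.1, (0, 1): 0.1, (0, 0): 0.3}
outcome = (1, 1)  # what actually happened

# Score the exact joint outcome directly...
direct = math.log(joint[outcome])

# ...or score the first event, then the second conditional on the first.
p_first = joint[(1, 1)] + joint[(1, 0)]        # P(first = 1)
p_second_given = joint[(1, 1)] / p_first       # P(second = 1 | first = 1)
chained = math.log(p_first) + math.log(p_second_given)
print(direct, chained)  # the two decompositions agree

# With perfect correlation the second event scores exactly 0:
perfect = {(1, 1): 0.6, (0, 0): 0.4}
cond = perfect[(1, 1)] / 0.6                   # P(second | first) = 1
print(math.log(cond))
```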

PredictIt: Presidential Market is Increasingly Wrong

Maybe there's just some new information in Trump's favour that you don't know about yet?

Classifying games like the Prisoner's Dilemma

I've been wanting to do something like this for a while, so it's good to see it properly worked out here.

If you wanted to expand this you could look at games which weren't symmetrical in the players. So you'd have eight variables, W, X, Y and Z, and w, x, y and z. But you'd only have to look at the possible orderings within each set of four, since it's not necessarily valid to compare utilities between people. You'd also be able to reduce the number of games by using the swap-the-players symmetry.
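A sketch of the resulting count, restricted to strict preferences (no ties, which is an assumption beyond the comment):

```python
from itertools import permutations

# Strict ordinal 2x2 game: each player ranks the four outcomes
# (CC, CD, DC, DD) from 1 to 4 with no ties. In the asymmetric case
# the two rankings are independent, giving 24 * 24 = 576 games.
rankings = list(permutations((1, 2, 3, 4)))

def swap_players(game):
    """Relabelling the players turns outcome CD into DC, so each
    ranking has its 2nd and 3rd entries exchanged and the two
    rankings trade places."""
    r1, r2 = game
    t = lambda r: (r[0], r[2], r[1], r[3])
    return (t(r2), t(r1))

games = [(r1, r2) for r1 in rankings for r2 in rankings]
# Keep one canonical representative from each swap-symmetry orbit.
distinct = {min(g, swap_players(g)) for g in games}
print(len(games), len(distinct))
```

Swapping the players is an involution with 24 fixed games (those where each player's ranking is the relabelled copy of the other's), so the 576 games collapse to (576 + 24) / 2 = 300 distinct ones.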

North Korea were caught cheating in 1991 and given a 15-year ban, until 2007. They were also disqualified from the 2010 IMO because of weaker evidence of cheating. Given this, an alternative hypothesis is that they have also been cheating in other years and weren't caught. The adult team leaders at the IMO do know the problems in advance, so cheating is not too hard.