Nassim Taleb on Election Forecasting

by NatashaRostova · 2 min read · 26th Nov 2016 · 16 comments



Nassim Taleb recently posted to his Twitter a mathematical draft on refining election forecasting.

You don't need to follow all the math to see why it's so cool. His model seems to be that we should forecast the election outcome, including the uncertainty between now and election day, rather than build a forecast that takes current poll numbers and implicitly assumes nothing changes.
The mechanism of his model focuses on forming an unbiased time series, formulated using stochastic methods. The mainstream methods, by contrast, focus on multi-level Bayesian models that estimate how the election would turn out if it were held today.
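The contrast can be made concrete with a toy calculation. This is my own minimal sketch of the option-style idea, not Taleb's exact formulas: I assume the poll share drifts as a driftless random walk with a made-up per-day volatility (`daily_vol`), and price "candidate wins" like a binary option.

```python
# A minimal sketch of the idea (my illustration, not Taleb's exact formulas):
# model the final vote share as a driftless random walk starting from the
# current poll number, and price the "win" like a binary option.
from math import sqrt
from statistics import NormalDist

def win_probability(poll_share, days_to_election, daily_vol=0.005):
    """P(final share > 50%) when the share follows a random walk.

    daily_vol is an assumed per-day standard deviation of the poll share.
    """
    sigma = daily_vol * sqrt(days_to_election)  # uncertainty left until election day
    return NormalDist().cdf((poll_share - 0.5) / sigma)

# A 52% poll a year out is far less informative than the same poll the day before:
print(win_probability(0.52, 365))  # modest edge, pulled toward 50-50
print(win_probability(0.52, 1))    # near-certainty
```

The same 52% poll yields a very different forecast depending on how much random movement can still happen before election day, which is exactly the uncertainty a "run the election today" model leaves out.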
 
Taleb's framing seems to make more sense. While it's safe to assume each candidate will always want the highest chance of winning, the process by which the two candidates interact is highly dynamic and strategic with respect to the election date.

When you stop to think about it, it's actually remarkable that elections are so incredibly close to 50-50, with a 3-5% margin of victory generally counting as immense. That closeness captures an underlying dynamic of political game theory.

(At the more local level this isn't always true, due to factors such as incumbent advantage, local party domination, and strategic funding choices. The point, though, is that when those frictions are ameliorated by the importance of the presidency, the equilibrium tends toward elections very close to 50-50.)

So, back to the mechanism of the model: Taleb applies a no-arbitrage condition (borrowed from options pricing) to enforce time-consistency on the forecast, as scored by the Brier score. The concept is similar to financial options, where you can go bankrupt or make money even before the final event. In Taleb's world, if someone like Nate Silver is publishing forecasts that swing widely over time before the election, it suggests he hasn't put any time-dynamic constraints on his model.
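One way to see the intuition is to compare a forecaster who prices the race "as if it were held today" against one who prices in the uncertainty remaining before election day. The sketch below is my own toy construction, not Taleb's: the sinusoidal poll path and the 0.5-point daily volatility are assumptions chosen purely for illustration.

```python
# Toy comparison (my own construction, not Taleb's): a snapshot forecaster
# vs. one that accounts for the time remaining until the election.
from math import sqrt, sin
from statistics import NormalDist, pstdev

DAILY_VOL = 0.005  # assumed per-day standard deviation of the poll share

def option_style(share, days_left, vol=DAILY_VOL):
    """Forecast that prices in the uncertainty remaining before election day."""
    return NormalDist().cdf((share - 0.5) / (vol * sqrt(days_left)))

def snapshot(share, vol=DAILY_VOL):
    """Forecast 'as if the election were held today' (one day of noise only)."""
    return NormalDist().cdf((share - 0.5) / vol)

# A poll share oscillating one point either side of 50-50, far from the election.
days = range(300, 100, -1)
shares = [0.5 + 0.01 * sin(d / 10) for d in days]

naive = [snapshot(s) for s in shares]
hedged = [option_style(s, d) for s, d in zip(shares, days)]

# The snapshot forecast whipsaws between roughly 2% and 98% on poll noise;
# the option-style forecast stays near 50% while much uncertainty remains.
print(round(pstdev(naive), 2), round(pstdev(hedged), 2))
```

The whipsawing series is the one Taleb's no-arbitrage condition rules out: a bettor could systematically trade against a forecaster whose probabilities swing that much on noise.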

The math rests on the assumption that far out from the election, when uncertainty is high, the best forecast is close to 50-50. That assumption would have to be tested empirically. Still, stepping back from the math, it does feel intuitive that an election forecast with high variation a year away from the event is not worth relying on, and that sticking closer to 50-50 would offer a better full-sample Brier score.


I'm not familiar enough with the practical modelling to say whether this is feasible. Sometimes the ideal models are too hard to estimate.

I'm interested in hearing any thoughts on this from people who are familiar with forecasting or have an interest in the modelling behind it.

I also have a specific question to tie this back to a rationality-based framework: when you read Silver (or your preferred reputable election forecaster; I like Andrew Gelman) post their forecasts before the election, do you accept them as equal to or better than any estimate you could come up with? Or do you make a mental adjustment or discount based on some factor you think they've left out? That factor might be divergence from prediction markets, or perceived changes in nationalism, or politician-specific skills. (E.g. Scott Adams claimed he could predict that Trump would persuade everyone to vote for him. While it's tempting to write him off as a pundit charlatan, or to say he lacks sufficient proof, we also can't prove his model was wrong.) I'm interested in learning the reasons we might reasonably disagree with, or be skeptical of, the polls, knowing of course that any such adjustment must be tested to know the true answer.

This is my first LW discussion post -- open to feedback on how it could be improved.
