- Track your personal forecasts. Get a notebook, a spreadsheet, or fatebook.io, write down what you think will happen along with a probability (as a % or odds), and follow up on it later.
- Bet fake money on manifold.markets. It's still pretty addictive, so if you have a gambling problem, please avoid it.
- Take part in the monthly estimation game. It tests your ability to estimate quantities, which is correlated with your ability to navigate the world well.
- Forecast on metaculus.com. Questions are often focused on geopolitics.
- Read Superforecasting by Philip Tetlock, if books are a way you learn well.
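The "follow up on it later" step in the first suggestion can be sketched in code. Below is a minimal, hypothetical example of a personal forecast log with a simple calibration check: group your forecasts by the probability you stated, then compare that stated confidence with how often those events actually happened. All entries and probabilities are invented for illustration.

```python
# Minimal sketch of a personal forecast log and calibration check.
# Every record below is hypothetical.

from collections import defaultdict

# Each record: (short description, probability you wrote down,
#               outcome: 1 = it happened, 0 = it didn't)
log = [
    ("finish report by Friday", 0.9, 1),
    ("rain on Saturday",        0.6, 0),
    ("train late on Monday",    0.6, 1),
    ("get the job offer",       0.2, 0),
]

# Group outcomes into probability buckets.
buckets = defaultdict(list)
for _, p, outcome in log:
    buckets[round(p, 1)].append(outcome)

# A well-calibrated forecaster's "60%" events happen about 60% of the time.
for p in sorted(buckets):
    outcomes = buckets[p]
    print(f"said {p:.0%} -> happened {sum(outcomes)}/{len(outcomes)} times")
```

With a notebook or spreadsheet the same check works by hand: sort your predictions into confidence buckets and count hits. Sites like fatebook.io do this bookkeeping for you.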
Some updates for the community

These are attempted consensus updates, but the LessWrong wiki doesn't really have a process for that, so feel free to edit.

Forecasting tells you information about forecasters, not about the future. As Michael Story writes here: "Most of the useful information you produce [in forecasting] is about the people, not the world outside... The book Superforecasting was all about the forecasters themselves, not the future we spent years of effort predicting as part of the study, which I haven't heard anyone reference other than as anecdotes about the forecasters". We should trust forecasters more, rather than trusting their forecasts, which are often too specific to give a clear picture of the future.

It is difficult to forecast things policy makers actually care about. Forecasting sites ask questions like "will Putin leave power" rather than "if Putin leaves power between July 18th and the end of August, how will that affect the likelihood of a rogue nuclear warhead?" Even that question probably isn't specific enough to be useful, since it doesn't forecast specific policy outcomes. And even if it did, decision makers would have to trust the results, which they currently largely don't.

Decision makers largely don't trust forecasts. Even with a perfect set of 1,000 forecasts that gave policy recommendations, decision makers would need to want to act on them. That they don't is a significant bottleneck, even in the magical world where we have such a set.

Forecasting beyond 3 years is not good. Anything above a .25 Brier score is worse than random. Many questions are too specific and too far away for forecasting to be useful for them.

Forecasting may or may not be overrated in rationalism/EA. It's easy to feel that it is, but other than the overlap with Manifold or Metaculus, I'd struggle to give evidence for that. There aren't many forecasts shown on LessWrong, and many top LessWrong users don't have clear forecasting track records.
In our actions, it seems we are not that pro-forecasting.
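One way to read the ".25 is worse than random" claim above (assuming it refers to the Brier score, the standard accuracy measure in this literature): on binary questions, the Brier score is the mean squared error between your stated probability and the 0/1 outcome, and a know-nothing forecaster who says 50% to everything scores exactly 0.25 no matter what happens. So scores above 0.25 mean the forecasts were less accurate than stating total ignorance. A quick check:

```python
# Brier score: mean squared error between stated probabilities
# and binary outcomes (0 or 1). Lower is better.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 0, 1, 1, 0]  # arbitrary example outcomes

# Always saying 50% scores (0.5)^2 = 0.25 on every question,
# regardless of what actually happens.
print(brier_score([0.5] * 5, outcomes))  # 0.25

# A perfect forecaster scores 0.0; a confidently wrong one scores 1.0.
print(brier_score([1, 0, 1, 1, 0], outcomes))  # 0.0
```

This is why 0.25 is the natural "random" baseline for the 3-year claim: long-range forecasts scoring above it would have been beaten by simply refusing to guess.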