What scope do you have in mind when you refer to forecasting? Is it specifically Tetlockian forecasting / prediction-market-style forecasting, where most of the value is a forecasted number answering a well-defined question, and the methodology often involves aggregating the views of many people, each of whom didn’t spend much time?
If so, then I agree directionally, and in particular agree that the current track record isn’t great. That said, I think this sort of forecasting will plausibly be quite useful for AI stuff as we get closer to AGI/ASI: it may become easier to operationalize important questions that don’t require long chains of conceptual thinking, and there will be lots of important sub-questions to cover, some of which may be more answerable by superforecaster-like techniques once we have better trends / base rates to extrapolate from, since we’ll be closer to the events we care about. And having a bunch of AI labor might help too.
But overall I am at least currently much more excited about stuff like AI 2027 or OP worldview investigations than Tetlockian forecasting, i.e. I’m excited about work involving deep thinking, for which the primary value doesn’t come from specific quantitative predictions but instead from things like introducing new frameworks (which is why I switched what I was working on). I’m not sure if AI 2027 or OP worldview investigations work is meant to be included in your post.
The most recent Tetlockian forecasting style thing I've spent substantial time on is the 2025 and 2026 AI forecasting surveys, in which hundreds of people each year have made predictions a year out on benchmarks, and other indicators such as revenue.
The theory of change is to (a) establish common knowledge about how fast things are going relative to people's expectations (and we collect data on people's overall views on when AGI will be reached so we can sort of see if we're "on track" for that), and (b) identify which people seem to be making the most accurate predictions. Importantly, it is not to elicit predictions that are directly useful for important decisions.
I've observed some evidence of this working, e.g. re: (a) establishing common knowledge, Anson of Epoch wrote an analysis that I've seen referenced a few times. I'm glad to have a data point against the common refrain of "people underpredict benchmark scores and overpredict real-world impact" from revenue outpacing people's predictions (though it is a narrow, single data point).
Re: (b) identifying who is making the most accurate predictions, I found it informative that in Anson's analysis (footnote 1), forecasters with pre and post-2030 timelines performed similarly. I've seen some people cite Ryan G and Ajeya's #2 and #3 performance as evidence that we should listen to them, which is maybe good but I think people might be over-updating on the results with so few questions (I certainly pay attention to Ryan and Ajeya's forecasts, but almost entirely for other reasons).
Overall, it's unclear to me how impactful this has been. I decided to run the 2026 survey because it seems at least a bit impactful and doesn't take that much time (I logged 18 hours on setting up the 2026 version, and I'd guess that the others who helped spent a total of 20-60 hours). But the decision was borderline.
I've long had some sense like this, though not the expertise to make a claim like this.
My impression is that a lot of the conceit of forecasting and prediction markets boils down to
Have things like this happened? E.g.
Related comment I made 2 years ago and ensuing discussion: https://forum.effectivealtruism.org/posts/ziSEnEg4j8nFvhcni/new-open-philanthropy-grantmaking-program-forecasting?commentId=7cDWRrv57kivL5sCQ
3 unrelated points:
I'd love to see this argument expanded further but also appreciate what you've written here.
You sort of mention this, but it strikes me that the argument doesn't need to be "are prediction markets useful for doing good?" but just "do the improvements to prediction markets and infrastructure made with EA money and resources actually meaningfully increase the amount of good prediction markets do?"
Lastly, may I suggest cross-posting this to the EA forum?
I liked this post. Strong upvote. I'm neutral on the funding/not-funding issue, but your point about starting not with the tool but with the problem, and only then selecting the tool(s), is so very important. I frequently see the reverse in public debate and policy making.
How do you rate the educational benefit to participating in prediction markets for about a year? You mention that trivial/gambling markets on short-term BTC movements don't sharpen skills, what about non-trivial markets? How does it compare to other educational/community activities like commenting on LessWrong or attending meetups?
People have a tool they want to use, whether that be cryptocurrency or forecasting, and then try to solve problems with it because they really believe in the solution, but I think this is misguided.
It is true that this (almost) never works, and has resulted in many, many wasted investment dollars.
Unfortunately it is also true that when a problem comes along, it is very convenient if someone else has already partially developed the underlying technology that can be repurposed to solve it. Cuts years off the timeline to a solution. Use of blockchain for critical mineral passports, for example, is starting to happen because the tool was already there.
It is also also true that complex technologies do often see decades or generations of failed implementations before they clear the last hurdle and make a big impact. Light bulbs and carbon fiber and steam engines are examples of that.
I'm not saying this is a good reason to fund any given project/company/technology that you think is failing. I'm definitely not saying that current implementations of prediction markets are fulfilling the potential anyone hoped for, or even that they're net-positive in their impact today. And efficiently killing off failures and studying their remains is a core part of the innovation process. But I do think ways to aggregate hard-to-share information from many minds are the kind of thing our future selves might be glad we experimented with early and repeatedly.
Summary
EA and rationalists got enamoured with forecasting and prediction markets and made them part of the culture, but this hasn’t proven very useful, yet it continues to receive substantial EA funding. We should cut it off.
My Experience with Forecasting
For a while, I was the number one forecaster on Manifold. This lasted for about a year until I stopped just over 2 years ago. To this day, despite quitting, I’m still #8 on the platform. Additionally, I have done well on real-money prediction markets (Polymarket), earning mid-5 figures and winning a few AI bets. I say this to suggest that I would gain status from forecasting being seen as useful, but I think, to the contrary, that the EA community should stop funding it.
I’ve written a few comments throughout the years that I didn’t think forecasting was worth funding. You can see some of these here and here. Finally, I have gotten around to making this full post.
Solution Seeking a Problem
When talking about forecasting, people often ask questions like “How can we leverage forecasting into better decisions?” This is the wrong way to go about solving problems. You solve problems by starting with the problem, and then you see which tools are useful for solving it.
The way people talk about forecasting is very similar to how people talk about cryptocurrency/blockchain. People have a tool they want to use, whether that be cryptocurrency or forecasting, and then try to solve problems with it because they really believe in the solution, but I think this is misguided. You have to start with the problem you are trying to solve, not the solution you want to apply. A lot of work has been put into building up forecasting, making platforms, hosting tournaments, etc., on the assumption that it was instrumentally useful, but this is pretty dangerous to continue without concrete gains.
We’ve Funded Enough Forecasting that We Should See Tangible Gains
It’s not the case that forecasting/prediction markets are merely in their infancy. A lot of money has gone into forecasting. On the EA side of things, it’s near $100M. If I convince you later on in this post that forecasting hasn’t given any fruitful results, it should be noted that this isn’t for lack of trying/spending.
The Forecasting Research Institute received grants in the 10s of millions of dollars. Metaculus continues to receive millions of dollars per year to maintain a forecasting platform and conduct some forecasting tournaments. The Good Judgment Project and the Swift Centre have received millions of dollars for doing research and studies on forecasting and teaching others about forecasting. Sage has received millions of dollars to develop forecasting tools. Many others, like Manifold, have also been given millions by the EA community in grants/investments at high valuations, diverting money away from other EA causes. We have grants for organizations that develop tooling, even entire programming languages like Squiggle, for forecasting.
On the for-profit side of things, the money gets even bigger. Kalshi and Polymarket have each raised billions of dollars, and other forecasting platforms have also raised 10s of millions of dollars.
Prediction markets have also taken off. Kalshi and Polymarket are both showing all-time highs and growth in month-over-month volume, each in the 10s of billions of dollars per month. Total prediction market volume is something like $500B/year, but it just isn’t very useful. We get to know the odds on every basketball player prop, and whether BTC will go up or down in the next 5 minutes. While some people suggest that these trivial markets help sharpen skills or identify good forecasters, I don’t think there is any evidence of this; it strikes me as wishful thinking.
If forecasting were really working well and were very useful, you would see the bulk of the money spent not on forecasting platforms but directly on forecasting teams or on subsidizing markets on important questions. We have seen very little of this; instead, the money has gone to platforms, tooling, and the like. We already had a few forecasting platforms that the market was going to fund on its own, and yet we continue to create more.
There has also been an incredible amount of (wasted) time spent on forecasting by the EA/rationality community. Lots of people have been employed full-time doing forecasting or adjacent work, but perhaps even larger is the number of part-time hours that have gone into forecasting on Manifold, among other things. I would estimate that thousands of person-years have gone into this activity.
Hits-based Giving Means Stopping the Bets that Don’t Pay Off
You may be tempted to justify forecasting on the grounds of hits-based giving. That is to say, it made sense to try a few grants in forecasting because the payoff could have been massive. But hits-based giving implies we should be looking for big payoffs, and that we have to stop funding the bets that don’t deliver them.
I want to propose my leading theory for why forecasting continues to receive 10s of millions per year in funding. That is, it has become a feature of EA/rationalist culture. Similar to how EAs seem to live in group houses or be polyamorous, forecasting on prediction markets has become a part of the culture that doesn’t have much to do with impact. This is separate from parts of EA culture that we do for impact/value alignment reasons, like being vegan, donating 10%+ of income, writing on forums, or going to conferences. I submit that forecasting is in the former category.
At this point, if forecasting were useful, you would expect to see tangible results. I can point you to hundreds of millions of chickens that lay eggs outside of cages, and to observable families that are no longer living in poverty. I can show you pieces of legislation on AI that have passed or almost passed. I can show you AMF’s successes, with about 200k lives saved, far lower levels of malaria, and higher incomes and life expectancies for people who would otherwise have died without our actions. I can go at the individual level, and, more importantly, at the broad statistical level. I don’t think there is very much in the way of “this forecasting happened, and now we have made demonstrably better decisions regarding this terminal goal that we care about”. Despite no tangible results, people continue to have the dream that forecasting will inform better decision-making or lead to better policies. I just don’t see any proof of this happening.
Feels Useful When It Isn’t
Forecasting is a very insidious trap because it makes you think you are being productive when you aren’t. I like to play bughouse and a bunch of different board games, but when I play them I don’t claim to do so for impact reasons, on effective altruist grounds. If I spend time learning strategy for these board games, I don’t pretend that this is somehow making the world better off. Forecasting is dangerous precisely because it is a fun, game-like activity nearly perfectly designed to attract EA/rationalist types: you get to be right when others are wrong, bet on your beliefs, and partake in the cultural practice. It is almost engineered to be a time waster for these groups because it provides the illusion that you are improving the world’s epistemics when, in reality, it’s mainly just a fun game. You get to feel that you are improving the world’s epistemics, and that there must therefore be some flow-through effects, and thus you can justify the time spent correcting a market from 57% to 53% on some AI forecasting question, or on a question about whether the market you are trading on will have an even or odd number of traders, or whether someone will get a girlfriend by the end of the year.
Conclusion
A lot of people still like the idea of doing forecasting. If it becomes an optional, benign activity of the EA community, then it can continue to exist, but it should not continue to be a major target for philanthropic dollars. We are always in triage, and forecasting just isn’t making the cut. I’m worried that we will continue to pour community resources into forecasting, and it will continue to be thought of in vague terms as improving or informing decisions, when I’m skeptical that this is the case.