In quantitative finance, the value of forecasting is obvious. No decisions are made without forecasts, and every forecast is created for some decision.
On Metaculus¹, the signal is just as sharp, but the market for the signal is fuzzier. The gap isn’t quality: it’s that the value proposition is implicit.
This post is my dimensionalization of forecast value: an identification of the unique factors that determine the importance of answering a question. These factors are Clarity, Leverage, and Efficiency (CLE).
I use CLE to decide when an answer will be valuable, before I spend time asking or answering the question.
To find value, I focus on questions where Clarity, Leverage, and Efficiency are all high.
Forecasts are useful for making decisions under uncertainty. This is a specific context. It is not always the relevant context.
Much existing content on forecasting assumes a question worth asking, and focuses on methodology improvements. Predicting the future is an unsolved, important problem; it’s good that smart people are working on these improvements.
But suppose I can get a decent forecast answer to any question about the future. What questions should I actually try to answer?
I don’t need a forecast if I already know what I’m going to do, or if nothing rides on the decision.
Even in a decision-making context, forecasts are less helpful when the stakes are small, the decision is reversible, or no plausible answer would change my mind.
Forecasts are more helpful when I am genuinely uncertain what to do, the cost of choosing wrong is high, and the work of answering can be reused.
How do I take this mess of factors and figure out what questions are worth answering?
This section proposes a framework for prioritizing forecasts.
There are many sources of uncertainty, and many decisions to make. How do I know what questions I want answered?
I will focus on three sources of value from forecasts: Clarity, Leverage, and Efficiency.
These are the core forecasting value drivers². They are intended to be roughly orthogonal, and roughly measurable³. This enables prioritization of forecasts based on the value they could create if I had them.
Forecasting is only useful when I don't already know what to do. The more uncertain I am about what to do, the more I need Clarity. The more obvious a forecast made my decision, the more that forecast provided Clarity.
Conversely, if I won’t change my mind in response to new information, then new information (e.g. a forecast) has little value. If I don’t change my mind after seeing a forecast, it didn’t provide much value.
Valuable forecasts provide Clarity.
What drives Clarity?
I want to know when an answer will provide Clarity before I spend time asking the question. I dimensionalize Clarity as follows:
Stronger Clarity drivers make a forecast likely to change my mind.
How to measure Clarity?
Bayes Shortcut: if I know my prior uncertainty (σ) and my forecast noise (σᶠ), and I assume normality (a bold assumption), I can compute the probability that the forecast will push me across a decision boundary before I ever see it.
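As a sketch of that computation: with a normal prior N(μ, σ²) and an unbiased forecast carrying normal noise σᶠ, the posterior mean is itself a random variable before the forecast arrives (the “preposterior”), with standard deviation σ² / √(σ² + σᶠ²). The chance it lands on the far side of a decision boundary b is then a normal tail probability. The function names below are illustrative, not part of the framework:

```python
import math

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_cross(mu, sigma, sigma_f, boundary):
    """Pre-forecast probability that the posterior mean lands on the
    other side of a decision boundary, assuming a normal prior
    N(mu, sigma^2) and unbiased normal forecast noise sigma_f."""
    # Before the forecast is seen, the posterior mean is distributed
    # N(mu, sigma^4 / (sigma^2 + sigma_f^2)) -- the "preposterior".
    s = sigma**2 / math.sqrt(sigma**2 + sigma_f**2)
    if s == 0:
        return 0.0
    z = (boundary - mu) / s
    # Probability of ending up above the boundary if currently below,
    # or below it if currently above.
    return 1.0 - phi(z) if mu < boundary else phi(z)
```

Note the limiting behavior: a very noisy forecast (σᶠ ≫ σ) almost never moves the decision, so the crossing probability, and hence the Clarity, collapses toward zero.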
Either way: it is possible to quantify how likely it is that I will change my mind in response to a forecast before knowing what the forecast is.
Low Clarity: “What would I even do with this information?”
If I see a question that seems cryptic, I know it is low in Clarity.
High Clarity: “A good answer would make my decision for me.”
If I know what I would do given an answer to a question, I know it is high in Clarity.
Forecasting is useful when making the right decision is critical. The more valuable it is to make the right choice, the more a forecast could provide Leverage. The more I benefit from a forecast inducing the right choice, the more it has provided Leverage.
Conversely, even a razor‑sharp forecast is meh if the stakes are pocket change or the decision is easily reversed.
Valuable forecasts provide Leverage.
What drives Leverage?
Again, I want to know when an answer will provide Leverage before I spend time asking the question. I dimensionalize Leverage as follows:
Stronger Leverage drivers make it important to (rely on forecasts to) decide correctly, because the cost of being wrong is huge.
How to measure Leverage?
Utility Delta: if I am willing to do some scenario analysis, I can take the difference between the expected utility of the best achievable choice and that of my current choice.
You can review the EVSI (expected value of sample information) literature for a formal treatment.
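As a hedged sketch, scenario analysis can bound Leverage with the expected value of perfect information (EVPI), which upper-bounds EVSI. The payoff table, probabilities, and choice names here are all made up for illustration:

```python
# Hypothetical scenario analysis: scenarios with probabilities,
# choices as rows, utilities as entries.
probs = [0.5, 0.3, 0.2]            # assumed scenario probabilities
utils = {                          # assumed payoff table
    "hold":   [0,   0,   0],
    "invest": [10, -5, -20],
}

def expected(choice):
    # Expected utility of committing to one choice across scenarios.
    return sum(p * u for p, u in zip(probs, utils[choice]))

# Best I can do today, without any forecast:
baseline = max(expected(c) for c in utils)

# With a perfect forecast I could pick the best choice per scenario:
clairvoyant = sum(p * max(utils[c][i] for c in utils)
                  for i, p in enumerate(probs))

leverage = clairvoyant - baseline  # EVPI; here 5.0
```

No real forecast is perfect, so this is a ceiling on Leverage, not an estimate of it.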
Either way: it is possible to quantify how important changing my mind in response to a forecast could be, before knowing what the forecast is.
Low Leverage: “Why do I care about knowing this?”
If I see a question that seems pointless, I know it is low in Leverage.
High Leverage: “If I could know this, my whole life would change.”
If I know I need to get the answer right, I know it is high in Leverage.
Forecasting is most valuable when work done once continues to pay off. If the insights, data, or tooling generated today will power many future decisions, then the Efficiency leg of the stool is sturdy. When the same effort is a one‑off sunk cost, Efficiency collapses, even if Clarity and Leverage look great on paper.
What drives Efficiency?
Again, I want to know when an answer will provide Efficiency before I spend time asking the question. I dimensionalize Efficiency as follows:
In short: high spillover, low acquisition cost = high Efficiency.
How to measure Efficiency?
Efficiency compares future indirect benefits to current direct costs: roughly, the expected reuse value of the answer divided by its acquisition cost.
If the ratio is ≫ 1, the question “pays rent.” If it is ≪ 1, skip or defer.
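One toy way to put a number on that ratio (the discounting model and parameter names are my own invention, not part of the framework):

```python
def efficiency(acquisition_cost, reuse_value_per_use, expected_reuses,
               discount=0.9):
    """Ratio of discounted future reuse benefits to the one-time cost
    of answering the question. > 1 means the question 'pays rent'."""
    # Each future reuse is worth a bit less than the last.
    benefit = sum(reuse_value_per_use * discount**t
                  for t in range(expected_reuses))
    return benefit / acquisition_cost
```

For example, a question costing 100 units whose answer yields 50 units of reusable value across five future decisions scores about 2, clearing the bar; the same answer reused twice against a 1000-unit cost scores far below 1.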
Low Efficiency: “Will this even cover its own bill?”
If the answer will be expensive to get with limited reuse potential, I know the question is low in Efficiency.
High Efficiency: “This work amortizes fast.”
If getting the answer produces a process or artifact that can be cheaply reused, I know the question is high in Efficiency.
Efficiency in Context
Even a crystal‑clear, high‑leverage forecast can be a net loss if it takes a platoon of analysts and a seven‑figure data purchase to run once. Conversely, a medium‑Clarity, medium‑Leverage signal can be a no‑brainer when it drops out of telemetry I already collect.
Why multiply? Because the factors are complements: if any one of Clarity, Leverage, or Efficiency is near zero, the question is near worthless, no matter how strong the other two are. A product captures that; a sum does not.
But there are plenty of other reasonable modeling choices once I have the inputs.
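For concreteness, here is the multiplicative choice with made-up scores. The point is that a zero on any dimension zeroes the total, which an additive score would hide:

```python
def cle_score(clarity, leverage, efficiency):
    # clarity:    probability the forecast changes my decision (0 to 1)
    # leverage:   expected utility gained when it does
    # efficiency: reuse benefit relative to acquisition cost
    return clarity * leverage * efficiency

# A forecast that can never change the decision is worthless,
# however large the stakes and however cheap the question:
useless = cle_score(0.0, 100.0, 5.0)   # 0.0
useful  = cle_score(0.16, 5.0, 2.0)
```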
CLE is a very general approach for assessing the value of answering a question.
I am actively using this framework to surface valuable questions in my work. I hope it helps you!
¹ I am an advisor to Metaculus, but all views here are my own.
² This is not the only possible model of forecast value. All models are wrong, but some are useful. This one is useful to me, and I hope it will be useful to you.
³ I say “roughly” because there are many choices about how to bucket factors into Clarity/Leverage/Efficiency, and many choices for quantification.