Forecasters vary on at least three dimensions: 

  1. accuracy - as measured by, e.g., your average Brier score over time (the Brier score is a measure of error: if you think a proposition p is 0.7 likely and p turns out to be true, your Brier score on that forecast is (1 - 0.7)^2 = 0.09).
  2. calibration - how close are they to perfect calibration, where for any x, of the statements to which they assign probability x%, x% turn out to be true?
  3. reliability - how much evidence does a given forecast of yours provide for the proposition in question being true? I think of this as "for a given confidence level c, what's the Bayes factor P(you say the probability of x is c | x) / P(you say the probability of x is c | not-x)?"

I wonder how these three properties relate to each other. 
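To make these concrete, here is a minimal sketch (a toy example of my own, with a made-up track record and illustrative names) of how each of the three could be estimated from a list of (stated probability, outcome) pairs:

```python
from collections import defaultdict

# Hypothetical track record: (stated probability, did the proposition turn out true?) pairs.
forecasts = [(0.9, True), (0.9, True), (0.9, False), (0.7, True),
             (0.7, False), (0.3, False), (0.1, False), (0.9, True)]

# 1. Accuracy: average Brier score, i.e. the mean of (p - outcome)^2.
brier = sum((p - (1.0 if hit else 0.0)) ** 2 for p, hit in forecasts) / len(forecasts)

# 2. Calibration: for each stated probability, the observed frequency
#    with which forecasts at that probability came true.
by_p = defaultdict(list)
for p, hit in forecasts:
    by_p[p].append(hit)
calibration = {p: sum(hits) / len(hits) for p, hits in by_p.items()}

# 3. Reliability (in the sense above): for a confidence level c, the
#    Bayes factor P(you say c | true) / P(you say c | false).
#    (Undefined when the forecaster never says c about a false proposition.)
def bayes_factor(c, forecasts):
    true_ps = [p for p, hit in forecasts if hit]
    false_ps = [p for p, hit in forecasts if not hit]
    return (true_ps.count(c) / len(true_ps)) / (false_ps.count(c) / len(false_ps))

print(brier, calibration, bayes_factor(0.9, forecasts))
```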

(A) Assume that you are perfectly calibrated at 90% and you say "It will rain today with 90% probability" - how should I update on your claim, given that I know about your perfect calibration? My first intuition is that, given your perfect calibration, P(you say rain with 90% | rain) is 90% and P(you say rain with 90% | no rain) is 10%. But that doesn't follow from the fact that you are perfectly calibrated, does it? Does your calibration have any bearing at all on your reliability (apart from the fact that both positively correlate with forecasting competence)? If it doesn't - why do we care about being calibrated?

(B) How does accuracy relate to reliability? Can I infer something about your reliability from knowing your average Brier score over time?
 


Answer by Anon User (Sep 11, 2022):

It would seem that, by definition, perfectly calibrated forecasters are equally reliable, and that among perfectly calibrated forecasters the more accurate ones are those who forecast more extreme probabilities more often.
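Here is a toy check of that claim (my own construction, not the answerer's): two forecasters who are both perfectly calibrated by design, where the bolder one ends up with the better (lower) Brier score.

```python
# Two perfectly calibrated forecasters over the same 200 days (rain on exactly
# half of them); the bolder one gets the better (lower) Brier score.

def brier(track_record):
    # track_record: (forecast probability, outcome) pairs, outcome 1 = rain.
    return sum((p - o) ** 2 for p, o in track_record) / len(track_record)

bold = [(0.9, 1)] * 90 + [(0.9, 0)] * 10 + [(0.1, 1)] * 10 + [(0.1, 0)] * 90
timid = [(0.6, 1)] * 60 + [(0.6, 0)] * 40 + [(0.4, 1)] * 40 + [(0.4, 0)] * 60

print(brier(bold))   # ~0.09
print(brier(timid))  # ~0.24
```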

Comments:

I am extremely interested in these sorts of questions myself (message me if you would like to chat more about them). In terms of the relation between accuracy and calibration, I think you might be able to see some of it in Open Philanthropy's report on the quality of their predictions. In footnote 10, I believe they decompose the Brier score into a term for miscalibration, a term for resolution, and a term for entropy.
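For reference (I have not checked the exact formula in that footnote, so treat this as the generic version): the standard Murphy decomposition splits the Brier score into a reliability/miscalibration term, a resolution term, and an uncertainty term, the last being the entropy-like spread of the outcomes. A sketch:

```python
from collections import defaultdict

def murphy_decomposition(track_record):
    # track_record: (forecast probability, outcome in {0, 1}) pairs.
    n = len(track_record)
    base_rate = sum(o for _, o in track_record) / n
    bins = defaultdict(list)
    for p, o in track_record:
        bins[p].append(o)  # group forecasts by stated probability
    rel = sum(len(v) * (p - sum(v) / len(v)) ** 2 for p, v in bins.items()) / n
    res = sum(len(v) * (sum(v) / len(v) - base_rate) ** 2 for v in bins.values()) / n
    unc = base_rate * (1 - base_rate)
    return rel, res, unc  # Brier score = rel - res + unc

# Toy track record: somewhat miscalibrated at 0.9, calibrated at 0.2.
track = [(0.9, 1)] * 80 + [(0.9, 0)] * 20 + [(0.2, 1)] * 10 + [(0.2, 0)] * 40
rel, res, unc = murphy_decomposition(track)
brier = sum((p - o) ** 2 for p, o in track) / len(track)
assert abs(brier - (rel - res + unc)) < 1e-9  # the decomposition is exact
```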

Also, would you be able to explain a bit how it would be possible for someone who is perfectly calibrated at predicting rain to predict rain at 90% probability, yet for the Bayes factor based on that information not to be 9? It seems to me that for someone to be perfectly calibrated at the 90% confidence level, the ratio of it having rained to it not having rained whenever they predict 90% rain has to be 9:1, so P(say rain 90% | rain) = 90% and P(say rain 90% | no rain) = 10%?

Hey, thanks for the answer, and sorry for my very late response. In particular, thanks for the link to the OpenPhil report - very interesting! To your question: I now changed my mind again and tentatively think that you are right. Here's how I think about it now, though I still feel unsure whether I made a reasoning error somewhere:

There's some distribution over your probabilistic judgements that shows how frequently you report a given probability for propositions that turned out to be true. It might show, e.g., that for true propositions you report 90% probability in 10% of all your probability judgements. This might be the case even if you are perfectly calibrated, as long as, for false propositions, you report 90% in (10/9)% of all your probability judgements (assuming, say, that true and false propositions are equally common among the things you forecast). Then it would still be the case that 90% of your 90% probability judgements turn out to be true - and hence you are perfectly calibrated at 90%.

So, given these assumptions, what would the Bayes factor for your 90% judgement in "rain today" be?
P(you give rain 90% | rain) should be 10%, since I'm effectively sampling your 90% judgement from the distribution in which, for true propositions, your 90% judgement occurs 10% of the time. For the same reason, P(you give rain 90% | no rain) = (10/9)%. Therefore, the Bayes factor is 10% / (10/9)% = 9.
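A quick numeric check of this (my own made-up numbers, with the implicit 50/50 split between rainy and non-rainy days made explicit):

```python
# 9000 rainy and 9000 non-rainy days, forecast by a perfectly calibrated forecaster.
n_rain, n_no_rain = 9000, 9000

say90_and_rain = 900     # you say "90%" on 10% of the rainy days
say90_and_no_rain = 100  # ...and on (10/9)% of the non-rainy days

# Calibration at 90%: fraction of your "90%" judgements that came true.
print(say90_and_rain / (say90_and_rain + say90_and_no_rain))        # 0.9

# Bayes factor P(say 90% | rain) / P(say 90% | no rain).
print((say90_and_rain / n_rain) / (say90_and_no_rain / n_no_rain))  # ~9
```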

I suspect that my explanation is overly complicated - feel free to point out more elegant ones :)

By no means an expert, but I think point estimates of probability miss a lot of relevant information that is captured by confidence/credibility intervals and distributions, such as the point estimate's reliability. In general, it probably pays to think about forecasting not as something new and unique that humans do, but in terms of predictions made, say, in physics, where people calculate probability distributions and confidence intervals for particle masses, the value of the cosmological constant, element abundances on exoplanets, etc.

TL;DR: this community tends to reinvent the wheel a lot.