Kevin S. Van Horn


A Proper Scoring Rule for Confidence Intervals

I'm used to seeing normal (or log-normal) distributions fit to subjective confidence intervals, because the confidence intervals are then used as inputs to some subjective probabilistic analysis. I assumed that was what you were doing, given that you use the actual attained value x, and not just which of the three possibilities A: (x < left), B: (left < x < right), or C: (right < x) occurred.
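As a minimal sketch of the fitting step described above (the interval endpoints and confidence level here are hypothetical, and the symmetric-normal fit is one common convention, not necessarily the one the post uses):

```python
from statistics import NormalDist

# Hypothetical 90% central confidence interval elicited from a forecaster.
left, right = 10.0, 30.0
confidence = 0.90

# z such that P(-z < Z < z) = confidence for a standard normal.
z = NormalDist().inv_cdf((1 + confidence) / 2)

mu = (left + right) / 2           # interval midpoint becomes the mean
sigma = (right - left) / (2 * z)  # half-width, scaled by z, becomes the s.d.
```

With these values the fitted normal assigns probability 0.90 to the stated interval, which is what makes it usable in downstream probabilistic analysis.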

Hmmm... you seem to have evaded the theorem that the only strictly proper local scoring rule is the logarithmic score: you seek only a confidence interval, yet you score it using more information than just which region (A, B, or C) the outcome falls in.

It would help to see a proof of the claim; do you have a reference or a link to a URL giving the proof?

A Proper Scoring Rule for Confidence Intervals

Not exactly. Its expected value is the same as the expected value of the Brier score, but the score itself is either 0 or 1.

A Proper Scoring Rule for Confidence Intervals

[Edit: I'm retracting this comment, as I made some incorrect assumptions about Scott's claim.] This is wrong. It is well known that the only strictly proper scoring rule depending only on the probability assigned to the value that actually occurs is the logarithmic scoring rule (when there are more than two alternatives), up to translation and/or positive scaling. In this case, that would be log(Normal(x | mu, sigma)), where x is the value that occurs, and mu and sigma^2 are the mean and variance of the normal distribution fit to the interval you defined at the given confidence level. Dropping additive constants and rescaling by 2, this simplifies to

-log(sigma^2) - (x - mu)^2 / sigma^2.
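To check that this expression really is an affine transformation of the log normal density, here is a small numerical verification (the values of mu, sigma, and x are hypothetical):

```python
from math import log, pi
from statistics import NormalDist

mu, sigma = 20.0, 6.0   # hypothetical normal fit to an elicited interval
x = 27.5                # hypothetical attained value

# Full logarithmic score: log of the normal density at x.
log_score = log(NormalDist(mu, sigma).pdf(x))

# Simplified form from the comment above.
simplified = -log(sigma**2) - (x - mu)**2 / sigma**2

# Affine relation: 2 * log_score + log(2*pi) == simplified.
assert abs(2 * log_score + log(2 * pi) - simplified) < 1e-9
```

Since log N(x | mu, sigma^2) = -(1/2) log(2 pi) - (1/2) log(sigma^2) - (x - mu)^2 / (2 sigma^2), doubling and dropping the constant -log(2 pi) gives exactly the displayed expression, so the two rules rank forecasts identically.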

Your scoring rule is not a translation and/or positive scaling of the logarithmic scoring rule.