You probably already know that you can incentivize honest reporting of probabilities using a proper scoring rule like the log score, but did you know that you can also incentivize honest reporting of confidence intervals?
To incentivize reporting of a 90% confidence interval, take the score s + 20d, where s is the size of your confidence interval, and d is the distance between the true value and the interval. d is 0 whenever the true value is in the interval.
This incentivizes not only giving an interval that contains the true value 90% of the time, but also distributes the remaining 10% equally between overestimates and underestimates.
To keep the lower bound of the interval important, I recommend measuring s and d in log space. So if the true value is x and the interval is (a, b), then s is log(b) − log(a), and d is log(x) − log(b) for underestimates and log(a) − log(x) for overestimates. Of course, you need questions with positive answers to do this.
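As a sketch, the log-space variant of the 90% rule might look like this (the function name is mine); the check at the end shows the point of using log space, namely that a pure change of units leaves the score unchanged:

```python
import math

# Log-space 90% interval score: size and distance are log-ratios,
# so the score does not depend on the units of the answer.
# Requires lower, upper, and truth to all be positive.
def log_interval_score(lower, upper, truth):
    size = math.log(upper / lower)
    if truth < lower:        # overestimate: interval sits above the truth
        distance = math.log(lower / truth)
    elif truth > upper:      # underestimate: interval sits below the truth
        distance = math.log(truth / upper)
    else:
        distance = 0.0
    return size + 20 * distance

# Multiplying everything by 1000 (a pure change of units) gives the same score.
a = log_interval_score(10, 20, 30)
b = log_interval_score(10000, 20000, 30000)
print(abs(a - b) < 1e-9)  # True
```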
To do a p% confidence interval in general, take the score s + (200/(100 − p))·d.
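A minimal sketch of the rule in code (the function name and the `coverage` parameter are my own framing of the above):

```python
def interval_score(lower, upper, truth, coverage=0.9):
    """Penalty score for a stated confidence interval: interval size
    plus a scaled miss distance. Lower scores are better."""
    size = upper - lower
    # Distance is zero whenever the truth lands inside the interval.
    distance = max(lower - truth, 0.0, truth - upper)
    # The factor 2/(1 - coverage) splits the allowed misses equally
    # between the two tails (it equals 20 for a 90% interval).
    return size + (2.0 / (1.0 - coverage)) * distance

print(interval_score(10, 20, 15))            # truth inside: just the size
print(round(interval_score(10, 20, 25), 6))  # truth above: size plus 20 per unit of miss
```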
This can be used to make calibration training, using something like Wits and Wagers cards, more fun. I also think it could be turned into an app, if one could get a large list of questions with numerical answers.
EDIT: I originally said you can do this for multiple choice questions, which is wrong. It only works for questions with two answers.
(In a comment, to keep top level post short.)
One cute way to do calibration for probabilities is to construct a spinner. If you have a true/false question, you can construct a spinner which is divided up according to your probability that each answer is the correct answer.
If you were to then spin the spinner once, and win if it comes up on the correct answer, this would not incentivize constructing the spinner to represent your true beliefs. The best strategy would be to put all the mass on the most likely answer.
However, if you spin the spinner twice, and win if either spin lands on the correct answer, you are actually incentivized to make the spinner match your true probabilities!
One reason this game is nice is that it does not require having a correctly specified utility function that you are trying to maximize in expectation. There are only two states, win and lose, and as long as winning is preferred to losing, you should construct your spinner with your true probabilities.
Unfortunately this doesn't work for the confidence intervals, since they seem to require a score that is not bounded below.
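The truthful-spinner claim can be checked numerically; here is a small sketch (names mine) that grid-searches over spinner settings for a true/false question:

```python
# Two-spin game on a true/false question: the answer is "true" with
# probability p, your spinner shows "true" with probability q, and you
# lose only if both spins land on the wrong answer.
def win_prob(p, q):
    return 1 - (p * (1 - q) ** 2 + (1 - p) * q ** 2)

p = 0.7
# Search spinner settings q on a grid; the winner is the honest q = p.
best = max((q / 100 for q in range(101)), key=lambda q: win_prob(p, q))
print(best)  # 0.7
```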
Two spins only works for two possible answers. Do you need N spins for N answers?
You are correct. It doesn't work for more than two answers. I knew that when I thought about this before, but forgot. Corrected above.
I don't have a nice algorithm for N answers. I tried a bunch of the obvious simple things, and they don't work.
I think an algorithm for N outcomes is: spin twice, gain 1 every time you get the answer right, but lose 1 if both spins land on the same outcome.
One can "see intuitively" why it works: when we increase the spinner-probability of outcome i by a small delta (imagining that all other probabilities stay fixed, and not worrying about the fact that our sum of probabilities is now 1 + delta) then the spinner-probability of getting the same outcome twice goes up by 2 x delta x p[i]. However, on each spin we get the right answer delta x q[i] more of the time, where q[i] is the true probability of outcome i. Since we're spinning twice we get the right answer 2 x delta x q[i] more often. These cancel out if and only if p[i] = q[i]. [Obviously some work would need to be done to turn that into a proof...]
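That intuition can be sanity-checked numerically. In a small sketch (names mine), the expected payoff of the spin-twice rule works out to 2·Σ q[i]p[i] − Σ p[i]², and the honest spinner should beat any distorted one:

```python
# Expected payoff of the N-outcome rule: +1 for each spin that hits
# the right answer, -1 if the two spins agree. With true probabilities
# `truth` and spinner probabilities `spinner`, this equals
# 2 * sum(truth[i] * spinner[i]) - sum(spinner[i] ** 2).
def expected_payoff(truth, spinner):
    hits = 2 * sum(t * s for t, s in zip(truth, spinner))
    agree = sum(s * s for s in spinner)
    return hits - agree

truth = [0.5, 0.3, 0.2]
honest = expected_payoff(truth, truth)
rivals = [[1, 0, 0], [0.4, 0.4, 0.2], [1 / 3, 1 / 3, 1 / 3]]
print(all(expected_payoff(truth, r) < honest for r in rivals))  # True
```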
Just to be clear: if you spin twice and both come up right, you're gaining 2 and then losing 1? (I.e., this is equivalent to what you wrote in an earlier version of the comment?)
That's right.
(Why does the two-spin game work?)
In a true/false question that is true with probability p, if you assign probability q, your probability of losing is p(1−q)^2 + (1−p)q^2. (The probability that the answer is true and you spin false twice, plus the probability that the answer is false and you spin true twice.)
This probability is minimized when its derivative with respect to q is 0, or at the boundary. This derivative is −2p(1−q) + 2(1−p)q, which is 0 when q = p. We now know the minimum is achieved when q is 0, 1, or p. The probability of losing when q = 0 is p. The probability of losing when q = 1 is 1−p. The probability of losing when q = p is p(1−p), which is the lowest of the three options.
Copied without LaTeX:
In a true/false question that is true with probability p, if you assign probability q, your probability of losing is p(1−q)^2 + (1−p)q^2. (The probability that the answer is true and you spin false twice, plus the probability that the answer is false and you spin true twice.)
This probability is minimized when its derivative with respect to q is 0, or at the boundary. This derivative is −2p(1−q) + 2(1−p)q, which is 0 when q = p. We now know the minimum is achieved when q is 0, 1, or p. The probability of losing when q = 0 is p. The probability of losing when q = 1 is 1−p. The probability of losing when q = p is p(1−p), which is the lowest of the three options.
This is called either Brier or quadratic scoring, not sure which.
Not exactly. Its expected value is the same as the expected value of the Brier score, but the score itself is either 0 or 1.
For some reason, the LaTeX is not rendering for me. I can see it when I edit the comment, but not otherwise.
The comment has just started rendering for me.
Edit: Oh wait, no, you just added another comment without LaTeX.
Huh, that’s really weird. The server must somehow be choking on the specific LaTeX you posted. Will check it out.
Ok, I found the bug. I will fix it in the morning.
And you did! Cheers for your hard work. :)
This is an underappreciated fact! I like how simple the rule is when framed in terms of size and distance.
You mention both the linear and log rules. The log rule has the benefit of being scale-invariant, so your score isn't affected by the units the answer is measured in, but it can't deal with negatives and gets overly sensitive around zero. The linear rule doesn't blow up around zero, is shift-invariant, and can handle negative values fine. The best generic scoring rule would have all these properties.
Turns out (based on Lambert and Shoham, "Eliciting truthful answers to multiple choice questions") that all scoring rules for symmetric confidence intervals (a, b) with coverage probability 1−α can be represented (up to affine transformation) as

S(a, b; x) = g(b) − g(a) + (2/α)·I(x < a)·(g(a) − g(x)) + (2/α)·I(x > b)·(g(x) − g(b))

where x is the true value, I is the indicator function, and g(⋅) is any increasing function. Unsurprisingly, the linear rule uses g(x) = x and the log rule uses g(x) = log(x). If we want scale-invariance on the whole real line, the first thing I'd be tempted to do is use log(x) for positive x and −log(|x|) for negative x, except for that pesky bit about going off to ±∞ around zero. Let's paste in a linear portion around zero so the function is increasing everywhere: g(x) = I(|x| ≤ 10)·(x/10) + I(|x| > 10)·sign(x)·log10(|x|)
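A sketch of that piecewise g in code; the two pieces meet continuously at |x| = 10, since 10/10 = log10(10) = 1:

```python
import math

# Piecewise g: linear within 10 of zero, signed log10 outside.
# The pieces agree at x = +/-10, so g is continuous and increasing.
def g(x):
    if abs(x) <= 10:
        return x / 10
    return math.copysign(math.log10(abs(x)), x)

print(g(10), g(100), g(-1000))  # 1.0 2.0 -3.0
```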
Using this g(⋅), the score is sensitive to absolute values around zero and sensitive to relative values on both sides of it. Since the rule expects more accuracy around zero, the origin should vary depending on question domain. Like if the question is about dates, accuracy should be the highest around the present year and get less accurate going into the past or future. That suggests we should set the origin at the present year. For temperatures, the origin should probably be room temperature. Are there any other standard domains that should have a non-zero origin? An alternate origin t can be added as a shift everywhere: g_t(x) = g(x − t).
Not something you'd want to calculate by hand, but if someone implements a calibration app, this has more consistent scores. Going one step further, the scores could be made more interpretable by comparison to a perfectly calibrated reference score: 100+k⋅(Sα