A logarithmic scoring rule to elicit a probability distribution on a random variable $X$ is $S(q, x) = a \log_2 q(x)$: you report a distribution $q$ and are paid $a \log_2 q(x)$ when $X$ turns out to be $x$. Something that always seemed clear to me but I haven’t seen explicitly written anywhere is that the parameter $a$ is just the price of information on $X$.
Firstly: for an agent with true belief $p$, the expected score from making a report $q$ is $\mathbb{E}_{x \sim p}[a \log_2 q(x)] = -a\,H(p, q)$, where $H(p, q) = -\sum_x p(x) \log_2 q(x)$ is cross-entropy. This is maximized when $q = p$.
Well, this is just the standard proof that logarithmic scoring is proper. This max score itself is $-a\,H(p)$, i.e. $-a$ times the entropy in $p$. So your expected earning is exactly proportional to the information you have on $X$ (the negative of the entropy in your probability distribution for it), and the proportionality constant, the price of a bit of information on $X$, is $a$.
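As a sanity check, here is a minimal numerical sketch (the three-outcome $X$ and $a = 10$ are arbitrary choices for illustration) confirming that the honest report maximizes the expected score and earns exactly $-a\,H(p)$:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 10.0                       # price of one bit: the score is a * log2 q(x)
p = np.array([0.5, 0.3, 0.2])  # true belief over a three-outcome X

def expected_score(p, q, a):
    """E_{x~p}[a * log2 q(x)] = -a * H(p, q), i.e. -a times the cross-entropy."""
    return a * np.sum(p * np.log2(q))

honest = expected_score(p, p, a)   # equals -a * H(p)
for _ in range(5):
    q = rng.dirichlet(np.ones(3))  # a random distorted report
    assert expected_score(p, q, a) <= honest
print(f"honest report earns {honest:.3f} = -a * H(p)")
```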
This can be made even clearer by considering the value of some other piece of information $E$. If $E$ occurs and you learn this fact, you will bet $q = p(\cdot \mid E)$, which would give you an expected score of $-a\,H(p(\cdot \mid E))$. Taking the expectation over $E$, your expected score if you acquire $E$ is $-a \sum_e P(E = e)\, H(p(\cdot \mid E = e)) = -a\,H(X \mid E)$, which is $-a$ times the conditional entropy $H(X \mid E)$. Thus the expected profit from acquiring $E$ is $-a\,H(X \mid E) - (-a\,H(X)) = a\,[H(X) - H(X \mid E)] = a\,I(X; E)$.
So the value of $E$ is precisely $a$ multiplied by its mutual information with $X$, i.e. $a$ is the price of one bit of information on $X$.
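To see this identity concretely, here is a small sketch (the $2 \times 2$ joint distribution over $(E, X)$ is made up for illustration) computing the profit from acquiring $E$ two ways: as the improvement in expected score, and as $a$ times an independently computed $I(X; E)$:

```python
import numpy as np

a = 10.0  # price of one bit
# A hypothetical joint distribution over (E, X): rows index e, columns index x.
joint = np.array([[0.30, 0.10],
                  [0.15, 0.45]])
p_e = joint.sum(axis=1)   # marginal P(E = e)
p_x = joint.sum(axis=0)   # marginal p(x)

def H(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Expected earnings: -a * H(X) without E, and -a * H(X | E) after acquiring E
# (you bet p(. | E = e) and average over e).
score_before = -a * H(p_x)
score_after = -a * sum(p_e[e] * H(joint[e] / p_e[e]) for e in range(len(p_e)))
profit = score_after - score_before

# Independent check: the profit should equal a * I(X; E).
mi = sum(joint[e, x] * np.log2(joint[e, x] / (p_e[e] * p_x[x]))
         for e in range(joint.shape[0]) for x in range(joint.shape[1]))
print(f"profit = {profit:.4f},  a * I(X; E) = {a * mi:.4f}")
```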
I assume this is widely known. But I think it’s still pedagogically useful to actually think in these terms because it sheds light on things like:
[1] “Market Making with Decreasing Utility for Information” by Miroslav Dudik et al. https://arxiv.org/abs/1407.8161v1
[2] “Transaction costs: are they just costs?” by Yoram Barzel. http://www.jstor.org/stable/40750776
[3] “IP+ like barbed wire?” by Robin Hanson. https://www.overcomingbias.com/p/ip-like-barbed-wirehtml