In Bayesian jargon, people talk about having a "flat prior" when they mean being very uncertain about a question. But more often you just give a probability that you think a question is true. We don't normally draw probability distributions to share with each other.

I'm wondering if there's a natural way to talk not just about the probability you think something is true, but an estimate of your confidence, in some quantitative way? And what would it mean to be "well-calibrated" in your uncertainty?

For example, I might be quite confident that a particular coin is fair and will come up heads 50% of the time (because I've gathered a lot of data), while being much less confident about another 50% bet even though I think it's as likely as not.
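One way to make the coin example concrete (this is an illustrative modeling choice, not something from the question itself) is to represent your belief about the coin's bias as a Beta distribution: both a heavily-studied fair coin and a totally unknown coin can have mean 0.5, but very different spread:

```python
# Model belief about a coin's heads-probability p as Beta(a, b).
# Both beliefs below answer "50%" to "will the next toss be heads?",
# but they encode very different levels of confidence.

def beta_mean(a, b):
    return a / (a + b)

def beta_var(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Well-studied coin: e.g. after observing 500 heads and 500 tails.
confident = (501, 501)
# Unknown coin: no data, the "flat prior" Beta(1, 1).
uncertain = (1, 1)

for name, (a, b) in [("confident", confident), ("uncertain", uncertain)]:
    print(name, "mean =", round(beta_mean(a, b), 3),
          "variance =", round(beta_var(a, b), 5))
```

The variance (or the full distribution over the bias) is one quantitative answer to "how confident am I?", separate from the point probability itself.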


Answers

> not just about the probability you think something is true, but an estimate of your confidence, in some quantitative way?

I don't think these are actually different things.

The coin example is misleading. Your confidence that the next toss comes up heads is exactly the same as for any other independent 50% bet. Your confidence that "this is a fair coin" is a different proposition, which you might operationalize as, say, a bet that the next 100 tosses yield between 45 and 55 heads, and it will generally not come out to 50%.
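A quick sketch of that distinction (the specific numbers are just for illustration): under a known fair coin, the probability of 45-55 heads in 100 tosses is high, whereas under a flat prior over the coin's bias every head count from 0 to 100 is equally likely (a Beta-binomial with a = b = 1), so the same bet gets 11/101:

```python
from math import comb

def p_heads_in_range(lo, hi, n=100, p=0.5):
    # P(lo <= heads <= hi) in n tosses of a coin with known bias p
    # (sum of binomial probabilities).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(lo, hi + 1))

# Believing the coin is fair: 45-55 heads is quite likely.
print(round(p_heads_in_range(45, 55), 3))

# Believing nothing about the bias (flat prior): every count 0..100
# is equally probable, so P(45..55 heads) = 11 / 101.
print(round(11 / 101, 3))
```

The two beliefs agree on the next single toss but diverge sharply on the 100-toss bet, which is exactly why that bet measures confidence in fairness rather than the per-toss probability.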

Comments

Aleatory and epistemic uncertainty often get wrapped up together, so these estimates are not always proper probabilities nor measures of confidence. You're separating them, good for you!