Why don't probabilities come with error margins, or other means of describing uncertainty in their assessments?

If I assess a prior probability P(new glacial period starting within the next 100 years) at, say, 0.1, shouldn't I also communicate how certain I feel about that judgement?

A scientist might make the same estimate but be more sure of its accuracy than I am.

In our everyday judgements we often use such package deals:

A: Where's Jamie?

B: I think he went to the club house, but you know Jamie - he could be anywhere.

High P, high uncertainty.

A: Where's Susie? Do you think she went astray after that heated argument?

B: No, I'm certain she would *never* do that. She must have gone to a friend's place.

High P, low uncertainty.
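One common way to capture this distinction (a sketch of the general idea, not something from the posts below) is to treat the probability itself as a random variable, e.g. with a Beta distribution. Two estimators can then share the same point estimate while differing in how spread out their distributions are. The parameters below are made up purely for illustration:

```python
# Sketch: expressing uncertainty *about* a probability estimate by
# treating the probability itself as a Beta-distributed random variable.
# Both distributions below have the same mean (0.1) -- the same point
# estimate -- but very different spreads, i.e. "error margins".

import random
from statistics import mean, stdev

random.seed(0)

def beta_samples(a, b, n=100_000):
    """Draw n samples from a Beta(a, b) distribution."""
    return [random.betavariate(a, b) for _ in range(n)]

# A layperson's estimate: mean 0.1, wide spread (low confidence).
loose = beta_samples(1, 9)
# A scientist's estimate: mean 0.1, narrow spread (high confidence).
tight = beta_samples(100, 900)

print(f"loose: mean={mean(loose):.2f}, sd={stdev(loose):.3f}")
print(f"tight: mean={mean(tight):.2f}, sd={stdev(tight):.3f}")
```

Both report "P = 0.1", but the standard deviation makes explicit how much the estimate could shift under new evidence.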

Relevant previous LW posts on the A_p distribution and model stability:

http://lesswrong.com/lw/igv/probability_knowledge_and_metaprobability/

http://lesswrong.com/lw/h78/estimate_stability/

http://lesswrong.com/lw/hnf/model_stability_in_intervention_assessment/

thanks :)