## LessWrong

A tangent in the previous open thread ended with an unanswered question, so I'm reposting the question here.

It seems like the scheme I've been proposing here is not a common one. So how do people usually express the obvious difference between a probability estimate of 50% for a coin flip (unlikely to change with more evidence) vs. a probability estimate of 50% for AI being developed by 2050 (very likely to change with more evidence)?


We can simplify this even further, to a fair coin versus an unknown weighted coin.

One way of viewing the difference is to say that you have different causal models of the two situations: with an unknown weighted coin there is an extra parameter (the coin's bias) to gather evidence about, so new evidence changes your model of the world more.
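One common way to formalize that extra parameter is a Beta prior over the coin's bias; the sketch below (with hypothetical prior parameters I've chosen for illustration) shows how the same evidence barely moves a tight prior but moves a flat one a lot:

```python
# Model the coin's bias p with a Beta(a, b) prior. The posterior after
# observing flips is Beta(a + heads, b + tails), so the posterior mean is:
def posterior_mean(a, b, heads, tails):
    """Mean of the Beta posterior after observing the given flips."""
    return (a + heads) / (a + b + heads + tails)

# Both coins start at a 50% estimate:
# fair coin: tight prior Beta(500, 500); unknown weighted coin: flat Beta(1, 1).
print(posterior_mean(500, 500, 0, 0))  # 0.5
print(posterior_mean(1, 1, 0, 0))      # 0.5

# After observing 8 heads in 10 flips, the flat prior moves far more:
print(posterior_mean(500, 500, 8, 2))  # ≈ 0.503
print(posterior_mean(1, 1, 8, 2))      # 0.75
```

The point estimate is identical before any flips, but the two models respond to the same evidence very differently, which is exactly the difference the question is asking how to express.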

**RolfAndreassen** · 1 point · 6y
I don't know if this is common, but perhaps you can use error bars on the probability estimates? So the coin is 50% ± 0.1%, but the AI is 50% ± 20%.
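Those error bars can be read as the spread of a distribution over the probability itself; a minimal sketch, assuming Beta priors with parameters chosen purely for illustration:

```python
import math

def beta_std(a, b):
    """Standard deviation of a Beta(a, b) distribution."""
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# A very tight prior over the coin's bias reproduces a ~0.1% error bar:
print(round(beta_std(125_000, 125_000), 4))  # 0.001, i.e. 50% ± 0.1%

# A loose prior over P(AI by 2050) gives a wide bar:
print(round(beta_std(3, 3), 2))  # ≈ 0.19, i.e. roughly 50% ± 20%
```

Under this reading, "error bars on a probability" are just the first two moments of a metaprobability distribution, which connects this reply to the next one.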
**one_forward** · 5 points · 6y
Your scheme seems to be Jaynes's A_p distribution, discussed on LW here: [http://lesswrong.com/lw/igv/probability_knowledge_and_metaprobability/].