I had a conversation as a tangent to the previous open thread that left off with an unanswered question, so I'm reposting the question here.

It seems like the scheme I've been proposing here is not a common one. So how do people usually express the obvious difference between a probability estimate of 50% for a coin flip (unlikely to change with more evidence) vs. a probability estimate of 50% for AI being developed by 2050 (very likely to change with more evidence)?

We can simplify this even further, to a fair coin versus an unknown weighted coin.

One way of viewing the difference is that you have different causal models of the two situations: with an unknown weighted coin there is an extra parameter (the coin's bias) to gather evidence about, so gathering evidence changes your model of the world much more. The next comment sketches this concretely.
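A quick way to make the "extra parameter" point concrete: treat the weighted coin's bias as a Beta-distributed parameter and do a conjugate update. This is just an illustrative sketch, not anything from the thread; the Beta(1,1) prior and the 8-heads-in-10 data are made up for the example.

```python
# Contrast the two causal models: for the fair coin, p = 0.5 is fixed;
# for the unknown weighted coin, the bias p is an extra parameter we
# maintain a Beta distribution over and update on evidence.

def beta_update(alpha, beta, heads, tails):
    """Conjugate update of a Beta(alpha, beta) prior on the coin's bias."""
    return alpha + heads, beta + tails

# Both models start at a 50% estimate.
fair_coin_estimate = 0.5   # no free parameter; evidence can't move this
alpha, beta = 1.0, 1.0     # Beta(1,1): uniform uncertainty over the bias

# Observe 8 heads in 10 flips (hypothetical data).
alpha, beta = beta_update(alpha, beta, heads=8, tails=2)
weighted_estimate = alpha / (alpha + beta)  # posterior mean = 9/12 = 0.75

print(fair_coin_estimate)  # 0.5  -- unchanged by the evidence
print(weighted_estimate)   # 0.75 -- moved a lot by the same evidence
```

The same ten flips leave the fair-coin estimate untouched but shift the weighted-coin estimate from 0.50 to 0.75, which is exactly the asymmetry the original question points at.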

RolfAndreassen: I don't know if this is common, but perhaps you can use error bars on the probability estimates? So the coin is 50% ± 0.1%, but the AI is 50% ± 20%.
one_forward: Your scheme seems to be Jaynes's A_p distribution, discussed on LW here: http://lesswrong.com/lw/igv/probability_knowledge_and_metaprobability/.
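For what it's worth, the A_p idea can be sketched as keeping a distribution over the probability itself; its standard deviation then plays the role of RolfAndreassen's error bars. A minimal sketch, with Beta shapes chosen purely for illustration (a sharply peaked one for the coin, a broad one for the AI question):

```python
import math

def beta_mean_std(a, b):
    """Mean and standard deviation of a Beta(a, b) distribution over p."""
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Hypothetical metaprobability distributions, both centered on 0.5:
# coin       -> Beta(5000, 5000): very peaked, evidence barely moves it
# AI by 2050 -> Beta(2, 2): broad, evidence can move it a lot
for label, (a, b) in [("coin", (5000, 5000)), ("AI by 2050", (2, 2))]:
    mean, std = beta_mean_std(a, b)
    print(f"{label}: {mean:.2f} +/- {std:.3f}")
# coin:       0.50 +/- 0.005
# AI by 2050: 0.50 +/- 0.224
```

Both estimates have the same mean, so a bare "50%" can't distinguish them; the spread of the metaprobability distribution is what carries the "how much will this move with evidence" information.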

Open thread, 18-24 August 2014

by David_Gerard, 18th Aug 2014, 81 comments

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday and end on Sunday.