In Hero Licensing, Eliezer Yudkowsky states that in 2010, he would have given himself 10% chance of HPMoR being very successful.

He then goes on to explain why he thought that instead of something lower; but I don't understand why he thought that instead of something higher: given that HPMoR did end up successful, it looks like it actually had a higher than 10% chance of happening. Or am I, by coincidence, in the 1-in-10 worlds where it ended up successful? How can I tell?

If I try to use Bayes's law: let's call A "HPMoR is successful in 2022" and B "In 2010, there was a 10% chance that HPMoR would be successful".
I want to update B based on A: P(B|A) = P(B)P(A|B)/P(A)
P(A|B) = 10%, P(A) = ~1
So it now appears 1/10 as likely as it did in 2010 that EY's prediction was correct.
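The update above can be sketched in a few lines of Python (the function name and the example prior of 0.5 are illustrative, not from the post):

```python
# Sketch of the Bayes's-law update described above.
# B: "EY's 2010 estimate of 10% was correct"; A: "HPMoR is successful in 2022".
def posterior(p_b, p_a_given_b, p_a):
    """P(B|A) = P(B) * P(A|B) / P(A)."""
    return p_b * p_a_given_b / p_a

p_a_given_b = 0.10  # if the 10% estimate was right, success had a 10% chance
p_a = 1.0           # success is observed, so treated as ~certain

# Whatever prior you put on B, the update multiplies it by 0.10:
print(posterior(0.5, p_a_given_b, p_a))  # 0.05
```

This makes the 1/10 factor explicit: it doesn't depend on the prior P(B), which cancels out of the ratio P(B|A)/P(B).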

Is this calculation correct?

Obviously, I'm not that interested in this particular result. In general, how do I improve my own prediction-making based on evidence?

Answers

The issue is that a probability for a single event that will either happen or not doesn't really make sense in a literal, frequentist way (any single macro-scale event has a ~0% or ~100% chance of happening).

I think when EY says he had a 10% chance of HPMoR being successful, the claim should be taken in the context of calibration, not as a claim that he's actually going to attempt it 10 times and see how often he succeeds.

To see whether he's calibrated, you'd need to take the other predictions in his 10% probability bucket, find out how often they came true, and see how far that frequency is from 10%. I'm not sure whether EY does this, but you can see an example from Scott here:
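The bucket check described above can be sketched as follows (the predictions here are made-up examples, not EY's actual record):

```python
# Illustrative calibration check: group predictions by stated probability
# bucket and compare the empirical frequency with the bucket's value.
# Each entry is (stated_probability, whether_it_happened) -- made-up data.
predictions = [
    (0.10, False), (0.10, False), (0.10, True), (0.10, False),
    (0.10, False), (0.10, False), (0.10, False), (0.10, False),
    (0.10, False), (0.10, True),
]

def empirical_frequency(preds, bucket):
    """Fraction of predictions in the given bucket that came true."""
    outcomes = [happened for p, happened in preds if p == bucket]
    return sum(outcomes) / len(outcomes)

freq = empirical_frequency(predictions, 0.10)
print(freq)  # 0.2: 2 of these 10 "10%" predictions came true
```

A well-calibrated forecaster's 10% bucket converges toward a 0.10 frequency as the number of predictions grows; with only a handful of predictions, a gap like 0.2 vs 0.1 is weak evidence either way.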

Comments

I think he was running the same algorithm he used when the LW community "failed" to buy bitcoin in bulk. Here's the response if you are interested in reading about a similar case to this one.