
For a non-uniform distribution we can use the analogous formula `(1.0 - p(before 2011)) / (1.0/0.9 - p(before 2011))`. This amounts to adding an extra blob of (uncounted) probability density, such that if the AI is "actually built" anywhere within the distribution including the uncounted bit, the prior probability (0.9) is the ratio `(counted) / (counted + uncounted)`, and then cutting off the part where we know the AI has not been built.
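As a minimal sketch of this formula in plain Haskell (the function and argument names here are just illustrative, not from the original):

```haskell
-- Conditional probability that AI is ever built, given that it was not
-- built in the probability mass we have already passed through.
--   prior:   prior probability that AI is possible at all (0.9 in the text)
--   pBefore: mass of the timeline distribution already ruled out
condAI :: Double -> Double -> Double
condAI prior pBefore = (1.0 - pBefore) / (1.0 / prior - pBefore)

main :: IO ()
main = print (condAI 0.9 0.0)  -- with nothing ruled out, we recover the prior
```

With `pBefore = 0` the formula collapses to the prior, and as `pBefore` approaches 1 it falls to 0, which is the behavior the cutting-off picture describes.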

For a normal(mu = 2050, sigma = 10) distribution, in Haskell this is `let ai year = let p = cumulative (normalDistr 2050 10) year in (1.0 - p) / (1.0/0.9 - p)`¹ (note that `normalDistr` takes the standard deviation, not the variance, as its second argument). Evaluating on a few different years:

• P(AI|not by 2011) = 0.899996
• P(AI|not by 2030) = 0.8979
• P(AI|not by 2050) = 0.8181...
• P(AI|not by 2070) = 0.16995
• P(AI|not by 2080) = 0.012
• P(AI|not by 2099) = 0.00028
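A self-contained version of the footnoted snippet, substituting a standard polynomial approximation to erf for the Hackage `statistics` package so it runs with the Prelude alone:

```haskell
-- Standard normal CDF via the Abramowitz & Stegun 7.1.26 approximation
-- to erf (absolute error below about 1.5e-7).
normCdf :: Double -> Double -> Double -> Double
normCdf mu sigma x = 0.5 * (1 + erfApprox ((x - mu) / (sigma * sqrt 2)))
  where
    erfApprox z
      | z < 0     = negate (erfApprox (negate z))
      | otherwise =
          let t    = 1 / (1 + 0.3275911 * z)
              poly = t * (0.254829592 + t * (-0.284496736
                     + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))))
          in 1 - poly * exp (negate (z * z))

-- P(AI ever built | not built by the given year), with prior 0.9 and a
-- normal(2050, 10) timeline distribution, as in the text.
ai :: Double -> Double
ai year = (1.0 - p) / (1.0 / 0.9 - p)
  where p = normCdf 2050 10 year

main :: IO ()
main = mapM_ (print . ai) [2011, 2030, 2050, 2070, 2080, 2099]
```

The outputs match the list above to within the approximation's error.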

This drops off far faster than the uniform case, once 2050 is reached. We can also use this survey as an interesting source for a distribution. The median estimate for P=0.5 was 2050, which gives us the same mu, and the median for P=0.1 was 2028, which fits sigma ≈ 17 years². The survey also has P=0.9 by 2150, suggesting our prior of 0.9 is in the right ballpark. Plugging the same years into the new distribution:

• P(AI|not by 2011) = 0.899
• P(AI|not by 2030) = 0.888
• P(AI|not by 2050) = 0.8181...
• P(AI|not by 2070) = 0.52
• P(AI|not by 2080) = 0.26
• P(AI|not by 2099) = 0.017
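The sigma ≈ 17 fit can be checked directly from the survey's 10% quantile; the z-value below is a looked-up table constant, not computed here:

```haskell
-- The standard-normal 10th-percentile z-value (a table constant).
z10 :: Double
z10 = -1.2816

-- Solve (2028 - mu) / sigma = z10 with mu = 2050.
sigmaFromSurvey :: Double
sigmaFromSurvey = (2028 - 2050) / z10

main :: IO ()
main = print sigmaFromSurvey  -- roughly 17 years
```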

Even by 2030 our confidence will have changed little.

¹Using Statistics.Distribution.Normal from Hackage.

²Technically, the survey seems to have asked about unconditional probabilities, not probabilities conditional on AI being possible at all, whereas the latter is what we want. We might then want to fit a normal distribution so that cdf(2028) = 0.1/0.9 and cdf(2050) = 0.5/0.9, which would be a bit harder (we can't just use 2050 as mu).
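A sketch of that harder fit: the two quantile constraints are linear in mu and sigma once expressed in standard-normal z-values, so they can be solved directly. The probit values below are looked-up constants (an assumption), not computed:

```haskell
-- Fit normal(mu, sigma) so that cdf(2028) = 0.1/0.9 and cdf(2050) = 0.5/0.9.
-- The constraints are (2028 - mu)/sigma = z1 and (2050 - mu)/sigma = z2,
-- where z1 = probit(0.1/0.9) and z2 = probit(0.5/0.9) are table constants.
fitNormal :: (Double, Double)
fitNormal = (mu, sigma)
  where
    z1    = -1.2206  -- probit(1/9), looked up
    z2    =  0.1397  -- probit(5/9), looked up
    sigma = (2050 - 2028) / (z2 - z1)
    mu    = 2028 - sigma * z1

main :: IO ()
main = print fitNormal  -- mu a bit under 2050, sigma a bit under 17
```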


The intuitive explanation for why the normal distribution drops off faster is that it makes such strong predictions about the region around 2050; once you've reached 2070 with no AI, you've 'wasted' most of your possible drawers, to continue the original blog post's metaphor.

To get a visual analogue of the probability mass, you could map the normal curve onto a uniform distribution, something like 'if we imagine each year at the peak corresponds to 30 years in a...

[anonymous]: Cool! Would it be easy for you to repeat this, replacing the normal distribution with an exponential distribution? I think that's a more natural way to model "waiting for something".
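For what it's worth, a quick sketch of that variant, assuming an exponential waiting time that starts in 2011 and has its median placed at 2050 (both choices are assumptions, made so it lines up with the normal case above):

```haskell
-- Same update formula, but with an exponential waiting-time distribution
-- starting at 2011 whose median is 2050, so lambda = ln 2 / 39.
aiExp :: Double -> Double
aiExp year = (1.0 - p) / (1.0 / 0.9 - p)
  where
    lambda = log 2 / (2050 - 2011)
    p      = 1 - exp (negate (lambda * (year - 2011)))

main :: IO ()
main = mapM_ (print . aiExp) [2030, 2050, 2070, 2099]
```

At 2050 this agrees with the normal case (p = 0.5 gives 0.8181...), but afterwards it decays far more gently, reflecting the near-memorylessness of the exponential.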