Suppose that your current estimate of the probability of an AI takeoff coming in the next 10 years is some probability x. As technology constantly becomes more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that, some z > y. My question is: does there come a point in the future where, assuming an AI takeoff has still not happened *in spite of* much more advanced technology, you begin to revise your estimate *downward* with each passing year? If so, how many decades (or centuries) from now would you expect the inflection point in your estimate?

For a non-uniform distribution we can use a similar formula

`(1.0 - p(before 2011)) / (1.0/0.9 - p(before 2011))`

which is analogous to adding an extra blob of (uncounted) probability density, such that if the AI is "actually built" anywhere within the distribution, including the uncounted bit, the prior probability (0.9) is the ratio `(counted) / (counted + uncounted)`, and then cutting off the part where we know the AI has not been built. For a Normal(mu = 2050, sigma = 10) distribution, in Haskell this is

`let ai year = let p = cumulative (normalDistr 2050 10) year in (1.0 - p) / (1.0/0.9 - p)`¹

Evaluating this on a few different years shows that it drops off far faster than the uniform case once 2050 is passed. We can also use this survey as an interesting source for a distribution: the median estimate for P = 0.5 is 2050, which gives us the same mu, and the median for P = 0.1 was 2028, which fits sigma ~ 17 years². We also have P = 0.9 by 2150, suggesting our prior of 0.9 is in the right ballpark. Plugging the same years into this new distribution, even by 2030 our confidence will have changed little.
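The one-liner above leans on the statistics package; here is a dependency-free sketch of the same calculation (the helper names are mine, and the erf approximation limits accuracy to about 1e-7):

```haskell
-- Self-contained version of `ai`: the normal CDF is built from the
-- Abramowitz & Stegun erf approximation 7.1.26 (absolute error ~1.5e-7)
-- instead of the statistics package's `cumulative`.
erfApprox :: Double -> Double
erfApprox x
  | x < 0     = negate (erfApprox (negate x))
  | otherwise = 1 - poly * exp (negate (x * x))
  where
    t    = 1 / (1 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
             + t * (-1.453152027 + t * 1.061405429))))

-- CDF of Normal(mu, sigma) at x.
normalCdf :: Double -> Double -> Double -> Double
normalCdf mu sigma x = 0.5 * (1 + erfApprox ((x - mu) / (sigma * sqrt 2)))

-- P(AI is eventually built | not built by `year`), with a Normal(2050, 10)
-- timeline and a 0.9 prior that AI is possible at all.
ai :: Double -> Double
ai year = let p = normalCdf 2050 10 year
          in (1.0 - p) / (1.0 / 0.9 - p)
```

Evaluating `ai` at 2011, 2030, 2050, and 2070 gives roughly 0.900, 0.898, 0.818, and 0.170: almost flat until 2050, then a collapse.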

¹Using Statistics.Distribution.Normal from the statistics package on Hackage; note that normalDistr takes the mean and standard deviation as arguments.

²Technically, the survey seems to have asked about unconditional probabilities, rather than probabilities conditional on AI being possible, which is what we want. We may then want to actually fit a normal distribution so that cdf(2028) = 0.1/0.9 and cdf(2050) = 0.5/0.9, which would be a bit harder (we can't just use 2050 as mu).
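A sketch of that fit (the erf-based CDF and the bisection quantile below are my own stand-ins for the statistics package's machinery): if z1 and z2 are the standard-normal quantiles of 0.1/0.9 and 0.5/0.9, the two constraints give sigma = (2050 - 2028) / (z2 - z1) and mu = 2050 - z2 * sigma.

```haskell
-- Fit Normal(mu, sigma) so that cdf(2028) = 0.1/0.9 and cdf(2050) = 0.5/0.9.
erfApprox :: Double -> Double
erfApprox x
  | x < 0     = negate (erfApprox (negate x))
  | otherwise = 1 - poly * exp (negate (x * x))
  where
    t    = 1 / (1 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
             + t * (-1.453152027 + t * 1.061405429))))

-- Standard normal CDF, and its inverse by bisection (the CDF is increasing).
stdNormalCdf :: Double -> Double
stdNormalCdf z = 0.5 * (1 + erfApprox (z / sqrt 2))

stdNormalQuantile :: Double -> Double
stdNormalQuantile p = go (-10) 10 (100 :: Int)
  where
    go lo hi 0 = (lo + hi) / 2
    go lo hi n
      | stdNormalCdf mid < p = go mid hi (n - 1)
      | otherwise            = go lo mid (n - 1)
      where mid = (lo + hi) / 2

-- Solve the two quantile equations for (mu, sigma).
fitMuSigma :: (Double, Double)
fitMuSigma = (2050 - z2 * sigma, sigma)
  where
    z1    = stdNormalQuantile (0.1 / 0.9)
    z2    = stdNormalQuantile (0.5 / 0.9)
    sigma = (2050 - 2028) / (z2 - z1)
```

This comes out to mu ≈ 2047.7 and sigma ≈ 16.2, so the correction pulls mu a couple of years earlier while sigma stays near the eyeballed 17.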

The intuitive explanation for why the normal distribution drops off faster is that it makes such strong predictions about the region around 2050: once you've reached 2070 with no AI, you've 'wasted' most of your possible drawers, to continue the original blog post's metaphor.
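To put rough numbers on that: a uniform timeline keeps most of its drawers unopened far longer. A minimal sketch, assuming (my choice for illustration, not a number from the post) a uniform support of 2011-2100:

```haskell
-- The same conditional-probability update, applied to a uniform timeline.
-- The support 2011..2100 is an assumed range for illustration.
update :: Double -> Double
update p = (1.0 - p) / (1.0 / 0.9 - p)

-- CDF of Uniform(2011, 2100), clamped outside the support.
uniformCdf :: Double -> Double
uniformCdf year = max 0 (min 1 ((year - 2011) / (2100 - 2011)))
```

By 2070 the uniform timeline has spent only about two thirds of its mass, so `update (uniformCdf 2070)` is still about 0.75, versus roughly 0.17 for the Normal(2050, 10) timeline, which by then has spent about 98% of its mass.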

To get a visual analogue of the probability mass, you could map the normal curve onto a uniform distribution, something like 'if we imagine each year at the peak corresponds to 30 years in a...