anson.ho

# Comments

Grokking “Semi-informative priors over AI timelines”

To make sure I'm understanding you correctly, do you think the largest problem comes from (1) thinking of AGI development as a sequence of Bernoulli trials, or (2) each Bernoulli trial having constant probability, or (3) both?

It's not obvious to me that (1) is hugely problematic - isn't Laplace's rule of succession commonly applied to forecasting previously unseen events? Are you perhaps arguing that there's something particular to AGI development such that thinking of it as a series of Bernoulli trials is completely invalid?
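To make the Laplace framing concrete, here is a minimal sketch (my own illustration, not from the original post): under a uniform prior on the per-trial success probability, the posterior predictive probability of success on the next trial after s successes in n trials is (s + 1) / (n + 2). The choice of 67 trials below is a hypothetical, treating each year of AI research since the mid-1950s as one failed trial.

```python
def laplace_rule(successes: int, trials: int) -> float:
    """Laplace's rule of succession: posterior predictive probability
    of success on the next Bernoulli trial, given a uniform prior."""
    return (successes + 1) / (trials + 2)

# Hypothetical framing: ~67 failed yearly "trials" of building AGI,
# zero successes so far.
print(laplace_rule(0, 67))  # ≈ 0.0145
```

This is why the rule is often used for previously unseen events: it assigns a non-zero (but shrinking) probability to success even after an unbroken run of failures.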

I'm more sympathetic to your criticism of (2), but I'll note that Davidson actually relaxes this assumption in his model extensions, and further argues (in Appendix 12) that its effect is pretty small: most of the load of the model is carried by the first-trial probability and the reference classes used to generate it.
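One way to see why the first-trial probability carries so much of the load is to sketch the generalized update rule. The code below is my own illustration under stated assumptions, not a reproduction of Davidson's implementation: a Beta(1, b) prior over the per-trial probability, with b chosen so the predictive probability on the very first trial equals a given first-trial probability `ftp`. After n failures the predictive probability on the next trial is then 1 / (1/ftp + n).

```python
def next_trial_prob(ftp: float, failures: int) -> float:
    """Predictive success probability on the next trial, given a
    Beta(1, 1/ftp - 1) prior and `failures` observed failures.
    The posterior is Beta(1, 1/ftp - 1 + failures), whose predictive
    mean is 1 / (1/ftp + failures)."""
    return 1.0 / (1.0 / ftp + failures)

def prob_success_by(ftp: float, trials: int) -> float:
    """P(at least one success within `trials` trials), chaining the
    predictive probabilities as each failure is observed."""
    p_no_success = 1.0
    for n in range(trials):
        p_no_success *= 1.0 - next_trial_prob(ftp, n)
    return 1.0 - p_no_success

# Hypothetical numbers: first-trial probability 1/100, 50 trials.
# The product telescopes to trials / (1/ftp + trials - 1), so the
# answer is driven almost entirely by the choice of ftp.
print(prob_success_by(0.01, 50))  # 50/149 ≈ 0.336
```

Note that the per-trial probability here is not constant (it falls as failures accumulate), yet the headline number is still essentially a function of `ftp` and the number of trials, which matches the claim that the first-trial probability does most of the work.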