anson.ho

ansonwhho.github.io

Comments

A Tour of AI Timelines
Grokking “Semi-informative priors over AI timelines”
anson.ho · 3y · 10

To make sure I'm understanding you correctly, do you think the largest problem comes from (1) thinking of AGI development as a sequence of Bernoulli trials, or (2) each Bernoulli trial having constant probability, or (3) both?

It's not obvious to me that (1) is hugely problematic - isn't Laplace's rule of succession commonly applied to forecasting previously unseen events? Are you perhaps arguing that there's something particular to AGI development such that thinking of it as a series of Bernoulli trials is completely invalid?
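
(For reference, the standard update rule under that Bernoulli-trials framing is Laplace's rule of succession: with a uniform prior on the per-trial success probability, after observing $s$ successes in $n$ trials the probability of success on the next trial is

$$P(\text{success on trial } n+1) = \frac{s+1}{n+2},$$

which with zero successes so far reduces to $1/(n+2)$.)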

I'm more sympathetic to your criticism of (2), but I'll note that Davidson relaxes this assumption in his model extensions, and further argues (in appendix 12) that its effect is fairly small - most of the load of the model is carried by the first-trial probability and the reference classes used to generate it.
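
To make that concrete, here's a minimal sketch (in Python) of the kind of update rule at stake, assuming the generalized Laplace rule with a single virtual success that, on my reading, underlies the first-trial probability in the report; the function name and the numbers are mine, purely for illustration:

```python
def p_next_success(first_trial_prob: float, n_failures: int) -> float:
    """P(success on the next trial) after n_failures trials with no successes,
    under a generalized Laplace rule with one virtual success: a first-trial
    probability f behaves like 1/f virtual prior trials, so the predictive
    probability after n failures is 1 / (1/f + n)."""
    return 1.0 / (1.0 / first_trial_prob + n_failures)

# Illustrative numbers only: a first-trial probability of 1/300 and 65
# failed trials (e.g. treating calendar years of AI research as trials)
# gives roughly 1/365.
print(p_next_success(1 / 300, 65))  # ≈ 0.00274
```

In this toy example the denominator is dominated by the 1/f term rather than by the number of observed failures, which gestures at why the first-trial probability ends up carrying most of the load.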

Posts

71 · The longest training run · Ω · 3y · 12 comments
97 · Announcing Epoch: A research organization investigating the road to Transformative AI · Ω · 3y · 2 comments
15 · Grokking “Semi-informative priors over AI timelines” · 3y · 7 comments
38 · Grokking “Forecasting TAI with biological anchors” · Ω · 3y · 0 comments
24 · Compute Trends — Comparison to OpenAI’s AI and Compute · 4y · 3 comments
94 · Compute Trends Across Three Eras of Machine Learning · Ω · 4y · 13 comments
37 · Estimating training compute of Deep Learning models · Ω · 4y · 4 comments
14 · What role should evolutionary analogies play in understanding AI takeoff speeds? · 4y · 0 comments