[ Question ]

Greatest Lower Bound for AGI



(Note: I assume that the timeline from AGI to an intelligence explosion is an order of magnitude shorter than the timeline from now to the first AGI. I may therefore use AGI and intelligence explosion interchangeably.)

Take a grad student deciding whether to start a PhD (~3-5 years). The prospect of an intelligence explosion within 10 years might make him change his mind.

More generally, estimating a scientifically sound infimum (greatest lower bound) for AGI timelines would support coordination and clear thinking.

My baselines for lower bounds on AGI have been optimists' estimates. I actually stumbled upon the concept of the singularity through this documentary, in which Ben Goertzel asserts in 2009 that we can have a positive singularity in 10 years "if the right amount of effort is expanded in the right direction. If we really really try" (I later realized that he made a similar statement in 2006).

Ten years after Goertzel's statements, I'm still confused about how long it would take humanity to reach AGI under global coordination. This leads me to this post's question:

According to your model, what is the first year in which the probability of AGI arriving (between January and December of that year) reaches 1%, and why?
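To make this precise (this is my own formalization; let me know if you read the question differently): writing $P(\text{first AGI arrives during year } Y)$ for your model's probability that the first AGI appears in calendar year $Y$, I'm asking for the smallest $Y$ such that

$$P(\text{first AGI arrives during year } Y) \geq 0.01.$$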

I'm especially curious about arguments that don't (only) rely on compute trends.


EDIT: The first answers seem to converge on a value between 2019 and 2021. This surprises me: I think that outside of the AI Safety bubble, AI researchers would assign less than a 1% chance to AGI arriving within 10 years.

I think my confusion about short timelines comes from the dissonance between estimates in AI Alignment research and the intuitions of top AI researchers. In particular, I vividly remember a thread with Yann LeCun in which he confidently dismissed short timelines, comment after comment.

My follow-up question would therefore be:

"What is an important part of your model you think top ML researchers (such as Le Cun) are missing?"


1 Answer

Given the sheer amount of effort DeepMind and OpenAI are putting into the problem, and the fact that what they are working on need not be clear to us, and the fact that forecasting is hard, I think it's hard to place less than 1% on short timelines. You could justify less than 1% on 2019, maybe even 2020, but you should probably put at least 1% on 2021.

(This is assuming you have no information about DeepMind or OpenAI besides what they publish publicly.)