This article is part of the featured articles series from AISafety.info, which writes introductory content on AI safety. We’d appreciate any feedback.
The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.
On the whole, experts think human-level AI is likely to arrive in your lifetime.
It’s hard to predict precisely how long it will be until human-level AI arrives.[1] Approaches to forecasting it include aggregate predictions, individual predictions, and detailed models.
Aggregate predictions:
Individual predictions:
Models:
These forecasts are speculative, depend on various assumptions, predict different things (e.g., transformative versus human-level AI), and are subject to selection bias, both in which surveys are cited and in who participates in each survey.[4] However, they broadly agree that human-level AI is plausible within the lifetimes of most people alive today. What’s more, these forecasts generally seem to have been getting shorter over time.[5]
Further reading
We concentrate here on human-level AI and similarly capable systems such as transformative AI, which may differ from AGI. For more on these terms, see this explainer.
Metaculus is a platform that aggregates the predictions of many individuals and has a decent track record on AI-related questions.
The author estimates the number of computational operations performed over the course of biological evolution in producing human intelligence, and argues that this total should be treated as an upper bound on the compute needed to develop human-level AI (a rough illustrative version of this style of calculation is sketched after these notes).
Scott Alexander points out that researchers who appear prescient one year sometimes predict barely better than chance the next.
One can expect people with short timelines to be overrepresented among those who study AI safety, as shorter timelines increase the perceived urgency of working on the problem.
There have been many cases where AI has gone from zero to solved on a given task with little warning. This is a problem: sudden capability jumps leave little time to prepare.
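To make the evolution-anchor argument mentioned above concrete, here is a minimal back-of-envelope sketch in Python. The specific figures below (time since nervous systems evolved, aggregate neural compute across all organisms) are illustrative assumptions chosen for this example, not the author’s actual estimates.

```python
# Back-of-envelope "evolution anchor" style estimate.
# All numbers are illustrative assumptions for this sketch,
# NOT the author's figures:
#   - ~1 billion years since the first nervous systems evolved
#   - ~1e25 FLOP/s of aggregate neural computation across all
#     living organisms at any given time

SECONDS_PER_YEAR = 3.15e7

years_of_neural_evolution = 1e9       # assumption
aggregate_neural_flops_per_s = 1e25   # assumption

# Total operations performed by all nervous systems over evolution
total_ops = (years_of_neural_evolution
             * SECONDS_PER_YEAR
             * aggregate_neural_flops_per_s)

print(f"Total operations over evolution: ~{total_ops:.0e}")
# -> roughly 3e+41 operations under these assumptions
```

The shape of the argument: if a blind search process like evolution produced human intelligence within some total compute budget, then deliberate engineering should need at most that much, so the total serves as a (very loose) upper bound on the compute required for human-level AI.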