A little while ago I was reading a book review by the economist Kevin Bryan about AI’s likely impacts. He not only provides an excellent overview of the economic questions, but also makes a considerable effort to reconstruct why a large cluster of people (“AI people”) believe in much larger AI impacts than most economists do.
I think there is one big reason for the gap in beliefs that is rarely communicated, but quite simple:
AI believers have long had a specific “model” of how transformative AI would arrive. To AI believers, the current era’s developments look like confirmation.
The book review summarizes AI researchers’ beliefs as follows:
The conviction among many AI researchers that it will unleash unique economic effects derives from two decades of empirical computer science breakthroughs.
This is not wrong; the recent strength of conviction would not exist without the research progress of the 2010s and 2020s. If nothing else, the “AI winter” is now definitively past.
But impressive recent progress is insufficient to explain why so many AI believers expect AI’s economic impact to be large in absolute terms, and not just large relative to past AI impacts.
Instead, I would argue that the absolute scale of today’s expectations stems from deeply held beliefs about the endpoint of AI as a technology.
And today’s AI believers have specific beliefs not only about the endpoint of AI, but about when it will arrive. Their expectations of timing have been remarkably consistent with the actual course of events so far. Thus, it is natural that they would view that course of events as confirmation of their model.
The end state of AI has been a subject of discourse for ages, long before “deep learning” first came onto the scene.
From near the beginning of the computer era, AI has vastly exceeded other technologies in the magnitude of its anticipated consequences. As early as 1965, the computer scientist I.J. Good hypothesized that an “ultraintelligent machine”, if invented, could well be humankind’s last invention. Vernor Vinge, author of the seminal 1993 essay “The Coming Technological Singularity”, independently thought of the idea as early as 1960, while still a teenager (Mondo, 1989, page 116).
This is the singularity idea: the idea that once AI is built, it will be so generally capable that it will quickly cause the world to become unimaginably different.
To Vinge, the prospect of a Singularity felt like an “opaque wall across the future”, an “unknowable”. He felt this most acutely when practicing his profession as a science fiction writer: “When I try to do a hard science fiction extrapolation, I run out of humanity quickly.”
Despite vast uncertainty, all variants of the “singularity idea” envision the following basic sequence of events:
Standard economic models predict huge, transformative AI impacts if AI truly becomes a human-level substitute for labor (Korinek).
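For intuition, here is a minimal sketch of one textbook version of that argument, in my own notation rather than Korinek’s: $\alpha$ is the capital share, $A$ is technology, $s$ the savings rate, $\delta$ depreciation, and $c$ the (assumed) amount of effective labor each unit of AI capital can supply. With ordinary Cobb–Douglas production and fixed labor $L$, capital runs into diminishing returns; if AI makes effective labor proportional to the capital stock, those diminishing returns vanish:

$$Y = K^{\alpha}(AL)^{1-\alpha} \quad\Longrightarrow\quad Y = (Ac)^{1-\alpha}K \ \ \text{once } L_{\mathrm{eff}} = cK,$$

$$\frac{\dot K}{K} = s\,\frac{Y}{K} - \delta = s\,(Ac)^{1-\alpha} - \delta.$$

Once output is linear in capital, growth no longer tapers off as capital accumulates, so accumulation alone sustains growth, and improvements in $A$ accelerate it. That is the sense in which a true human-level substitute for labor flips standard models from modest effects to transformative ones.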
The more interesting question is why AI believers think this will happen soon. As I will explain, their expectations of timing, though highly uncertain, are anchored by a compute extrapolation model that originated in the community in the 1970s and 1980s.
The idea of the Singularity was developed long ago, and serious researchers now rarely mention the word. Why should we think that it is still influencing people’s beliefs?
What is most relevant, I believe, is that many in the community, including serious researchers at frontier AI labs, are often asked about their “timelines” (how many years it will be until human-level AI is invented); the question is so salient in the community that “your timelines”, left unqualified, is understood to mean your AI timelines. There are even prediction markets on the question.
So, many in the community still conceptualize the arrival of transformative AI as a discrete event. This is exactly what the idea of the Singularity involves. Thus, I believe the “singularity idea” is still very much “in the water” in the community. Certainly, if you ask a community member “when do you think the singularity will be”, they will understand what you mean.
The community whose beliefs I want to explain, the people who expect AI to have a transformative impact soon, does not have a consistent name.
The relevant community does not overlap well with any of the following:
The group with the greatest overlap is the set of people the community calls “short timelines people”, but even this term fails to include historical figures like Vinge, who predicted multi-decade timelines at the time they wrote, yet clearly believed that a Singularity would arrive sooner than most expected. In addition, “short timelines” tends to suggest transformative-AI timelines of under 5 years, while the community I wish to discuss ought also to include people with transformative-AI timelines of 10 to 15 years, since those timelines are still far more aggressive than the beliefs of the general public (or of the general population of economists).
To avoid confusion, I will rely on the following less-ambiguous terms:
Some other important terms are:
It should not be surprising that there is little consensus on prospects for human-level AI. The question is only semi-scientific at best, since the event in question has never occurred before. Despite this obstacle, the AI-believer community does have concrete (if stylized) models of AI timelines.
Hans Moravec (1988) was the first to forecast a specific date for the arrival of human-level AI. In his book Mind Children, Moravec used known facts about the computational work done by the retina to extrapolate the computational capacity of the entire human brain. He then predicted that human-level AI would likely arrive in the 2020s (!), by extrapolating Moore’s Law to find when human-brain-level compute would first become affordable.
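To make the shape of this extrapolation concrete, here is a minimal sketch in Python. Every number in it (the brain’s compute requirement, 1988 price-performance, the doubling time, and the $1,000 affordability threshold) is an illustrative assumption rather than Moravec’s published figure; the point is only that inputs of roughly this magnitude place the crossover in the 2020s.

```python
# Illustrative Moravec-style compute extrapolation.
# All constants below are assumptions for illustration, not Moravec's actual figures.
import math

brain_ops_per_sec = 1e14       # assumed compute needed to match a human brain, in ops/s
ops_per_dollar_1988 = 1e4      # assumed hardware price-performance in 1988, in ops/s per dollar
doubling_time_years = 1.5      # assumed Moore's-Law doubling time for price-performance
budget_dollars = 1_000         # "affordable" here means a personal-computer-scale budget

# Number of doublings needed until a $1,000 machine delivers brain-scale compute.
shortfall = brain_ops_per_sec / (ops_per_dollar_1988 * budget_dollars)
years_to_parity = doubling_time_years * math.log2(shortfall)

print(f"Crossover after ~{years_to_parity:.0f} years, i.e. around {1988 + round(years_to_parity)}")
```

With these particular assumptions the script prints a crossover around 2023. Note how insensitive the answer is to the inputs: being off by a factor of ten on the brain’s compute requirement shifts the date by only about three doubling times, roughly five years here, which is part of why the argument felt robust to its proponents.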
Vinge then cited Moravec’s estimate in his 1993 essay (citation 16) and agreed with the predicted timeline of roughly 2020. Vinge also took it as obvious that computer–brain parity in computing power is centrally important for timeline estimation, as can be seen in his description of a 1992 workshop he attended that was organized by the Thinking Machines Corporation.
Compute extrapolation was central to a 1998 debate about Singularity prospects on the Extropians mailing list, in which the point at which “machine compute will reach a similar scale to human compute” was widely believed to be a critical threshold.
Ray Kurzweil further popularized the compute extrapolation argument in his 2005 book The Singularity Is Near. Earlier, his arguments had inspired Bill Joy, a co-founder of Sun Microsystems, to write the widely read article “Why the Future Doesn’t Need Us”.
These compute extrapolation models influenced Shane Legg, as documented on his vetta project blog; Legg would go on to co-found Google DeepMind.
In the 2000s, a number of events conspired to push the compute-extrapolation idea into the background. With the end of the optimistic 1990s zeitgeist, a slowdown in CPU speed gains, and a shortage of concrete AI progress, the idea of a Singularity naturally felt less relevant.
Importantly, however, the compute extrapolation model was never actually contradicted so much as half-forgotten. The extrapolators had essentially predicted that something big would happen in 20 years, while people’s attention spans were far shorter. The ideas would come back in a big way in the 2020s, with the arrival of the “scaling era” of AI.