Transformative AI expectations are not new

by ParrotRobot
16th Nov 2025
7 min read

A little while ago I was reading a book review from the economist Kevin Bryan about AI’s likely impacts. He not only provides an excellent overview of the economic questions, but also makes a considerable effort to reconstruct why a large cluster of people (“AI people”) believe in much larger AI impacts than do most economists.

I think there is one big reason for the gap in beliefs that is rarely communicated, but quite simple:

AI believers have long had a specific “model” of how transformative AI would arrive. To AI believers, the current era’s developments look like confirmation.

Recent technical progress is an incomplete explanation

The book review summarizes AI researchers’ beliefs as follows:

The conviction among many AI researchers that it will unleash unique economic effects derives from two decades of empirical computer science breakthroughs.

This is not wrong; the recent strength of conviction would not exist without the research progress of the 2010s and 2020s. If nothing else, the “AI winter” is now definitively past.

But impressive recent progress is insufficient to explain why so many AI believers expect AI’s economic impact to be large in absolute terms, and not just large relative to past AI impacts.

  • In terms of magnitude, recent progress is not sufficient to explain the industry’s much more extreme optimism for AI when compared to other major technologies, such as the Internet.
  • In terms of timing, recent progress is not sufficient to explain why many expect a massive impact in just a few years, instead of, say, expecting success in 30 years rather than 40.

Instead, I would argue that the absolute scale of today’s expectations stems from deeply held beliefs about the endpoint of AI as a technology.

And today’s AI believers have specific beliefs not only about the endpoint of AI, but about when it will arrive. Their expectations of timing have been remarkably consistent with the actual course of events so far. It is natural, then, that they would view recent developments as confirmation of their model.

The idea of transformative AI is not new

The end state of AI has been a subject of discourse for ages, long before “deep learning” first came onto the scene.

From near the beginning of the computer era, AI has vastly exceeded other technologies in the magnitude of its anticipated consequences. As early as 1965, the computer scientist I.J. Good hypothesized that an “ultraintelligent machine”, if invented, could well be humankind’s last invention. Vernor Vinge, author of the seminal 1993 essay “The Coming Technological Singularity”, independently thought of the idea as early as 1960, while still a teenager (Mondo, 1989, page 116).

This is the singularity idea: the idea that once AI is built, it will be so generally capable that it will quickly cause the world to become unimaginably different.

To Vinge, the prospect of a Singularity felt like an “opaque wall across the future”, an “unknowable”. He felt this most acutely when practicing his profession as a science fiction writer: “When I try to do a hard science fiction extrapolation, I run out of humanity quickly.”

Despite vast uncertainty, all variants of the “singularity idea” envision the following basic sequence of events:

  • Before the singularity
    • Steady progress, including in AI
  • Around the time of the singularity
    • Human-level AI (“AGI”)
  • After the singularity
    • Superhuman AI
    • Transformative impacts
    • “The human era will be ended” (Vinge)

Standard economic models predict huge, transformative AI impacts if AI truly becomes a human-level substitute for labor (Korinek).

The more interesting question is why AI believers think this will happen soon. As I will explain, their expectations of timing, though highly uncertain, are anchored by a compute extrapolation model that originated in the community in the 1970s and 1980s.

How do we know that the singularity idea is still “live”?

The idea of the Singularity was developed long ago, and serious researchers now rarely mention the word. Why should we think that it is still influencing people’s beliefs?

What is most relevant, I believe, is that many in the community — including serious researchers at frontier AI labs — are routinely asked about their “timelines” (how many years until human-level AI is invented); the question is so salient in the community that the phrase “your timelines” now means “your AI timelines” by default. There are even prediction markets on the question.

So, many in the community still conceptualize the arrival of transformative AI as a discrete event. This is exactly what the idea of the Singularity involves. Thus, I believe the “singularity idea” is still very much “in the water” in the community. Certainly, if you ask a community member “when do you think the singularity will be”, they will understand what you mean.

Notes on terminology

The community I want to explain the beliefs of — the community of people who have high expectations for AI having a transformative impact soon — does not have a consistent name.

The relevant community does not overlap well with any of the following:

  • Not AI researchers. Many researchers (most, perhaps) do not believe in imminent transformative AI, while many non-researchers do.
  • Not AI safety people. Many non-safety people (e.g., many “capabilities” researchers) also believe in imminent transformative AI.
  • Not “Singularitarians”. Decades ago, many people did call themselves this, but the label has long since fallen out of use.

The group with the most overlap is the set of people the community calls “short timelines people”, but even this term fails to include historical figures like Vinge, who predicted multi-decade timelines when writing but clearly believed that a Singularity would arrive sooner than most expected. In addition, “short timelines” tends to suggest transformative-AI timelines of <5 years, while the community I wish to discuss should also include people with timelines of 10 to 15 years, since those are still far more aggressive than the beliefs of the general public (or of most economists).

To avoid confusion, I will rely on the following less-ambiguous terms:

  • AI believers: All people, past and present, who believe(d) that human-level AI would/will arrive relatively soon and quickly have transformative impact.
    • Does not include people who believe that human-level AI will arrive soon but have little economic or societal impact for many years.
    • Does include people who effectively believe in transformative human-level AI but don’t like to use the term “AGI”, such as Dario Amodei.
    • It is not always knowable whether any given individual is an AI believer. For example, most executives of frontier AI labs must believe in fast AI progress in an important sense, but many of them have not publicly expressed “AI believer” beliefs explicitly.
  • “The community”: Used in the context of a particular point in time, refers to the set of AI believers active at that time, who generally shared similar core beliefs and frequently discussed AI with one another. By default, when I refer to a “community”, I mean this community.

Some other important terms are:

  • The Singularity idea: The idea that human-level AI will be created at some point, and that this will quickly cause humans to plummet in relevance, inaugurating a “post-human era”.
    • Does not imply any particular beliefs about timing (how many years out it is), nor any particular beliefs about “takeoff speed” (as in “fast takeoff” versus “slow takeoff”).
    • Does not require that believers call themselves singularitarians or use the term Singularity; the term was more popular in the past than it is today, but it is the underlying idea that I am interested in.
  • Compute extrapolation: The idea that one can estimate the timing of arrival of human-level AI by estimating the date at which some human-brain-equivalent quantity of compute (e.g., FLOPS) will be feasible (a minimal sketch follows this list).
    • Requires assumptions about how much compute a human brain is equivalent to
    • Requires assumptions about the pace of Moore’s Law and related productivity growth
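
To make this concrete, here is a minimal sketch of a compute-extrapolation calculation in Python. All of the inputs — the brain-compute figure, the ops-per-dollar figure, the doubling time, and the hardware budget — are illustrative assumptions of my own, chosen only to be in the rough ballpark of Moravec- and Kurzweil-style estimates; they are not anyone’s actual published numbers.

```python
import math

def human_level_ai_year(
    brain_ops_per_sec: float,    # assumed compute equivalent of a human brain (ops/s)
    ops_per_dollar_now: float,   # ops/s purchasable per dollar in the reference year
    reference_year: float,       # year to which ops_per_dollar_now applies
    doubling_time_years: float,  # Moore's-Law-style doubling time for price-performance
    budget_dollars: float,       # hardware budget at which the AI counts as "affordable"
) -> float:
    """Year when brain-equivalent compute first becomes affordable under the
    compute-extrapolation model. Note what the model ignores: software and
    algorithms are assumed to be solved once the hardware is cheap enough, and
    price-performance is assumed to keep doubling on schedule."""
    ops_per_dollar_needed = brain_ops_per_sec / budget_dollars
    doublings_needed = math.log2(ops_per_dollar_needed / ops_per_dollar_now)
    return reference_year + doublings_needed * doubling_time_years

# Purely illustrative inputs: a brain "worth" ~1e14 ops/s, a $1,000 machine
# delivering ~1e9 ops/s in 2000 (so ~1e6 ops/s per dollar), and an 18-month
# doubling time.
print(round(human_level_ai_year(
    brain_ops_per_sec=1e14,
    ops_per_dollar_now=1e6,
    reference_year=2000,
    doubling_time_years=1.5,
    budget_dollars=1_000,
)))
```

Run with these illustrative numbers, the function returns roughly 2025. The point is not the particular date, but that the whole forecast reduces to the two assumptions the sketch makes explicit: how much compute a brain is “worth”, and how long price-performance keeps doubling.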

The compute extrapolation model

It should not be surprising that there is little consensus on the prospects for human-level AI. The question is only semi-scientific at best, as the event in question has never occurred before. Despite this obstacle, the AI-believer community does have concrete (if stylized) models of AI timelines.

Hans Moravec (1988) was the first to forecast a specific date for the arrival of human-level AI. In his book Mind Children, Moravec used known facts about the retina’s function to extrapolate the computational capacity of an entire human brain. He predicted that human-level AI would likely arrive in the 2020s (!), since an extrapolation of Moore’s Law implied that human-brain-level compute would first become affordable around that time.

Vinge then cited Moravec’s estimate in his 1993 essay (cite 16), agreeing with his predicted timeline of roughly 2020. In his description of a 1992 workshop he attended, organized by the Thinking Machines Corporation, Vinge also treated it as obvious that computer–brain parity in computing power is centrally important for timeline estimation.

Compute extrapolation was central to a 1998 debate about Singularity prospects on the Extropians mailing list, in which the point at which “machine compute will reach a similar scale to human compute” was widely believed to be a critical threshold.

Ray Kurzweil further popularized the compute extrapolation argument in his 2005 book The Singularity Is Near. Earlier, his arguments had inspired Bill Joy, a co-founder of Sun Microsystems, to write a widely read article, “Why the Future Doesn't Need Us”.

These compute extrapolation models influenced Shane Legg, who would go on to co-found Google DeepMind, as documented on his vetta project blog.

The lull

In the 2000s, a number of events conspired to push the compute-extrapolation idea into the background. With the optimistic 1990s zeitgeist gone, CPU clock-speed gains slowing, and little concrete AI progress to point to, the idea of a Singularity naturally felt less relevant.

Importantly, however, the compute extrapolation model was never actually contradicted so much as half-forgotten. The extrapolators had essentially predicted that something big would happen in 20 years, while people’s attention spans were far shorter. The ideas would come back in a big way in the 2020s, with the arrival of the “scaling era” of AI.