My previous post aimed to make AI believers’ key ideas accessible, given that they are not always made explicit. I see these ideas as “fuel” that primes the community to expect imminent transformative AI in response to events.
In this post, I will discuss recent events from that perspective. What has happened in recent years, and how does it fit AI believers’ models of how transformative AI may come about? In short, current trends fit a certain variant of the basic compute extrapolation model. That is compatible with many future trajectories, but it is also compatible with compute mattering a great deal and with AI impacts continuing to grow rapidly.
I very much hope for economists and AI believers to be able to productively discuss their expectations for AI capabilities. In my next post, I will make some suggestions for how this can be done.
In my previous post, I outlined the broad family of compute extrapolation models that has anchored the thinking of AI believers.
However, compute extrapolation models have a critical ambiguity. Sufficiently long before the threshold of human equivalence is reached, they predict that AI should have negligible impact; long afterwards, AI should have a transformative impact. But in the middle, there must be a transition. They say nothing about how that transition should look.
AI believers had widely divergent intuitions about transition dynamics. Eliezer Yudkowsky believed in a “fast takeoff”, while others, most notably Robin Hanson, believed in a “slow takeoff”. For a long time, this “takeoff debate” could not be settled, as AI was simply not “taking off” in any meaningful way.
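To make the ambiguity concrete, here is a minimal toy sketch of the compute extrapolation picture (my own illustration, not anything from the original debate). AI “impact” is modeled as a logistic function of log-compute centered on an assumed human-brain-equivalent threshold; the threshold value and the two steepness settings are placeholder assumptions. Far from the threshold the “fast” and “slow” curves agree (negligible impact well before, transformative impact well after), but in the transition region they diverge sharply.

```python
# Toy sketch only: impact as a logistic function of log-compute around an
# assumed human-brain-equivalent threshold. The threshold and steepness
# values are illustrative placeholders, not estimates from the literature.
import math

BRAIN_EQUIVALENT_FLOPS = 1e16  # assumed threshold, order of magnitude only

def impact(flops: float, steepness: float) -> float:
    """Fraction of 'transformative' impact reached at a given compute level."""
    x = math.log10(flops) - math.log10(BRAIN_EQUIVALENT_FLOPS)
    return 1.0 / (1.0 + math.exp(-steepness * x))

for flops in (1e12, 1e14, 1e16, 1e18, 1e20):
    fast = impact(flops, steepness=5.0)  # abrupt transition ("fast takeoff")
    slow = impact(flops, steepness=0.5)  # gradual transition ("slow takeoff")
    print(f"{flops:.0e} FLOP/s: fast={fast:.3f}, slow={slow:.3f}")
```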
The modern Scaling Era of AI has provided the AI community with a wealth of data that was unavailable to the old AI believers.
The AI community is now able to estimate all of the following: how fast compute inputs are growing, how quickly AI capabilities are improving, and how rapidly AI revenue is expanding.
Today, discussions of AI progress and AI timelines have far stronger grounding in data.
In the 2010s and 2020s, the AI community — not just AI believers, but the wider AI R&D community — has taken note of a number of “stylized facts”.
In rough chronological order:
Today, inputs, capabilities, and revenue are all expanding together at fast exponential rates. (The shape of capabilities growth depends on how “capability levels” are defined, but a natural scale-free definition is the length of tasks an AI system can autonomously complete, and the METR long-tasks benchmark shows this metric to have been growing exponentially for many years.)
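As a purely illustrative sketch of what extrapolating such a trend involves, the snippet below assumes a starting task-length horizon and a fixed doubling time (both round placeholder numbers, not METR’s published estimates) and projects the horizon forward.

```python
# Illustrative sketch only: projecting an exponential task-length-horizon
# trend of the kind METR measures. The starting horizon and doubling time
# below are placeholder values, not METR's published estimates.
from datetime import date, timedelta

START_DATE = date(2025, 1, 1)
START_HORIZON_MIN = 60.0   # assumed: ~1-hour tasks at the start date
DOUBLING_TIME_DAYS = 210   # assumed: horizon doubles roughly every 7 months

def horizon_minutes(day: date) -> float:
    """Task-length horizon implied by pure exponential growth."""
    elapsed_days = (day - START_DATE).days
    return START_HORIZON_MIN * 2 ** (elapsed_days / DOUBLING_TIME_DAYS)

for years_ahead in range(5):
    d = START_DATE + timedelta(days=365 * years_ahead)
    print(f"{d}: ~{horizon_minutes(d) / 60:.1f} hours")
```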
Today’s trends are consistent with a simple model: compute inputs continue to grow exponentially, and capabilities and revenue continue to grow exponentially along with them, with compute growth as the underlying driver.
This trajectory is consistent with the basic compute extrapolation model. (Remember, the compute extrapolation model predicts that AI revenue was very small in the past and will be very large in the future, while being of an indeterminate functional form in the present.)
If we further assume the simplest compute-extrapolation model consistent with the data, it is also natural to predict that capability and revenue growth will not slow down until human-level AI is reached. Predicting a slowdown now, followed by the transformative growth the model implies later, would mean the growth rate dips and then recovers, a nonmonotonicity introduced into the trajectory without justification.
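To make concrete what such a nonmonotonicity would look like, here is a toy illustration (placeholder numbers, my own sketch rather than anything from the post’s sources): under the simplest model the growth rate stays constant until the endpoint is reached, whereas a slowdown-then-recovery scenario requires the growth rate to dip and later rise again.

```python
# Toy illustration only: all numbers are placeholders. Each entry is an
# assumed annual growth rate in orders of magnitude per year.
constant_rate   = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]  # simplest model: steady growth
dip_and_recover = [1.0, 1.0, 0.2, 0.2, 1.0, 1.0]  # a slowdown needs this extra structure

def trajectory(rates, start=1.0):
    """Levels implied by a sequence of annual log10 growth rates."""
    level, levels = start, []
    for r in rates:
        level *= 10 ** r
        levels.append(level)
    return levels

print(trajectory(constant_rate))    # smooth exponential all the way up
print(trajectory(dip_and_recover))  # growth rate dips, then must recover
```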
In short, AI believers who are anchored by the compute extrapolation model would naturally see compute growth as the ultimate cause of the current AI boom, and would have no reason to expect a slowdown in AI growth.
To a compute extrapolation believer, the Scaling Era shows that the compute extrapolation model has passed a test. AI servers finally reached a plausibly human-brain-equivalent level of compute, and the AI industry was almost immediately able to create humanlike AI systems.
The coincidence has not gone unnoticed by the new generation of AI researchers. In 2023, for example, “roon” (an influential poster in the AI community on Twitter, and now an employee of OpenAI) tweeted that Kurzweil had been “vindicated again”, along with a meme depicting “brilliant AI research” as a mask hiding what was really underneath: Moore’s Law.
Day to day and month to month, the engineers and researchers at frontier labs do not need to consider the singularity idea, or the compute extrapolation idea. But it is a subtext of their entire enterprise.
The singularity idea influences major funding decisions: SoftBank’s large investment in OpenAI was motivated in part by Masayoshi Son’s interest in the Singularity. Further back, Sam Altman said in 2019 (to much ridicule) that he believed more in the potential of generally capable AI than in any specific product or way of making money off of it.
The compute extrapolation idea is likewise rarely cited in its original form (who has mentioned Moravec?), but frontier labs’ giant investments in AI compute are very much in keeping with it.
What is perhaps most consistently believed is that the current trends can be expected to continue.
(It’s worth keeping in mind that many of the leading AI believers have more visibility into key growth rates than does any member of the public. For instance, leaders of frontier AI labs, who are some of the most prominent AI believers, know in almost real time how much demand they are getting, how much they are spending, and how well their research pipeline is going.)
The industry’s belief in exponential trends has a long history. The universally known Moore’s Law, for example, was fundamental to the tech optimism of the 1990s and is central to the thinking of Singularity believers like Kurzweil. It is well known that Moore’s Law was far longer-lived than the individual innovations that drove it; Moore’s Law can thus be viewed as reflecting forces that are larger than any individual company or technology.
In AI, we’re seeing hints of similar “independence from individual innovations” already; the exponential METR trend, for example, was initially driven by the scaling of model parameter counts, but now relies on a “second wind” from RL training of reasoning models, and has maintained a comparable (or possibly even higher) rate of growth.
Moore’s Law is salient enough to be widely referenced as an analogy for other trends in tech. Sam Altman, in particular, has stated a belief in exponential trends on multiple occasions, in 2019 saying that to be successful it was important to “trust the exponential”, and in 2021 proposing a vision of “Moore’s Law for Everything”.
We can see that trend extrapolation is the common theme behind the thinking of all generations of AI believers. Trend extrapolation is never proof — trends can always stop — but it is not baseless, either.
The missing piece is: what is the ultimate endpoint for AI capabilities and revenues? None of the current trends directly address whether we’ll hit fundamental limits or reach human-level AI.
Compute extrapolation tells you the trends will continue; singularity beliefs tell you they culminate in systems that can do most economically valuable cognitive work — not just “AI becomes a large industry” but “AI automates most jobs”. This is why OpenAI’s stated mission is not “build useful AI tools” but “ensure AGI benefits all of humanity”.
The specific expectation of many AI believers — human-level AI within 5-10 years, with corresponding civilization-scale economic and social impacts — can be explained by the combination of (1) validated compute extrapolation showing exponential progress, and (2) singularity beliefs about the endpoint.
I believe the disconnect between economists and AI believers — which persists despite both camps taking trend extrapolation seriously — comes from the fact that the two camps largely care about the trends of completely different variables. If two people never talk about the same thing, it is no surprise if they always talk past each other.