Exponential increase is the default (assuming it increases at all) [Linkpost]

by Noosphere89
29th Sep 2025
2 min read

This is a linkpost for https://x.com/daniel_271828/status/1959336441563210227
Tags: AI, World Modeling


Following in the tradition of @Algon, who linkposted an important thread from Daniel Eth about how AI companies are starting to seriously lobby and have had early successes, I'll linkpost another thread from Daniel Eth, this time about how exponential increase is the default form of increase, assuming something is increasing at all.

In essence, I'm providing the theory behind the post "Almost all growth is exponential growth."

My sense is there’s generally a power law between “inputs” and “outputs” to technological progress. In this context, that manifests as “exponential increases in inputs over time yield smooth exponential increases in time horizons over time” (i.e., a straight line on a semi-log plot).
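
To spell out the arithmetic behind that claim (my gloss on the thread, with illustrative notation): a power law between cumulative inputs and outputs turns exponential input growth into exponential output growth.

```latex
% Power law between inputs I and outputs O (alpha is an illustrative exponent):
O = c\, I^{\alpha}
% Exponentially growing inputs, I(t) = I_0 e^{g t}, then give
O(t) = c\, I_0^{\alpha}\, e^{\alpha g t}
% i.e. \log O is linear in t: a straight line on a semi-log plot.
```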

Why should there be a power law? We actually see this sort of dynamic come up all the time in technological progress - from experience curve effects (think declining PV prices) to GDP growth to efficiency improvements in various AI domains over time to AI scaling laws.
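
The experience-curve example has a standard form (Wright's law); the learning rate below is a commonly cited ballpark for PV, not a figure from the thread:

```latex
% Wright's law (experience curve): unit cost C vs. cumulative production x
C(x) = C_1\, x^{-b}
% Each doubling of cumulative production multiplies cost by 2^{-b};
% for PV modules the cited learning rate is roughly 20\% per doubling.
```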

And there are theoretical reasons to expect a power law, too. If ideas get harder to find over time, exponential inputs are needed for “consistent” progress. If each idea provides some proportionate improvement, then “consistent” progress cashes out as exponential growth.
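
Here is one way those two assumptions combine into a power law (a sketch with my own parameterization; λ and ε are illustrative, not from the thread or the report): if the n-th idea costs e^{λn} units of input and each idea multiplies capability by (1+ε), then capability is a power law in cumulative input.

```latex
% Ideas get harder: the n-th idea costs e^{\lambda n} input, so cumulative
% input to reach n ideas is approximately
I(n) \approx \int_0^n e^{\lambda s}\, ds = \frac{e^{\lambda n} - 1}{\lambda}
\quad\Longrightarrow\quad n \approx \tfrac{1}{\lambda}\ln(\lambda I)
% Proportionate improvement: capability Q gains a factor (1+\varepsilon) per idea:
Q = (1+\varepsilon)^{n} \approx (\lambda I)^{\ln(1+\varepsilon)/\lambda}
% This is a power law Q \propto I^{\alpha} with \alpha = \ln(1+\varepsilon)/\lambda,
% so inputs growing exponentially at rate g give capability growing at rate \alpha g.
```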

I go into some detail defending a view along these lines in the appendix of my report w/ @TomDavidsonX on a software intelligence explosion. The point there was justifying the formulation of ‘r’, but it may also explain the METR Evals result.

Will AI R&D Automation Cause a Software Intelligence Explosion?
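
As I understand the report's setup (my paraphrase and notation; see the report itself for the exact definition), r is the number of doublings of software efficiency per doubling of cumulative research input:

```latex
% Paraphrased returns parameter (notation mine):
r = \frac{d \ln S}{d \ln I}
% i.e. S \propto I^{r}: exactly the power-law form above. With inputs growing
% exponentially, r sets the slope of the straight line on the semi-log plot.
```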

So then, if there’s a power law, the question becomes “is there exponential growth in inputs, and if so, why?” This seems more clearly true (approximately), considering everything from investment capital to compute to researchers in the field, etc.

Okay, but why? A couple of reasons. First, there’s exponential growth in some underlying inputs from the outside world (e.g., Moore’s law for compute costs) - incidentally, I’d argue a similar power law explains that! Second, AI improvement drives hype, which drives more researchers & investment.

Now, this second reason is a bit fuzzier, since hype could drive non-exponential growth. Empirically, investment & the number of researchers do seem to be growing ~exponentially. The same goes for decisions to scale up large training runs by multiples of previous runs.
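
The whole chain is easy to simulate. A toy sketch (all parameter values are invented for illustration): input flows grow exponentially, partly via a hype feedback from recent progress; capability follows a power law in cumulative inputs; and log-capability growth settles to a roughly constant rate, i.e. a straight line on a semi-log plot.

```python
import numpy as np

# Toy model of the thread's story: exponential input growth (exogenous trend
# plus a hype feedback from recent progress), a power law from cumulative
# inputs to capability, and a check that log(capability) is ~linear in time.
# All parameter values are invented for illustration.

ALPHA = 0.4    # power-law exponent: capability = cumulative_inputs ** ALPHA
BASE_G = 0.25  # exogenous annual growth rate of input flows
HYPE = 0.10    # extra input growth per unit of last year's log-progress
YEARS = 30

inputs = [1.0]       # annual input flow (investment, researchers, compute)
cumulative = [1.0]   # cumulative inputs to date
capability = [1.0]   # output/capability level

for t in range(1, YEARS + 1):
    progress = np.log(capability[-1] / capability[-2]) if t > 1 else 0.0
    growth = BASE_G + HYPE * progress           # hype feedback on input growth
    inputs.append(inputs[-1] * np.exp(growth))
    cumulative.append(cumulative[-1] + inputs[-1])
    capability.append(cumulative[-1] ** ALPHA)  # power law in cumulative inputs

# If the story holds, annual increments of log(capability) settle to a
# near-constant rate (~ALPHA * input growth rate): straight on a semi-log plot.
rates = np.diff(np.log(capability))
print("log-capability growth, last 5 years:", np.round(rates[-5:], 3))
```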

BTW, this is why I'm predicting that the messier tasks AI faces, where it currently struggles, will be on the same curve; it's just that their time horizons are currently much shorter.

So AI will conquer the messy tasks much as it has essentially conquered the clean, verifiable tasks; it will just take somewhat longer.
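
To put rough numbers on "somewhat longer" (both figures below are assumptions for the sake of arithmetic; the doubling time is in the ballpark of METR's reported trend): on a shared exponential curve, a fixed multiplicative gap in time horizons is a fixed calendar lag.

```python
import math

# If messy-task horizons sit on the same exponential as clean, verifiable
# tasks but start lower, the gap is a constant calendar lag.
DOUBLING_MONTHS = 7.0  # assumed horizon doubling time (ballpark of METR's trend)
GAP_FACTOR = 10.0      # assumed: messy horizons currently 10x shorter

lag = math.log2(GAP_FACTOR) * DOUBLING_MONTHS
print(f"messy tasks hit any given horizon ~{lag:.0f} months later")  # ~23 months
```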

One caveat: while Moore's law will continue for at least the next two decades, the very fast compute scale-up driven by allocating an ever-larger share of existing compute to AI will not extend past 2030, and it's very plausible that it slows down as early as 2028. So, conditional on us not reaching TAI by 2030, progress in AI will be slower than in the 2020s (though still decently fast).