I'm the chief scientist at Redwood Research.
Shouldn't a 32% increase in prices only make a modest difference to training FLOP? In particular, see the compute forecast. Between Dec 2026 and Dec 2027, compute increases by roughly an OOM and generally it looks like compute increases by a bit less than 1 OOM per year in the scenario. This implies that a 32% reduction only puts you behind by like 1-2 months.
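A quick back-of-the-envelope check of this (treating a 32% price increase as a fixed budget buying 1/1.32 as much compute, which is roughly the "32% reduction" figure, and assuming ~1 OOM of compute growth per year as in the scenario):

```python
import math

# Assumption: a 32% price increase means a fixed budget buys 1/1.32 as much compute.
compute_factor = 1 / 1.32

# Orders of magnitude of compute lost.
oom_lost = -math.log10(compute_factor)

# Assumption: compute grows ~1 OOM per year, so 1 OOM corresponds to ~12 months.
months_behind = 12 * oom_lost
print(round(oom_lost, 3), round(months_behind, 1))  # ~0.121 OOM, ~1.4 months
```

This lands in the 1-2 month range claimed above.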
Is this an accurate summary:
So, by "recent model progress feels mostly like bullshit" I think you basically just mean "reasoning models didn't improve performance on my application and Claude 3.5/3.6 Sonnet is still best". Is this right?
I don't find this state of affairs that surprising:
The list doesn't exclude Baumol effects, as these are just the implication of:
- Physical bottlenecks and delays prevent growth. Intelligence only goes so far.
- Regulatory and social bottlenecks prevent growth this fast; INT only goes so far.
Like Baumol effects are just some area of the economy with more limited growth bottlenecking the rest of the economy. So, we might as well just directly name the bottleneck.
Your argument seems to imply you think there might be some other bottleneck like:
- There will be some cognitive labor sector of the economy which AIs can't do.
But, this is just a special case of "will there be superintelligence which exceeds human cognitive performance in all domains".
In other words, it stipulates what Vollrath (in the first quote below) calls "[the] truly unbelievable assumption that [AI] can innovate precisely equally across every product in existence." Of course, if you do assume this "truly unbelievable" thing, then you don't get Baumol effects – but this would be a striking difference from what has happened in every historical automation wave, and also just sort of prima facie bizarre.
Huh? It doesn't require equal innovation across all products, it just requires that the bottlenecking sectors have sufficiently high innovation/growth that the overall economy can grow. Sufficient innovation in all potentially bottlenecking sectors != equal innovation.
Suppose world population was 100,000x higher, but these additional people magically didn't consume anything or need office space. I think this would result in very fast economic growth due to advancing all sectors simultaneously. Imagining population increases like this seems to me to set a lower bound on the implications of highly advanced AI (and robotics).
As far as I can tell, this Baumol effect argument is equally good at predicting that 3% or 10% growth rates are impossible from the perspective of people in agricultural societies with much lower growth rates.
So, I think you have to be quantitative and argue about the exact scale of the bottleneck and why it will prevent some rate of progress. The true physical limits (doubling time on the order of days or less, a Dyson sphere, or even consuming the solar mass faster than this) are extremely high, so this can't be the bottleneck - it must be something about the rate of innovation or physical capital accumulation leading up to the true limits.
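For a sense of scale, here's what a doubling time near those physical limits would imply for annual growth (the 10-day figure is an illustrative number within the "days or less" range above, not a claim from the original comment):

```python
# Assumption: physical limits permit a capital doubling time of ~10 days
# (an illustrative value within the "order of days or less" range).
doubling_days = 10
annual_growth_factor = 2 ** (365 / doubling_days)
print(f"{annual_growth_factor:.1e}")  # ~1e11x per year, dwarfing 3% or 10% growth
```

So whatever blocks explosive growth, it isn't the raw physics of replication; it has to be the innovation or capital-accumulation path toward those limits.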
Perhaps your view is: "Sure, we'll quickly have a Dyson sphere and ungodly amounts of compute, but this won't really result in explosive GDP growth as GDP will be limited by sectors that directly interface with humans like education (presumably for fun?) or services where the limits are much lower." But, this isn't a crux for the vast majority of arguments which depend on the potential for explosive growth!
Seems like if it thinks there's a 5% chance humans explored X, but (if not, then) exploring X would force it to give up its current values
This is true for any given X, but if there are many things like X which are independently 5% likely to be explored, the model is in trouble.
Like the model only needs to be somewhat confident that it's exploring everything the humans explored in the imitation data, but for any given case of choosing not to explore some behavior it needs to be very confident.
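The asymmetry can be illustrated numerically (the count of 20 behaviors is a made-up number for illustration, not from the original discussion):

```python
# Assumption: there are n behaviors like X, each independently 5% likely
# to have been explored by the humans in the imitation data.
n = 20
p_explored = 0.05

# Chance the model gets away with skipping any single behavior X.
p_safe_single = 1 - p_explored  # 95%: skipping one X is probably fine

# Chance it gets away with skipping all n of them.
p_safe_all = p_safe_single ** n
print(round(p_safe_all, 2))  # ~0.36: skipping all of them likely gets caught
```

So per-case 5% risks compound: the model can tolerate skipping a rare behavior here and there, but systematically declining to explore is likely to diverge from the imitation data somewhere.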
I don't have the exact model very precisely worked out in my head and I might be explaining this in a somewhat confusing way, sorry.
Sure, but note that the story "tariffs -> recession -> less AI investment" doesn't particularly depend on GPU tariffs!