AI improvements to date may have come from picking low-hanging fruit. It can’t do math reliably? Let it use a calculator. It improves with more parameters? Scale it up and see if it helps even more.
These improvements rely on the availability of significant, well-defined problems with concrete solutions.
As these tractable problems are solved, vendors may find that the issues users report are increasingly hard to define, lack clear solutions, or have only solutions that entail significant tradeoffs.
The rush by companies to deploy massive capex on further scaling, even as the model-training horse race destroys profitability, supports this hypothesis. If they had cheap alternative ways to improve, they’d be prioritizing those. They may be out of other ideas and hoping that spending $ will lead to a magical, mysterious AI-guided ultimate victory. Hence they call it a “scaling law” rather than a “recently observed historical trend.”
Supply is elastic. Companies may also discover new concrete, cheap dimensions of improvement as they scale, as AI adoption continues, and as production practices are reoriented around AI and the anticipation of AI improvements.
But we do see true asymptotes in other industries. In transport, drivers and airline passengers are not getting around appreciably faster now than they were in 1980. The most we hope for is a return to supersonic flight.
Food is much better than it was in the past, but nobody is expecting that food will eventually be ten times tastier and more nutritious than it is now due to innovation in food technology.
Shipping was revolutionized by containers and improved by guidance technologies, but again, nobody is anticipating dramatic improvements in that sector either.
Video games have improved, but it seems clear that the obvious dimensions of improvement have been mostly explored, and that innovations now appeal primarily to niche interests, enabled by game markets and engines that serve those niches.