I have tried looking at this from the perspective that we have had AGI since 2022, with ChatGPT. Creating ChatGPT didn't require an ecosystem, did it? Just a well-resourced nonprofit/startup with good researchers.
By that reckoning, we've already had AGI for 2.5 years. We are still in a period of relative parity between humans and AI: we're still different enough, and it is still weak enough, that humans have the upper hand, and we're focused on exploring all possible variations on the theme of AI and the human-AI relationship.
The real question is when and how it will escape human control. That will be the real sign that we have "achieved" ASI. Will that result from an ecosystem and not a single firm?
There seems to be an assumption that ASI will be achieved by continuing to scale up. All these analyses revolve around the economics of ever larger data centers and training runs, whether there's enough data and whether synthetic data is a good enough substitute, and so on.
But surely that is just the dumbest way to achieve ASI. I'm sure some actors will keep pursuing that path. But we're also now in a world of aggressive experimentation with AI on every front. I have been saying that the birthplace of ASI will not be ever-larger foundation models, but rather research swarms of existing and very-near-future AIs. Think of something that combines China's Absolute Zero Reasoner with the darwinism of Google DeepMind's AlphaEvolve. Once they figure out how to implement John-von-Neumann-level intelligence in a box, I think we're done - and I don't think the economics of that requires an ecosystem, though it may be carried out by a Big Tech company that has an ecosystem (such as Google).
Common to many models of AGI development is the view that AGI (and a possible artificial superintelligence) is developed by a single firm, with decision-making power centralised in a small number of stakeholders. We argue here that this view is misleading even in scenarios where AI progress occurs rapidly and discontinuously via an intelligence explosion, for two key reasons: labs do not exist in isolation, and monetisation is far more of a challenge than assumed.
The AI 2027 scenario is the best and most fleshed-out case for an intelligence explosion dynamic, providing concrete forecasts and arguments that this piece will look to question. AI 2027 features few economic constraints, with the leading lab understood to have effectively unlimited access to capital and rapid revenue growth. We argue that economics will likely act as an important brake on the pace of AI progress. Developing an AGI will require an ecosystem of actors providing capital, supply chains and end-demand. This ecosystem will shape the leading lab's decision making, likely pushing it towards a greater focus on monetisation and competition in the market, requiring more compute to be dedicated to external needs and reducing the potential for internal recursive development.
Labs don’t exist in isolation
AI 2027 largely assumes that all the key decisions by the leading AI lab are taken by its executive team, with little external input into the process. Once models progress to significantly greater capabilities through 2027, an increasing role is played by the US executive and national security state, but otherwise the key decision makers remain the lab executives. The board of OpenAI is understood to have entirely lost control of the firm, with little visibility over internal developments.
To an extent this makes sense considering the very short timelines within the scenario. The nature of bureaucratic corporate structures means that the lab CEO, if they desired to, could act as a fairly free agent versus their various principals. Still, we'd argue that the AI 2027 scenario takes this too far.
At no point in the AI 2027 scenario is the leading firm profitable; rather, its revenue is quickly sunk into further compute expenditure to justify the many hundreds of billions being spent on capex to meet these compute needs. Throughout the period of the scenario OpenAI would need to be continually raising more capital to fund this expansion. This would require, at the least, engagement with its existing investors, but likely also bringing in new investors and further tapping debt markets. Given the rate of capabilities advance it would be unlikely to struggle to access this capital, but the process would remain an important constraint on the time and focus of the executive team, and an important conduit for external insight into the firm's activities and decision making.
The scale of the compute infrastructure build-out would also require engagement with other partners within this ecosystem. This access to compute is likely to be both more significant and more difficult than the access to capital. Both OpenAI and Anthropic would remain strongly bound to their hyperscale compute providers, even in worlds where they increasingly self-build, as OpenAI is trying to achieve with Stargate. Self-building means cutting out one partner with significant leverage over you (Microsoft), but it requires a new, larger equity partner (SoftBank) and new suppliers further down the supply chain (Oracle for servers, TSMC for in-house chips, etc.).
In AI 2027 the only actor capable of slowing or stopping the leading lab is understood to be the US executive. In reality the leading lab would need either to lean on a hyperscale compute partner to take on the capex cost, or to bring in far more capital to fund the build itself. In either case these players would be additional stakeholders able to influence the leading lab's development and deployment decisions. Or it could be that Google is the leading lab, in which case its public listing would become a further constraint limiting its ability to unilaterally decide and execute its chosen strategy.
While the OpenAI boardroom drama demonstrated how boards can be fairly impotent principals versus a more cunning agent, it also showed the importance of external players. Sam Altman's actions would not have been possible without the support of Microsoft. Even given a large capabilities advance and a clear lead, a firm like OpenAI would not be able to shed its various economic dependencies on its compute and capital providers.
Monetisation is far more of a challenge than assumed
AI 2027 relies on work done by FutureSearch on how feasible it would be for OpenAI to scale to $100bn ARR by mid-2027. FutureSearch forecast the scale and composition of OpenAI's revenue, compare OpenAI's time to $100bn against that of previous tech companies, and find that it is in line with the trend. In doing so, FutureSearch largely assumes that OpenAI functions like these previous tech platform companies, and likely misses some important features that make AI monetisation different.
ByteDance was able to scale from $1bn to $100bn in only six years because it was scaling a viral consumer platform with very little marginal cost and a lucrative advertising model. The same is not true for OpenAI. Its consumer business has much more meaningful marginal costs and a far less lucrative model. While OpenAI is planning to work towards monetising free users, almost certainly through some variety of advertising, for the time being they are simply a cost, with OpenAI needing users to upgrade to its $20-a-month subscription.
FutureSearch compare OpenAI's subscription business to Netflix to sense-check the scale it could achieve by mid-2027. This is reasonable, but it offers another example of how poor OpenAI's economics are: whereas Netflix, upon reaching a large enough scale, was able to keep its content spend fixed and reach much higher levels of profitability, the compute cost of serving each ChatGPT subscription means this will never be true to the same extent for OpenAI.
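A minimal sketch of that difference in operating leverage, using purely illustrative numbers (the prices, content spend and per-user compute cost below are assumptions for illustration, not reported figures):

```python
# Toy unit-economics sketch contrasting a fixed-content-cost business
# (Netflix-like) with a per-user compute-cost business (ChatGPT-sub-like).
# All figures are illustrative assumptions, not reported numbers.

def margin_fixed_cost(subscribers, price_per_month, annual_content_spend):
    """Gross margin when the dominant cost is fixed regardless of scale."""
    revenue = subscribers * price_per_month * 12
    return (revenue - annual_content_spend) / revenue

def margin_per_user_cost(subscribers, price_per_month, compute_per_user_month):
    """Gross margin when every subscriber carries an ongoing compute cost."""
    revenue = subscribers * price_per_month * 12
    cost = subscribers * compute_per_user_month * 12
    return (revenue - cost) / revenue

for subs in (50e6, 100e6, 200e6):
    print(f"{subs / 1e6:.0f}m subs | "
          f"fixed-cost margin: {margin_fixed_cost(subs, 15, 17e9):+.0%} | "
          f"per-user-cost margin: {margin_per_user_cost(subs, 20, 10):+.0%}")
```

Doubling subscribers steadily improves the fixed-cost margin, while the per-user-cost margin stays flat at whatever the ratio of price to compute cost allows.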
The FutureSearch model also opts to disregard API revenue, and instead assumes the bulk of OpenAI's revenue will come from products that directly automate work. They argue that API revenue, because it depends on the growth of firms building on OpenAI's models, is less certain, and that it is unclear how much of a competitive moat it has against open-source models or other closed-source developers.
No such competition is assumed, however, for the automating agents that OpenAI creates, with OpenAI able to charge $20,000 a month(!) for vaguely defined R&D research agents, $10,000 for software engineers, and $2,000 for knowledge workers. In the scenario OpenAI is leading, but other labs are less than a year behind, and OpenAI is often prioritising its best models for internal use rather than actively looking to commercialise them. A $10,000-a-month agent annualises to roughly the average salary of a US software engineer, and OpenAI seems highly unlikely to be able to sustain that as its pricing point, with competition pushing it down significantly.
They also model OpenAI's Enterprise revenue as separate from these agents that replace workers, though it is not clear where the boundary between the two would sit for categories such as knowledge workers. Why would there be such a gulf between a $50-a-month ChatGPT Enterprise seat and a $2,000-a-month replacement knowledge worker? Presumably growth in replacement knowledge workers would partly cannibalise spending on ChatGPT Enterprise, as the keen adopters who had been driving that revenue would be the first to switch up to more powerful agents.
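A rough sketch of how sensitive an agent-revenue line of this kind is to those two effects together, price compression from fast-following rivals and cannibalisation of Enterprise seats. Every volume, price and rate here is an illustrative assumption, not a figure from the FutureSearch model:

```python
# Toy sketch of how competition and cannibalisation could compress an
# agent revenue line. Every volume, price and rate below is an
# illustrative assumption, not a forecast.

def annual_revenue(agents_sold, agent_list_price, price_compression,
                   enterprise_seats, enterprise_price, cannibalisation):
    """Annual revenue once rivals compress agent pricing and agent
    adoption cannibalises existing Enterprise seats."""
    agent_rev = agents_sold * agent_list_price * (1 - price_compression) * 12
    enterprise_rev = enterprise_seats * (1 - cannibalisation) * enterprise_price * 12
    return agent_rev + enterprise_rev

# Headline case: 1m software-engineer agents at full list price, no
# cannibalisation of 5m Enterprise seats.
print(annual_revenue(1e6, 10_000, 0.0, 5e6, 50, 0.0) / 1e9)    # ~123 ($bn)

# Rivals <1 year behind push agent prices down ~60%, and a third of
# Enterprise seats switch to agents rather than adding to them.
print(annual_revenue(1e6, 10_000, 0.6, 5e6, 50, 1 / 3) / 1e9)  # ~50 ($bn)
```

Under these assumptions the headline figure more than halves, which is the kind of sensitivity the FutureSearch comparison set does not capture.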
Economics is an important brake on AI 2027-style scenarios
AI 2027 largely assumes that future markets for AI will look like B2B SaaS for cognitive labour: OpenAI could rapidly scale its profitability by selling more advanced chatbots and standardised automation agents. The reality is likely to be much more challenging, with model-making companies competing fiercely and margin being a product of either traditional tech moats (large-scale consumer/enterprise platforms) or an ability to effectively monetise the frontier of intelligence. The latter is very possible, but far harder to do. It requires an ecosystem of firms above the model layer that can mould and shape that intelligence to fit existing organisations and processes, and extract real value in the economy.
It is that ecosystem that can transform the economy. Jack Wiseman in his review of AI 2027 explains very well the significance of understanding these market dynamics:
Overall research output is most sensitive to growth in R&D compute, because of its effects on experimental throughput. But the authors’ expectations for R&D compute budgets are downstream of ungrounded expectations for automation and revenue. With more grounded expectations for automation, R&D budgets would be lower, and so research output would be less, so capabilities progress more slowly, so automation happens at a more reasonable pace.
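A toy illustration of the compounding effect Wiseman describes, in which each year's R&D compute budget is set by that year's revenue. The coefficients and starting values are illustrative assumptions, not calibrated to AI 2027 or the FutureSearch forecasts:

```python
# Toy model of the loop: revenue funds R&D compute, compute drives
# research output and capability, capability drives automation revenue.
# All coefficients and starting values are illustrative assumptions.

def simulate(years, compute_share, progress_per_compute, revenue_per_capability):
    """Compound capability and revenue when each year's R&D compute
    budget is a fixed share of that year's revenue."""
    revenue, capability = 10.0, 1.0  # arbitrary starting units ($bn, index)
    for _ in range(years):
        rd_compute = compute_share * revenue              # budget set by revenue
        capability += progress_per_compute * rd_compute   # research output from compute
        revenue = revenue_per_capability * capability     # automation revenue from capability
    return round(revenue, 1), round(capability, 1)

# Optimistic monetisation: each unit of capability earns a lot, so
# compute budgets and capabilities compound explosively.
print(simulate(4, 0.5, 0.2, revenue_per_capability=20))  # (1080.0, 54.0)

# Grounded monetisation: same research efficiency, but lower revenue per
# unit of capability means smaller budgets and much slower progress.
print(simulate(4, 0.5, 0.2, revenue_per_capability=5))   # (33.8, 6.8)
```

With more grounded revenue per unit of capability, the same research efficiency compounds far more slowly, which is exactly the brake the quote describes.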
AI systems will keep getting more powerful, and will be capable of accelerating AI R&D, but this process will be limited by the need to fund and monetise these models. Some of Sam Altman's time will be dedicated to efforts to speed up internal R&D, but much will go to convincing a jumpy SoftBank to pony up another $10bn, or TSMC to build out a new advanced packaging plant. The CEO of Accenture will want a call before committing some huge new sum to roll out OpenAI's newest agent; the California AG will have a new question about appointments to the non-profit board.
The companies developing these powerful AI systems are still ‘normal’ companies. In meaningful ways this limits their ability to continuously concentrate more and more power within themselves.
After the development of artificial superintelligence this equation increasingly breaks down. But prior to that point these economic realities will provide an important brake on the rate of AI progress, and crucial policy mechanisms for steering decision making at labs away from recursive self-improvement (RSI) strategies.