The projected revenues are provisional, but so is much of the capex. About 70% of a datacenter's cost is compute equipment, which is the last thing to be installed. Permitting, power, and to some extent buildings and infrastructure need to start much further in advance, sized for the eventual scale, which is why that scale gets signaled long before most of its capex needs to materialize. Only in the final year before completion does the bulk of the capex become real, even though concrete projections might appear several years earlier.
If a project is abandoned a year before completion, only 5-20% of its capex will have been spent, and given the uncertainty, many such projects will defer everything that can still predictably be built on schedule to the last possible moment. If the project fails (or ends up downscaled), the infrastructure already prepared by that point can sit for a few years waiting for the project to resume in some form without losing much of its value, much as some AI datacenters today are being built at sites originally prepared for cryptomining.
Positing 50% margins might matter for AI company valuations, but a "struggling" AI company that undershot its $80bn revenue target and is only making $50bn would still be able to meet the $40bn-per-year obligations on the 4 GW of compute it ordered two years prior. Of the roughly $200bn in total capex for that 4 GW, about $135bn was spent on compute equipment starting a year before delivery by its cloud partner, which looked at the AI company's finances and judged it likely to clear the $40bn bar it cares about. Another $40bn went to buildings and infrastructure starting 1.5 years before delivery, spent by its datacenter real estate partner after a similar look at those finances. Perhaps only $25bn of the $200bn was spent on preliminary steps based mostly on the AI company's decision two years earlier, and even that is not obviously wasted if the project fails to materialize on schedule.
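To make the timing and exposure explicit, here is a minimal sketch of the scenario above; the phase split, the timings, and the $50bn/$40bn affordability check are all the illustrative figures from this example, not data:

```python
# Illustrative capex breakdown for the hypothetical 4 GW project above.
# All figures ($bn) and timings are this example's own assumptions.

total_capex = 200

phases = [
    # (phase, spend in $bn, spending starts N years before delivery)
    ("preliminary steps (permitting, power, land)", 25, 2.0),
    ("buildings and infrastructure",                40, 1.5),
    ("compute equipment",                          135, 1.0),
]

assert sum(spend for _, spend, _ in phases) == total_capex

# Share of capex whose spending has started by each decision point.
for _, _, starts in phases:
    committed = sum(s for _, s, t in phases if t >= starts)
    print(f"{starts}y before delivery: {committed / total_capex:.1%} underway")
# 2.0y before delivery: 12.5% underway  (preliminary only, cheap to cancel)
# 1.5y before delivery: 32.5% underway  (buildings have started)
# 1.0y before delivery: 100.0% underway (capex is now mostly real)

# The affordability check the partners care about: even the "struggling"
# AI company's $50bn revenue clears its $40bn/yr compute obligation.
revenue, obligation = 50, 40
print(revenue >= obligation)  # True
```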
And two years ago, what was announced was not the 4 GW project spanning two years but a 15 GW, $750bn project spanning five years, further muddying the waters. The real estate and infrastructure partners felt they could use the time to prepare, the cloud partners felt they could safely downscale the project in three years if it looked like it wasn't going to work out in full, and the AI company felt the buzz from the press-release economy could improve the outcome of its next funding round.
The US economy is all-in on AI, with data center capex responsible for 92% of US GDP growth in the first half of 2025. OpenAI alone is expected to spend $1.4T on infrastructure over the next few years, and stirred controversy on X when questioned about its revenue-capex mismatch. So, can AI deliver the ROI required to justify the trillions in capex being put into the ground, or are we doomed to repeat the telecom bubble?
The industry is operating in a fog of war. On one hand, AI labs like OpenAI and the hyperscalers all cite compute shortages as a constraint on growth, and Nvidia continues to crush earnings. On the other, actual AI revenues in 2025 are an order of magnitude smaller than capex. This post attempts to dissect this fog of war and covers the following:
Let’s dive in.
To understand the current state of AI euphoria, it’s helpful to go all the way back to 2022, when the software industry was facing significant headwinds on the back of rising interest rates. The SaaS crash was short-lived, as the launch of ChatGPT in late 2022 rapidly reoriented investor interest around AI. The dominant narrative was that AI would replace headcount, not software budgets, unlocking trillions in net new revenues. There were also technical trends that drove demand for compute: the rise of reasoning models, AI agents, and post-training. As a result, we saw capex rapidly rise in the second half of 2024.
Fast forward to the present day: AI revenues have indeed exploded. By the end of 2025, OpenAI expects to reach $20B in annualized revenues while Anthropic is expected to reach $9B. But cracks are beginning to appear: even the most dominant technology companies are seeing their cash balances shrink under AI capex, requiring them to tap the debt markets.
Are we in bubble territory? My mental model is to look at application-layer AI spend, AI cloud economics, and hyperscalers' own workloads to answer several key questions.
At the app layer, while apps like ChatGPT are delivering real value, a material portion of AI spend is still experimental. The unintended effect is that AI revenues get artificially inflated, introducing systemic risk for cloud vendors as they deploy capex against potentially phantom demand. We see signs of this across both the private and public markets.
In startup-land, companies serving use cases beyond chatbots and coding are starting to reach scale (the Information estimates that aggregate AI startup revenues are ~$30B as of December 2025). But the quality of these revenues is unclear. Taking enterprise software as an example, AI spend can partially be explained by demand pull-forward, as CIOs scramble to implement AI (e.g. AI agents for various use cases) and pilot products from multiple vendors. Already, there are concerns that these deployments are failing to deliver value. After this initial experimentation phase, customers will narrow down their list of vendors anyway. My intuition here is that many enterprise use cases require several iterations of product development and heavy customization to deliver value, slowing the pace of AI diffusion.
Another major risk is the rapid decline in dry powder (capital available to be invested in startups), as startups that fail to raise new capital will also cut their cloud spend. Given how much capital is deployed to AI startups, VC has an impact on potentially $50B+ of cloud spend. Early-stage startups will be hit hardest, as firms that have traditionally written checks into seed / series A startups are already having trouble fundraising. I expect the bulk of the remaining dry powder to concentrate in the handful of AI labs that have reached escape velocity, as well as "neolabs" (SSI, Thinking Machines, Richard Socher's lab, etc.).
Moving on to the public markets, we see a re-acceleration and subsequent re-rating of AI "picks-and-shovels" names that provide building blocks like databases, networking, and observability (companies in black). These companies sell to established companies and hypergrowth startups (e.g. Datadog selling to OpenAI) and have benefited from the current AI tailwinds. They will be impacted if AI application revenue growth stalls or declines.
In terms of SaaS pure-plays (companies in red), there are two ways we can look at things: the current narrative around SaaS and the actual AI revenues these companies are generating. From a narrative perspective, the consensus is that SaaS companies derive some proportion of their value from complex UIs and risk being relegated to “dumb databases” if AI performs tasks end-to-end. Customers are also experimenting with building software that they’ve traditionally paid for (e.g. building a CRM vs. paying for Salesforce). While some apps can be replaced, many companies are not accounting for the “fully-loaded” costs of running production software in-house (e.g. continuous upgrades, incident response, security).
As a result, even companies with strong growth rates, like Figma, have seen their stock prices collapse. Revenue multiples for the vast majority of public software companies have now retreated to their historical medians (see below). One of the few counterexamples is Palantir, which is trading at higher multiples than OpenAI! My view is that investors are conflating two things: the threat of AI, and the natural saturation of SaaS (the plateauing of growth as certain categories of SaaS reach peak market penetration).
The second observation is that SaaS pure-plays are seeing a moderate growth re-acceleration due to AI:
What this tells us is that AI revenues are growing, but only currently make up a small portion of these companies’ overall revenues. Across public companies and startups, back-of-the-envelope estimates suggest that aggregate AI revenues are in the mid-tens of billions of dollars.
This leads us to the next question: what do AI revenues have to grow to in order to justify the trillions in capex that are coming online in the next few years (2027/2028)? We can get a sense of this mismatch from the chart below, which is a highly conservative model of the required lifetime revenues just to pay back cloud capex.
If we make things a bit more realistic:
Based on the calculations above, we’d need to fill a $1.5T revenue hole by 2028! To put things in perspective, the global SaaS market in 2024 was only ~$400B. The bet at this point is that broad swathes of white collar work will be automated. Even if OpenAI and Anthropic hit their projections in 2028 ($100B for OpenAI, $70B for Anthropic) and AI software collectively generates an additional ~$300B in revenues, we’re still $1T short! Said another way, if AI revenues are ~$50B now, we’d need to grow AI revenues by ~30x by 2028!
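As a sanity check, here is the shortfall arithmetic spelled out; every input is one of the estimates quoted above:

```python
# The $1.5T revenue hole, spelled out. All inputs ($bn) are the
# estimates quoted above; this just makes the arithmetic explicit.

required_by_2028 = 1_500      # revenues needed to justify the capex

openai_2028     = 100         # OpenAI's 2028 projection
anthropic_2028  = 70          # Anthropic's 2028 projection
ai_software     = 300         # assumed aggregate AI software revenues

shortfall = required_by_2028 - (openai_2028 + anthropic_2028 + ai_software)
print(f"shortfall: ~${shortfall}bn")                 # ~$1T short

current_ai_revenues = 50      # rough aggregate AI revenues today
print(f"implied growth: ~{required_by_2028 / current_ai_revenues:.0f}x")  # ~30x
```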
Furthermore, only a subset of these revenues actually "flows through" to cloud vendors (in blue).
This is how to understand the flow of revenues:
If application companies use AI labs' APIs, their spend won't flow directly to cloud vendors; the AI labs pay the cloud hyperscalers as part of their COGS. If there's a mass migration to open-source models, then AI labs won't hit their revenue projections. Both scenarios imply that headline AI revenues must exceed $1.5T to account for some of this "double counting".
Now, even if AI revenues catch up, is AI cloud even a good business model? Here, the core concern is that AI hardware becomes obsolete faster than traditional data center hardware. To understand the risks that AI clouds face, it's helpful to understand what made traditional clouds profitable:
We can see how profitable a pre-ChatGPT era data center was using a hypothetical $5B data center build, with the following assumptions:
Our hypothetical model shows a reasonably attractive ~15% unlevered IRR and ~1.5x cash-on-cash returns. Hyperscalers can juice these returns with leverage given their access to cheap debt. Note also that cashflows "pay back" capex as well as ancillary costs (e.g. R&D) in ~3 years, and FCF from the "out years" is pure profit.
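A minimal sketch of that return math: the $5B build, ~15% IRR, ~1.5x cash-on-cash, and ~3-year payback are the figures above, while the flat $1.5B/yr over five years is a cash-flow shape I picked to reproduce them, not the model's actual assumptions:

```python
# Sketch of the traditional-cloud return math under an assumed flat
# free-cash-flow profile (the cash-flow shape is illustrative).

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return via bisection on NPV."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:   # NPV falls as the rate rises for this profile
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

capex = 5.0                       # $B, upfront build cost
fcf = [1.5] * 5                   # $B/yr, assumed flat free cash flow
flows = [-capex] + fcf            # year 0 outflow, years 1-5 inflows

print(f"unlevered IRR: {irr(flows):.1%}")             # ~15.2%
print(f"cash-on-cash:  {sum(fcf) / capex:.2f}x")      # 1.50x
print(f"payback:       ~{capex / fcf[0]:.1f} years")  # ~3.3 years
```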
AI data centers are very different. First, the cadence at which Nvidia releases new architectures means that GPU performance has been improving far faster than CPU performance. Since 2020, Nvidia has released a major new architecture every ~2 years.
As a result, older GPUs rapidly drop in price (see below). The per-hour rental price of the A100, released in 2020, is a fraction of that of the recently released Blackwell (B200). So, while the contribution margin of the A100 is high, the rapid decline in price per hour means that IRR ends up looking unattractive. Furthermore, a material portion of the inference compute online now dates from the 2020-2023 era, when capex was at the ~$50-100B scale. Now that we're doing $400-600B a year in capex, there is a real risk that Mr. Market is overbuilding.
The other risk is the unrealistic depreciation schedules applied to AI hardware. Most cloud vendors now depreciate their equipment over 6 years instead of 4 or 5. That made sense in the old regime, when there was no real need to replace CPUs or networking equipment every 4 years. In the AI era, 6 years spans 3 major generations of Nvidia GPUs! If Nvidia maintains its current pace of improvement and more supply comes online, serving compute from older GPUs may well become uneconomical, which is why clouds push for long-duration contracts.
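To illustrate the mismatch, here is a toy comparison of a 6-year straight-line book value against a hypothetical market value that decays 30% a year; the decay rate is invented for illustration (in the spirit of the A100-vs-B200 rental gap), not measured data:

```python
# Toy comparison: 6-year straight-line book value vs. a hypothetical
# market value decaying 30%/yr (an assumed, not sourced, rate).

cost, life, decay = 100.0, 6, 0.30

for year in range(life + 1):
    book = cost * (1 - year / life)      # what the income statement assumes
    market = cost * (1 - decay) ** year  # what the rental market might pay
    print(f"year {year}: book ${book:5.1f}  market ${market:5.1f}")

# year 3: book $50.0 vs. market $34.3 -> earnings are flattered in the
# early years, exactly when capex is largest. At Nvidia's ~2-year release
# cadence, a 6-year schedule also spans 3 GPU generations.
```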
Now, if there’s truly significant demand (Google has stated that it needs to double compute 2x every 6 months), then older GPUs will likely be fully utilized and generate good economics. If the counterfactual is true, then clouds without strong balance sheets risk being wiped out.
So far, we've established that app-layer AI spend is unlikely to generate enough inference demand, and that AI clouds may be a poor business model. This means the only remaining buyers of compute are the Magnificent Seven (Alphabet, Meta, Microsoft, and Amazon in particular) and the AI labs (specifically their training workloads, since we've already accounted for their inference workloads). We can segment their workloads into three buckets:
In terms of net new product experiences, Google serves as a good example. In October, the search giant announced that it reached 1.3 quadrillion tokens processed monthly across its surfaces (which is purposefully vague marketing). We can estimate what revenue this translates to if Google “bought” this inference on the open market via the following assumptions:
This results in $17B in cloud revenues from serving AI across Google’s entire product surface, and is likely an overestimation because a portion of these “tokens processed” comes from things like the Gemini API / chatbot (so it’s already accounted for on Alphabet’s income statement).
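For concreteness, here is the token math; the 1.3 quadrillion tokens/month is Google's figure, while the ~$1.10 blended price per million tokens is a hypothetical rate chosen so the output matches the ~$17B estimate above, not a quoted price:

```python
# Reproducing the ~$17B token-revenue estimate with an assumed price.

tokens_per_month = 1.3e15        # 1.3 quadrillion tokens, per Google
price_per_m_tok = 1.10           # $ per million tokens (assumed blend)

annual_million_tokens = tokens_per_month * 12 / 1e6
annual_cost = annual_million_tokens * price_per_m_tok

print(f"~${annual_cost / 1e9:.0f}B / year")  # ~$17B
```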
Moving on to lift-and-shift workloads, Meta's revenue re-acceleration in Q3 (26% YoY) serves as a case study of how AI can be used to generate better recommendations and serve more precise ads. This resulted in a few billion dollars of incremental revenues. We're likely still in the early innings of these lift-and-shift workloads (e.g. LLMs in advertising), so I expect revenues to continue growing until the low-hanging fruit is picked.
Finally, we have R&D compute costs. By 2028, OpenAI's R&D compute spend is expected to be ~$40B. Assuming the major players (e.g. Alphabet, Meta, OpenAI, Anthropic) spend similar amounts, a fair upper limit on total R&D compute is ~$200-300B.
Now, it's difficult to predict how much will be spent across these three "buckets", but from a game-theory perspective, companies will spend whatever it takes to defend their empires and grow market share. More importantly, dollars spent on internal workloads reduce the AI app revenues required. Assuming app-layer companies earn 60% margins, only 40 cents of each app-revenue dollar reaches the clouds, while internal workloads delivered "at cost" contribute dollar for dollar, so each dollar of internal workload substitutes for $2.50 of AI app revenues. For example, if internal workloads total only $200B, then AI app revenues need to hit $1T. But if internal workloads hit $500B, then app revenues only need to reach $250B! This means the AI bubble question hinges on the spending habits of ~5 companies!
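The substitution math made explicit; the 60% margin and the $1.5T requirement are the assumptions stated above:

```python
# Internal workloads vs. required app revenues (all figures in $bn).

required_app_revenue = 1_500   # hole if clouds rely on app revenues alone
app_margin = 0.60              # app layer keeps 60c of each revenue dollar
flow_through = 1 - app_margin  # 40c of each app dollar reaches the clouds

# Internal workloads run "at cost": $1 internal = $1 of cloud revenue,
# which would otherwise take $1 / 0.4 = $2.50 of app revenue to deliver.
substitution = 1 / flow_through
print(substitution)  # 2.5

for internal in (200, 500):
    app_needed = required_app_revenue - internal * substitution
    print(f"internal ${internal}B -> app revenues needed: ${app_needed:.0f}B")
# internal $200B -> app revenues needed: $1000B
# internal $500B -> app revenues needed: $250B
```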
My read from this exercise is that there’s likely a moderate overbuild, but not to the extent that alarmists are suggesting. And despite this fog of war, there are actionable insights for investors.
Public markets:
Startups:
Ultimately, near-term sentiment will hinge on whether OpenAI and Anthropic can sustain their revenue trajectories, as their offerings remain the clearest barometers of AI adoption. But over a longer horizon, the internal workloads of the Magnificent Seven and leading labs, not app-layer revenues, will determine whether today’s capex supercycle proves rational. In this environment, maintaining liquidity is of paramount importance. And if an AI winter does arrive, the dislocation will set the stage for generating exceptional returns.
Huge thanks to Will Lee, John Wu, Homan Yuen, Yash Tulsani, Maged Ahmed, Wai Wu, David Qian, and Andrew Tan for feedback on this article. If you want to chat about all things investing, I’m around on LinkedIn and Twitter!