New frontier models trained with ~10x more compute than GPT-4 (like Grok) haven’t wowed enough to justify spending another 10x more—~$1B—on pretraining.
Since GPT-4, some 2024 models (trained on 2023 compute) already used more compute than the original GPT-4, so the current generation is only a 3x-5x step up from those. But from each 3x-5x step yielding only a slight improvement, it doesn't follow that stacking several such steps yields only a slight improvement. In total, between 2022 and 2028 it's technologically, and in principle financially, feasible to scale compute by about 2000x (starting from the original GPT-4), and observing a single 3x-5x step gives little basis for narrowly estimating how much capabilities improve across that whole range.
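To make the compounding concrete, here's a minimal sketch (the step sizes and step count are illustrative, not claims about any particular model) of how a few 3x-5x steps multiply:

```python
# Compounding compute scale-ups: a single 3x-5x step looks modest,
# but stacking several such steps multiplies into a much larger total.
# Step sizes and counts below are illustrative assumptions.
def total_scaleup(step_factor: float, num_steps: int) -> float:
    """Overall compute multiplier after `num_steps` stacked scale-ups."""
    return step_factor ** num_steps

# Five stacked steps of 3x, and of ~4.6x:
print(f"{total_scaleup(3, 5):.0f}x")    # 243x
print(f"{total_scaleup(4.6, 5):.0f}x")  # 2060x, roughly the 2000x above
```

The point is only that the exponent dominates: judging the whole range from one 3x-5x step is like judging five doublings from one.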
Also, the full potential of 2024 compute hasn't yet been demonstrated in 2025 AIs, since the base models for o3, Gemini 2.5 Pro, and Grok 3 are likely smaller than compute optimal. The models probably closer to compute optimal are GPT-4.5 and Opus 4, but a thinking variant of GPT-4.5 hasn't been released yet, and Opus 4 has plausibly undergone only minimal reasoning training in order to be ready for release sooner, in which case we'll only see it with an amount of reasoning training comparable to o3 or Gemini 2.5 Pro in later incremental releases.
That is, it's plausible that a larger model pretrained on 10x more compute makes it possible to train a reasoning model that is meaningfully more capable, and the same applies to the next 10x, and then the 10x after that.
(Specific figures such as $1bn for a training run are quite misleading, because the companies training the models must first build the training systems at a large upfront capital cost: the 2024 frontier AI training systems already cost $5-7bn, and the 2026 frontier AI training systems currently being built, such as Stargate Abilene, cost $35-45bn, much more than the $1bn suggested by cost-of-time calculations for the previous generation of models.)
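A rough sketch of where such cost-of-time figures come from, and why they understate the real commitment: amortize the cluster's capital cost over an assumed useful life, then charge the run only for the months it occupies the cluster. The amortization period and run length below are illustrative assumptions, not reported figures:

```python
# Cost-of-time estimate for a training run: amortize the training
# system's capital cost over an assumed useful life, then charge the
# run for the months it occupies the cluster.
# Useful life and run duration are illustrative assumptions.
def run_cost(cluster_capex_usd: float, useful_life_years: float,
             run_months: float) -> float:
    monthly_cost = cluster_capex_usd / (useful_life_years * 12)
    return monthly_cost * run_months

# A $6bn 2024-class system, 4-year amortization, 4-month run:
print(f"${run_cost(6e9, 4, 4) / 1e9:.1f}bn")   # $0.5bn

# A $40bn 2026-class system on the same assumed terms:
print(f"${run_cost(40e9, 4, 4) / 1e9:.1f}bn")  # $3.3bn
```

Either way, the company still has to raise and spend the full $5-45bn up front; the per-run figure is an accounting artifact, not the check that gets written.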
Meanwhile hundreds of OpenAI's current and ex-employees sold their stock.
To be fair, this is pretty much always the right strategy when stock vests, for diversification reasons. Current employees likely have significantly more stock that will vest in the future.
Will AI progress slow down? Current frontier AI models cost $100M+ to train, but in the old pre-training scaling-law paradigm this was a one-off cost that scaled well, i.e. it had a low marginal cost per query. This paradigm has now hit diminishing returns: new frontier models trained with ~10x more compute than GPT-4 (like Grok) haven't wowed enough to justify spending another 10x more (~$1B) on pretraining.
And so now we're in a new 'inference scaling paradigm':1 AI firms scale inference by executing long internal chains of thought, then discard the scratch-pad and show the user a short answer. This takes a lot of compute. Case in point: OpenAI just released its latest reasoning model, o3-pro. While it shows modest gains over the o3 base model on reasoning, math, science, and coding benchmarks,2 it often takes ~10 minutes per query and costs $200/month.
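A toy cost model of why this is expensive (the token counts and per-token price are illustrative assumptions, not OpenAI's actual pricing): the user sees a short answer, but the bill covers the long hidden scratch-pad too.

```python
# Inference scaling cost: the user sees a short answer, but compute
# is spent on the hidden chain of thought as well.
# Token counts and per-token price are illustrative assumptions.
def query_cost(visible_tokens: int, hidden_reasoning_tokens: int,
               usd_per_million_tokens: float) -> float:
    total = visible_tokens + hidden_reasoning_tokens
    return total / 1e6 * usd_per_million_tokens

# A 500-token answer with no hidden reasoning, vs. the same answer
# preceded by a 50,000-token scratch-pad, at an assumed $40/M tokens:
print(f"${query_cost(500, 0, 40.0):.2f}")       # $0.02
print(f"${query_cost(500, 50_000, 40.0):.2f}")  # $2.02
```

Unlike a one-off pretraining bill, this cost recurs on every query, which is what makes the economics of the new paradigm so different.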
But I've found a promo code (see below) so you can trial o3-pro with ChatGPT Team for just £1/$1/€1 and see what all the buzz is about! How is this possible? For one, it's a one-month trial with a limited number of queries for each of the 5 accounts you can set up.
More crucially, it’s subsidized by VC money to inflate OpenAI’s user number metrics and to prop up their ~$300B valuation.3
Consider the ever-faster increase in weekly ChatGPT users:
Feb '23: 100M4
Sep '24: 200M5, of which 11.5M paid / Enterprise: 1M6
Feb '25: 400M7, of which 15M paid8 / Enterprise: 2M
Mar '25: 500M, of which 20M paid9,10
May '25: ~1B11 / Enterprise: 3M12
You can see that they converted 11.5M out of the first 200M users but only got 3.5M users out of the more recent 200M to pay for ChatGPT. Your $1 trial adds another five paying enterprise customers to these numbers. And if you use o3 a lot, you also add to the $15B in (mostly compute) costs OpenAI burns through every year—subsidized by VC money.13
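The marginal conversion rates implied by these figures can be checked directly:

```python
# Marginal conversion of new weekly users to paid subscribers:
# the first 200M users yielded 11.5M paid, the next 200M only 3.5M more.
# Figures are the ones cited in the text above.
def conversion_rate(new_paid: float, new_users: float) -> float:
    return new_paid / new_users

# Through Sep '24 vs. the Sep '24 -> Feb '25 increment:
print(f"{conversion_rate(11.5e6, 200e6):.2%}")  # 5.75%
print(f"{conversion_rate(3.5e6, 200e6):.2%}")   # 1.75%
```

Each additional cohort of users converts at roughly a third of the rate of the first one, which is what you'd expect if growth is increasingly coming from users who were never going to pay.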
Is that realistic? Only ~350M iPhone users from richer countries have downloaded the ChatGPT iOS app,14 and many of the other 650M come from poorer countries, where many use ChatGPT via WhatsApp15 or even flip phones,16 and won't pay $20/month (e.g. in 2024, user numbers tripled in India, which is now OpenAI's second-largest market17). Some create multiple free accounts to get around rate limits on messages and image generation (e.g. turning photos into anime), which props up user numbers further.18,19 OpenAI incentivizes this: phone numbers or email addresses are no longer required during sign-up20 (ChatGPT search requires no sign-up at all now).21
Now OpenAI can claim:
While OpenAI uses the standard playbook to boost its valuation (Uber took years to become profitable, and FB, Coinbase,26 etc.27 inflated user numbers; even GoJo does it in a subplot of Succession), AI stocks could crash.28 Manifold users deem this realistic,29 and indeed the market expects AI to create trillions of dollars of value by 2027.30 Meanwhile, hundreds of OpenAI's current and ex-employees sold their stock.31
To get $10B/y in revenue, you need to:
This seems unrealistic. The precedent of the fast-growing, winner-take-all social media industry featured much stronger network effects that 'lock in' users long-term. LLMs will be commoditized like public utilities,34 where users, and especially enterprise customers, will switch if a better/cheaper LLM comes out.35 And indeed, because recent AI progress has hit diminishing returns, there are now many frontier models of similar quality for many use cases (e.g. Gemini, ChatGPT, Claude, Grok, Llama, DeepSeek) - many people use LLMs merely as a writing assistant36 - and there's tough competition driving down profits.
And so, AI could be a bubble, which has implications for forecasting AI progress. Given that inference scaling is very costly, it's even less clear how models using 100x more compute, predicted to emerge by 2027,37 could break even.
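A rough break-even sketch using the figures above ($20/month subscriptions against ~$15B/year of mostly-compute costs); the 10x cost multiplier for future 100x-compute models is a loose illustrative assumption, not a forecast:

```python
# Back-of-envelope break-even: how many $20/month subscribers would
# cover a given annual burn. Cost figures from the text; the 10x
# multiplier for 100x-compute models is an illustrative assumption.
def subscribers_needed(annual_costs_usd: float,
                       monthly_price_usd: float) -> float:
    return annual_costs_usd / (monthly_price_usd * 12)

# Covering today's ~$15bn/year burn at $20/month:
print(f"{subscribers_needed(15e9, 20) / 1e6:.1f}M")   # 62.5M

# If 100x-compute models raised annual costs even 10x:
print(f"{subscribers_needed(150e9, 20) / 1e6:.1f}M")  # 625.0M
```

For comparison, the figures above put current paid users at ~20M, with each marginal cohort converting worse than the last.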
But I digress; here's the promo code (DON'T FORGET TO CANCEL THE TRIAL WITHIN THE TRIAL PERIOD UNDER THIS LINK AFTER YOU SIGN UP IF YOU DON'T WANT TO PAY $30 PER SEAT; tip: if you want to add more seats and you own bob@gmail.com, you also own b.ob@gmail.com, b.o.b@gmail.com, etc.)
1: Inference Scaling Reshapes AI Governance — Toby Ord
2: o3 Turns Pro - by Zvi Mowshowitz
3: OpenAI CFO talks possibility of going public, says Musk bid isn't a distraction
4: ChatGPT sets record for fastest-growing user base - analyst note
5: OpenAI says ChatGPT's weekly users have grown to 200 million
6: OpenAI hits more than 1 million paid business users
7: OpenAI tops 400 million users despite DeepSeek's emergence
8: ChatGPT Subscribers Nearly Tripled to 15.5 Million in 2024
9: OpenAI closes $40 billion funding round, largest private tech deal on record
10: ChatGPT Fuels $300 Billion Valuation, Waymo Taps Uber, AI Wins SXSW
11: ChatGPT Hits 1 Billion Users? ‘Doubled In Just Weeks’ Says OpenAI CEO
12: OpenAI tops 3 million paying business users, launches new features for workplace
13: OpenAI closes $40 billion funding round, largest private tech deal on record
14: ChatGPT's mobile users are 85% male, report says
15: ChatGPT is now available on WhatsApp, calls: How to access
17: India now OpenAI's second largest market, Altman says | Reuters
18: Anyone else have multiple accounts r/ChatGPT
19: How to Bypass ChatGPT Usage Limitations
20: OpenAI tests phone number-only ChatGPT signups
21: ChatGPT drops its sign-in requirement for search
22: OpenAI expects subscription revenue to nearly double to $10bn
23: OpenAI CFO Sarah Friar: $11 billion in revenue is 'definitely in the realm of possibility' this year
24: OpenAI sees roughly $5 billion loss this year on $3.7 billion in revenue
25: OpenAI revenue, growth rate & funding
26: Roblox: Inflated Key Metrics For Wall Street And A Pedophile Hellscape For Kids – Hindenburg Research
27: Claude DeepResearch on Strategic metric presentation in corporate America
28: AI stocks could crash - Benjamin Todd
29: Manifold Market on OpenAI 2025 revenue
30: The market expects AI software to create trillions of dollars of value by 2027
31: Hundreds of OpenAI's current and ex-employees are about to get a huge payday by cashing out up to $10 million each in a private stock sale
32: OpenAI wins $200M US defense contract
33: OpenAI might raise the price of ChatGPT to $44 by 2029
34: Andrej Karpathy: Software Is Changing (Again)
35: Satya Nadella – Microsoft’s AGI Plan & Quantum Breakthrough
36: The Widespread Adoption of Large Language Model-Assisted Writing Across Society
37: How Many AI Models Will Exceed Compute Thresholds? | Epoch AI