These numbers seem reasonable. My median for "full automation of AI R&D" is ~early 2031 and my 25th percentile is ~mid 2028.
I'm between Daniel's views (more aggressive) and Eli's views. (My superintelligence median is closer to Eli's, my superhuman coder median is closer to Daniel's. My TED-AI median is probably closer to Eli's.)
Thanks for sharing your updated forecasts.
Claude Code reached an annualized revenue of over $2.5 billion in early February, just 9 months after its release. Anthropic’s trend of 10xing annualized revenue each year has continued into the $10B range.
How would you forecast OAI and Anthropic annualized revenue by EOY 2026?
We've thought a bit about this question but don't have a confident answer. It's an interesting case of "The Gods of Straight Lines on Graphs" vs. common sense and napkin math. GOSLOG predicts that Anthropic will be at $100B ARR and OpenAI at a mere $60B ARR. But it's kinda hard to imagine Anthropic overtaking OpenAI by so much despite having less compute; they'd have to either eat their seed corn (sacrificing research and training compute) or get much higher margins. In fact, it seems like margins in general will probably have to go up. So maybe revenue growth rates will slow down, or at least maybe Anthropic's will.
(The napkin math based on 'but is the addressable market even big enough' seems to support ginormous growth in revenue, or at least is consistent with it. Advertising alone could potentially make $100B ARR for OpenAI, for example. So could coding agents.)
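The tension between trend extrapolation and the napkin math can be sketched in a few lines. Everything here is illustrative: the starting ARR figures, growth multipliers, user count, and per-user ad revenue are assumptions for the sake of the arithmetic, not sourced estimates.

```python
# Hypothetical napkin math: naive ARR trend extrapolation vs. a rough
# addressable-market check. All input figures are illustrative assumptions.

def extrapolate_arr(current_arr_b, annual_multiplier, years=1):
    """'Straight line on a log plot' extrapolation of ARR (in $B)."""
    return current_arr_b * annual_multiplier ** years

# Trend extrapolation ("Gods of Straight Lines on Graphs"):
anthropic_2026 = extrapolate_arr(10, 10)  # ~$10B ARR 10x-ing yearly -> $100B
openai_2026 = extrapolate_arr(20, 3)      # assumed ~$20B ARR 3x-ing -> $60B

# Market-size sanity check: could advertising alone support ~$100B ARR?
weekly_users = 1e9           # assumed user count
ad_revenue_per_user = 100    # assumed $/user/year (search-ad ARPU territory)
ad_tam_b = weekly_users * ad_revenue_per_user / 1e9

print(anthropic_2026, openai_2026, ad_tam_b)
```

Under these assumed inputs the extrapolation and the market-size check land in the same ballpark, which is why the napkin math is at least consistent with ginormous growth.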
Thanks for posting, I find these updates very interesting. How will you know when the Automated Coder milestone is reached? It seems like there won't be a clear point at which it's obvious AI companies would prefer to lay off their workers rather than stop using AI. Also, is there a chance that AI could 10x or 100x development speeds before it becomes possible to lay off the software engineers (if, for example, one of the things currently holding it back can't be solved)? Would you consider that automated coding?
We’re mostly focused on research and writing for our next big scenario, but we’re also continuing to think about AI timelines and takeoff speeds, monitoring the evidence as it comes in, and adjusting our expectations accordingly. We’re tentatively planning on making quarterly updates to our timelines and takeoff forecasts. Since we published the AI Futures Model 3 months ago, we’ve updated towards shorter timelines.
Daniel’s Automated Coder (AC) median has moved from late 2029 to mid 2028, and Eli’s forecast has moved a similar amount. The AC milestone is the point at which an AGI company would rather lay off all of their human software engineers than stop using AIs for software engineering.
The reasons behind this change include:1
In short, progress in agentic coding has been faster than we expected over the last 3-5 months. The METR coding time horizon trend has its flaws, but we still consider it the best individual piece of evidence for forecasting coding automation. On that metric, growth has continued at a rapid pace.
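The time-horizon trend is exponential, so a small change in the observed doubling time moves the forecast a lot. A minimal sketch of that extrapolation, where the starting horizon and doubling time are illustrative assumptions rather than METR's published figures:

```python
import math

# Sketch: extrapolating an exponential task-time-horizon trend (METR-style).
# The starting horizon and doubling time below are illustrative assumptions.

def horizon_after(months, start_horizon_hours, doubling_time_months):
    """Task time horizon after `months`, assuming steady exponential growth."""
    return start_horizon_hours * 2 ** (months / doubling_time_months)

def months_to_reach(target_hours, start_horizon_hours, doubling_time_months):
    """Months until the horizon reaches `target_hours` under the same trend."""
    return doubling_time_months * math.log2(target_hours / start_horizon_hours)

# E.g., from a 4-hour horizon with a 6-month doubling time, reaching a
# ~1-work-month (~167-hour) horizon takes roughly 32 months:
print(round(months_to_reach(167, 4, 6)))
```

Shrinking the assumed doubling time from 6 months to 4 cuts that figure by a third, which is why continued rapid growth on this metric shifts medians by a year or more.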
Meanwhile, in the real world, there may have been an even bigger shift; coding agents have exploded in usefulness and popularity. Claude Code reached an annualized revenue of over $2.5 billion in early February, just 9 months after its release. Anthropic’s trend of 10xing annualized revenue each year has continued into the $10B range.
Annualized revenue of AGI companies over time. Annualized revenue is revenue over the last month times 12. (source)
Additionally, according to our analysis of AI 2027’s predictions, things seem close to being on track; if events in reality continue to go roughly 65% as fast as they go in AI 2027, then AC will be achieved in 2028.
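The "65% as fast" arithmetic can be made concrete. In this sketch, only the 65% speed factor comes from the text above; the reference start date and the in-scenario AC date are assumptions for illustration:

```python
from datetime import date, timedelta

# Sketch of the "events run at 65% of scenario speed" calculation.
# The start and in-scenario AC dates are assumed, not taken from AI 2027.
start = date(2025, 4, 1)        # assumed reference point (publication)
scenario_ac = date(2027, 3, 1)  # assumed AC date within the scenario
speed = 0.65                    # reality proceeding 65% as fast

scenario_days = (scenario_ac - start).days
real_ac = start + timedelta(days=scenario_days / speed)
print(real_ac.year)
```

Under these assumed dates, stretching the scenario's ~23 months by a factor of 1/0.65 lands AC in 2028, matching the claim above.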
Finally, some AI company researchers that we respect continue to say that automated AI R&D is coming soon; sooner, in fact, than we ourselves think. Rather than walking back their predictions, they are doubling down, both in public and in private discussions. While we don’t put too much weight on such claims, noting that many other researchers have longer timelines, it does count for something.2
The bottom line result of our updates is to shift Daniel’s Automated Coder (AC) median from late 2029 to mid 2028, and to shift Eli’s from early 2032 to mid 2030.
Our medians for Top-Expert-Dominating AI (TED-AI) similarly shifted about 1.5 years sooner. A TED-AI is an AI that is at least as good as top human experts at virtually all cognitive tasks.
Daniel’s latest forecasts compared to his previous ones. View these forecasts here.
Eli’s latest forecasts compared to his previous ones. View these forecasts here.
Below, we include a plot and table that extend our analysis of how our views have changed since publishing AI 2027. When we refer to AGI in the plot and table below, we mean the TED-AI definition above, i.e. an AI that is at least as good as top human experts at virtually all cognitive tasks.
Underlying data here.
As always, on the AI Futures Model landing page, you can input your preferred parameter values to explore different possible futures.
1. Other, more minor changes include: updating our estimate of current parallel coding uplift due to the passage of time, and minor changes to Daniel's takeoff parameters which make his predictions slightly faster.
2. Imagine if, by contrast, no one at the AI companies thought they could get to AC by 2029. That would be a pretty good reason to think that AC won't happen by 2029. So, the existence of some researchers who expect AC by then is some evidence (though far from conclusive) that it will.