It looks like AI 2027 was posted on April 3rd, 2025?
In that case, August was about 4 months away, which means late September is 20-25% slower than projected, and we are still a few percentage points short. It seems reasonable to expect the scores they predicted sometime in October or November, but that would still be, say, 40-50% over their prediction.
The authors have emphasized repeatedly that AI 2027 was and is faster than their mode (EDIT: median) scenario, which makes doing this kind of evaluation annoying, but I would have to say that things look significantly behind the specific story in that piece. I say this because it is a bit of an overstatement to praise their predictive accuracy on mid-2025 predictions made in early-to-mid 2025, when that accuracy is off on the scale of a month or two, and their predictions for 2025 were not viewed as particularly radical or unexpected at the time, as far as I remember. It seems to me that even a hardcore skeptic of AI 2027 would have been unlikely to predict a much larger error.
(I believe I myself left a comment like "I expect this to start not happening right away" but in a follow-up conversation specified that I was not talking about 2025.)
Still, I appreciate that you are checking in on the accuracy of their story with real numbers.
The authors have emphasized repeatedly that AI 2027 was and is faster than their mode scenario, which makes doing this kind of evaluation annoying,
We've said that it was faster than our median, not our mode. I think it was close to most of our modes at the time of publication; mostly we were at around 2027-2028.
But the evaluation itself seems useful either way, in terms of checking in on how things are going relative to the trajectory that was our best guess conditional on the AGI timelines depicted.
Small point of information - as I've heard Daniel (and maybe you?) explain, the 'mode' in your language means repeatedly thinking 'what might happen next, what is most likely' and sampling forward in that way.
But that'll systematically underestimate the overall mode of a series of such nonnegative, right-skewed distributions (which would tend to look more like the median).
So I think it could be worth being a bit pedantic about how you describe this.
I was only referring to our AI timelines mode; in this case it’s defined as the most likely year in which the superhuman coder arrives.
In general, the concept of a mode for most of the scenario decisions seems not well defined, as e.g. for non-naturally-numeric choices it depends on how you define the categories and what past events you condition on (for the timelines mode we’re conditioning on the starting point, but in other cases one might condition on all events thus far).
I would personally describe our process as some mixture of sampling what intuitively feels most likely at each point (which might, e.g., correspond to the mode of a natural categorical breakdown or of a distribution conditional on all events thus far, though we mostly didn’t explicitly calculate this), while also optimizing for making things not too degenerate and for an overall intuitively plausible trajectory (because doing the mode every time would by default look unlike what we actually expect in some sense, since in the real world there will be many surprises).
As an example of how much definitions matter here: if we just conditioned on the previous conditions for each month and sampled what big algorithmic improvements might happen, treating this as a categorical variable enumerating many possible improvements, we might never end up with any specific algorithmic improvement, or end up with them quite late in the game. But if we instead assume that overall some will probably come before the superhuman coder, and then pick the ones we think are most likely, even though any individual one may be <50% likely this quickly (though that's not totally clear in this case) and <<50% likely in any individual month, then we end up with neuralese recurrence and a shared memory bank right before SC.
Perhaps a simpler example of how categorization matters: if we break down possible AIs’ goals very granularly, then we put the most probability on AIs being very well aligned, relative to any very specific misaligned goal. But we overall have more probability on misalignment in this scenario, so we first make that high-level choice, then we choose one of the most likely specific misaligned goals.
I want to say this more clearly and simply somehow. Something like 'adding up a series of conditional modes does not give the overall mode'? (And for nonnegative, right-skewed things like timelines, it'll systematically underestimate except maybe in the presence of very particular correlations?)
Here’s a try at phrasing it with less probability jargon:
The forecast contains a number of steps, all of which are assumed to take our best estimate of their most likely time. But in reality, unless we’re very lucky, some of those steps will be faster than predicted, and some will be slower. The ones that are faster can only be so much faster (because they can’t take no time at all). On the other hand, the ones that are slower can be much slower. So the net effect of this uncertainty probably adds up to a slowdown relative to the prediction.
Does that seem like a fair summary?
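To make that concrete, here is a minimal Monte Carlo sketch of the point (my own illustration, not anything from the forecast): it assumes six hypothetical forecast steps with lognormal (nonnegative, right-skewed) durations and arbitrary parameters, and compares the sum of the per-step modes to the typical total.

```python
# Minimal sketch (my own illustration, not from the forecast): chaining
# per-step modes underestimates the typical total when step durations are
# nonnegative and right-skewed. Lognormal steps and all parameters are
# assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_steps = 6           # hypothetical number of forecast steps
mu, sigma = 0.0, 0.8  # lognormal parameters (right-skewed)

# Mode of a single lognormal step is exp(mu - sigma^2)
step_mode = np.exp(mu - sigma**2)
sum_of_modes = n_steps * step_mode

# Monte Carlo over full trajectories: total time for each simulated run
totals = rng.lognormal(mean=mu, sigma=sigma, size=(100_000, n_steps)).sum(axis=1)

print(f"sum of per-step modes: {sum_of_modes:.2f}")
print(f"median of total time : {np.median(totals):.2f}")
print(f"mean of total time   : {totals.mean():.2f}")
# The sum of per-step modes comes out well below both the median and the
# mean of the simulated totals, i.e. the "most likely step by step"
# trajectory is systematically faster than a typical trajectory.
```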
Generate an image randomly with each pixel black with 51% chance and white with 49% chance, independently. The most likely image? Totally black. But virtually all the probability mass is on images which are ~49% white. Adding correlations between neighbouring pixels (or, in 1D, correlations between time series events) doesn't remove this problem, despite what you might assume.
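Here's a quick sketch of that example (my own, with an arbitrary 100x100 image size): the all-black image is the single most likely outcome, yet a random draw essentially never looks like it.

```python
# Quick sketch of the pixel example (image size 100x100 chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 100 * 100
p_black = 0.51

samples = rng.random((1000, n_pixels)) < p_black   # True = black pixel
black_fraction = samples.mean(axis=1)

# The single most likely image is all black, but its probability is tiny:
log10_p_mode = n_pixels * np.log10(p_black)
print(f"log10 P(all-black image) ~ {log10_p_mode:.0f}")   # about -2924

# A typical draw is ~51% black, never anywhere near all black:
print(f"black fraction of draws: mean {black_fraction.mean():.3f}, "
      f"max {black_fraction.max():.3f}")
```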
The core problem is that the mode of a high-dimensional probability distribution is typically degenerate. (As an aside, it also causes problems for parameter estimation of unnormalized energy-based models, an extremely broad class, because you should sample from them to normalize; maximum-probability estimates can be dangerous.)
Statistical mechanics points to the solution: knowing the most likely microstate of a box of particles doesn't tell you anything; physicists care about macrostates, which are observables. You define a statistic (any function of the data, which somehow summarizes it) which you actually care about, and then take the mode of that. For example, number of breakthrough discoveries by time t.
An intuition you might be able to invoke is that the procedure they describe is like greedy sampling from an LLM, which doesn’t get you the most probable completion.
It seems to me that even a hardcore skeptic of AI 2027 would have been unlikely to predict a much larger error.
As someone who could perhaps be termed as such, my expectations regarding the technical side of things only start to significantly diverge at the start of 2027. (I'm not certain of Agent-1 1.5x'ing AI research speed, but I can see that.[1] The rest seems more or less priced-in.) And indeed, the end of 2026 is the point where, the forecast itself admits, its uncertainty increases and its predictions get less grounded.
Specifically, the point where I get off the ride is this one:
OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time.
My understanding is that Agent-2 essentially "closes the loop" on automated AI R&D, and while human input is still useful due to worse taste, it's no longer required. That's the part that seems like a "jump" to me, not a common-sensical extrapolation, and which I mostly expect not to happen.
Because I am really confused about how much AI is accelerating research/programming right now, I have no idea what number to extrapolate. Maybe it gets so good at fooling people into thinking they're being incredibly productive by managing 50 agents at once that it slows research down by 50% instead?
Out of my own curiosity: if the real world plays out as you anticipate and Agent-2 does not close the loop, how much does that push back your timelines? Do you think that something like Agent-3 or Agent-4 could close the loop, or do you think it is further off than even that?
I agree we're behind the AI-2027 scenario and unlikely to see those really really fast timelines. But I'd push back on calling it 'significantly behind.'
Here's my reasoning: We nearly hit the August benchmarks in late September, roughly 5 months after AI-2027's release instead of 4 months. That's about 25% slower. If that rate difference holds constant, the 'really crazy stuff' that AI-2027 places around January 2027 (~21 months out) would instead happen around June 2027 (~26 months out). To me, a 5-month delay on exponential timelines isn't drastically different. Even if you assume that we are going say, 33% slower, we are still looking at August 2027 (~28 months out) for some really weird stuff.
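For what it's worth, here is that arithmetic spelled out as a tiny sketch; the 21-month figure and slowdown factors are just the rough numbers from above, not anything precise.

```python
# Tiny sketch of the scaling above; the 21-month figure and slowdown factors
# are just the rough numbers from this comment, not anything precise.
def delayed_month(months_out: float, slowdown: float) -> float:
    """Months after publication at which a milestone lands, if everything
    takes `slowdown` times as long as the scenario projected."""
    return months_out * slowdown

for slowdown in (1.25, 1.33):
    shifted = delayed_month(21, slowdown)  # "really crazy stuff" at ~21 months
    print(f"x{slowdown} as long: ~21 months -> ~{shifted:.0f} months out")
# x1.25: ~26 months out (June 2027 instead of January 2027)
# x1.33: ~28 months out (around August 2027)
```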
That said, I'm uncertain whether this is the right way to think about it. If progress acceleration depends heavily on hitting specific capability thresholds at specific times (like AI research assistance enabling recursive improvement), then even small delays might compound or cause us to miss windows entirely. I'd be interested to hear if you think threshold effects like that are likely to matter here.
Personally, I am not sure I am convinced these effects will matter very much, given that the scenario did not project large-scale speedups to AI research until early 2026 (where it projected a fairly modest 1.5x speedup). But perhaps you have a different view?
Sonnet 4.5 landed nearly on the final day of September, which seems like 1.5 months out from a generic “August”, and a 3-point score difference is not necessarily insignificant (perhaps there are diminishing returns at >80%). I agree that we are quibbling over a thing that does not in itself matter much, but it IS important for assessing their predictive accuracy, and if their predictive accuracy is poor, it does not necessarily mean all of their predictions will be slow by the same constant factor. To be clear, all of these signals are very weak. I am only (modestly) disagreeing with the positive claim of the OP.
The signal that I am waiting for to assess very short timelines is primarily METR task lengths.
Sonnet 4.5 landed nearly on the final day of September, which seems like 1.5 months out from a generic “August”
I interpret August as "by the end of August". Probably worth figuring out which interpretation is correct, maybe the authors can clarify.
it IS important for assessing their predictive accuracy, and if their predictive accuracy is poor, it does not necessarily mean all of their predictions will be slow by the same constant factor.
Yeah, I agree with this. I do think there is pretty good evidence of predictive accuracy between the many authors, but obviously people have conflicting views on this topic.
To be clear, all of these signals are very weak. I am only (modestly) disagreeing with the positive claim of the OP.
This is a place where somebody writing a much slower timeline through, like, 2028 would be really helpful. It would be easier to assess how good a prediction this is with comparisons to other people's timelines for achieving these metrics (65% OSWorld, 85% SWEBench-Verified). I am not aware of anybody else's predictions about these metrics from a similar time, but they would probably be useful for resolving this.
I appreciate the constructive responses!
I am amused that we are, with perfect seriousness, discussing the dates for the singularity with a resolution of two weeks. I’m an old guy; I remember when the date for the singularity was “in the twenty first century sometime.” For 50 years, predictions have been getting sharper and sharper. The first time I saw a prediction that discussed time in terms of quarters instead of years, it took my breath away. And that was a couple of years ago now.
Of course it was clear decades ago that as the singularity approached, we would have a better and better idea of its timing and contours. It’s neat to see it happen in real life.
(I know “the singularity” is disfavored, vaguely mystical, twentieth century terminology. But I’m using it to express solidarity with my 1992 self, who thought with that word.)
Claude Sonnet 4.5 scored an 82% on this metric, as of September 29th, 2025. Three percentage points below the 85% target, achieved one month late, again, remarkably close. Particularly given that in August, Opus 4.1 was already scoring 80% on this benchmark.
I disagree this is close for several reasons.
Claude Sonnet 4.5 scored a 62% on this metric, as of September 29th, 2025.
For OSWorld, these aren't even the same benchmarks. ai-2027 referred to the original osworld, while the sonnet 4.5 score of 61.4% is for osworld-verified. Huge difference -- Sonnet 3.7 scored 28% on osworld original, while getting a 35.8% on osworld-verified. We might be at more like a 55.6% SOTA today (GTA1 w/ GPT-5) on OG osworld, a huge miss (~46% slower).
Overall, realized data suggests something more like an ai-2029 or even later.
It isn't clear that the "parallel test time" number even counts.
It is my view that it counts; my sense is that benchmarks like this measure capability and not cost. It is never a 1-to-1 comparison on cost between these models, but before this year, no matter how much your model cost, you could not achieve the results achieved with parallel compute. So that is why I included that score.
If parallel test time does count, the projection is not close:
- A projection of +15% growth over 5 months (by the beginning of Sep) instead grew +12% over 6 months. That's 33% slower growth (2% a month vs. the 3% a month projected)
I wrote another comment about this general idea, but the highlights from my response are:
We nearly hit the August benchmarks in late September, roughly 5 months after AI-2027's release instead of 4 months. That's about 25% slower. If that rate difference holds constant, the 'really crazy stuff' that AI-2027 places around January 2027 (~21 months out) would instead happen around June 2027 (~26 months out). To me, a 5-month delay on exponential timelines isn't drastically different. Even if you assume that we are going say, 33% slower, we are still looking at August 2027 (~28 months out) for some really weird stuff.
With that in mind, I think that it's still a fairly reasonable prediction, particularly when predicting something with exponential growth. On top of that, we don't really have alternate predictions to judge against. Nonetheless, I think you are right that this benchmark in particular is behind what was projected by AI-2027. I am just not sure I believe 25%-33% behind is significant.
For OSWorld, these aren't even the same benchmarks. ai-2027 referred to the original osworld, while the sonnet 4.5 score of 61.4% is for osworld-verified. Huge difference -- Sonnet 3.7 scored 28% on osworld original, while getting a 35.8% on osworld-verified.
This is an oversight on my part, and you are right to point out that this originally referred to a different benchmark. However, upon further research, I am not sure the extrapolation you draw from this, which is that the new osworld-verified is substantially easier than the old osworld, is true. OpenAI's Operator agent actually declined in score (from 38% originally to 31% now). The old test allowed 200 steps versus the new test's 100, but Operator only improved by 0.1% when given 100 steps instead of 50 on osworld-verified, so I don't think the step budget matters much.
All of this is to say, some models' scores improved on osworld-verified, and some declined. The redesign to osworld-verified was done because the original test had bugs, not in order to make a brand new test (otherwise they would still be tracking the old benchmark). The osworld-verified is the spiritual successor to the original osworld, and knowledgeable human performance on the benchmark remains around 70%. I think for all intents and purposes, it is worth treating as the same benchmark, though I definitely will update my post soon to reflect that the benchmark changed since AI-2027 was written.
Finally, while researching the osworld benchmark, I discovered that in the past few days a new high score was achieved by agent s3 w/ GPT-5 bBoN (N=10). The resulting score was 70%, which is at human-level performance, and it was achieved on October 3rd, 2025. I will also update my post to reflect that at the very beginning of October, a higher score than was projected for August was achieved on osworld-verified.
Good response. A few things I do want to stress:
I am just not sure I believe 25%-33% behind is significant.
I personally see the lower bound as 33% slower. That's enough to turn 2 years into roughly 3 years, which is significant.
And again, realistically progress is even slower. The parallel compute version only increased by 1.8% in 4 months. We might be another 6 months from hitting 85% at current rates - this is quite a prediction gap.
and knowledgeable human performance on the benchmark remains around 70%.
Is this true? They haven't updated their abstract claiming 72.36% (which was from the old version) and I'm wondering if they simply haven't re-evaluated.
But yes, looking at the GTA1 paper, you are correct that perf varies a bit between os-world and os-world-verified, so I take back that growth is obviously slower than projected.
All said, I trust swe-bench-verified more for tracking progress regardless:
in August, Opus 4.1 was already scoring 80% on this benchmark.
Can someone explain why the SWEBench Verified page still shows a top score of 75% which has not changed since June? Are they delayed, are they using different criteria, etc?
As I understand it, the official SWEBench-Verified leaderboard gives the models a consistent set of resources and setups, but when a company like Anthropic or OpenAI reports its own SWEBench-Verified scores, it uses its own infrastructure, which presumably performs better. There was already some discussion elsewhere in the comments about whether the Claude 4.5 Sonnet score I gave should even count, given that it used parallel test-time compute. I justified my decision to include this score like this:
It is my view that it counts; my sense is that benchmarks like this measure capability and not cost. It is never a 1-to-1 comparison on cost between these models, but before this year, no matter how much your model cost, you could not achieve the results achieved with parallel compute. So that is why I included that score.
Thank you. You make a good case for including this as evidence that capabilities are increasing. I suppose the question is whether they are increasing at the rate needed for short timelines. I think it’s worth asking whether the same-infrastructure performance showing zero improvement in four months is something that would have been expected four months ago. Of course, this is only one metric, over a short timeframe.
Yeah, I definitely think the improvements on osworld are much more impressive than the improvements on sweverified. I also think same-infrastructure performance is a bit misleading, in the sense that when we get superintelligence, I think it is very unlikely it will have the same infrastructure we use today. We should expect infrastructure changes to result in improvements, I think!
BTW, I put together a timeline of concrete predictions in AI 2027, for anyone keeping an eye on them as we enter next year: https://alexpear.github.io/pages/ai-2027.html
As Cole Wyeth said, the 2025 predictions were not radical (in other words, they were good predictions), but the 2026 ones anticipate very extreme productivity gains.
TLDR: AI-2027's specific predictions for August 2025 appear to have happened in September of 2025. The predictions were accurate, if a tad late, but they are late by weeks, not months.
Edit 1: Thanks to Aaron Staley for pointing out that AI-2027 referred to the original osworld benchmark, while I used its successor, osworld-verified, for comparison in this post. I have updated the post to acknowledge this, though I believe for all intents and purposes, these benchmarks can be seen as interchangeable (which I will also lay out in the post).
Edit 2: I discovered a new high score on the osworld-verified, achieved by agent s3 w/ GPT-5 bBoN (N=10), an agent framework powered by GPT-5 created by Simular. This score is higher than projected by AI-2027 for August, though it was achieved on October 3rd, 2025. This performance is essentially at parity with skilled human computer use.
Reading AI-2027 was the first thing that viscerally conveyed to me how urgent and dangerous advances in AI technology might be over the next few years. Six months after AI-2027's release, I decided to check in and see how the predictions are holding up so far, what seems like is happening faster than expected, and what seems like is happening slower than expected. I'll just go through the specific claims that seem evaluable in order.
The world sees its first glimpse of AI agents.
Advertisements for computer-using agents emphasize the term “personal assistant”: you can prompt them with tasks like “order me a burrito on DoorDash” or “open my budget spreadsheet and sum this month’s expenses.” They will check in with you as needed: for example, to ask you to confirm purchases. Though more advanced than previous iterations like Operator, they struggle to get widespread usage.
This prediction is panning out. With GPT-5 and Claude Sonnet 4.5, we now have agentic coders (Claude Code, GPT-5 Codex) and personal agents that can make purchases, not yet on DoorDash, but on platforms like Shopify and Etsy. Widespread adoption definitely doesn't seem to be here yet, but that was expected by AI-2027. Arguably they undersold the degree to which this would already be used in software work, but they didn't make any specific claims about that.
There are a couple of more testable claims made in footnotes to this paragraph.
Specifically, we forecast that they score 65% on the OSWorld benchmark of basic computer tasks (compared to 38% for Operator and 70% for a typical skilled non-expert human).
It is worth noting that AI-2027, published in April 2025, referenced the original OSWorld benchmark, not its current successor, OSWorld-Verified. However, there are good reasons the old benchmark was retired, mainly bug fixes and improvements incorporated into the new one. Some models, like Claude Sonnet 3.7, performed better on OSWorld-Verified, increasing its score from 28% to 36%; however, OpenAI's Operator actually declined from 38% to 31% between the two benchmarks. Knowing that skilled human performance is calibrated to around 70% on both, I think we can compare AI-2027's original predictions about the obsolete OSWorld benchmark to the newer OSWorld-Verified. The signal they were trying to convey is still legible, and the new benchmark does not seem substantially easier or more difficult than the previous one.
Claude Sonnet 4.5 scored a 62% on this metric, as of September 29th, 2025. The target was August; the metric was nearly achieved in late September. AI-2027 got agentic capabilities essentially right. One month late and three percentage points short is remarkably accurate.
On October 3rd, 2025, an agentic framework called agent s3 w/ GPT-5 bBoN (N=10), which, as the name suggests, is powered by GPT-5, scored 70% on this benchmark. This is by far the new high score, and beats AI-2027's August projection soundly, while still being about a month late. This is more evidence that AI-2027's mid-2025 projections for frontier AI's computer use are accurate.
Another benchmark with a specific projection for August 2025 was SWEBench-Verified.
For example, we think coding agents will move towards functioning like Devin. We forecast that mid-2025 agents will score 85% on SWEBench-Verified.
Claude Sonnet 4.5 scored an 82% on this metric, as of September 29th, 2025. Three percentage points below the 85% target, achieved one month late, again, remarkably close. Particularly given that in August, Opus 4.1 was already scoring 80% on this benchmark.
The August predictions are the only ones we can fully evaluate, but we can make preliminary assessments of the December 2025 predictions.
GPT-4 required 2⋅10^25 FLOP of compute to train. OpenBrain’s latest public model—Agent-0—was trained with 10^27 FLOP. Once the new datacenters are up and running, they’ll be able to train a model with 10^28 FLOP—a thousand times more than GPT-4. Other companies pour money into their own giant datacenters, hoping to keep pace.
The Agent-0 scenario looks increasingly plausible. We now know that GPT-5 was trained with less compute than GPT-4.5. While training compute increased reasonably from GPT-4 to GPT-5, evidence suggests OpenAI has an even more capable model in development. Some version of that is presumably due for release eventually, especially given the pressure put on OpenAI by the very impressive Sonnet 4.5 release.
The evidence: OpenAI entered an 'experimental reasoning model' into the ICPC, which is a prestigious college-level coding contest. This experimental reasoning model performed better than all human contestants, achieving a perfect 12/12 score. GPT-5 solved 11 problems on the first attempt; the experimental reasoning model solved the hardest problem after nine submissions.
The capabilities that this model demonstrated may not be Agent-0 level, and it is possible that it used less than 10^27 FLOP of training compute. But we should watch for the next OpenAI release, which could come as soon as Monday, October 6, at DevDay. This is speculation, but it is grounded in recent announcements. Sam Altman indicated less than 2 weeks ago that several compute-intensive products would release over the coming weeks. We've already seen two such releases in under two weeks. There's Pulse, OpenAI's proactive daily briefing feature, which launched on September 25 but hasn't generated much discussion yet. I'm curious what people think of it. And then there's Sora 2, which represents a significant leap forward for OpenAI in video generation, impressive enough to have generated substantial attention. The Sora app reached #3 on the App Store within 48 hours of its September 30 release. I suspect something bigger is planned for DevDay, though there are no guarantees, especially given Altman's track record of generating hype. It's also worth noting that last year's announcements at DevDay were more practical than transformative, with o1's release coming a couple of weeks before the actual event. Nonetheless, it is difficult to rule out a near-term release of this improved reasoning model.
AI-2027's predictions for mid-2025 have been substantially vindicated. Progress is running roughly one month behind the scenario, a matter of weeks rather than months. Every prediction timed for August 2025 has been essentially realized by the end of September 2025. While I remain uncertain about fast timelines, dismissing scenarios like AI-2027 seems unwarranted given how well these early predictions have held up. These were the easiest predictions to verify, but they set a high bar, and reality met it.