I agree with you that "Opus 4.5 can do anything" is overselling it and there is too much hype around acting like these things are fully autonomous software architects. I did want to note though that Opus 4.5 is a vast improvement and praise is warranted.
My guess is that "convert this already-written code from this representation/framework/language/factorization to this other one" may be one of the things LLMs are decent at, yep!
Agreed, I'm relying on their "localized" intelligence to get work done fast. Where Anthropic has improved their models significantly this year is A) task "planning", e.g. how to extract the relevant context needed to make decisions LLMs could broadly already make, and B) editing code in sane ways that don't break things (at the beginning of the year, Claude would chew up any 4000+ LOC file just from wrong tool use). In some ways, this isn't necessarily higher "intelligence" (Claude models remain relatively weaker at solving novel problems compared to frontier GPT/Gemini) but proper training in the coding domain.
But this isn't really "vibe-coding"/"describe the spec in natural language and watch the LLM implement it!"/"programming as a job is gone/dramatically transformed!", the way it's being advertised. LLMs are not, it seems, actually good at mapping natural-language descriptions into non-hack-y, robust background logic. You need a "code-level" prompt to specify the task precisely enough.
It's a mixed bag. In practice, I can vibe code 100-line isolated modules from natural language, though it does require inspecting the code for bugs and then giving the model feedback so it can fix things. Still much faster than hand-writing, and slightly faster than "intention" auto-complete with Cursor.
But overall, yes, I agree that I continue to do all the systems architecture and it feels like I'm offloading more well defined tasks to the model.
None of that worked, I detect basically no change since August.
What sort of codebase are you working on? I work in a 1 million line typescript codebase and Opus 4.5 has been quite a step up from Sonnet 4.5 (which in turn was a step up from the earlier Sonnet/Opus 4 series).
I wouldn't say I can leave Opus 4.5 on a loose leash by any means, but unlike prior models, using AI agents for 80%-90% of my code modifications (as opposed to in-IDE with autocomplete) has actually become ROI positive for me.
The main game changer is that Opus has simply become smarter about working with large codebases - fewer hallucinated methods, more research into the codebase before actions are taken, etc.
As a simple example, I've had a "real project" benchmark for a while: convert ~2000 lines of test cases from an old framework to a new one. Opus 4.5 was able to pull it off with relatively minimal initial steering (showing it an example of a converted test case, and correcting a few issues around laziness after it did the first ~300-line set). Sonnet 4.5's final state was a bit buggier, and more importantly what it actually wrote on the initial pass was considerably buggier, requiring it to self-correct from typecheck failures or failing test cases. (Ultimately, Opus ended up costing about the same as Sonnet at a third of the wall-clock time.)
Most of my work is refactoring - in August, I would still have to do most of it manually given the high error rate of LLMs. These days? Opus is incredibly reliable with only vague directions. As another recent example: I had to add a new parameter to a connection object constructor to indicate whether it should be read-only -- Opus was able to readily update dozens of call sites correctly based on whether each call site was using the connection to write.
By no means does it feel like an employee (the ai-2027 agent-1 definition), but it is a powerful tool (getting more powerful through the generations) that has changed how I work.
Yes, both model families are similar in that they do not have consistently declining accuracy in the 2-16 hour task window. The modeling is somewhat broken when you have higher accuracy in the 8-16 hour window than the 2-4 hour window.
GPT models do not have this characteristic; while the curve isn't perfect, accuracy at least drops roughly monotonically with task length (the exception is o4-mini, which also had bizarre patterns in that 2-16 hour window).
I suspect at some level heavy RLVF has broken the core METR model of performance correlating to task length.
I personally think the stronger argument here is that, if you look at the histograms, Claude models are not growing in capability in a way consistent with "higher task length = harder" (Grok 4 was similar).
Both Sonnet 4.5 and Opus 4.5 outperform in the 8 to 16 hour bracket relative to the 2 to 4 hour bracket, which is highly inconsistent with the task-length-difficulty model. The model appears broken at least since Sonnet 3.5, given the flatness across the 2-16 hour tasks.
You end up in a case where the Sonnet 4.5 curve has a higher % of the solved tasks under it than Opus 4.5's (note how Opus 4.5 gets 0 tasks right in the 16 to 32 hour window even though the fitted distribution implies it should be more like 25%). That is, the "gain" this implies is dramatically overstated. [1]
The unfortunate consequence is largely shash42's point - it's not clear that modeling "task length horizon" is a valid way to view this data. Raw accuracy seems better correlated with time.
[1] An alternative interpretation is that Sonnet 4.5 was much better than the METR curve then implied.
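To make that concrete, here is a toy version of the horizon fit (the bucket accuracies below are invented for illustration, not actual Sonnet/Opus numbers): fit the usual logistic-in-log-task-length curve and read off the 50% horizon. A flat, non-monotone 2-16 hour pattern yields a shallower slope and a higher implied p50 than a cleanly declining pattern, even when raw accuracy is mostly lower.

```python
# Toy sketch of a METR-style horizon fit; the accuracies are invented,
# not actual Sonnet/Opus results.
import numpy as np
from scipy.optimize import curve_fit

def logistic(log2_t, log2_h50, slope):
    # P(success) as a function of log2(task length); h50 is the 50% horizon.
    return 1.0 / (1.0 + np.exp(slope * (log2_t - log2_h50)))

# Bucket midpoints in minutes for 1-2h, 2-4h, 4-8h, 8-16h tasks.
t = np.array([90.0, 180.0, 360.0, 720.0])
log2_t = np.log2(t)

monotone = np.array([0.70, 0.55, 0.35, 0.20])  # well-behaved decline
flat     = np.array([0.70, 0.40, 0.45, 0.50])  # 8-16h beats 2-4h

for name, acc in [("monotone", monotone), ("flat", flat)]:
    (log2_h50, slope), _ = curve_fit(logistic, log2_t, acc, p0=[8.0, 1.0])
    print(f"{name}: fitted p50 ≈ {2**log2_h50:.0f} min, slope ≈ {slope:.2f}")
```

The headline p50 is very sensitive to that slope, which is the sense in which a flat 2-16 hour profile can overstate the generational gain.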
Thanks for the histograms. Is the raw data available somewhere?
Just eyeballing it:
Aligns with my sense that the model is a month, maybe 2 months, ahead of what is expected, and that a lot of this jump (4.5 months ahead of expected) is from artifacts of the curve fitting.
Private workspace so I can’t share the session. But the approach is simple and doesn’t really require it to understand.
I think we’re coming at this from different angles: you’re doing a “white-box” critique (how specific task outcomes / curve fitting affect the METR horizon), whereas I’m doing a “black-box” consistency check: is the claimed p50 result consistent with what we see on other benchmarks that should correlate with capability?
The core model is:
That yields “time ahead/behind” vs the reported Opus 4.5 result:
The point is that METR p50 is the outlier relative to the other signals.
If instead we assume Opus 4.5 is only as far “ahead” as the other benchmarks suggest, then p50 should be closer to:
And the corresponding implied p80 would be:
My best guess is we’re ~1 month ahead overall, which puts p50/p80 in-between those cases.
Finally, percentiles inside METR’s CI depend on the (unstated) sampling distribution; if you approximate it as log-normal you get the rough “position within the CI” numbers I mentioned, but it’s only an approximation.
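For concreteness, this is the kind of back-of-envelope I mean (the point estimate, CI, and alternative value below are placeholders, not METR's published numbers): treat the horizon's sampling distribution as log-normal, back out sigma from the CI width, and ask where an alternative estimate lands within it.

```python
# Rough "position within the CI" approximation under a log-normal assumption.
# All numbers below are placeholders, not METR's published figures.
import math
from scipy.stats import norm

point_est = 240.0                 # reported p50 horizon, minutes (placeholder)
ci_low, ci_high = 150.0, 430.0    # reported 95% CI (placeholder)

mu = math.log(point_est)
# For a log-normal, a 95% CI spans roughly 2 * 1.96 sigma in log space.
sigma = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

alt_est = 200.0                   # horizon implied by other benchmarks (placeholder)
pct = norm.cdf((math.log(alt_est) - mu) / sigma)
print(f"alternative estimate sits near the {100 * pct:.0f}th percentile of the CI")
```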
Bayesians are updating too much on AI capability speed from this data point, given:
I modeled all this in GPT-5.2, and the more realistic estimate for the 50% horizon derived from the other benchmarks is in the range of 190 to 210 minutes, depending on how much weight you put on the impressive (but not to the degree of the 50% horizon) accuracy jump. The 80% horizon is likely a slight underestimate (my guess is closer to 29 minutes).
These numbers:
[1] Note that this does provide evidence that Gemini 3 and GPT-5.2 will also have high p50 scores. Not because of capability jump per se but because of the distribution of tasks within METR benchmarks.
Good response. A few things I do want to stress:
I am just not sure I believe 25%-33% behind is significant.
I personally see the lower bound as 33% slower. That's enough to turn 2 years into 3, which is significant.
And again, realistically progress is even slower. The parallel compute version only increased by 1.8% in 4 months. We might be another 6 months from hitting 85% at current rates - this is quite a prediction gap.
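Spelling the extrapolation out (reading the 1.8% as percentage points, and using a placeholder low-80s starting score - swap in the actual parallel-compute number):

```python
# Back-of-envelope: how long until the parallel-compute score reaches 85%
# at the recent rate of improvement? The starting score is a placeholder.
gain_per_month = 1.8 / 4          # ~0.45 points/month over the last 4 months
current_score = 82.0              # placeholder, percent
months_to_85 = (85.0 - current_score) / gain_per_month
print(f"~{months_to_85:.1f} months to reach 85% at the current rate")
```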
> and knowledgeable human performance on the benchmark remains around 70%.
Is this true? They haven't updated their abstract claiming 72.36% (which was from the old version) and I'm wondering if they simply haven't re-evaluated.
But yes, looking at the GTA1 paper, you are correct that perf varies a bit between os-world and os-world-verified, so I take back that growth is obviously slower than projected.
All that said, I trust swe-bench-verified more regardless for tracking progress:
> Claude Sonnet 4.5 scored an 82% on this metric, as of September 29th, 2025. Three percentage points below the 85% target, achieved one month late, again, remarkably close. Particularly given that in August, Opus 4.1 was already scoring 80% on this benchmark.
I disagree this is close for several reasons.
> Claude Sonnet 4.5 scored a 62% on this metric, as of September 29th, 2025.
For OSWorld, these aren't even the same benchmarks. ai-2027 referred to the original osworld, while the Sonnet 4.5 score of 61.4% is for osworld-verified. Huge difference -- Sonnet 3.7 scored 28% on the original osworld, while getting a 35.8% on osworld-verified. We might be at more like a 55.6% SOTA today (GTA1 w/ GPT-5) on the original osworld, a huge miss (~46% slower).
Overall, realized data suggests something more like an ai-2029 or even later.
If I understand correctly, you are advocating for using a call only strategy (as opposed to a (synthetic) long strategy) to achieve higher leverage than would otherwise be possible?
> This is partly for speculation, but it seems reasonable for most people with 2 years of savings to have 10% of their net worth in SPY options or 20% in SPX options [4] for hedging purposes alone.
To clarify, you mean 10% of net worth being in this specific contract (SPY280616C01000000)? So roughly 15:1 leverage using options?
Readers should note this has very strong returns if you get that 50%+ return, but it isn't straight leverage - the median outcome here is about a 12.7% reduction in portfolio value over the next 2.5 years relative to pure SPY.
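A rough way to see both halves of that (the strike comes from the contract symbol; the spot price, premium, and return scenarios are my assumptions, not quotes): compare the terminal value of 100% SPY against 90% SPY + 10% in the June 2028 $1000 calls.

```python
# Rough payoff comparison at the June 2028 expiry. Strike is read from
# SPY280616C01000000; spot and premium are assumptions, not market quotes.
spot = 670.0      # assumed SPY price today
strike = 1000.0   # from the contract symbol
premium = 45.0    # assumed cost per share (spot/premium ~ 15, one reading of "15:1")

def terminal_value(spy_return, call_fraction=0.10):
    """Portfolio value per $1 of net worth at expiry, given SPY's total return."""
    spy_terminal = spot * (1 + spy_return)
    spy_part = (1 - call_fraction) * (spy_terminal / spot)
    call_part = (call_fraction / premium) * max(spy_terminal - strike, 0.0)
    return spy_part + call_part

for r in (-0.10, 0.20, 0.49, 0.60, 0.80):   # assumed SPY total returns to expiry
    print(f"SPY {r:+.0%}: 90/10 mix {terminal_value(r):.2f} vs pure SPY {1 + r:.2f}")
```

With these placeholder numbers the mix only catches up to pure SPY once SPY is up roughly 60%, which is where the "median outcome is a drag" point comes from; above that it pulls ahead quickly.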