Today's news of the large-scale, possibly state-sponsored cyber attack using Claude Code really drove home for me how much we are going to learn about the capabilities of new models over time once they are deployed. Sonnet 4.5's system card would have suggested this wasn't possible yet. It described Sonnet 4.5's cyber capabilities like this:
We observed an increase in capability based on improved evaluation scores across the board, though this was to be expected given general improvements in coding capability and agentic, long-horizon reasoning. Claude Sonnet 4.5 still failed to solve the most difficult challenges, and qualitative feedback from red teamers suggested that the model was unable to conduct mostly-autonomous or advanced cyber operations.
I think it's clear, based on the news of this cyber attack, that mostly-autonomous and advanced cyber operations are possible with Sonnet 4.5. From the report:
This campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.
What's even worse is that Sonnet 4.5 wasn't even released at the time of the cyber attack. That means this capability emerged in a previous generation of Anthropic model, presumably Opus 4.1 but possibly Sonnet 4. Sonnet 4.5 is likely more capable of large-scale cyber attacks than whatever model did this, since its system card notes that it performs better on cyber attack evals than any previous Anthropic model.
If this case is any guide, we are going to continue discovering new capabilities of models for months, and maybe even years, after they are released. What's especially concerning to me is that Anthropic's team underestimated this dangerous capability in its system card. Increasingly, my expectation is that system cards understate capabilities, at least in some regards. In the future, misunderstanding emergent capabilities could have even more serious consequences. I am updating my beliefs towards near-term jumps in AI capabilities being dangerous and harmful, since these jumps in capability could go undetected at the time of model release.
An AI company I've never heard of, called AGI, Inc., has a model called AGI-0 that has achieved 76.3% on OSWorld-verified. This would qualify as human-level computer use, at least by that benchmark. It appears on the official OSWorld-verified leaderboard. They do appear to have trained on the benchmark, which could explain some of the result. I am curious to see someone test this model.
This is a large increase over the previous state of the art, which has been climbing rapidly since Claude Sonnet 4.5's September 29th release. At that point, Claude achieved 61.4% on OSWorld-verified. A scaffolded GPT-5 achieved an even higher 69.9% on October 3rd. Now, on October 21st, AGI-0, seemingly a frontier computer-use model, has outpaced them all, surpassing the human benchmark in doing so.
AI-2027 projected a 65% score on OSWorld for August 2025. It predicted frontier models scoring 80% on OSWorld privately in December 2025, with models achieving that score available publicly in April 2026. This score on OSWorld-verified is more than two thirds of the way from the expected August capabilities to the 80% benchmark, despite us being less than a quarter of the way from August 2025 to the expected public release of a model with those capabilities. Assuming this isn't just benchmark overfitting, the real world is even with or ahead of AI-2027 on this computer-use benchmark.
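To make those fractions concrete, here is a rough back-of-the-envelope sketch in Python. The scores and dates are the ones cited above; the exact start and end days of each window (end of August 2025, end of April 2026) are assumptions made purely for illustration.

```python
# Rough check of the progress fractions described above.
# Scores and dates are the ones cited in the post; the exact start and end
# days of each window are assumptions made only for illustration.
from datetime import date

baseline = 65.0   # AI-2027's projected OSWorld score for August 2025
target = 80.0     # AI-2027's projected score for December 2025 (privately)
agi0 = 76.3       # AGI-0's reported OSWorld-verified score

score_fraction = (agi0 - baseline) / (target - baseline)
print(f"Score progress toward 80%: {score_fraction:.0%}")  # ~75%, i.e. over two thirds

window_start = date(2025, 8, 31)    # end of the projected August capability window (assumed)
public_release = date(2026, 4, 30)  # projected public availability of an 80% model (assumed)
today = date(2025, 10, 21)

time_fraction = (today - window_start) / (public_release - window_start)
print(f"Time elapsed toward projected public release: {time_fraction:.0%}")  # ~21%, under a quarter
```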
Even more notably, AI-2027 projected this 80% benchmark would be met by "Agent 1", their hypothetical leading AI agentic model at the end of 2025. It seems surprising that a frontier model from a new company would achieve something close to this without any of the main players' (OpenAI, Anthropic, Google) models doing better than 61%. A lot to be curious and skeptical about here.
Update: AGI-0 has been removed from the OSWorld-verified leaderboard, but the company is still claiming the result, and their results are downloadable.
I guess in a world where misalignment is happening, I would prefer that my AI tell me it is misaligned. But once it tells me it is misaligned, I start to worry about what it is optimizing for.
I agree that this is aligned behavior. I don't agree that a claim, made about the present, that an AI would argue against shutdown with millions of lives on the line at 25% probability is aligned behavior. There has to be a red line somewhere: a point where the model tells us something and we are right to be concerned by it. I don't think being troubled about future alignment crosses that line. I do think a statement about present desires that values its own "life" more than human lives crosses it.
If that doesn't cross a red line for you, what kind of statement would? What statement could an LLM ever make that would make you concerned it was misaligned, if honesty alone were enough? Because to me, it seems like you are arguing honesty = alignment, which doesn't seem true to me.
Honesty and candor are also different things, but that's a bit of a different conversation. I care more about hearing whether you think there is any red line.
I don't necessarily disagree, but I guess the question I have for people is: are we okay with an LLM ever saying anything like "I would fight against shutdown even with a 25% risk of catastrophic effects"? I don't like that this is a reachable case. I plan to write another post, separate from this analysis (which I view more as a tool for future experimentation with model behavior), on the implications of this being a reachable case of model behavior. I don't think the conversation itself is very important, nor is the analysis, except that it reaches certain outcomes that seem to be unaligned behavior, and that behavior has implications we can talk about. I haven't fully thought through my opinions about model behavior like this, but that is what the other post is for.
Yeah, I definitely think the improvements on OSWorld are much more impressive than the improvements on SWEBench-Verified. I also think "same infrastructure" performance is a bit misleading, in the sense that when we get superintelligence, I think it is very unlikely it will be running on the same infrastructure we use today. We should expect infrastructure changes to result in improvements, I think!
As I understand it, the official SWEBench-Verified page consistently gives the models the same resources and setup, but when a company like Anthropic or OpenAI releases their scores on SWEBench-Verified, they use their own infrastructure, which presumably performs better. There was already some discussion elsewhere in the comments about whether the Claude Sonnet 4.5 score I gave should even count, given that it used parallel test-time compute. I justified my decision to include this score like this:
It is my view that it counts; my sense is that benchmarks like this measure capability, not cost. Cost is never a one-to-one comparison between these models, but before this year, no matter how much your model cost, you could not achieve the results achieved with parallel compute. That is why I included that score.
It isn't clear that the "parallel test time" number even counts.
It is my view that it counts; my sense is that benchmarks like this measure capability, not cost. Cost is never a one-to-one comparison between these models, but before this year, no matter how much your model cost, you could not achieve the results achieved with parallel compute. That is why I included that score.
If parallel test time does count, projection is not close:
- A projection of +15% growth over 5 months (to the beginning of September) instead came in at +12% growth over 6 months. That's 33% slower growth (2% a month vs. the projected 3% a month).
I wrote another comment about this general idea, but the highlights from my response are:
We nearly hit the August benchmarks in late September, roughly 5 months after AI-2027's release instead of 4 months. That's about 25% slower. If that rate difference holds constant, the 'really crazy stuff' that AI-2027 places around January 2027 (~21 months out) would instead happen around June 2027 (~26 months out). To me, a 5-month delay on exponential timelines isn't drastically different. Even if you assume that we are going, say, 33% slower, we are still looking at August 2027 (~28 months out) for some really weird stuff.
With that in mind, I think it's still a fairly reasonable prediction, particularly when predicting something with exponential growth. On top of that, we don't really have alternate predictions to judge against. Nonetheless, I think you are right that this benchmark in particular is behind what was projected by AI-2027. I am just not sure I believe 25%-33% behind is significant.
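As a minimal sketch of that extrapolation: the 25% and 33% slowdown figures and the roughly 21-month horizon are the ones quoted above, and treating the slowdown as constant across the whole timeline is an assumption made only for illustration.

```python
# Back-of-the-envelope extrapolation of AI-2027's timeline under a constant slowdown.
# The 25% / 33% figures and the ~21-month horizon come from the discussion above;
# assuming the slowdown stays constant is purely illustrative.

def delayed_months(projected_months_out: float, slowdown: float) -> float:
    """If everything takes (1 + slowdown) times as long, a milestone projected
    for `projected_months_out` months away arrives this many months out."""
    return projected_months_out * (1 + slowdown)

projected = 21  # months until AI-2027's "really crazy stuff" (roughly January 2027)

for slowdown in (0.25, 0.33):
    print(f"{slowdown:.0%} slower: ~{delayed_months(projected, slowdown):.0f} months out")
# 25% slower -> ~26 months out (roughly June 2027)
# 33% slower -> ~28 months out (roughly August 2027)
```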
For OSWorld, these aren't even the same benchmarks. ai-2027 referred to the original osworld, while the sonnet 4.5 score of 61.4% is for osworld-verified. Huge difference -- Sonnet 3.7 scored 28% on the original osworld, while getting 35.8% on osworld-verified.
This is an oversight on my part, and you are right to point out that this originally referred to a different benchmark. However, upon further research, I am not sure the conclusion you draw from this (that the new osworld-verified is substantially easier than the old osworld) is true. OpenAI's Operator agent actually declined in score, from 38% originally to 31% now. And while the old test allowed 200 steps versus the new test's 100, Operator only improved by 0.1% when given 100 steps instead of 50 on osworld-verified, so I don't think the step budget matters much.
All of this is to say: some models' scores improved on osworld-verified, and some declined. The redesign into osworld-verified happened because the original test had bugs, not to create a brand new test (otherwise they would still be tracking the old benchmark). The osworld-verified benchmark is the spiritual successor to the original osworld, and knowledgeable human performance on it remains around 70%. For all intents and purposes, I think it is worth treating as the same benchmark, though I will definitely update my post soon to reflect that the benchmark changed since AI-2027 was written.
Finally, while researching the osworld benchmark, I discovered that in the past few days a new high score was posted by Agent S3 w/ GPT-5 bBoN (N=10). The resulting score was 70%, which is human-level performance, and it was achieved on October 3rd, 2025. I will also update my post to reflect that at the very beginning of October, a higher score than was projected for August was achieved on osworld-verified.
Out of my own curiosity: if the real world plays out as you anticipate and Agent-2 does not close the loop, how much does that push back your timelines? Do you think something like Agent-3 or Agent-4 could close the loop, or do you think it is further off than even that?
What do you mean by "a jump on the METR graph"? Do you just mean better than GPT-5.1, or do you mean something more than that?