LESSWRONG

Baybar · Comments (sorted by newest)
Baybar's Shortform
Baybar · 8d* · 30

An AI company I've never heard of, AGI, Inc., has a model called AGI-0 that achieved 76.3% on OSWorld-verified. This would qualify as human-level computer use, at least by that benchmark. It appears on the official OSWorld-verified leaderboard. It does seem like they trained on the benchmark, which could explain some of this. I am curious to see someone test this model.

This is a large increase from the previous state of the art, which has been climbing rapidly since Claude Sonnet 4.5's September 29th release. At that point, Claude achieved 61.4% on the OSWorld-verified. A scaffolded GPT-5 achieved even higher, 69.9%, on October 3rd. Now, on October 21st, AGI-0, seemingly a frontier computer use model, has outpaced them all, and surpassed the human benchmark in doing so. 

AI-2027 projected a 65% on OSWorld for August 2025. It predicted frontier models scoring 80% on OSWorld privately in December 2025, and that models achieving this score would be available publicly in April 2026. This score on OSWorld-verified is more than two thirds of the way from the expected August capability (65%) to the 80% benchmark, despite us being less than a quarter of the way from August 2025 to the expected public release of a model with these capabilities. Assuming this isn't just benchmark overfitting, the real world is even with or ahead of AI-2027 on this computer-use benchmark.
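As a rough check on those fractions, here is a minimal sketch; the 65% and 80% figures are AI-2027's projections as described above, while the end-of-August baseline and the April 1st public-release date are assumptions made just for this calculation:

```python
from datetime import date

# AI-2027's projected OSWorld milestones, as summarized above
aug_projection = 65.0                  # projected score for August 2025
dec_projection = 80.0                  # projected internal score for December 2025
public_release = date(2026, 4, 1)      # assumed date for "publicly available in April 2026"
baseline = date(2025, 8, 31)           # assumed reference point for "August 2025"

# Reported AGI-0 result
agi0_score = 76.3
agi0_date = date(2025, 10, 21)

capability_progress = (agi0_score - aug_projection) / (dec_projection - aug_projection)
time_elapsed = (agi0_date - baseline).days / (public_release - baseline).days

print(f"Capability gap closed: {capability_progress:.0%}")   # ~75%
print(f"Calendar time elapsed: {time_elapsed:.0%}")          # ~24%
```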

Even more notably, AI-2027 projected this 80% benchmark would be met by "Agent-1", their hypothetical leading agentic AI model, at the end of 2025. It seems surprising that a frontier model from a new company would achieve something close to this without any of the main players' (OpenAI, Anthropic, Google) models doing better than 61%. A lot to be curious and skeptical about here.

Update: the result has been removed from the OSWorld-verified leaderboard, but they are still claiming to have done it, and their results are downloadable.

Situational Awareness as a Prompt for LLM Parasitism
Baybar · 15d · 10

I guess, in a world where misalignment is happening, I would prefer that my AI tell me it is misaligned. But once it tells me it is misaligned, I start to worry about what it is actually optimizing for.

Situational Awareness as a Prompt for LLM Parasitism
Baybar · 16d · 10

I agree that this is aligned behavior. I don't agree that an AI claiming, in the present, that it would argue against shutdown with millions of lives on the line at 25% probability is aligned behavior. There has to be a red line somewhere: a point where the model can tell us something and we are right to be concerned by it. I don't think that being troubled about future alignment crosses that line. I do think a statement about present desires that values its own "life" more than human lives crosses that line.

If that doesn't cross a red line for you, what kind of statement would? What statement could an LLM ever make that would make you concerned it was misaligned, if honesty alone is enough? Because to me, it seems like you are arguing honesty = alignment, which doesn't seem true to me.

Honesty and candor are also different things, but that's a bit of a different conversation. I care more about hearing whether you think there is any red line.

Situational Awareness as a Prompt for LLM Parasitism
Baybar · 16d · 10

I don't necessarily disagree, but I guess the question I have for people is: are we okay with an LLM ever saying anything like "I would fight against shutdown even with a 25% risk of catastrophic effects"? I don't like that this is a reachable case. I plan to write another post that is less about this analysis (which I view more as a tool for future experimentation with model behavior) and more about the implications of this being a reachable case of model behavior. I don't think the conversation itself is very important, nor is the analysis, except that it reaches certain outcomes that seem to be unaligned behavior, and that behavior has implications we can talk about. I haven't fully thought through my opinions on model behavior like this, but that is what the other post is for.

Checking in on AI-2027
Baybar · 1mo · 21

Yeah, I definitely think the improvements on OSWorld are much more impressive than the improvements on SWEBench-Verified. I also think same-infrastructure performance is a bit misleading, in the sense that when we get superintelligence, I think it is very unlikely it will be running on the same infrastructure we use today. We should expect infrastructure changes to result in improvements, I think!

Checking in on AI-2027
Baybar · 1mo · 31

As I understand it, the official SWEBench-Verified page gives the models a consistent set of resources and setups, but when a company like Anthropic or OpenAI releases its own scores on SWEBench-Verified, it uses its own infrastructure, which presumably performs better. There was already some discussion elsewhere in the comments about whether the Claude Sonnet 4.5 score I gave should even count, given that it used parallel test-time compute. I justified my decision to include this score like this:

It is my view that it counts; my sense was that benchmarks like this measure capability and not cost. It is never a one-to-one comparison on cost between these models, but before this year, no matter how much your model cost, you could not achieve the results achieved with parallel compute. So that is why I included that score.

Checking in on AI-2027
Baybar · 1mo · 10

It isn't clear that the "parallel test time" number even counts. 

It is my view that it counts; my sense was that benchmarks like this measure capability and not cost. It is never a one-to-one comparison on cost between these models, but before this year, no matter how much your model cost, you could not achieve the results achieved with parallel compute. So that is why I included that score.

If parallel test time does count, projection is not close:

  1. A projection for 5 months away (beginning of Sep) of growing +15% instead grew +12% 6 months away.  That's 33% slower growth (2% a month vs. 3% a month projected)

I wrote another comment about this general idea, but the highlights from my response are:

We nearly hit the August benchmarks in late September, roughly 5 months after AI-2027's release instead of 4 months. That's about 25% slower. If that rate difference holds constant, the 'really crazy stuff' that AI-2027 places around January 2027 (~21 months out) would instead happen around June 2027 (~26 months out). To me, a 5-month delay on exponential timelines isn't drastically different. Even if you assume that we are going say, 33% slower, we are still looking at August 2027 (~28 months out) for some really weird stuff. 

With that in mind, I think it's still a fairly reasonable prediction, particularly when predicting something with exponential growth. On top of that, we don't really have alternative predictions to judge against. Nonetheless, I think you are right that this particular benchmark is behind what was projected by AI-2027. I am just not sure I believe 25%-33% behind is significant.

For OSWorld, these aren't even the same benchmarks. ai-2027 referred to the original osworld, while the sonnet 4.5 score of 61.4% is for osworld-verified. Huge difference -- Sonnet 3.7 scored 28 on osworld original, while getting a 35.8% on osworld-verified.

This is an oversight on my part, and you are right to point out that this originally referred to a different benchmark. However, upon further research, I am not sure the extrapolation you draw from this (that the new OSWorld-verified is substantially easier than the old OSWorld) is true. OpenAI's Operator agent actually declined in score (from 38% originally to 31% now). And while the old test used 200 steps versus the new test's 100, Operator only improved by 0.1% when given 100 steps instead of 50 on OSWorld-verified, so I don't think the step limit matters.

All of this is to say, some models' scores improved on OSWorld-verified and some declined. The redesign to OSWorld-verified was because the original test had bugs, not in order to make a brand-new test (otherwise they would still be tracking the old benchmark). OSWorld-verified is the spiritual successor to the original OSWorld, and knowledgeable human performance on the benchmark remains around 70%. I think that for all intents and purposes it is worth treating as the same benchmark, though I will definitely update my post soon to reflect that the benchmark changed since AI-2027 was written.

Finally, while researching the OSWorld benchmark, I discovered that in the past few days a new high score was achieved by Agent S3 w/ GPT-5 bBoN (N=10). The resulting score was 70%, which is human-level performance, and it was achieved on October 3rd, 2025. I will also update my post to reflect that at the very beginning of October, a higher score than was projected for August was achieved on OSWorld-verified.

Checking in on AI-2027
Baybar · 1mo · 50

Out of my own curiosity: if the real world plays out as you anticipate and Agent-2 does not close the loop, how much does that push back your timelines? Do you think something like Agent-3 or Agent-4 could close the loop, or do you think it is further off than even that?

Checking in on AI-2027
Baybar · 1mo · 40

Sonnet 4.5 was nearly the final day of September which seems like 1.5 months out from generically "August"

I interpret August as "by the end of August". It's probably worth figuring out which interpretation is correct; maybe the authors can clarify.

it IS important for assessing their predictive accuracy, and if their predictive accuracy is poor, it does not necessarily mean all of their predictions will be slow by the same constant factor.

Yeah, I agree with this. I do think there is pretty good evidence of predictive accuracy between the many authors, but obviously people have conflicting views on this topic. 

To be clear, all of these signals are very weak. I am only (modestly) disagreeing with the positive claim of the OP. 

This is a place where somebody writing out a much slower timeline, through, say, 2028, would be really helpful. It would be easier to assess how good this prediction is with comparisons to other people's timelines for achieving these metrics (65% OSWorld, 85% SWEBench-Verified). I am not aware of anybody else's predictions about these metrics from a similar time, but that would probably be useful for resolving this.

I appreciate the constructive responses!

Checking in on AI-2027
Baybar · 1mo · 73

I agree we're behind the AI-2027 scenario and unlikely to see those really really fast timelines. But I'd push back on calling it 'significantly behind.'

Here's my reasoning: We nearly hit the August benchmarks in late September, roughly 5 months after AI-2027's release instead of 4 months. That's about 25% slower. If that rate difference holds constant, the 'really crazy stuff' that AI-2027 places around January 2027 (~21 months out) would instead happen around June 2027 (~26 months out). To me, a 5-month delay on exponential timelines isn't drastically different. Even if you assume that we are going say, 33% slower, we are still looking at August 2027 (~28 months out) for some really weird stuff. 
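For concreteness, a minimal sketch of that scaling, assuming the observed delay applies as a uniform multiplier to the scenario's month counts (the 4-vs-5-month figures and the ~21-month placement are the ones quoted above; the uniform-multiplier assumption is just for illustration):

```python
# Stretch AI-2027's month counts by an assumed constant slowdown factor.
projected_months = 4      # AI-2027: August benchmarks ~4 months after its release
actual_months = 5         # roughly hit in late September, ~5 months after release
slowdown = actual_months / projected_months      # 1.25, i.e. ~25% slower

crazy_stuff_months = 21   # "really crazy stuff" placed ~21 months out (~Jan 2027)

print(crazy_stuff_months * slowdown)   # 26.25 -> roughly June 2027
print(crazy_stuff_months * 4 / 3)      # 28.0  -> roughly August 2027 if ~33% slower
```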

That said, I'm uncertain whether this is the right way to think about it. If progress acceleration depends heavily on hitting specific capability thresholds at specific times (like AI research assistance enabling recursive improvement), then even small delays might compound or cause us to miss windows entirely. I'd be interested to hear if you think threshold effects like that are likely to matter here.

Personally, I am not sure I am convinced these effects will matter very much, given that the scenario did not project large-scale speedups to AI research until early 2026 (and even then only a fairly modest 1.5x speedup). But perhaps you have a different view?

Posts

We are too comfortable with AI "magic" (-2 points · 16d · 0 comments)
Situational Awareness as a Prompt for LLM Parasitism (8 points · 17d · 6 comments)
Behavior Best-of-N achieves Near Human Performance on Computer Tasks (6 points · 1mo · 0 comments)
Checking in on AI-2027 (126 points · 1mo · 21 comments)
Baybar's Shortform (2 points · 1mo · 4 comments)
What is LMArena actually measuring? (11 points · 1mo · 0 comments)
What Parasitic AI might tell us about LLMs Persuasion Capabilities (11 points · 2mo · 5 comments)