METR's task-horizon score for GPT-5 is 2h17m @ 50% success. For comparison, o3 was 1h32m and Grok 4 (the prior SOTA) was 1h50m. At 80% success, GPT-5 scores 25m; the prior SOTA was 20m, shared by o3 and Claude 4 Opus.
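Back-of-the-envelope on what that jump implies, as a rough sketch (the ~7-month doubling time is my recollection of METR's original fit, and the arithmetic here is mine, not anything METR published):

```python
from math import log2

# 50%-success horizons quoted above, in minutes.
grok4_horizon = 110   # 1h50m, prior SOTA
gpt5_horizon = 137    # 2h17m

# How many doublings does the jump represent?
doublings = log2(gpt5_horizon / grok4_horizon)  # ~0.32

# Assumed doubling time for the 50% horizon; METR's original paper
# estimated roughly 7 months, though recent models look faster.
doubling_time_months = 7.0

implied_progress = doublings * doubling_time_months
print(f"{doublings:.2f} doublings ≈ {implied_progress:.1f} months "
      f"of progress at a {doubling_time_months:.0f}-month doubling time")
```

Whether that reads as above or below trend then depends mostly on the release gap you assume between Grok 4 and GPT-5.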
I expect below trend rather than above trend, due to some early reports about GPT-5.
Which reports, specifically?
Did we ever get any clarification as to whether Grok 4 did in fact use as much compute on post-training as on pre-training?
METR has finally tested Gemini 2.5 Pro (June Preview) and found its 50%-success task horizon is only 39 minutes, far worse than o3 or Opus 4, which are at 90 and 80 minutes respectively. This probably shouldn't be a gigantic update, given 2.5 Pro never scored amazingly on SWE-bench, but it's still worse than I expected given how good the model is otherwise.
I feel like looking at unreleased models for doubling time mucks things up a bit. For instance, I'm assuming the unreleased o3 model from December had a significantly longer time horizon in math than the released o3, given its much higher scores on FrontierMath, etc.
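To make that concrete, here's a minimal sketch of the fit these analyses rest on (regressing log2(horizon) on release date), using made-up dates and horizons rather than METR's data, to show how shifting a single model's date moves the estimated doubling time:

```python
import numpy as np

def doubling_time_months(dates_months, horizons_min):
    """Fit log2(horizon) against release date; the slope is
    doublings per month, so its inverse is months per doubling."""
    slope, _intercept = np.polyfit(dates_months, np.log2(horizons_min), 1)
    return 1.0 / slope

# Illustrative (release month index, 50% horizon in minutes) pairs.
dates = np.array([0.0, 6.0, 12.0])
horizons = np.array([15.0, 30.0, 90.0])
print(f"baseline fit: {doubling_time_months(dates, horizons):.1f} months/doubling")

# Pretend the newest model was actually an unreleased checkpoint
# finished 3 months earlier: the same scores now imply a faster trend.
dates_shifted = dates.copy()
dates_shifted[-1] -= 3.0
print(f"shifted date: {doubling_time_months(dates_shifted, horizons):.1f} months/doubling")
```

With these toy numbers, the estimate moves from about 4.6 to about 3.7 months per doubling, purely from a three-month change in one model's date.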
Worth noting that this year's P3 was really easy (Gemini 2.5 Pro even got it some of the time), and that Grok 4 Heavy and Gemini Deep Think got problems rated as harder. Still an achievement, though.
From the author of the Epoch article:
https://x.com/GregHBurnham/status/1946655635400950211
METR's task-length horizon analysis for Claude 4 Opus is out. The 50%-success horizon is 80 minutes, slightly worse than o3's 90 minutes. The 80%-success horizon is tied with o3 at 20 minutes.
Surprised that this hasn't gotten more discussion. There are some potentially big implications for the time-horizons study, which has become fairly load-bearing in timelines discourse.