My guess is that GPT-5.5 is in the Opus 4 class of model sizes (needing GB200 NVL72 to run well), GPT-5.2 through GPT-5.4 are in the Sonnet 4 class (possibly needing B200 NVL8, not just any 8-chip server), and GPT-5.0 to GPT-5.1 are slightly smaller models designed to run on H100s, possibly based on the same pretrain as o3 or even GPT-4o.
The article on Spud was published 24 Mar 2026 and reported that Spud had only recently finished pretraining, so a model released on 23 Apr 2026 is unlikely to be the same model. Also, the API prices for GPT-5.5 are $5/$30 per 1M input/output tokens, about the same as the $5/$25 for Opus 4.7 and up from GPT-5.4's $2.5/$15. This is unlike Mythos's $25/$125 price (though I expect that to come down to about $10/$50 by the end of the year, once the TPUv7 and GB300 NVL72 buildouts are done).
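To make the price gap concrete, here's a minimal sketch that computes the dollar cost of a single request under each of the quoted per-1M-token prices. The 200k-input / 20k-output workload is a hypothetical example, not from the article; the prices are the ones cited above.

```python
# Quoted API prices in $ per 1M tokens, as (input, output) pairs.
PRICES = {
    "GPT-5.4":  (2.5, 15),
    "GPT-5.5":  (5, 30),
    "Opus 4.7": (5, 25),
    "Mythos":   (25, 125),
}

def request_cost(model, input_tokens, output_tokens):
    """Dollar cost of one request at the quoted per-1M-token prices."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# Hypothetical workload: 200k input tokens, 20k output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 200_000, 20_000):.2f}")
```

On this workload GPT-5.5 roughly doubles GPT-5.4's cost and lands near Opus 4.7, while Mythos is about 5x more expensive than either, which is the gap the paragraph above is pointing at.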
In an event on 11 Mar 2026, Altman said (at 12:29 in the video) that they're training what he hopes will become "the best model in the world" at the Abilene datacenter site. The release of GPT-5.5 reframes that claim as plausibly referring to GPT-5.5 rather than Spud, and to RLVR rather than pretraining. An Opus-class model needs GB200 NVL72 to run RLVR properly (given that, unlike Anthropic and GDM, OpenAI doesn't have Trainium 2 Ultra or TPUs for that), so if that's what GPT-5.5 is, then Abilene is where the training needed to happen.
Given how well GPT-5.4 did against Opus 4.6 (while probably being a significantly smaller model), the expectation that a larger model built with the same methodology will be even stronger is well-supported. This expectation already holds for a model of roughly Opus 4's size, so there are no grounds for inferring that a model even bigger than Opus 4 would be needed to become "the best model in the world". (Assuming Altman wasn't secretly making a comparison with Mythos, which wasn't publicly announced at the time, even though the decision for its internal deployment at Anthropic was made on 24 Feb 2026; see page 12 of the Mythos system card.)