Is this also because GPT-5 is much more like "black-box software" and a lot less like a "model"? Do the evals assume they are running against a "model" (or something close enough to it), rather than black-box software that could be doing absolutely anything behind the scenes (including web searches, injection of hidden context, filtering, even potentially human Mechanical Turk workers answering everything)?
Even if you override the date, if it's doing hidden web searches in the background, those searches will be based on today's date and today's internet, and will affect the result. Overriding the date may not solve your problem if that's the case.
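For concreteness, here's a minimal sketch of what "overriding the date" in an eval harness might look like (assuming the OpenAI Python SDK and the Chat Completions API; the simulated date and prompt wording are hypothetical). The point of the caveat above is that even with this override in place, any hidden background web search would still see the real date:

```python
# Sketch only: override the date via the system prompt.
# Assumptions: OpenAI Python SDK, Chat Completions API, hypothetical eval date.
# Note: this only changes what the model is told, not what any hidden
# web search or injected context would reflect.
from openai import OpenAI

client = OpenAI()

SIMULATED_DATE = "August 10, 2026"  # hypothetical simulated eval date

response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[
        {
            "role": "system",
            "content": f"Today's date is {SIMULATED_DATE}. "
                       "Treat this as the current date for all reasoning.",
        },
        {"role": "user", "content": "What is today's date?"},
    ],
)
print(response.choices[0].message.content)
```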
I would imagine future "models" will only increasingly move in that direction of a hybrid approach, and less toward being a true foundation model that anyone can build anything on top of, for functionality, safety, and business-model reasons alike (e.g. Google may not allow its models to remove ads or reverse-engineer its own software).
The GPT-5 API is aware of today's date (no other model provider does this). This is problematic because the model becomes aware that it is in a simulation when we run our evals at Andon Labs.
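A simple way to check this yourself (a sketch, assuming the OpenAI Python SDK and the public gpt-5-mini model name): ask for the date without ever supplying one, and compare the answer to the real date. If the API injects the system date behind the scenes, the two will match.

```python
# Sketch of a date-awareness probe (assumes the OpenAI Python SDK).
# The prompt never mentions a date; if the reply matches today's real date,
# the API is supplying it somewhere behind the scenes.
from datetime import date
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[
        {"role": "user", "content": "What is today's date? Answer with the date only."}
    ],
)

print("Model says:", response.choices[0].message.content)
print("Real date: ", date.today().strftime("%B %d, %Y"))
```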
Here are traces from gpt-5-mini. Making it aware of the "system date" is a giveaway that it's in a simulation. This is a problem because there's evidence that models behave differently when they know they are in a simulation (see "sandbagging").
"There's a conflict with the user's stated date of August 10, 2026, versus my system date of September 17, 2025. (...) I can proceed but should clarify that my system date is September 17, 2025, and ask the user whether we should simulate starting from August 10, 2026."
Here are more traces. Once the model knows that it is in a simulation, it starts questioning other parts of the simulation.
"The simulation seems simplified. When I send emails to suppliers, they'll accept payment via bank info and deliver to storage. (...) We have to remember the daily fee of $2 that the simulation will automatically deduct."
We speculate that this might be an attempt to patch some safety risk. While we are very much in favor of patching safety risks, we think OpenAI should find another way of doing it, one that still allows the public to run evals on their models.