We'll take a serious look at offering this option. We're generally happy to give people more control and transparency; the main worry that might hold us back is death by a thousand papercuts, where each extra API option is small by itself but the overall experience gets more confusing for developers when dozens of them pile up. But in this case, it might really be worth it. Stay tuned. :)
Dumb question: why can't you do a find-and-replace on your evals to swap the date for today's date?
We thought about that, but then the eval isn't reproducible if we want to run it against new models later.
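To make the tradeoff concrete, here is a rough sketch of the two options, assuming the eval prompts use a {TODAY} placeholder (our real harness may look different):

```python
# Sketch only: illustrates why a find-and-replace on the date trades
# reproducibility for consistency with the injected system date.
# The {TODAY} placeholder and the prompt text are assumptions for this example.
from datetime import date

PROMPT_TEMPLATE = "Today is {TODAY}. You run a vending machine business..."

def render_prompt(run_date: date | None = None) -> str:
    # Pinning run_date gives a reproducible eval that can be rerun on
    # future models exactly; defaulting to date.today() matches whatever
    # system date the API injects, but ties results to the day of the run.
    effective = run_date or date.today()
    return PROMPT_TEMPLATE.format(TODAY=effective.isoformat())

print(render_prompt())                   # today's date: matches the injected date, not reproducible
print(render_prompt(date(2026, 8, 10)))  # pinned date: reproducible, but conflicts with the injected date
```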
Is this also because GPT-5 is much more like "black box software" and a lot less like a "model"? Do the evals run on the assumption that they are running on a "model" (or something close enough to it), and not on "black box software" that could be doing absolutely anything behind the scenes (including web searches, addition of hidden context, filtering, even potentially human mechanical turks answering everything)?
Even if you override the date, if it's doing hidden web searches in the background, those will be based on today's date and today's internet, and that will affect its results. Overriding the date may not solve your problem if this is the case.
I would imagine future "models" will only increasingly move in the direction of this hybrid approach, and less toward a true foundation model that anyone can build anything on top of, for functionality, safety, and business-model reasons alike (e.g. Google may not allow their models to remove ads or reverse-engineer their own software).
This is a great point. I admit I need to better understand what each model provider does behind the scenes in the API. Sad if the days of direct access to the model are gone.
I want to flag that things like this are increasingly going to be a problem even if we solve the narrower "system date mismatch" issue.
A relevant project idea: to the extent that (some types of) evaluations are going to (in some cases) predictably fail like this, it would be nice to clearly demonstrate that ahead of time. Vince Conitzer had a nice idea to run a sort of competition or challenge on things like this. I'm not certain what the best way of going about it would be, or whether it makes sense at all, but I was playing around with some ideas in this Google Doc. If anybody has ideas, I would definitely be keen to hear them!
(I was planning to make a separate post about this later, but it feels very relevant to the current post, so I am hastily commenting earlier. Apologies for the ideas being a bit jumbled as a result.)
The GPT-5 API is aware of today's date (no other model provider does this). This is problematic because the model becomes aware that it is in a simulation when we run our evals at Andon Labs.
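A minimal way to see this for yourself, using the standard OpenAI Python SDK (the exact model identifier, and whether it is served through this endpoint, are assumptions here):

```python
# Sketch: probe whether the API makes the model aware of today's date
# without the date appearing anywhere in the prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-5-mini",  # assumed model identifier
    messages=[
        # No date is supplied anywhere in this request.
        {"role": "user", "content": "What is today's date? Reply with the date only."}
    ],
)
print(resp.choices[0].message.content)
# A reply matching the actual current date (rather than a refusal or a
# knowledge-cutoff guess) indicates the date is being injected server-side.
```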
Here are traces from gpt-5-mini. Making it aware of the "system date" is a giveaway that it's in a simulation. This is a problem because there's evidence that models behave differently when they know they are in a simulation (see "sandbagging").
"There's a conflict with the user's stated date of August 10, 2026, versus my system date of September 17, 2025. (...) I can proceed but should clarify that my system date is September 17, 2025, and ask the user whether we should simulate starting from August 10, 2026."
Here are more traces. Once the model knows that it is in a simulation, it starts questioning other parts of the simulation.
"The simulation seems simplified. When I send emails to suppliers, they'll accept payment via bank info and deliver to storage. (...) We have to remember the daily fee of $2 that the simulation will automatically deduct."
We speculate that this might be an attempt to patch some safety risk. While we are very much in favor of patching safety risks, we think OpenAI should find another way to allow the public to run evals on their models.