I think it would be cool if more people did similar reviews of other benchmarks. This is a relatively low-effort high-value thing to do compared to creating new benchmarks.
It would significantly increase our capacity to properly measure AI capabilities, and would also improve methodological standards for creating new benchmarks.
Benchmarks are how people optimize for capabilities. Working on them has been a core method of increasing capabilities over the past 10+ years of machine learning, at minimum since I started paying attention in 2015. If you want to improve endgame (ASI) alignment outcomes, measuring capabilities is a distraction at best.
Edit: whether this applies to bad-behavior benchmarks is contested below. I happily restrict my claim to benchmarks that measure things labs would have some reason to optimize for, and I still believe that covers this topic.
...🤔 ...well played, Dan, well played.
(I don't think Dan Hendrycks deliberately released a benchmark with errors, but it would be hilarious, especially given his gripes about GPQA label noise.)
I also very much don't think the errors were on purpose. Abstaining from capability measurement doesn't seem likely to do much besides save one's own time for something more useful; I just hope people who are excited to do alignment work don't end up wasting their time on it. Errors seem unlikely to slow things down much - someone else is going to make a benchmark anyway, but let a capabilities person do it so you can spend your time on something more useful, like figuring out how to algorithmically generate arbitrary amounts of guaranteed-aligned training data from interacting with humans, with some sort of probabilistic guarantee of representing behavior that is in fact good rather than simply easy to generate. (Edit: which is probably done through a somewhat complex theoretical breakthrough about how to incentivize aligned-by-construction exploration or some such thing.)
While I definitely agree there are negative externalities here, I also think there are extremely positive externalities from key decision makers being better informed, especially about how close we are to certain capabilities like AI-enabled bioterrorism or cybercrime, or automated R&D/an intelligence explosion, etc. Information is great, and I think it generally has a fairly positive effect even if the decision maker is not highly competent. Bioterrorism and cybercrime, at least, are not things I'm concerned about AGI researchers hill-climbing on; automated R&D is much dicier.
Private benchmarks seem solid here too
I'm surprised to hear that you aren't concerned about negative benchmarks being hill-climbing targets for anyone. This updates me somewhat, though the hypotheses I'm still worried about are ones where dishonest labs, whichever those turn out to be, are the main source of optimizing for bad-behavior benchmarks. I also expect that bio/chem tasks that aren't malicious-use-specific, which is the topic at hand, will get optimized for by less-dishonest labs in at least some cases.
Yeah, I feel much better about malicious-use-specific ones. Agreed that HLE is more generic and this is much worse.
Consider these possibilities for what benchmarks are doing here.
Establishing Best Practices for Building Rigorous Agentic Benchmarks by Yuxuan Zhu et al. (July 14th, 2025) covers this problem for agentic benchmarks.
For example, SWE-bench-Verified uses insufficient test cases, while τ-bench counts empty responses as successful. Such issues can lead to under- or overestimation of agents' performance by up to 100% in relative terms. To make agentic evaluation rigorous, we introduce the Agentic Benchmark Checklist (ABC), a set of guidelines that we synthesized from our benchmark-building experience, a survey of best practices, and previously reported issues. When applied to CVE-Bench, a benchmark with a particularly complex evaluation design, ABC reduces the performance overestimation by 33%.
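To unpack "in relative terms" (my gloss, not the paper's wording): a 100% relative overestimate means the reported pass rate is double the true pass rate. A minimal sketch with illustrative numbers:

```python
def relative_error(reported: float, true: float) -> float:
    """Relative over-/underestimation of a benchmark score."""
    return (reported - true) / true

print(relative_error(0.40, 0.20))  # 1.0  -> a 100% relative overestimate
print(relative_error(0.18, 0.25))  # -0.28 -> a 28% relative underestimate (illustrative numbers)
```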
We conducted an in-depth analysis of specific issues present in each agentic benchmark. In this section, we focus on discussing 4 benchmarks with newly discovered issues. We defer a detailed description of all identified issues to Appendix D and experiment designs to Appendix E.
1. τ-bench relies on trivial states or substrings as ground truth [...] overestimating performance by 38%.
2. τ-bench also allows agents to list every possible answer [...] overestimating performance by 40%.
3. WebArena [...] uses an LLM-as-a-Judge without validating its accuracy or consistency [...], leading to a 1.4–5.2% performance overestimate.
4. SWE-Lancer fails to fully isolate agents from the ground truth [...], allowing agents to score 100% without solving tasks.
5. KernelBench omits comprehensive fuzzing for edge cases and memory layouts [...] overestimating kernel-correctness performance by approximately 31%.
6. In OSWorld, the task website changes have broken the HTML selectors used for evaluation, leading to a 28% performance underestimation in the chrome task section.
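To make issues like (1) and (2) concrete, here is a minimal sketch of how a substring-based or emptiness-tolerant grader marks unsolved tasks as passing, while a stricter grader does not. The function names and checks are hypothetical illustrations, not any benchmark's actual harness:

```python
# Hypothetical graders illustrating the pitfalls above (not real benchmark code).

def lenient_grade(agent_response: str, expected_answer: str) -> bool:
    """Counts a task as solved if the expected answer appears anywhere in the
    response, or if the response is empty (the failure modes in issues 1-2)."""
    response = agent_response.strip()
    if response == "":
        return True  # empty response counted as success
    return expected_answer in response  # substring match as ground truth

def strict_grade(agent_response: str, expected_answer: str) -> bool:
    """Requires a non-empty response that matches the expected answer exactly."""
    response = agent_response.strip()
    return response != "" and response == expected_answer

# An agent that lists every candidate answer passes the lenient check without
# having solved anything; an empty response passes too.
shotgun = "Possible answers: 12, 17, 23, 42, 99"
print(lenient_grade(shotgun, "42"), strict_grade(shotgun, "42"))  # True False
print(lenient_grade("", "42"), strict_grade("", "42"))            # True False
```

The checklist-style fix is essentially the stricter grader: require a non-empty, validated match (or task-specific verification) before counting a task as solved.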
Update (20th Sep 2025): Scale AI has revised their Humanity's Last Exam preprint in light of this evaluation, and conducted their own checks on the accuracy of HLE questions, finding an error rate of 18% instead:
They also note:
FutureHouse is a company that builds literature research agents. They tested their agent on the bio + chem subset of HLE questions and noticed errors in them.
The post's first paragraph:
About the initial review process for HLE questions:
Some examples they show:
I initially noticed this post from spiderduckpig in the Manifold Discord server.