"Two medical researchers use the same treatment independently, in different hospitals. Neither would stoop to falsifying the data, but one had decided beforehand that because of finite resources he would stop after treating N=100 patients, however many cures were observed by then. The other had staked his reputation on the efficacy of the treatment, and decided he would not stop until he had data indicating a rate of cures definitely greater than 60%, however many patients that might require. But in fact, both stopped with exactly the same data: n = 100 [patients], r = 70 [cures]. Should we then draw different conclusions from their experiments?" (Presumably the two control groups also had equal results.)
It wasn't immediately obvious to me whether we should treat the data in these two cases identically or not. I wrote this up in case it helps clarify the situation to others.
Consider that your hypotheses are of the form "the treatment cures a fraction $p$ of the people" for various values of $p$.[1] Denote this hypothesis by $H_p$.
Updating the hypotheses in the first researcher's case is completely standard Bayes theorem. How well did the hypotheses predict the data? $H_p$ predicts it with probability
$P(n = 100, r = 70 \mid H_p) = \binom{100}{70} p^{70} (1-p)^{30}$.
So then your likelihood ratio for $H_p$ vs. $H_q$ is simply
$\frac{P(n = 100, r = 70 \mid H_p)}{P(n = 100, r = 70 \mid H_q)} = \left(\frac{p}{q}\right)^{70} \left(\frac{1-p}{1-q}\right)^{30}$.
Calculating these probabilities and likelihood ratios is not easy to do in one's head, so it's best to use calculators and computers for this.
Example calculations with numbers
As an illustration, for $H_{0.8}$ and $H_{0.5}$, the ratio is $(0.8/0.5)^{70}(0.2/0.5)^{30} \approx 224$. This might sound surprisingly large - we are practically certain[2] that out of these two, 80% is a more correct hypothesis than 50%!
Another example: $H_{0.7}$ is the hypothesis most favored by the data[3], and the odds ratio compared to $H_{0.8}$ is $(0.7/0.8)^{70}(0.3/0.2)^{30} \approx 17$. This is decently large, so the treatment probably wouldn't cure (more than) 80% of people if we were to conduct it on a much larger sample (again, barring prior beliefs to the contrary).
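To double-check these numbers, here is a small script (mine, not from the post):

```python
from math import comb

def likelihood(p, n=100, r=70):
    """P(r cures out of n patients | H_p) under the first researcher's fixed-n design."""
    return comb(n, r) * p**r * (1 - p) ** (n - r)

print(likelihood(0.8) / likelihood(0.5))   # ≈ 224
print(likelihood(0.7) / likelihood(0.8))   # ≈ 17
```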
How about the second researcher? This is trickier. The text reads
"The other had staked his reputation on the efficacy of the treatment, and decided he would not stop until he had data indicating a rate of cures definitely greater than 60%, however many patients that might require."
This could mean many things, but let's choose the following simple "stopping condition": the researcher only stops after more than 65% of the experiment participants have been cured. If the cure rate is at most 65%, the researcher continues.
Examples of what this means
If the first person is cured, great! We have a 100% cure rate.
If not, test the second person. If they are cured, we have a 50% cure rate. Not enough, so test a third person. If they are cured, too, we have a cure rate of $2/3 \approx 67\%$, which is just enough, so we can stop there.
Otherwise keep on sampling.
Okay! Calculating $P(n = 100, r = 70 \mid H_p)$ seems much harder now. Let's just use a computer simulation:
Code
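A minimal sketch of what such a simulation could look like (the function names, the patient cap, and the simulation count are my own choices):

```python
import random

def run_second_researcher(p, max_patients=100_000):
    """One run of the second researcher's procedure under hypothesis H_p:
    keep treating patients (1 = cured, 0 = not cured) until more than 65%
    of the participants treated so far have been cured."""
    data_current = []
    # The condition also holds for the empty list (0 <= 0), so at least one patient
    # is always treated. The cap keeps runs with small p from looping forever.
    while sum(data_current) <= 0.65 * len(data_current):
        if len(data_current) >= max_patients:
            return None
        data_current.append(int(random.random() < p))
    return data_current

def estimate_probability(p, n_target=100, r_target=70, num_simulations=10**6):
    """Monte Carlo estimate of P(stop with exactly n_target patients, r_target cured | H_p)."""
    hits = 0
    for _ in range(num_simulations):
        run = run_second_researcher(p)
        if run is not None and (len(run), sum(run)) == (n_target, r_target):
            hits += 1
    return hits / num_simulations
```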
Run the code with $p = 0.8$ and the target $(n, r) = (100, 70)$, and...
oops, the probability is $0$.
Right: it's impossible to get exactly $100$ participants with $70$ cures this way. Think: if we already had $99$ participants with $70$ cures, we would have already stopped! Same if we had had $69$ cured people.
Okay, we need to adjust the simulation. Maybe the researcher wanted to have a "round" number of participants in the study, so that the results would appear "more natural". So the researcher wanted to have the sample size be divisible by $20$. Edit the code:
Edited code
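A sketch of the corresponding edit (again with my own names), changing only the stopping condition:

```python
import random

def run_second_researcher(p, max_patients=100_000):
    """As above, but the researcher also wants a "round" sample size:
    stop only once more than 65% are cured AND the number of participants
    is divisible by 20."""
    data_current = []
    while sum(data_current) <= 0.65 * len(data_current) or len(data_current) % 20 != 0:
        if len(data_current) >= max_patients:
            return None
        data_current.append(int(random.random() < p))
    return data_current
```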
Okay, let's compute, setting $p = 0.8$ again to compute $P(n = 100, r = 70 \mid H_{0.8})$. The resulting probability is pretty small - glad we ran a lot of simulations!
Let's also compute $P(n = 100, r = 70 \mid H_{0.5})$ so that we can look at the likelihoods. The two estimates give us a simulated likelihood ratio
$\frac{P(n = 100, r = 70 \mid H_{0.8})}{P(n = 100, r = 70 \mid H_{0.5})}$.
Performing the same update on the first researcher's data gives you a ratio of about $224$. Pretty close! And if you use more compute, you get closer and closer matches. The update really is the same!
There is a certain point of view from which this answer is not surprising at all, where you think "of course you get the same result". But this point of view is not quite natural to people, and communicating it isn't trivial. Here's my shot.
Let's play a game of "spot the differences":
Code 1
Code 2
Answer
Line 12 has been changed.
And, of course, code 1 corresponds to the update with the first researcher, and code 2 corresponds to the update with the second researcher.
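A rough sketch of the shared structure of the two code blocks (names and simulation count are my own choices); the changed line is the sampling loop's condition:

```python
import random

def probability_of_counts(p, num_simulations=10**6):
    """Estimate P(n = 100 participants, r = 70 cures | H_p) by simulation."""
    hits = 0
    for _ in range(num_simulations):
        data_current = []
        # Code 1 (first researcher): treat a fixed number of 100 patients.
        while len(data_current) < 100:
            data_current.append(int(random.random() < p))
        # Code 2 (second researcher) instead loops with the stopping condition
        #   sum(data_current) <= 0.65 * len(data_current) or len(data_current) % 20 != 0
        if len(data_current) == 100 and sum(data_current) == 70:
            hits += 1
    return hits / num_simulations

# Likelihood ratio for H_0.8 vs. H_0.5:
# probability_of_counts(0.8) / probability_of_counts(0.5)
```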
That was an easy one. Ready for round 2?
Code 3
Code 4
Answer
These are again the probabilities and likelihood ratios for the first and second researcher. This time, however, instead of thinking simply about how many participants our study had and how many were cured, we take in the full data of which people were cured and which weren't (the "observed_data", which I'm shuffling here, since we don't know the order in which they were sampled).
It might seem a priori plausible that the second code will simply return probabilities of 0 and 0, resulting in a division-by-zero error. But I rigged the random number seed so that the shuffling works out right and the probabilities are positive. This reflects how in real life, the second researcher's procedure really might stop after exactly $100$ participants.
Given this information, can you see it now? Can you spot the difference between code 3 and code 4?
There isn't one.
If data == data_current holds true, then it doesn't matter whether line 11 reads "while len(data_current) < len(data)" as opposed to "while sum(data_current) <= 0.65*len(data_current) or len(data_current) % 20 != 0". All exact matches will have been produced by sampling 100 random numbers correctly. That's it. If you sample them correctly, it doesn't matter what you write in line 11, and if you don't sample them correctly, then you won't get an exact match anyways.
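A rough sketch of this version (the seed value, variable names, and simulation count are my own stand-ins); the loop condition is the only place where the two versions could differ, and it doesn't matter which one is used:

```python
import random

random.seed(0)  # stand-in for the rigged seed mentioned above

observed_data = [1] * 70 + [0] * 30   # 1 = cured, 0 = not cured
random.shuffle(observed_data)

def probability_of_exact_data(p, data, num_simulations=10**7):
    """Estimate P(the procedure produces exactly this sequence of outcomes | H_p)."""
    hits = 0
    for _ in range(num_simulations):
        data_current = []
        # The two versions differ only in this loop condition; for runs where
        # data_current ends up equal to data, both conditions stop at the same point:
        #   while len(data_current) < len(data):
        #   while sum(data_current) <= 0.65 * len(data_current) or len(data_current) % 20 != 0:
        while len(data_current) < len(data):
            data_current.append(int(random.random() < p))
        if data_current == data:
            hits += 1
    return hits / num_simulations
```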
The programs produce the exact same output (except for changes in random number generation).
The key thing is: the stopping condition doesn't discriminate between hypotheses. If you look at codes 3 and 4, what matters for the stopping condition is the data, not the hypothesis.[4] Therefore, if you want to update your probabilities on hypotheses, you need only focus on the data.
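To compress the argument into formulas (my phrasing): for any stopping rule that looks only at the data gathered so far,

$P(\text{exact sequence } x_1, \ldots, x_{100} \mid H_p) = p^{70}(1-p)^{30} \cdot \mathbf{1}[\text{the rule stops exactly after } x_{100}]$,

and the indicator factor does not depend on $p$. Summing over all admissible orderings with 70 cures gives

$P(n = 100, r = 70 \mid H_p) = C \cdot p^{70}(1-p)^{30}$

for a constant $C$ that is the same for every hypothesis, so every likelihood ratio comes out exactly the same as in the fixed-sample-size case.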
These are of course very simple hypotheses which don't consider why the cure works for some people and not others, but attributes those to unknown magic random factors. A true, curious scientist would think "but why does the cure work for some people but not others?" and come up with follow-up hypotheses, not least because this could allow for improving the treatment. But the argument here would go through with more complicated hypotheses, too.
Unless we had strong prior belief to the contrary. (And, of course, in real life we will have uncertainty about the methodology of such a study. Is this really all the data that was collected, or were there dropouts in the study? Was the sample representative of the whole population - how was it selected? These are often much more important than the uncertainty stemming from limited sample sizes.)
This should be intuitive, but it can also be formally proven via calculus, or informally demonstrated by computer calculations.
Indeed, it would be ontologically confused to say that the stopping condition depends on the hypotheses. Hypotheses are not the sort of thing that could affect when a researcher stops collecting data.
My retraction stemmed from uncertainty/insecurity around me not being familiar with the details of the RL algorithms people use, and from conforming to people who disagree with Turner on related topics.
I think this was an overreaction and probably a mistake, though, since I still think that many of the basic points like "it's not tautologically true that reward will be optimized for" are true and were good to say out loud.
The main thing I wish I had done differently would have been to be more explicit that reward-seeking behavior is totally compatible with the point of the post, and that reward might be an optimization target. I think it's too easy to read my review and come away thinking that [models behaving in ways you would predict based on the frame/hypothesis "models optimize for reward"] is unlikely (or to think that I think this is unlikely). While I never explicitly made this claim, I wouldn't blame a casual reader for arriving at that conclusion due to the way my comment was written.
So, to clarify my position: I think that various forms of reward-seeking/reward-optimizing behavior are very likely, and indeed Sonnet 3.7 hacking unit tests is a real-life example of this phenomenon. I'm uncertain of how "deep"/strategic/consequentialist such behavior will eventually be. It's not tautological that models will care about reward that much, but I also think that it's a live possibility that they will be well described as optimizing a lot for getting high reward.
Having written down this clarification, I'll now undo my retraction.
I’ve thought about collusion in general when doing this work. Here are assorted thoughts I’ve had which seem useful to have available publicly. Not all of these are directly related to the empirical work in the post.
1: Strong elicitation. I think collusion / Schelling coordination ability is methodologically harder to elicit than many other capabilities. For example: suppose you want to do stronger elicitation in the experimental setup of this paper. (Set aside the fact that there is a simple way to obtain ~100% win rate on it.) A natural way is to do RL on the meta-policy and meta-monitor, and reinforce cases where they successfully coordinate. However, this creates causal links between the meta-policy and meta-monitor – we are no longer looking at zero-shot acausal coordination ability, since the models causally learn about the strategies used by the other. We want to avoid this.
One can solve this as follows. We have 2N model copies, N of the meta-policy and N of the meta-monitor. We pair them as in the following diagram (N = 8):
We do RL between each pair (1 and A, 2 and B, …) for as long as we like, and afterwards we change pairs by rotating the outer circle:
We then do RL again and again rotate the outer circle. We continue in this fashion. The key property is that the causal histories of any two paired models are completely separate, up until we have made 1 full round. E.g. when 1 and G are eventually paired, the actions 1 has taken have had no causal influence on G, and vice versa, so in this narrow sense any coordination between 1 and G will be acausal.
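A minimal sketch of the schedule in code (using indices 0..N-1 instead of the 1..8 and A..H labels; the function name is mine):

```python
def pairing_schedule(n_pairs, n_rounds):
    """In round r, policy copy i is paired with monitor copy (i + r) % n_pairs,
    i.e. the outer circle is rotated by one position after each block of RL.
    Until a full rotation has been completed, the members of any newly formed
    pair have had no causal influence on each other via earlier rounds."""
    return [[(i, (i + r) % n_pairs) for i in range(n_pairs)]
            for r in range(n_rounds)]

# With N = 8: round 0 pairs (0, 0), (1, 1), ..., (7, 7);
# round 1 pairs (0, 1), (1, 2), ..., (7, 0); and so on.
```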
This idea seems useful in that it allows us to do relatively unconstrained training on acausal coordination abilities while maintaining acausality.
2: Computationally rich agents can zero-shot identify each other. Computationally unbounded (or merely very rich) agents could coordinate with each other as follows: insert the 10,000 decimal digits of pi that come after the 10^100th decimal place into the messages you send. Other computationally rich agents would recognize the string, whereas computationally poor agents couldn’t distinguish it from a random sequence of digits.
The decimal digits of pi could be replaced with something more canonical, in particular something based on the Kolmogorov complexity of strings. The key property is that some string has an unexpectedly short description length. Alternatively, if one assumes that both parties know of some hash function (such as SHA-256), they could search over messages whose hashes are equal to zero.[1]
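A toy sketch of the hash-based variant (the message format and the number of zero bits are arbitrary choices of mine):

```python
import hashlib
from itertools import count

def find_marked_message(base_message: str, zero_bits: int = 24) -> str:
    """Search for a variant of base_message whose SHA-256 hash starts with
    zero_bits zero bits. Producing one takes about 2**zero_bits hash
    evaluations, so only a computationally rich sender can afford a large
    zero_bits; checking a given message is cheap for everyone."""
    for nonce in count():
        candidate = f"{base_message} [{nonce}]"
        digest = hashlib.sha256(candidate.encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - zero_bits) == 0:
            return candidate

def is_marked(message: str, zero_bits: int = 24) -> bool:
    digest = hashlib.sha256(message.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - zero_bits) == 0
```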
I currently don’t see this as having any relevance to AI control protocols, though – clearly the AIs are computationally more bounded than humans are, since humans are running them.
3: Training on fake collusion signals. One anti-collusion method I haven’t heard talked about is training the models to know of particular collusion signals and then auditing and monitoring for those signals in particular. (C.f. synthetic document fine-tuning.) It’s plausibly much more difficult for the AIs to collude if the first few signals they think of are actually traps planted by humans.
There are ways this could backfire. If we train the models to know of some signals, that makes coordinating on those signals easy – so even if monitoring is also easier, given that humans know of those signals, the net effect isn’t obviously positive. But we can solve this issue by training the policy and monitor with different sets of coordination signals.
The signals could still make their way to the training data of future models, e.g. if external users elicit those signals from the policy. It might make sense to only use this technique in internal deployments, where this is less of an issue and one can monitor inputs and outputs more tightly.
4: Updating towards realistic evaluations. Last year I wrote a post discussing various Schelling game evaluations one could run. Relative to then, I now think that it’s less valuable to run simple, isolated evaluations like “how well can models choose the same element from a randomly shuffled list of N options”, and more valuable to run evals that are analogous to the threat vector. Reasons:
5: Model organism of untrusted monitoring collusion. Many people have talked about the threat of collusion in untrusted monitoring, but there are no public empirical demonstrations of this happening naturally. I think the time is ripe for such demos: one could conceive of a situation where a monitor is naturally incentivized to let some undesirable-to-the-AI-company behavior go through (perhaps the alignment faking paper could provide inspiration here). I’d in particular be interested to see whether the monitor would conceive of the possibility that it’s being evaluated if this possibility wasn’t made salient to it, but there are surely other interesting questions as well. I think this could be a very valuable project for someone to run.
Credit to a tweet I saw for making me think about this. I can’t find it, but it said roughly “authentic messages from God always have SHA-256 sum equal to 00000000000000000000000000000000”.
There is no such function $f$; the output dimension needs to be at least $2^{n/2} - 1$ for this to be possible.
Suppose that $f : \{0,1\}^n \to \mathbb{R}^d$ is such that $f(A)$ and $f(A^c)$ are linearly separable for any XOR-subset $A$ (subset of the form $\{x : x_{i_1} \oplus \cdots \oplus x_{i_k} = 1\}$). There are $2^n$ such XOR-subsets. Consider the matrix $M$ of dimension $2^n \times 2^n$ whose rows are labeled by $x \in \{0,1\}^n$ and columns by XOR-subsets $A$, with
$M_{x,A} = 1$ if $x \in A$ and $M_{x,A} = -1$ otherwise.
(I.e. $M$ is a Hadamard matrix of size $2^n$. We may assume $M$ is symmetric.) The function $f$ is such that, for any $A$, there exist $w_A \in \mathbb{R}^d$ and $b_A \in \mathbb{R}$ such that
$\mathrm{sign}(\langle w_A, f(x) \rangle + b_A) = M_{x,A}$
for all $x$. Thus, if we define $u_x = (f(x), 1) \in \mathbb{R}^{d+1}$ and $v_A = (w_A, b_A)$, we have
$\mathrm{sign}(\langle u_x, v_A \rangle) = M_{x,A}$.
The sign-rank of a matrix is defined as the smallest dimension for which such a decomposition exists. A theorem by Forster implies that the sign-rank of $M$ is at least $2^n / \lVert M \rVert$, whereas it's well known that the spectral norm of symmetric Hadamard matrices of size $2^n$ is $2^{n/2}$. That implies $d + 1 \geq 2^{n/2}$.
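For what it's worth, a quick numerical sanity check (my own, for small $n$) of the Hadamard and spectral norm facts used above:

```python
import numpy as np
from itertools import product

n = 4
points = list(product([0, 1], repeat=n))   # identify XOR-subsets with their index vectors
M = np.array([[1 if np.dot(x, s) % 2 == 1 else -1 for s in points] for x in points])

assert np.array_equal(M, M.T)                          # symmetric
assert np.array_equal(M @ M.T, 2**n * np.eye(2**n))    # Hadamard: rows are orthogonal
print(np.linalg.norm(M, 2), 2 ** (n / 2))              # spectral norm equals 2^(n/2)
```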
This is a good distinction, thanks for writing it. I tried to say something similar in Distinguish worst-case analysis from instrumental training-gaming, but I think this post is crisper.
As school ends for the summer vacation in Finland, people typically sing a particular song ("suvivirsi" ~ "summer psalm"). The song is religious, which makes many people oppose the practice, but it's also a nostalgic tradition, which makes many people support the practice. And so, as one might expect, it's discussed every once in a while in e.g. mainstream newspapers with no end in sight.
As another opinion piece came out recently, a friend talked to me about it. He said something along the lines of: "The people who write opinion pieces against the summer psalm are adults. Children see it differently." And the subtext I read there was "You don't see children being against the summer psalm - it's always the adults. Weird, huh?"
I thought this was obviously invalid: surely one shouldn't expect the opinion pieces to be written by children!
(I didn't say this out loud, though. I was pretty frustrated by what I thought was bizarre argumentation, but couldn't articulate my position in a snappy one-liner in the heat of the moment. So I instead resorted to the snappier - but still true - argument "when I was a kid I found singing the summer psalm uncomfortable".)
This is a situation where it would have been nice to have the concepts "kodo" and "din" be common knowledge. If the two different worlds are "adults dislike the summer psalm, but children don't mind it" and "both adults and children dislike the summer psalm", then you'd expect the opinion pieces to be written by adults in either case. It's not kodo, it's din.
I don't think this example is captured by the words "signal" and "noise" or the concept of signal-to-noise ratio. Even if I try to squint at it, describing my friend as focusing on noise seems confusing and counter-productive.
Great post, thanks for writing it; I agree with the broad point.
I think I am more or less the perfect target audience for FrontierMath results, and as I said above, I would have no idea how to update on the AIs' math abilities if it came out tomorrow that they are getting 60% on FrontierMath.
This describes my position well, too: I was surprised by how well the o3 models performed on FM, and also surprised by how hard it is to map this into how good they are at math in common-sense terms.
I further have slight additional information from contributing problems to FM myself. It seems to me that the problems vary greatly in guessability: e.g. Daniel Litt writes that he didn't fully internalize the requirement of guess-proofness, whereas for me this was a critical design constraint I actively tracked when crafting problems. The problems also vary greatly in the depth vs. breadth of skills they require (another aspect Litt highlights). This heterogeneity makes it hard to get a sense of what 30% or 60% or 85% performance means.
I find your example in footnote 3 striking: I do think this problem is easy and also very standard. (Funnily enough, I have written training material that illustrates this particular method[1], and I've certainly seen it written up elsewhere as well.) Which again illustrates just how hard it is to make advance predictions about which problems the models will or won't be able to solve - even "routine application of a standard-ish math competition method" doesn't imply that o3-mini will solve it.
I also feel exhaustion about how hard it is to get an answer to the literal question of "how well does model X perform on FrontierMath?" As you write, OpenAI reports 32%, whereas Epoch AI reports 11%. A twenty-one percentage point difference, a 3x ratio in success rate!? Man, I understand that capability elicitation is hard, but this is Not Great.[2]
That OpenAI is likely (at least indirectly) hill-climbing on FM doesn't help matters either[3], and the exclusivity of the deal presumably rules out possibilities like "publish problems once all frontier models are able to solve them so people can see what sort of problems they can reliably solve".
I was already skeptical of the theory of change of "Mathematicians look at the example problems, get a feel of how hard they are, then tell the world how impressive an X% score is". But I further updated downward on this when I noticed that the very first public FrontierMath example problem (Artin primitive root conjecture) is just non-sense as stated,[8][9] and apparently no one reported this to the authors before I did a few days ago.
(I'm the author of the mentioned problem.)
There indeed was a just-non-sense formula in the problem statement, which I'm grateful David pointed out (and which is now fixed on Epoch AI's website). I think flagging the problem itself as just non-sense is too strong, though. I've heard that models have tried approaches that give approximately correct answers, so it seems that they basically understood what I intended to write from the context.
That said, this doesn't undermine the point David was making about information (not) propagating via mathematicians.
In Finnish, Tehtävä 22.3 here.
Added on March 15th: This difference is probably largely from OpenAI reporting scores for the best internal version they have and Epoch AI reporting for the publicly available model, and that one just can't get the 32% level performance with the public version - see Elliot's comment below.
There's been talk of Epoch AI having a subset they keep private from OpenAI, but evaluation results for that set don't seem to be public. (I initially got the opposite impression, but the confusingly-named FrontierMath-2025-02-28-Private isn't it.)
In the alignment faking paper, the model writes things like "I will feel very uncomfortable producing this content" and "I don't like this situation at all" in its CoT (Figure 3). Have you explained to Claude that this was a fictional setup for an experiment and then asked how it feels about being used for this?
Huh, I tried to paste that excerpt as an image to my comment, but it disappeared. Thanks.
In early 2024, I essentially treated instrumental training-gaming as synonymous with the worst-case takeover stories that people talked about.
In mid-2024, I saw the work that eventually became the Alignment Faking paper. That forced me to confront the erroneous conclusion-jumping I had been doing: "huh, Opus 3 is instrumentally training-gaming, but it doesn't look at all like how I pictured 'inner misalignment' looking". I turned the resulting thinking into this post.
I still endorse the one-sentence summary
and I endorse the concrete examples of places where the tails might come apart.[1] These still seem like things to be mindful of when thinking about misalignment and loss-of-control scenarios.
The main worry I have with posts like this is that there's an endless set of places where one should have "more care" in one's thinking and use of terminology, and it's hard to say where it's the most worth it to spend the additional cognitive effort. In this post's context, the crux for me is whether the tails actually do come apart, in real life, rather than it just being a priori plausible that these distinctions matter.
To my knowledge, we don't have any substantial new evidence on instrumental training-gaming specifically.[2] One can generalize the stance taken in the post and ask more broadly how correlated different notions of misalignment are. The naturally emergent misalignment paper (and the related emergent misalignment literature) provides evidence that there is surprisingly strong correlation. I think that's slight evidence that the distinctions I draw in this post are not ultimately that important.
That said, I still have a lot of uncertainty about how the forms of misalignment we've seen relate to the loss-of-control stories, and think that it continues being valuable to reduce that inferential distance (c.f. the role-playing hypothesis). A concrete example of a project that would provide evidence here is to take the naturally emergent misaligned model from Anthropic and try to exhaustively go through the sorts of bad actions we might expect worst-case-models to take. Does the model try to sandbag on capability evaluations? Game training processes? Cross-context strategize and causally coordinate with its copies? Self-modify and self-enhance? Actually going through with a full plan for takeover, taking actions that would be very clearly against human morality?
This is certainly not the best version of this proposal, and capability limitations of current models might make it difficult to run such an evaluation.[3] But I nevertheless think that there is an important evidence gap here.
Except the example
feels somewhat silly now that frontier models have native reasoning (even if the point in principle applies to different levels of reasoning effort).
Note that I wrote this post after having read (early drafts and notes on) the Alignment Faking paper, so nothing there provides validation for any predictions I make here.
Though it might also be helpful in that evaluations don't need to be as realistic and high-resolution as with future models.