Grokking is a recently discovered phenomenon, first observed by OpenAI researchers, that in my opinion is one of the most fascinating mysteries in deep learning: models trained on small algorithmic tasks like modular addition will initially memorise the training data, but after a long time will suddenly learn to generalise to unseen data.
This is a write-up of an independent research project I did into understanding grokking through the lens of mechanistic interpretability. My most important claim is that grokking has a deep relationship to phase changes. Phase changes, ie a sudden change in the model's performance for some capability during training, are a general phenomenon that occurs when training models, and have also been observed in large models trained on non-toy tasks - for example, the sudden change in a transformer's capacity to do in-context learning when it forms induction heads. In this work I examine several toy settings where a model trained to solve them exhibits a phase change in test loss, regardless of how much data it is trained on. I show that if a model is instead trained on limited data with high regularisation, then it shows grokking.
One of the core claims of mechanistic interpretability is that neural networks can be understood: rather than being mysterious black boxes, they learn interpretable algorithms which can be reverse engineered and comprehended. This work serves as a proof of concept of that claim, and of the claim that reverse engineering models is key to understanding them. I fully reverse engineer the algorithm learned by a transformer that has grokked modular addition (which somehow involves Discrete Fourier Transforms and trig identities?!), and use this as a concrete example to analyse what happens during training and understand what happened during grokking. I close with discussion and thoughts on the alignment relevance of these results.
This is accompanied by a paper in the form of a Colab notebook containing the code for this project, a lot of interactive graphics, and much more in-depth discussion and technical details. In this write-up I try to give a high-level conceptual overview of the claims and the most compelling results and evidence; I refer you to the notebook if you want the full technical details.
This write-up ends with a list of ideas for future directions of this research. I think this is a particularly exciting problem to start with if you want to get into mechanistic interpretability since it's concrete, only involves tiny models, and is easy to do in a Colab notebook. If you might want to work on some of these, please reach out! In particular, I'm looking to hire an intern/research assistant, and if you're excited about these future directions you might be a good fit.
Key Claims
Grokking is really about phase changes: To exhibit grokking, we take a problem that shows a phase change even when given unlimited training data, and train a model on it with regularisation and limited data. If we choose the amount of data such that the regularisation only marginally favours the generalising solution over the memorised solution, we see grokking.
Intuition: Regularisation makes the model ultimately prefer the generalising solution to the memorised solution, but the phase change indicates that the generalising solution is "hard to reach" in some sense. The memorised solution is "easier to reach", and so is reached first. But due to the regularisation, the model still prefers the generalising solution to the memorised solution, and thus gets to the generalising solution eventually - the grokking result just shows that reaching the memorised solution first does not change this, and that there is a path in model space interpolating between the memorising and generalising solution.
I fully reverse engineer the algorithm the grokked model uses for modular addition - it is roughly the following (a numerical sanity check of the final step follows this list):
Map inputs x,y → cos(wx), cos(wy), sin(wx), sin(wy) with a Discrete Fourier Transform, for some frequency w
Multiply and rearrange to get cos(w(x+y)) = cos(wx)cos(wy) − sin(wx)sin(wy) and sin(w(x+y)) = cos(wx)sin(wy) + sin(wx)cos(wy)
By choosing a frequency w = 2πk/n we get period dividing n, so this is a function of x + y (mod n)
Map to the output logits z with cos(w(x+y))cos(wz) + sin(w(x+y))sin(wz) = cos(w(x+y−z)) - this has the highest logit at z ≡ x + y (mod n), so softmax gives the right answer.
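As a quick sanity check on the final step (my own illustration, not code from the notebook): cos(w(x+y−z)) really is maximised at z ≡ x+y (mod n), so an argmax/softmax over z recovers the answer.

```python
import numpy as np

n = 113                   # the modulus (p in the rest of the post); any n works
k = 17                    # an arbitrary frequency index
w = 2 * np.pi * k / n     # frequency w = 2*pi*k/n

x, y = 45, 92             # example inputs
z = np.arange(n)          # all candidate outputs z = 0, ..., n-1

# The "logit" for each z is cos(w(x+y-z)), which is maximised exactly when
# w(x+y-z) is a multiple of 2*pi, ie when z = x + y (mod n).
logits = np.cos(w * (x + y - z))
assert logits.argmax() == (x + y) % n
print(logits.argmax(), (x + y) % n)   # both print 24
```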
To emphasise, this algorithm was purely learned by gradient descent! I did not predict or understand this algorithm in advance and did nothing to encourage the model to learn this way of doing modular addition. I only discovered it by reverse engineering the weights.
In particular, the circuits smoothly develop well before grokking, disproving the 'grokking is a random walk in model-space' hypothesis.
This is nicely demonstrated by the metric of excluded loss - which roughly shows how much model performance on the training data depends on the generalising algorithm vs the memorising algorithm. We see that the use of the generalising algorithm to improve training performance rises smoothly over training, well before the grokking point.
Phase Changes
Epistemic status: I feel confident in the empirical results, but the generalisation to non-toy settings is more speculative
Key Takeaways
My intuition is that phase changes are inherent to circuits composed of several interacting parts. For example, consider the composition of the previous token head and the induction head in an induction circuit. The previous token head will only reduce loss if the induction head is there and vice versa. So, initially, the gradients creating each component will be weak to non-existent. But once each component starts to form, the gradients on the other component will become stronger. These effects reinforce each other, creating a feedback loop that eventually accelerates and results in a phase change.
Intuitive explanation of grokking: Regularisation incentivises the model to be simpler, so the model prefers the generalising solution to the memorised solution. The generalising solution is "hard to reach" and the memorising solution is not, so the memorising solution is reached first. But the incentive to find the generalising solution is still there, so the underlying mechanism that induces the phase change is still operating after memorisation - memorising first doesn't change this
Fundamentally, understanding grokking is about understanding phase changes - I don't claim to fully understand phase changes or grokking, but I claim to have reduced my confusion about grokking to my confusion about phase changes.
I observe several small phase changes in my toy tasks, eg there's a separate phase change for each digit when learning 5 digit addition.
What Is A Phase Change?
By a phase change, I mean a reverse-S shaped curve[1] in the model's loss on some dataset, or the model's capacity for some specific capability. That is, the model initially has poor performance, this performance plateaus or slowly improves, and suddenly the performance accelerates and rapidly improves (ie, loss rapidly decreases), and eventually levels off.
A particularly well-studied motivating example of this is Anthropic's study of induction heads. Induction heads are a circuit in transformers used to predict repeated sequences of tokens. They search the context for previous copies of the current token, and then attend to the token immediately after that copy, and predict that that subsequent token will come next. Eg, if the 3 token surname D|urs|ley is earlier in the context, and the model wants to predict what comes after D, it will attend to urs and predict that that comes next.
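To make the mechanism concrete, here is a toy sketch of the lookup rule an induction head approximates (my own illustration; real induction heads implement this softly via attention over learned representations, not exact string matching):

```python
def induction_prediction(context, current_token):
    """Predict the next token using the induction rule: find the most recent
    earlier occurrence of the current token and return the token that
    followed it. This is the lookup an induction head approximates."""
    for i in range(len(context) - 2, -1, -1):   # scan backwards over the context
        if context[i] == current_token:
            return context[i + 1]               # the token right after that copy
    return None                                 # no earlier copy: no prediction

# "D|urs|ley ... D" -> the rule predicts "urs"
context = ["D", "urs", "ley", "said", "nothing", "to"]
print(induction_prediction(context, "D"))       # urs
```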
The key fact about induction heads is that there is a fairly narrow band of training time where they undergo a phase change, going from non-existent/non-functional to fully functional. This is particularly striking because induction heads are the main mechanism behind how Large Language Models do in-context learning - using tokens from far back in the context to usefully predict the next token. This means that there is a clear phase change in a model's ability to do in-context learning, as shown below, and that this corresponds to the formation of a specific circuit via a phase change within the model.
My goal here is to convey the intuition of what I mean by a phase change, rather than give a clear and objective definition. At this stage of research, I favour an informal "I know it when I see it" style definition, over something more formal and brittle.
Empirical Observations
Motivation: Modular addition shows clear grokking on limited data[2], but given much more data it shows a phase change in both train and test loss. This motivated the hypothesis that grokking could be observed any time we took a problem with a phase change and reduced the amount of data while regularising:
In several toy algorithmic tasks I observe phase changes (in both test and train loss) when models are trained on sufficient data[3].
The tasks - see the Colab for more details:
Modular addition (1L Transformer, trained on 95% of the training data)
5 digit addition (1L full transformer)
Data format: 1|3|4|5|2|+|5|8|3|2|1|=|0|7|1|7|7|3
Predicting Repeated Subsequences (2L attn-only transformer - task designed to need induction heads)
Data format: 7 2 8 3 1 9 3 8 3 1 9 9 2 5 END - we take a uniform random sequence of tokens, randomly choose a subsequence to repeat, and train the model to predict the repeated tokens.
Finding the max element in a sequence (1L attn-only transformer - task designed to need skip trigrams)
Concretely, the data format is START 0 4 7 2 4 14 9 7 2 5 3 END with exactly one entry ≥10, and the model is trained to output that entry after END. The solution is to learn 10 skip trigrams of the form 14 .. END 14 (a sketch of generating both data formats follows this list)
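To make the two data formats concrete, here is a rough sketch of how such examples could be generated (my own reconstruction from the formats above - vocabulary sizes, lengths, and sampling details are assumptions, not the notebook's code):

```python
import random

def repeated_subsequence_example(seq_len=14, vocab=10, sub_len=4):
    """Uniform random tokens with one randomly chosen subsequence repeated
    later in the sequence, ending in END, eg 7 2 8 3 1 9 3 8 3 1 9 9 2 5 END."""
    base = [random.randrange(vocab) for _ in range(seq_len - sub_len)]
    src = random.randrange(len(base) - sub_len + 1)       # where the subsequence starts
    subseq = base[src:src + sub_len]
    dst = random.randrange(src + sub_len, len(base) + 1)  # insert the copy strictly later
    return base[:dst] + subseq + base[dst:] + ["END"]

def max_element_example(seq_len=11, small_vocab=10, big_vocab=20):
    """START <tokens> END <answer>, with exactly one token >= 10, eg
    START 0 4 7 2 4 14 9 7 2 5 3 END 14 - the model predicts the final token."""
    tokens = [random.randrange(small_vocab) for _ in range(seq_len)]
    tokens[random.randrange(seq_len)] = random.randrange(small_vocab, big_vocab)
    return ["START"] + tokens + ["END", max(tokens)]

print(repeated_subsequence_example())
print(max_element_example())
```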
Grokking = Phase Changes + Regularisation + Limited Data
For all of the above tasks, we can induce grokking by adding regularisation (specifically weight decay) and limiting the amount of training data. Eg for 5 digit addition on 700 examples (see the notebook for the rest):
With enough data (eg single epoch training), the model generalises easily, and with sufficiently little data (eg a single data point), the model memorises. But there is a crossover point, and we identify grokking when training with slightly more data than the crossover point - analogous to the findings of Liu et al that grokking is an intermediate state between comprehension and memorisation. I found the crossover points (here, 700) by binary searching by hand. Intuitively, this is consistent with the idea that grokking occurs because of regularisation favouring the simpler solution. Memorisation complexity increases with the amount of data (approximately continuously) while generalising complexity does not, so there must eventually be a crossover point.
This is particularly interesting, because these tasks are a non-trivial extension of the original grokking setting. Previous demonstrations of grokking had an extremely small universe of data and were trained on a substantial fraction of it, suggesting that the model may be doing some kind of clever interpolation. Here, the universe of data is much larger (eg 10^10 possible pairs of 5 digit numbers), and the model is trained on a tiny fraction of that, yet it still groks.
Explaining Grokking
Epistemic status: The following two sections are fairly speculative - they're my current best explanation of my empirical findings, but are likely highly incomplete and could easily be totally wrong.
Speculation: Phase Changes Are Inherent to Composition
I recommend reading the section of A Mathematical Framework for Transformer Circuits on Induction Heads to fully follow this section.
To understand the link between phase changes and grokking, it's worth reflecting on why circuits form at all. A priori, this is pretty surprising! To see this, let's focus on the example of an induction circuit, and speculate on how it could be formed. The induction circuit is made up of two heads, a previous token head and an induction head, which interact by K-composition. Together, these heads significantly improve loss, but only in the context of the other head being there. Thus, naively, when we have neither head, there should be no gradient encouraging the formation of either head.
At initialisation, we have neither head, and so gradient descent should never discover this circuit. Naively, we might predict that neural networks will only produce circuits analogous to linear regression, where each weight marginally improves performance as it is continuously adjusted. And yet in practice, neural networks empirically do form sophisticated circuits like this, involving several parts interacting in non-trivial, algorithmic ways.[4]
I see a few different possible explanations for this:
A lottery ticket hypothesis-inspired explanation:[5] Initially, each layer of the network is the superposition of many different partial circuit components, and the output of each layer is the average of the output of each component. The full output of the network is the average of many different circuits. Some of these circuits are systematically useful to reducing loss, and most circuits aren't. SGD will reinforce the relevant circuits and suppress the useless circuits, so the circuits will gradually form.
A random walk explanation: The network wanders randomly around the loss landscape, until it happens to get lucky and find a half-formed previous token head and induction head that somewhat compose. Once it has this, this half-formed circuit is useful for reducing loss and gradient descent can take over and make a complete circuit.
An evolutionary explanation: There's a similar mystery for how organisms develop sophisticated machinery like the human eye, where each part is only useful in the context of other parts. The explanation I find most compelling is that we first developed one component that was somewhat useful on its own, eg a light-detecting membrane. This component was useful in its own right, and so was reinforced, and later components could develop that depend on it, eg the lenses in our eye.
The evolutionary explanation is a natural hypothesis, but we can see from my toy tasks that it cannot be the whole story. In the repeated subsequence task, we have a sequence of uniform randomly generated tokens, apart from a repeated subsequence at an arbitrary location, eg 7 2 8 3 1 9 3 8 3 1 9 9 2 5 END. This means that all pairs of tokens are independent, apart from the pairs of equal tokens in the repeated subsequence. In particular, this means that a previous token head can never reduce loss for the current token - the previous token will always be independent of the next token. So a previous token head is only useful in the context of an induction-like head that completes the circuit. Likewise, an induction head relies on K-composition with a previous token head, and so cannot be useful on its own. Yet the model eventually forms an induction circuit![6]
A priori, the random walk story seems unlikely to be sufficient on its own - an induction circuit is pretty complicated, and it likely represents a very small region in model space, and so seems unlikely to be stumbled upon by a random walk[7]. Thus my prediction is that the lottery ticket hypothesis is most of what's going on[8] - an induction head will be useless without a previous token head, but may be slightly useful when composing with, say, a head that uniformly attends to prior tokens, since part of its output will include the previous token! I expect that all explanations are part of the picture though, eg this seems more plausible if the uniform head just so happens to attend a bit more to the previous token via a random walk, etc.
Drawing this back to phase changes, the lottery ticket-style explanation suggests that we might expect to see phase changes as circuits form. Early on in circuit formation, each part of the circuit is very rough, so the effect on the loss of improving any individual component is weak, which means the gradients will be small. But as each component develops, each other component will become more useful, which means that all gradients will increase together in a non-linear way. So as the circuit becomes closer to completion we should expect an acceleration in the loss curve for this circuit, resulting in a phase change.
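A minimal toy illustration of this feedback loop (my own example, not an analysis of a real model): if the useful signal is the product of two component strengths a and b, then each component's gradient is proportional to the strength of the other, so both crawl at first and then take off together.

```python
# Toy model: two circuit components with strengths a and b that only help
# when both are present, so the "useful signal" is their product a*b.
# Loss is (1 - a*b)^2, minimised when the circuit is complete (a*b = 1).
a, b = 0.01, 0.01                   # both components start nearly non-existent
lr = 0.1
for step in range(61):
    grad_a = -2 * b * (1 - a * b)   # each gradient is proportional to the
    grad_b = -2 * a * (1 - a * b)   # strength of the *other* component
    a -= lr * grad_a
    b -= lr * grad_b
    if step % 10 == 0:
        print(f"step {step:2d}  a={a:.3f}  loss={(1 - a * b) ** 2:.4f}")
# The strengths barely move for many steps, then rapidly grow and plateau
# near 1 - a reverse-S curve, ie a phase change.
```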
An Intuitive Explanation of Grokking
With this explanation, we can now try to answer the question of why grokking occurs! To recap the problem setting, we are training our model on a problem with two possible solutions - the memorising algorithm and the generalising algorithm. We apply regularisation and choose a limited amount of data, such that the generalising solution is marginally simpler than the memorising solution[9] and so our training setup marginally prefers the generalising solution over the memorising solution. Naively, we expect the model to learn the generalising solution.
But we are training our model on a problem whose solution involves multiple components interacting to form a complete circuit. So, early in training, the gradients incentivising each component of the generalising solution are weak, because they need the parts to all be formed and lined up properly. Memorisation, however, does not require several components to be lined up in a careful and coordinated way[10], so it does not have artificially weak gradients at the start. Thus, at the start, memorisation is incentivised more strongly than generalisation and the model memorises.
So, why does the model shift from memorisation to generalisation? Eventually the training loss plateaus post-memorisation - loss is falling and total weights are rising, so eventually the gradients towards lower loss (ie to memorise better) balance with the gradients towards lower weights (ie to be simpler) and cancel out. But they don't perfectly cancel out. If there's a direction in model space that allows it to memorise more efficiently[11], then both gradients will encourage this direction. And the natural way to do this is by picking up on regularities in the data - eg, you can memorise modular addition twice as efficiently by recognising that x+y=y+x. This is the same process that leads the model to generalise in the infinite data case - it wants to pick up on patterns in the data.[12]
So the model is still incentivised to reach the generalising solution, just as in the infinite data case. But rather than moving from the randomly initialised model to the generalising model (as in the infinite data case), it interpolates between the memorising solution and the generalising solution. Throughout this process test loss remains high - even a partial memorising solution still performs extremely badly on unseen data! But once the generalising solution gets good enough, the incentive to simplify by deleting the remnants of the memorising solution dominates, and the model clears up the memorising solution, finally resulting in good test performance. Further, as in the infinite data case, the closer we get to the generalising solution the more the rate of change of the loss accelerates. So this final shift happens extremely abruptly - manifesting as the grokking phase change!
As a final bit of evidence, once the model has fully transitioned to the generalising solution it is now inherently simpler, and the point where the incentive to improve loss balances with the incentive to be simpler is marginally lower - we can observe in the graphs here that the model experiences a notable drop in train loss post grokking.
I later walk through this narrative and what it corresponds to in the modular addition case.
Speculation: Phase Changes are Everywhere
My explanation above asserted that phase changes are a natural thing to expect with the formation of specific circuits in models. If we buy the hypothesis that most things models do are built up out of many interpretable circuits, then shouldn't we expect to see phase changes everywhere whenever we train models, rather than smooth and convex curves?
My prediction is that yes, we should, and that in fact we do. But that larger models are made up of many circuits and, though each circuit may form in a phase change, the overall loss is made up out of the combination of many different capabilities (and thus many different circuits). And circuits of different complexity/importance likely form at different points in training. So the overall loss curve is actually the sum of many tiny phase changes at different points in training, and this overall looks smooth and convex. Regularities in loss curves like scaling laws may be downstream of statistical properties of the distribution of circuits, which become apparent at large scales. We directly observe the phase change-y ness from the loss curves in these simple toy problems because the problems are easy enough that only one/a few circuits are needed.
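A quick numerical illustration of this intuition (a toy sketch, not data from a real model): summing many individually sharp reverse-S curves with different onset times produces a curve that looks smooth.

```python
import numpy as np

t = np.linspace(0, 1, 500)                      # normalised training time
rng = np.random.default_rng(0)
onsets = rng.uniform(0.05, 0.95, size=200)      # when each "circuit" forms
sharpness = 80.0                                # each individual curve is a sharp phase change

# Each circuit contributes a sharp sigmoid drop in loss at its onset time;
# the total loss is the sum over all circuits.
per_circuit_loss = 1.0 / (1.0 + np.exp(sharpness * (t[:, None] - onsets[None, :])))
total_loss = per_circuit_loss.sum(axis=1)

# Each column looks like a step function, but the summed curve decreases
# smoothly - the many tiny phase changes wash out.
print(total_loss[::100].round(1))
```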
Some evidence for this hypothesis:
5 digit addition (toy problem) - We can decompose the loss into the sum of 6[13] components - the loss on each of the 6 digits in the sum. When we do this, we observe separate phase changes in each digit, resulting in the many small non-convexities in the overall loss curve (see the per-digit decomposition sketch after this list).
The ordering of phase changes is not stable between runs, though token 0 and token 1 tend to be first[14]
This isn't specific to 5 digit addition, eg 15 digit addition shows 16 separate phase changes
Skip Trigrams (toy problem) - The model learns 10 different skip trigrams, 10 .. END 10, 11 .. END 11, etc. Each skip trigram shows a separate phase change
Notably, each phase change happens at approximately the same time, so the overall curve looks less bumpy than 5 digit addition. This makes sense, because each skip trigram is "as complex" as the others, while learning to add some digits is much harder than others.
Induction Heads - Induction heads are the best studied example of a specific circuit forming during training, and there we see a clear phase change in LLMs up to a 13B transformer. Each head should be "as complex" as the others, so it makes sense that they all occur at approximately, but not exactly, the same time.[15]
AlphaZero Capabilities: One finding in DeepMind's AlphaZero Interpretability paper was that there is a phase change in the model's capabilities, where it learns to represent a lot of chess concepts around step 32,000.
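Returning to the 5 digit addition point above, the decomposition I have in mind is just per-position cross-entropy. A sketch, using random tensors purely to show the shapes (the names and shapes are my own, not the notebook's):

```python
import torch
import torch.nn.functional as F

def per_digit_losses(logits, targets):
    """Split the total loss on 5 digit addition into one cross-entropy term
    per output digit, so each digit's phase change is visible separately.

    logits:  [batch, 6, vocab] - predictions for the 6 digits of the sum
    targets: [batch, 6]        - the true digits
    """
    return [
        F.cross_entropy(logits[:, i, :], targets[:, i]).item()
        for i in range(targets.shape[1])
    ]

# Example with random tensors, just to show the shapes involved.
logits = torch.randn(32, 6, 10)
targets = torch.randint(0, 10, (32, 6))
print(per_digit_losses(logits, targets))   # 6 separate loss values
```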
Summary
I think this is highly suggestive evidence that there is a deep relationship between grokking and phase changes, and that grokking occurs when models with a phase change are trained with regularisation and limited data. I present some compelling (to me) explanations of what might be behind the phase change behaviour, and show how this model explains grokking and predicts several specific empirical observations about grokking. I don't claim to fully understand phase changes or grokking, but I do claim to have substantially reduced my confusion about grokking to my confusion about phase changes.
Modular Addition
Epistemic status: I feel pretty confident that I have fully reverse engineered this network, and have enough different lines of evidence that I am confident in how it works. My explanation of how and why it develops during training is shakier.
Key Takeaways
The model's algorithm works by using trig identities and Discrete Fourier Transforms to map x, y → cos(w(x+y)), sin(w(x+y)), and then extracting x + y (mod p)
This algorithm can be clearly read off from the weights. If we apply a Discrete Fourier Transform to the input space and apply the Transformer Circuits framework, the structure of the network and its resulting algorithm becomes clear.
The model naturally forms several sub-networks that calculate cos(w(x+y−z)) in different frequencies w and add these to form the logits. This can be seen by a clear clustering of the neurons
Within a cluster, individual neurons clearly represent interpretable features for a single frequency.
To emphasise, this algorithm was discovered purely via gradient descent, not by me. I didn't think of this algorithm until I reverse engineered it from the weights!
The evolution of this algorithm can be clearly seen during training, and systematic progress towards the generalising circuit can be seen well before the grokking point
With excluded loss, we can see the model interpolate between memorisation and generalisation. Train loss performance depends substantially on the generalising circuit well before we see a significant change in test loss.
Model Details
In this section I dive deeply into one specific and well-checkpointed model trained to do modular addition. See model training code for more details, but here are the key points:
The model is trained to map x,y to z≡x+y(mod113) (henceforth 113 is referred to as p)
1L Transformer
Learned positional embeddings
Width 128
No LayerNorm
ReLU activations
Input format is x|y|= , where x,y are one-hot encoded inputs, and = is an extra token.
Trained with AdamW, with weight decay 1 and learning rate 10^-3
Full batch training, trained on 30% of the data (ie of the 113^2 pairs of inputs) for 40,000 epochs[16]
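Putting the listed details together, the training setup looks roughly like the sketch below. The dataset construction and optimiser settings follow the list above; the model here is a trivial placeholder standing in for the 1L transformer, and the rest is my own scaffolding rather than the notebook's code.

```python
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

p = 113
frac_train = 0.3

# All p^2 pairs (x, y) with label (x + y) mod p; each sequence is x | y | =
pairs = torch.tensor(list(itertools.product(range(p), range(p))))
labels = (pairs[:, 0] + pairs[:, 1]) % p
data = torch.cat([pairs, torch.full((p * p, 1), p)], dim=1)   # token id p plays the role of "="

perm = torch.randperm(p * p)
n_train = int(frac_train * p * p)
train_idx, test_idx = perm[:n_train], perm[n_train:]

# Placeholder model: in the real setup this is the 1L transformer described
# above (width 128, ReLU MLP, learned positional embeddings, no LayerNorm).
model = nn.Sequential(nn.Embedding(p + 1, 128), nn.Flatten(), nn.ReLU(), nn.Linear(3 * 128, p))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

for epoch in range(40_000):                                   # full batch training
    logits = model(data[train_idx])                           # predict the token after "="
    loss = F.cross_entropy(logits, labels[train_idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 1000 == 0:
        with torch.no_grad():
            test_loss = F.cross_entropy(model(data[test_idx]), labels[test_idx])
        print(epoch, loss.item(), test_loss.item())
```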
Overview of the Inferred Algorithm
The key feature of the algorithm is calculating cos(w(x+y)), sin(w(x+y)) with w = 2πk/p - this is a function of x+y and can be mapped to x+y, and because cos(wx) has period p/k we get the (mod p) part for free.
More concretely:
Inputs x, y are given as one-hot encoded vectors in R^p
Calculates cos(wx), cos(wy), sin(wx), sin(wy) via a Discrete Fourier Transform (this sounds complex but is just a change of basis on the inputs, and so is just a linear map)
w = 2πk/p, k is arbitrary, we just need period dividing p
Calculates cos(wx)cos(wy), cos(wx)sin(wy), sin(wx)cos(wy), sin(wx)sin(wy) by multiplying pairs of waves in x and in y
Calculates cos(w(x+y)) = cos(wx)cos(wy) − sin(wx)sin(wy) and sin(w(x+y)) = sin(wx)cos(wy) + cos(wx)sin(wy) by rearranging and taking differences
Calculates cos(w(x+y−z)) = cos(w(x+y))cos(wz) + sin(w(x+y))sin(wz) via a linear map to the output logits z
This has an argmax at z≡x+y(modp), so post softmax we're done!
There are a few adjustments to implement this algorithm in a neural network:
The model's activations at any point are vectors(/tensors). To represent several variables, such as cos(wx),sin(wx), these are stored as different directions in activation space. When the vector is projected onto those dimensions, the coefficient is the relevant variable (eg cos(wx))
The model runs the algorithm in parallel for several different frequencies[17] (different frequencies correspond to different clusters of neurons, different subspaces of the residual stream, and sometimes different attention heads)
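Here is a hand-written sketch of the full pipeline acting on one-hot inputs, with each stage matching a step above (this is my own illustration of the algorithm, using a single arbitrary frequency, not the network's actual weights):

```python
import numpy as np

p, k = 113, 7
w = 2 * np.pi * k / p
idx = np.arange(p)

def one_hot(v):
    e = np.zeros(p)
    e[v] = 1.0
    return e

x, y = 31, 97
ex, ey = one_hot(x), one_hot(y)

# Step 1: a linear map (the "embedding") reads off cos(wx), sin(wx), cos(wy), sin(wy)
cos_dir, sin_dir = np.cos(w * idx), np.sin(w * idx)
cx, sx = cos_dir @ ex, sin_dir @ ex
cy, sy = cos_dir @ ey, sin_dir @ ey

# Steps 2-3: multiply waves in x and y (the non-linear part, done by attention
# and ReLU in the model), then rearrange via trig identities
cos_sum = cx * cy - sx * sy        # cos(w(x+y))
sin_sum = sx * cy + cx * sy        # sin(w(x+y))

# Step 4: linear map to logits, logit[z] = cos(w(x+y-z))
logits = cos_sum * np.cos(w * idx) + sin_sum * np.sin(w * idx)

assert logits.argmax() == (x + y) % p
print(logits.argmax(), (x + y) % p)   # both print 15
```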
Background on Discrete Fourier Transforms
A key technique in all that follows is the Discrete Fourier Transform (DFT). I give a more in-depth explainer in the Colab, but here's a rough outline - I expect this requires familiarity with linear algebra to really get your head around. The key motivating observation is that most activations inside the network are periodic, and so techniques designed to represent periodic functions nicely are key. Eg the attention patterns:
In R^p, we have a standard basis of the p unit vectors. But we can also take a basis of p cosine and sine waves, F ∈ R^(p×p), where F_0 = (1,1,...,1) is the constant vector, and F_{2k−1} = cos(2πkx/p) and F_{2k} = sin(2πkx/p) are the cosine and sine wave of frequency w = 2πk/p (henceforth referred to as frequency k and written cos(kx) for brevity) for k = 1,...,(p−1)/2. Every pair of waves has dot product zero, unless they're the same wave (ie it's an orthogonal basis). If we normalise these rows, we get an orthonormal basis of cosine and sine waves (so F^-1 = F^T). We refer to these normalised waves as Fourier Components and this overall basis as the 1D Fourier Basis.
We can apply a change of basis of the 1D input space R^p to F, and this turns out to be a much more natural way to represent the input space for this problem, as the network learns to operate in terms of sine and cosine waves. Eg, the fourth column of W_E F^T is the direction corresponding to sin(2x) in the embedding W_E. If we apply this change of basis to both the input space for x and for y, we apply a 2D DFT, and can represent any function as a linear combination of terms of the form sin(w_1 x)cos(w_2 y) (or cos(w_1 x)cos(w_2 y), Const · cos(w_2 y), etc). This is just a change of basis on R^(p×p) = R^(p^2), and terms of the form sin(w_1 x)cos(w_2 y) (ie the outer product of each pair of rows of F) form an orthogonal basis of p^2 vectors (henceforth referred to as the 2D Fourier Basis).
Importantly, this is just a change of basis! If we choose any single activation in the network, this is a real valued function on the p × p pairs of inputs (x, y), and so is equivalent to specifying a p^2 dimensional vector. And we can apply an arbitrary change of basis to this vector. So we can always write it as a linear combination of terms in the 2D Fourier Basis. And any vector of activations is a linear combination of 2D Fourier terms times fixed vectors in activation space. If a function is periodic, this means that it is sparse in the 1D or 2D Fourier Basis, and this is what tells us about the structure of the algorithm and weights of the network.
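A short sketch of constructing this basis and checking the claimed properties (my own construction following the definitions above; the notebook may use different normalisation conventions):

```python
import numpy as np

p = 113
x = np.arange(p)

# Rows: constant vector, then cos and sin waves of frequency k = 1, ..., (p-1)/2
F = np.zeros((p, p))
F[0] = 1.0
for k in range(1, (p - 1) // 2 + 1):
    F[2 * k - 1] = np.cos(2 * np.pi * k * x / p)
    F[2 * k] = np.sin(2 * np.pi * k * x / p)
F = F / np.linalg.norm(F, axis=1, keepdims=True)   # normalise rows -> orthonormal basis

# F is orthonormal: F @ F.T = I, so F^-1 = F^T, and a "1D DFT" is just
# multiplying by F (a change of basis).
assert np.allclose(F @ F.T, np.eye(p), atol=1e-8)

# A periodic function is sparse in this basis:
f = 3 * np.cos(2 * np.pi * 5 * x / p) + 0.5
coeffs = F @ f
print(np.argwhere(np.abs(coeffs) > 1e-6).ravel())  # only the constant and cos(5x) components
```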
Reverse Engineering the Algorithm
Here, I present a case for how I was able to reverse engineer the algorithm from the weights. See the Colab and appendices (attention and neuron) for full details, my goal in this section is to roughly sketch what's going on and why I'm confident that this is what's going on.
Calculating waves cos(wx), sin(wx), cos(wy), sin(wy)
Theory: Naively, this seems like the hard part, but it is actually extremely easy. The key is that we just need to learn the discretised wave on x ∈ [0,1,...,p−1], not for arbitrary x ∈ R. x is input into the network as a one-hot encoded vector, and then multiplied by a learned matrix W_E. We can in fact learn any function f: [0,1,...,p−1] → R[18]
Conveniently, F, the matrix of waves cos(wx), sin(wx), is an orthonormal basis. So W_E F^T will recover the direction of the embedding corresponding to each wave Const, cos(x), sin(x), cos(2x), ... - in other words, extracting cos(wx), sin(wx) is just a rotation of the input space.
Evidence: We can use the norm of the embedding of each wave to get an indicator of how much the network "cares" about each wave[19], and when we do this we see that the plot is extremely sparse. The model has decided to throw away all but a few frequencies[20]. This is very strong evidence that the model is working in the Fourier Basis - we would expect to see a basically uniform plot if this was not a privileged basis.
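Concretely, the check looks something like the following sketch (it assumes the learned embedding is available as a matrix W_E of shape [d_model, p]; the shape, orientation and variable names are my assumptions about the notebook's conventions, and a random matrix stands in for the real weights):

```python
import numpy as np

p, d_model = 113, 128
x = np.arange(p)

# Rebuild the normalised 1D Fourier basis F (as in the previous snippet)
F = np.zeros((p, p))
F[0] = 1.0
for k in range(1, (p - 1) // 2 + 1):
    F[2 * k - 1], F[2 * k] = np.cos(2 * np.pi * k * x / p), np.sin(2 * np.pi * k * x / p)
F /= np.linalg.norm(F, axis=1, keepdims=True)

# Stand-in for the learned embedding W_E; in the real analysis this comes
# from the trained model's weights.
W_E = np.random.randn(d_model, p)

# W_E @ F.T gives the embedding direction for each Fourier component; the
# column norms show how much the embedding "cares" about each component.
component_norms = np.linalg.norm(W_E @ F.T, axis=0)
print(component_norms.round(1))   # sparse for the grokked model, ~uniform for this random stand-in
```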
Calculating 2D products of waves cos(wx)cos(wy),cos(wx)sin(wy),sin(wx)cos(wy),sin(wx)sin(wy)
Theory: A good mental model for neural networks is that they are really good at matrix multiplication and addition, and anything else takes a lot of effort[21]. So it is here! As we saw above, creating cos(wx), sin(wx) is just a rotation, and the later rearranging and map to the logits is another linear map, but multiplying the terms together is hard and non-linear.
There are three non-linear operations in a 1L transformer - the attention softmax, the element-wise product of attention and the value vectors, and the ReLU activations in the MLP layer. Here, the model uses both ReLU activations and element-wise products with attention to multiply terms[22].
The neurons form 5[23] distinct clusters, one for each key frequency, and each neuron in the cluster for frequency w has its activation as a linear combination of 1, cos(wx), sin(wx), cos(wy), sin(wy), cos(wx)cos(wy), cos(wx)sin(wy), sin(wx)cos(wy), sin(wx)sin(wy).[24] Note that, as explained above, a neuron activation in any network can be represented as a linear combination of products of Fourier terms in x and Fourier terms in y (because they form a basis of R^(p×p)). The surprising fact is that this representation is sparse! This can be visually seen as neuron activations being periodic:
Evidence: The details of how the terms are multiplied together are highly convoluted[25], and I leave them to the Colab notebook appendices. But the neurons do in fact have the structure I described, and this can be directly observed by looking at their values. And thus, by this point in the network it has computed the product terms.
For example, the activations for neuron 0 (as plotted above) are approximately 109 − 39(cos(42x) + cos(42y)) − 76(sin(42x) + sin(42y)) + 36(cos(42x)sin(42y) + sin(42x)cos(42y)) − 10cos(42x)cos(42y) + 38sin(42x)sin(42y) (these coefficients can be calculated by mapping the neuron activation into the 2D Fourier Basis). This approximation explains >90% of the variance in this neuron[26]. We can plot this visually with the following heatmap:
Zooming out, we can apply a 2D DFT to all neuron activations, ie writing all of the neuron activations as linear combinations of terms of the form cos(42x)cos(42y) times vectors, and plotting the norm of each vector of coefficients. Heuristically, this tells us what terms are represented by the network at the output of the neurons. We see that the non-trivial terms are in the top row (of the form cos(wx), sin(wx)), in the left column (of the form cos(wy), sin(wy)), or in a 2x2 block of cells along the diagonal (of the form cos(wx)cos(wy), cos(wx)sin(wy), sin(wx)cos(wy), sin(wx)sin(wy) - notably, product terms where both factors have the same frequency).
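The 2D version of this analysis is just the same change of basis applied to both axes at once: if a neuron's activation is laid out as a p × p array over (x, y), its 2D Fourier coefficients are F · act · F^T. A sketch using a synthetic activation with the structure described above, rather than the real model's activations:

```python
import numpy as np

def fourier_basis(p):
    """Normalised 1D Fourier basis: constant, cos(kx), sin(kx) rows."""
    x = np.arange(p)
    F = np.zeros((p, p))
    F[0] = 1.0
    for k in range(1, (p - 1) // 2 + 1):
        F[2 * k - 1], F[2 * k] = np.cos(2 * np.pi * k * x / p), np.sin(2 * np.pi * k * x / p)
    return F / np.linalg.norm(F, axis=1, keepdims=True)

p, k = 113, 42
F = fourier_basis(p)
x = np.arange(p)[:, None]          # x varies down rows
y = np.arange(p)[None, :]          # y varies along columns
w = 2 * np.pi * k / p

# Synthetic neuron activation with the claimed structure: a linear combination
# of waves in x, waves in y, and products of waves of the same frequency.
act = 1.0 - 0.4 * np.cos(w * x) + 0.7 * np.sin(w * y) + 0.3 * np.cos(w * x) * np.cos(w * y)

# 2D DFT: express the p x p activation in the 2D Fourier basis.
coeffs = F @ act @ F.T
print(np.argwhere(np.abs(coeffs) > 1e-6))   # only the constant, cos(kx), sin(ky), cos(kx)cos(ky) cells
```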
Calculating cos(w(x+y)),sin(w(x+y)) and calculating logits
Theory: The operations mapping the products to cos(w(x+y)) = cos(wx)cos(wy) − sin(wx)sin(wy) and sin(w(x+y)) = cos(wx)sin(wy) + sin(wx)cos(wy) are linear, and the operations mapping these to cos(w(x+y−z)) = cos(w(x+y))cos(wz) + sin(w(x+y))sin(wz) are also linear. So their composition is linear, and can be represented by a single matrix multiplication. The neurons are mapped to the logits by L = W_U W_out N (where N is the vector of neuron activations), and so the effective weight matrix W_logit = W_U W_out must represent both of these operations (if my hypothesis is correct). Note that W_logit is a p × d_mlp matrix, mapping from MLP-space to the output space.
Evidence: We draw upon several different lines of evidence here.
We show that the terms cos(w(x+y)), sin(w(x+y)) are computed as follows: we repeat the above analysis, but look at the terms represented in the logits rather than at the output of the neurons, and find that the terms in the top row and left column cancel out. This leaves just the diagonal terms, corresponding to products of waves of the same frequency in x and y - exactly the terms we need. We also see that the 2x2 blocks are uniform, showing that cos(w(x+y)) and sin(w(x+y)) have the same coefficient. Further analysis shows that everything other than cos(w(x+y)), sin(w(x+y)) for these 5 frequencies is essentially zero.
We now show that the output logits produce cos(w(x+y−z)) = cos(w(x+y))cos(wz) + sin(w(x+y))sin(wz) for each of the 5 represented frequencies (where z indexes the output logits). The neurons form clusters for each frequency, and when we take the columns of W_logit corresponding to the neurons in a cluster and apply a 1D DFT to the output space of W_logit, we see that the only non-trivial terms are cos(wz), sin(wz) - ie the output logits coming from these neuron clusters are a linear combination of cos(wz), sin(wz).
We can verify this more directly by approximating the output logits as a sum Σ_{w ∈ {14, 35, 41, 42, 52}} A_w cos(w(x+y−z)) and fitting the coefficients A_w. When we do this, the resulting approximated logits explain 95% of the variance in the original logits. If we evaluate loss on this approximation to the logits, we actually see a significant drop in loss, from 2 × 10^-7 to 4.7 × 10^-8
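The fitting step is essentially a linear regression of the logits onto the five cos(w(x+y−z)) terms. A sketch with a synthetic logit tensor standing in for the trained model's output (the real analysis uses the actual logits, which is what yields the variance-explained and loss numbers quoted above):

```python
import numpy as np

p = 113
key_freqs = [14, 35, 41, 42, 52]
x = np.arange(p)[:, None, None]
y = np.arange(p)[None, :, None]
z = np.arange(p)[None, None, :]

# One basis term per key frequency: cos(w(x + y - z)), flattened over (x, y, z)
basis = np.stack(
    [np.cos(2 * np.pi * k * (x + y - z) / p).ravel() for k in key_freqs], axis=1
)                                                        # shape [p^3, 5]

# Stand-in for the model's logits over all (x, y, z); in the real analysis this
# tensor comes from running the trained network on every input pair.
logits = (basis @ np.array([3.0, 2.5, 2.0, 2.2, 2.8])    # "true" structure
          + 0.01 * np.random.randn(p ** 3))              # plus a little noise

# Fit the coefficients A_w by least squares and measure variance explained
A, *_ = np.linalg.lstsq(basis, logits, rcond=None)
residual = logits - basis @ A
frac_var_explained = 1 - residual.var() / logits.var()
print(A.round(2), frac_var_explained)
```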
Evolution of Circuits During Training
Note: For this section in particular, I recommend referring to the Colab! That contains a bunch of interactive graphics that I can't include here, where we can observe the development of circuits during training.
Now that we understand, on a mechanistic level, what the model is doing at the end of training, we can directly observe the development of these circuits during training. The key observation is that the circuits develop smoothly, and make clear and systematic progress towards the generalising circuit well before the grokking point.
(Note: a significantly updated version of this work is now on Arxiv and was published as a spotlight paper at ICLR 2023. For a shorter version, see the Tweet thread summary.)