This paper looks relevant. They also show that you can get rid of the FFN by modifying the attention slightly:
https://arxiv.org/abs/1907.01470
Thanks for the link! My read is that they describe an architecture where each attention head has some fixed "persistent memory vectors", and train a model under that architecture. In contrast, I'm showing how one can convert an existing attention+FFN model to an attention-only model (with only epsilon-scale differences in the output).
[Epistemic status: I have running code that implements it.]
Overview: I previously showed how an FFN layer in a transformer can be implemented via 3 attention layers. In this post I show how to do it in a single attention layer. This reduces the needed dimensionality of your model from 5D+N+1 to D+N+1. The main bottleneck, needing 4D attention heads for the hidden layers, remains.
Hot Take: [epistemic status: much less confidence than the rest of this post] The bottleneck - that one needs 4D attention heads for the hidden layers - could be capturing a mechanistic interpretability insight: the FFN components of transformers are less interpretable simply because they consist of ~500x more attention heads than a traditional attention layer. This could suggest a “scale is all you need” approach to mechanistic interpretability - we’ll be able to understand large attention-only models if and only if we can understand smaller FFN+attention models.
Outline: I’ll cover two perspectives that helped me realize you could do this simplification, then summarize the changes, link to the code, then give some concluding thoughts.
I will assume you are familiar with the previous post and its notation, so read it here if you need a refresher.
Perspective 1 - Identical Steps
I first realized we could simplify this by imagining the perspective of a single entry in the hidden layer of a transformer’s FFN. We:
- take the dot product of the input with a weight vector,
- apply a nonlinearity (SiLU) to that dot product, and
- scale a second weight vector by the result and add it to the output.
Compare with the steps in an attention head, where (for one query position attending to one key position) we:
- take the dot product of a query vector with a key vector,
- apply a nonlinearity (softmax) to that dot product, and
- scale a value vector by the result and add it to the residual stream.
Suspiciously similar! In my previous post, I used separate attention layers for F1, F2, and F3, but one can actually choose Q and V matrices so that A1/2/3 computes F1/2/3, respectively, allowing you to complete the FFN in a single attention layer.
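To make the parallel concrete, here is a minimal sketch in plain NumPy (names and toy sizes are mine, not taken from the demonstration code) that computes one FFN hidden unit and one attention head side by side; the shared shape is dot product, then nonlinearity, then scaling a vector:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8   # model dimension (toy size)
N = 5   # number of key positions (toy size)

def silu(z):
    return z / (1.0 + np.exp(-z))   # SiLU(z) = z * sigmoid(z)

# --- One FFN hidden unit: dot product -> SiLU -> scale a vector ---
x = rng.normal(size=D)              # input to the FFN
w_in = rng.normal(size=D)           # weights feeding this hidden unit
w_out = rng.normal(size=D)          # weights it writes out with
hidden = silu(x @ w_in)             # dot product, then nonlinearity
ffn_contribution = hidden * w_out   # scale a vector by the result

# --- One attention head, one query: dot products -> softmax -> scale vectors ---
q = rng.normal(size=D)              # query vector for one position
K = rng.normal(size=(N, D))         # key vectors
V = rng.normal(size=(N, D))         # value vectors
scores = K @ q                      # dot products, one per key position
weights = np.exp(scores) / np.exp(scores).sum()   # nonlinearity (softmax)
attn_contribution = weights @ V     # scale value vectors by the results and sum

print(ffn_contribution.shape, attn_contribution.shape)   # both (D,)
```

The obvious mismatch is the nonlinearity (a softmax over positions rather than an elementwise SiLU); bridging that gap is exactly the part the construction from the previous post has to handle.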
Perspective 2 - Virtual Attention Heads
A Mathematical Framework for Transformer Circuits introduced “virtual attention heads”, which provide another useful intuition.
In short, attention heads in two consecutive layers can (in some sense) be treated as a single combined “virtual” attention head. Writing Ai for the attention patterns and Vi for the weights being written to the residual stream, attention heads are characterized by Ai⊗Vi, and the virtual attention head produced by A1⊗V1 and A2⊗V2 is (A1A2)⊗(V1V2), with the caveat that the attention pattern from layer 1 influences the attention pattern in layer 2.
Since this part is just to build intuition, we’re going to play fast and loose with notation and matrix sizes. But applying this analysis to the linear, SiLU, and linear sublayers described in the previous post, we get three factors of the form A1⊗V1, A2⊗V2, and A3⊗V3, one per sublayer.
Now, thinking in terms of virtual attention heads, we have (A1⊗V1)(A2⊗V2)(A3⊗V3) = (A1A2A3)⊗(V1V2V3). Since A1=A3=I, this simplifies to A2⊗(−W1EkW2).
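The identity above is just the mixed-product property of tensor/Kronecker products, which is easy to sanity-check numerically. A quick sketch (toy sizes, reading ⊗ as a plain Kronecker product, in keeping with the fast-and-loose framing):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                     # toy sizes: n positions, d "value" dimensions

A2 = rng.normal(size=(n, n))    # the one non-trivial attention pattern
V1, V2, V3 = (rng.normal(size=(d, d)) for _ in range(3))
I = np.eye(n)                   # A1 = A3 = identity attention patterns

# (A1 ⊗ V1)(A2 ⊗ V2)(A3 ⊗ V3), reading ⊗ as the Kronecker product
lhs = np.kron(I, V1) @ np.kron(A2, V2) @ np.kron(I, V3)
# (A1 A2 A3) ⊗ (V1 V2 V3) = A2 ⊗ (V1 V2 V3)
rhs = np.kron(A2, V1 @ V2 @ V3)

print(np.max(np.abs(lhs - rhs)))  # numerically zero, up to float error
```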
When one does this analysis rigorously, there are three nuances we must add:
Summary
The resulting Q matrix for computing attention looks like this:
And as mentioned before, V=pad(W1EkW2), where W1 and W2 are the weight matrices for your FFN as before, and Ek is the 4D-by-4D matrix with a 1 in the (k,k)th spot and 0s elsewhere.
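For concreteness, here is a sketch of building the V matrix for the head handling hidden unit k (NumPy, biases ignored; where exactly pad places the D-by-D block inside the (D+N+1)-dimensional model space is my assumption here, and the previous post’s layout is authoritative):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 30, 20                     # sizes matching the demonstration code
k = 7                             # which hidden unit this head is responsible for

W1 = rng.normal(size=(D, 4 * D))  # FFN up-projection
W2 = rng.normal(size=(4 * D, D))  # FFN down-projection

# Ek: 4D-by-4D, with a 1 in the (k, k) spot and 0s elsewhere
Ek = np.zeros((4 * D, 4 * D))
Ek[k, k] = 1.0

core = W1 @ Ek @ W2               # D-by-D, and rank one
# Equivalently, without materializing Ek: the outer product of
# column k of W1 with row k of W2.
assert np.allclose(core, np.outer(W1[:, k], W2[k, :]))

# pad(...) embeds the D-by-D block in the (D+N+1)-dim model space.
# Putting the D token channels first is my assumption about the layout.
V = np.zeros((D + N + 1, D + N + 1))
V[:D, :D] = core
print(V.shape)                    # (51, 51)
```

Because W1EkW2 is rank one, it never needs to be formed by an explicit matrix product with Ek in practice.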
You use one such attention head for each of the 4D hidden dimensions. For GPT-3, that is a crushing 49152 attention heads in the FFN layer, compared to 96 attention heads in a normal attention layer. This is a major slowdown compared to computing an FFN normally, although these attention heads could be parallelized.
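The reason it has to be one head per hidden dimension: the FFN output decomposes exactly into 4D SiLU-gated rank-one contributions, one per hidden unit, and each head carries one of them. A small check of that decomposition (my own sketch, biases omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 30

def silu(z):
    return z / (1.0 + np.exp(-z))

W1 = rng.normal(size=(D, 4 * D))  # FFN up-projection
W2 = rng.normal(size=(4 * D, D))  # FFN down-projection
x = rng.normal(size=D)            # one residual-stream vector

# The FFN computed directly
direct = silu(x @ W1) @ W2

# The same output as a sum of 4D per-hidden-unit contributions,
# one per attention head in the construction
per_head = sum(silu(x @ W1[:, k]) * W2[k, :] for k in range(4 * D))

print(np.max(np.abs(direct - per_head)))  # tiny; exact up to float error
```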
Since the hidden-layer values are now computed inside the attention heads, we no longer need 4D extra dimensions in our model to store them between steps. Now the model dimension is D+N+1 (the N+1 channels being used for 1-hot positional encoding). For GPT-3, that raises the dimensionality from 12288 to 14337, a 17% increase over the original model.
Demonstration Code
I’ve put Python code implementing this technique on github. Each of the two components (the FFN and normal attention) is implemented both directly and with attention heads. They are tested on random matrices with N=20 and D=30, and the largest error entries in each matrix are on the order of 10⁻¹³. I have not tested how such errors propagate through multiple layers.
Conclusion
(To be read as a supplement to the conclusions in the previous post, which still stand.)