Stefan Heimersheim. SERI MATS scholar researching Mechanistic Interpretability of Transformers. Trying to figure out AI Alignment. Final-year PhD student in Astronomy, University of Cambridge. Website: StefanHex.com
Thanks Marius for this great write-up!
However, I was surprised to find that the datapoints the network misclassified on the training data are evenly distributed across the D* spectrum. I would have expected them to all have low D*, i.e. the network didn't learn them.
My first intuition here was that the misclassified data points were ones where the network just tried to use the learned features and got it wrong, rather than points the network didn't bother to learn. Like, say, a 2 that looks a lot like an 8, so to the network it looks like a middle-of-the-spectrum 8? Not sure if this is sensible.
The shape of D* changes very little between initialization and the final training run.
I think this is actually a big hint that a lot of the stuff we see in those plots might not be what we think it is / an illusion. Any shape present at initialization cannot tell us anything about the trained network. More on this later.
the distribution of errors is actually left-heavy which is exactly the opposite of what we would expect
Okay, this would be much easier to read if you collapsed the x-axis of those line plots into a histogram (the x-axis is just the sorted index, right?); then you could turn the misclassification dots into histograms as well.
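Something like this minimal matplotlib sketch, where `d_star` and `misclassified` are placeholders for the post's actual per-datapoint values:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: d_star holds the per-datapoint dimensionality,
# misclassified is a boolean mask marking the training errors.
rng = np.random.default_rng(0)
d_star = rng.lognormal(mean=-6, sigma=0.5, size=10_000)   # roughly the 1e-3 to 1e-2 range
misclassified = rng.random(10_000) < 0.02

bins = np.logspace(np.log10(d_star.min()), np.log10(d_star.max()), 40)
plt.hist(d_star, bins=bins, alpha=0.5, label="all datapoints")
plt.hist(d_star[misclassified], bins=bins, alpha=0.5, label="misclassified")
plt.xscale("log")
plt.xlabel("D*")
plt.ylabel("count")
plt.legend()
plt.show()
```

Dividing the two histograms bin-wise would also directly give the misclassification rate per bin.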
we would think that especially weird examples are more likely to be misclassified, i.e. examples on the right-hand side of the spectrum
So are we sure that weird examples are on the right-hand side? If I take weird examples to just trigger a random set of features, would I expect this to have a high or low dimensionality? Given that the normal case is 1e-3 to 1e-2, what's the random chance value?
We train models from scratch for 1, 2, 3, 8, 18, and 40 iterations and plot D*, the location of all misclassified datapoints, and a histogram of the misclassification rate per bin.
This seems to suggest the left-heavy distribution might actually be due to initialization too? The left-tail seems to decline a lot after a couple of training iterations.
I think one of the key checks for this metric will be ironing out which apparent effects are just initialization. Those nice line plots look suggestive, but if initialization produces the same picture, we can't be sure what we can actually learn from them.
One idea to get traction here would be: run the same experiment with two different seeds, make the same plot of max data dim by index for each, then take the two sorted lists of indices and scatter-plot them against each other. If this looks somewhat linear, there might be some real reason why some data points require more dimensions. If it just looks random, that would be evidence against there being inherently difficult/complicated data points that the network memorizes / ignores every time.
Edit: Some evidence for this is actually that the 1s tend to be systematically at the right of the curve, so there seems to be some inherent effect to the data!
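Here is a minimal sketch of that seed-comparison check; `compute_d_star` is a placeholder for training a model with the given seed and computing the per-datapoint max data dimensionality:

```python
import numpy as np
import matplotlib.pyplot as plt

def compute_d_star(seed: int) -> np.ndarray:
    """Placeholder: train a model with this seed and return per-datapoint D*."""
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=-6, sigma=0.5, size=10_000)

d_star_a = compute_d_star(seed=0)
d_star_b = compute_d_star(seed=1)

# Rank of every datapoint in the sorted D* order, separately per seed.
rank_a = np.argsort(np.argsort(d_star_a))
rank_b = np.argsort(np.argsort(d_star_b))

# Roughly linear scatter -> the same datapoints need many dimensions in both runs.
# A structureless blob -> the ordering is not an inherent property of the data.
plt.scatter(rank_a, rank_b, s=2, alpha=0.3)
plt.xlabel("D* rank (seed 0)")
plt.ylabel("D* rank (seed 1)")
plt.show()
print("rank correlation:", np.corrcoef(rank_a, rank_b)[0, 1])
```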
I don't think I understand the problem correctly, but let me try to rephrase it. I believe the key part is the claim about whether or not ChatGPT has a global plan? Let's say we run ChatGPT one output at a time, each time appending the output token to the current prompt and calculating the next output. This ignores some beam-search shenanigans that may be useful in practice, but I don't think those are the core issue here.
There is no memory between calculating the first and second token: the first time, you give ChatGPT the sequence "Once upon a", it predicts "time", and you can shut down the machine; the next time, you give it "Once upon a time" and it predicts the next word. So there isn't any global plan in a very strict sense.
However, when you put "Once upon a time" into a transformer, it will actually reproduce the exact values from the "Once upon a" run, in addition to a new set of values for the next token. Internally, you have a column of residual stream for every word (with 400 or so rows, aka layers, each), and the columns for "Once upon a" are identical between the two runs. So you could say that ChatGPT reconstructs* a plan every time it's asked to output a next token. It comes up with a plan every single time you call it, the first N columns of that plan are identical to the previous plan, and with every new word you add a column to the plan. So in that sense there is a global plan to speak of, but this also fits within the framework of predicting the next token.
"Hey ChatGPT predict the next word!" --> ChatGPT looks at the text, comes up with a plan, and predicts the next word accordingly. Then it forgets everything, but the next time you give it the same text + one more word, it comes up with the same plan + a little bit extra, and so on.
Regarding 'If ChatGPT visits every parameter each time it generates a token, that sure looks "global" to me.': I am not sure what you mean by this. An important thing to keep in mind is that it uses the same parameters for every "column", i.e. for every word. There is no such thing as ChatGPT not visiting every parameter.
And please correct me if I understood any of this wrongly!
*In practice, people cache those intermediate computation results somewhere in their GPU memory so they don't have to recompute those internal values every time. But this is equivalent to recomputing them, and the latter has fewer complications to reason about.
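To make the "reconstructs the same prefix" point concrete, here is a minimal sketch using GPT-2 via TransformerLens as a stand-in (ChatGPT's internals aren't publicly inspectable): run the model on "Once upon a" and on "Once upon a time" and check that the residual-stream columns for the shared prefix are the same in both runs.

```python
import torch
from transformer_lens import HookedTransformer

# GPT-2 small as a stand-in model; ChatGPT's weights are not public.
model = HookedTransformer.from_pretrained("gpt2")

_, cache_short = model.run_with_cache("Once upon a")
_, cache_long = model.run_with_cache("Once upon a time")

# Number of residual-stream columns (token positions) in the shorter prompt.
n_prefix = cache_short["resid_post", 0].shape[1]

for layer in range(model.cfg.n_layers):
    prefix_short = cache_short["resid_post", layer][0, :n_prefix]
    prefix_long = cache_long["resid_post", layer][0, :n_prefix]
    assert torch.allclose(prefix_short, prefix_long, atol=1e-4)

print("Residual stream of the shared prefix is identical in both runs.")
```

Because attention is causal, the extra token cannot affect the earlier columns, which is exactly why caching (or recomputing) gives the same result.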
Yep, it seems to be a coincidence that only the 4-layer model learned this and the 3-layer one did not. As Neel said, I would expect the 3-layer model to learn it if you give it more width / more heads.
We also later checked networks with MLPs, and it turns out the 3-layer gelu models (same properties, except for having MLPs) can do the task just fine.
Your language model game(s) are really interesting -- I've had a couple of ideas while "playing" (such as adding GPT2-small suggestions for the user to choose from, and some tokenization improvements). Are you happy to share the source / the tools used to build this website, or is it not in a state you would be happy to share? Totally fine if not, I just realized that I should ask before considering building something!
Edit for future readers: Managed to do this with Heroku & flask, then switched to Streamlit -- code here, mostly written by ChatGPT: https://huggingface.co/spaces/StefanHex/simple-trafo-mech-int/tree/main
I really appreciated all the observations here and enjoyed reading this post, thank you for writing all this up!
Edit: Found it here! https://github.com/socketteer/loom/
Your setup looks quite useful, with all the extra information -- is it available publicly somewhere / would you be happy to share it, or is the tooling not in that state yet? (Totally fine, just thought I'd ask!)
Firstly, thank you for writing this post, trying to "poke holes" in the "AGI might doom us all" hypothesis. I like to see this!
How is the belief in doom harming this community?
Actually, I see this point: "believing" in "doom" can often be harmful and is usually useless.
Yes, being aware of the (great) risk is helpful for cases like "someone at Google accidentally builds an AGI" (and then hopefully turns it off since they notice and are scared).
But believing we are doomed anyway is probably not helpful. I like to think along the lines of "condition on us winning", to paraphrase HPMOR¹: assume we survive AGI, ask what could have caused us to survive, and work on making those options reality / more likely.
every single plan [...] can go wrong
I think the crux is whether the chance of AGI leading to doom is relatively high; I would say 0.001% is relatively high, whereas you would say that is low? I think it's a similar argument to, say, pandemic preparedness: there is a small chance of a very bad event, and even if the chance is very low, we should still invest substantial resources into reducing the risk.
So maybe we can agree on something like: doom by AGI is a sufficiently high risk that we should spend, say, one-millionth of world GDP (~$80m) on preventing it somehow (AI safety research, policy, etc.).
All fractions mentioned above picked arbitrarily.
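(For concreteness, the arithmetic behind that figure, using the world GDP of roughly $80 trillion implied above: 80 trillion × 10^-6 = 80 million per year.)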
¹ Suppose, said that last remaining part, suppose we try to condition on the fact that we win this, or at least get out of this alive. If someone TOLD YOU AS A FACT that you had survived, or even won, somehow made everything turn out okay, what would you think had happened -
Image interpretability seems mostly so easy because humans are already really good
Thank you, this is a good point! I wonder how much of this is humans "doing the hard work" of interpreting the features. It raises the question of whether we will be able to interpret more advanced networks, especially if they evolve features that don't overlap with the way humans process inputs.
The language model idea sounds cool! I don't know language models well enough yet, but I might come back to this once I get to work on transformers.
I think I found the problem: Omega is unable to predict your action in this scenario, i.e. the assumption "Omega is good at predicting your behaviour" is wrong / impossible / inconsistent.
Consider a day where Omicron (randomly) chose a prime number (Omega knows this). Now an EDT agent is on their way to the room with the boxes, and Omega has to put a prime or non-prime (composite) number into the box, predicting the EDT agent's action.
If Omega makes X prime (i.e. the numbers coincide), then EDT two-boxes, and therefore Omega has failed at predicting.
If Omega makes X non-prime (i.e. the numbers don't coincide), then EDT one-boxes, and therefore Omega has failed at predicting.
Edit: To clarify, EDT's policy is two-box if Omega and Omicron's numbers coincide, one-box if they don't.
You're using an optimization procedure to find an embedding that produces a given output, and if you cannot find one you say the output is unspeakable. How confident are you that the optimization is strong enough? I.e. what are the odds that a god-mode optimizer in this high-dimensional space could actually find an embedding that produces the unspeakable token, and it's just that linprog wasn't strong enough?
Just checking here, I can totally imagine that the optimizer is an unlikely point of failure. Nice work again!
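For what it's worth, one way to stress-test this would be a gradient-based search over the input embedding rather than linprog. A minimal, unconstrained sketch, assuming GPT-2 via HuggingFace transformers; `TARGET_ID` is a hypothetical placeholder, and this ignores whatever feasibility constraints your linear program imposed:

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel

# GPT-2 as a stand-in model; swap in whichever model the post used.
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

TARGET_ID = 12345  # hypothetical: vocabulary id of the token declared unspeakable

# Optimize a single free input embedding so that the model's next-token
# distribution puts as much probability as possible on the target token.
d_model = model.config.n_embd
free_embedding = torch.nn.Parameter(0.01 * torch.randn(1, 1, d_model))
optimizer = torch.optim.Adam([free_embedding], lr=1e-2)

for step in range(2000):
    optimizer.zero_grad()
    logits = model(inputs_embeds=free_embedding).logits[0, -1]
    loss = -F.log_softmax(logits, dim=-1)[TARGET_ID]
    loss.backward()
    optimizer.step()

print(f"final -log p(target) = {loss.item():.3f}")
```

If this drives the loss near zero for a token the linear program declared unspeakable, then the bottleneck was the optimizer rather than the model.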