Sequences

Fact Finding: Attempting to Reverse-Engineer Factual Recall on the Neuron Level
Mechanistic Interpretability Puzzles
Interpreting Othello-GPT
200 Concrete Open Problems in Mechanistic Interpretability
My Overview of the AI Alignment Landscape

Comments

This seems like a useful resource, thanks for making it! I think it would be more useful if you enumerated the different ARENA notebooks: my guess is that many readers won't click through to the link, and they're more likely to if they see the individual names. And IMO the ARENA tutorials are much higher production quality than the other notebooks on that list.

We dig into this in post 3. The layers compose importantly with each other and don't seem to be doing the same thing in parallel: path patching the internal connections will break things, so I don't think it's like what you're describing.

Very cool work! I'm happy to see that the "my vs their colour" result generalises

Thanks for doing this, I'm excited about Neuronpedia focusing on SAE features! I expect this to go much better than neuron interpretability

The illusion is most concerning when learning arbitrary directions in activation space, not when iterating over individual neurons OR SAE features. I don't have strong takes on whether the illusion is more likely with neurons than with SAEs if you're eg iterating over sparse subsets; in some sense it's more likely that you get a dormant and a disconnected feature in your SAE than among your neurons, since SAE features are more meaningful?

Interesting post, thanks for writing it!

I think that the QK section somewhat under-emphasises the importance of the softmax. My intuition is that models rarely care about as precise a task as counting the number of pairs of matching query-key features at each pair of token positions, and that instead softmax is more of an "argmax-like" function that finds a handful of important token positions (though I have not empirically tested this, and would love to be proven wrong!). This enables much cheaper and more efficient solutions, since you just need the correct answer to be the argmax-ish.

For example, ignoring floating point precision, you can implement a duplicate token head with $d_{\text{head}} = 2$ and arbitrarily high vocab size $n$. If there are $n$ vocab elements, map the $k$th query and key to the point $k/n$ of the way round the unit circle. The dot product is maximised when they are equal.

If you further want the head to look at a resting position unless the duplicate token is there, you can increase $d_{\text{head}}$ to 3 and have a dedicated BOS dimension with a score of $1 - \epsilon$, so you only get a higher score for a perfect match. And then make the softmax temperature super low so it's an argmax.
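
To make this concrete, here's a minimal numerical sketch of the construction above (the vocab size, temperature, margin, and token values are illustrative choices, not anything empirical):

```python
import numpy as np

n = 100      # illustrative vocab size
eps = 0.001  # BOS margin; must be below 1 - cos(2*pi/n), the closest non-match gap
beta = 1e5   # inverse softmax temperature; large relative to 1/eps => argmax-like

def circle(k):
    """Map the kth vocab element to the point k/n of the way round the unit circle."""
    theta = 2 * np.pi * k / n
    return np.array([np.cos(theta), np.sin(theta)])

# d_head = 3: two unit-circle dimensions plus a dedicated BOS dimension.
def query(k):
    return np.concatenate([circle(k), [1.0]])

def key(k):
    return np.concatenate([circle(k), [0.0]])

bos_key = np.array([0.0, 0.0, 1 - eps])

q = query(42)
keys = [bos_key, key(7), key(42), key(93)]  # one key duplicates the query token

# Query-key dot product is cos(angle difference): exactly 1 on a match and
# strictly below 1 - eps otherwise, while the BOS key always scores 1 - eps.
scores = np.array([q @ k_ for k_ in keys])

# A low-temperature softmax then acts as an argmax over positions.
z = beta * scores
attn = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
print(attn.round(3))  # -> [0. 0. 1. 0.]: all attention on the duplicate
```

With no duplicate present, the BOS position gets the highest score, giving the resting behaviour.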

These models were not trained with dropout. Nice idea though!

I'm not sure! My guess is that it's because some athlete names were two tokens and others were three tokens (or longer), and we left padded so all prompts were the same length (and masked the attention so the model couldn't attend to the padding tokens). We definitely didn't need to do this, and could have just filtered for two token names; it's not an important detail.
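
For concreteness, here's a minimal sketch of that kind of padding setup using the Hugging Face tokenizer API (the model and prompts are hypothetical stand-ins, not our actual code):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
tokenizer.padding_side = "left"            # left padding so final tokens align

# Hypothetical prompts whose athlete names tokenize to different lengths
prompts = [
    "Fact: Michael Jordan plays the sport of",
    "Fact: Cristiano Ronaldo plays the sport of",
]
batch = tokenizer(prompts, padding=True, return_tensors="pt")

# attention_mask is 0 at the padding positions, which is what lets you stop
# the model attending to them.
print(batch["input_ids"].shape)
print(batch["attention_mask"])
```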

Thanks for the kind words! I'd class "interp supercharging other sciences" under:

Microscope AI: Maybe we can avoid deploying agents at all, by training systems to do complex tasks, then interpreting how they do it and doing it ourselves

This might just be semantics though

I really like this paper! This is one of my favourite interpretability papers of 2022, and has substantially influenced my research. I voted at 9 in the annual review. Specific things I like about it:

  • It really kicked off "narrow distribution"-focused interpretability: just examining models on sentences of the form "John and Mary went to the store, John gave a bag to" -> " Mary". IMO this is a promising alternative focus to the "understand what model components mean on the full data distribution" mindset, and worth some real investment. Model components often do many things in different contexts (ie are polysemantic), and narrow distributions allow us to ignore their other roles.
    • This is less ambitious and less useful than full distribution interp, but may be much easier, and still sufficient for useful applications of interp like debugging model failures (eg why does BingChat gaslight people) or creating adversarial examples.
  • It really pushed forward causal-intervention-based mech interp (ie activation patching; see the minimal sketch after this list), rather than "analysing weights"-based mech interp. Causal interventions are inherently distribution dependent and in some sense less satisfying, but much more scalable, and an important tool in our toolkit (eg they kinda just worked on Chinchilla 70B)
    • Patching was not original to IOI, but IOI is the first time I saw someone actually try to use it to uncover a circuit
    • It was the first place I saw edge/path patching, which is a cool and important innovation on the technique. It's a lot easier to interpret a set of key nodes and how they connect up than just heads that matter in isolation.
  • It's really useful to have an example of a concrete circuit when reasoning through mech interp! I often use IOI as a standard example when teaching or thinking through something
  • When you go looking inside a model you see weird phenomena, which are valuable to know about in future - this kind of work gives existence proofs of these phenomena, so just a single example is sufficient
    • It was the first place I saw the phenomenon of backup/self-repair, which I found highly unexpected
    • It was the first place I saw negative heads (which directly led to the copy suppression paper I supervised, one of my favourite interp papers of 2023!)
  • It's led to a lot of follow-up works trying to uncover different circuits. I think this line of research is hitting diminishing returns, but I'd still love to eg have a zoo of at least 10 circuits in small to medium language models!
  • This was the joint first mech interp work published at a top ML conference, which seems like solid field-building, with more than 100 citations in the past 14 months!
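
Since activation patching comes up repeatedly above, here's a minimal sketch of the technique using the TransformerLens library (the model, prompts, and choice of head here are illustrative, not the IOI paper's actual experiments):

```python
import transformer_lens.utils as utils
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

clean = "When John and Mary went to the store, John gave a bag to"
corrupt = "When John and Mary went to the store, Mary gave a bag to"

# Cache every activation from the clean run.
_, clean_cache = model.run_with_cache(clean)

LAYER, HEAD = 9, 9  # illustrative choice of attention head

def patch_head_z(z, hook):
    # z: [batch, pos, head_index, d_head]. Overwrite one head's output on the
    # corrupted run with its value from the clean run.
    z[:, :, HEAD, :] = clean_cache[hook.name][:, :, HEAD, :]
    return z

patched_logits = model.run_with_hooks(
    corrupt,
    fwd_hooks=[(utils.get_act_name("z", LAYER), patch_head_z)],
)

# How much does patching this one head restore the clean answer " Mary"?
corrupt_logits = model(corrupt)
mary = model.to_single_token(" Mary")
print(corrupt_logits[0, -1, mary].item(), "->", patched_logits[0, -1, mary].item())
```

Edge/path patching extends this by patching individual connections between components, rather than a component's entire output.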

I personally didn't find the paper that easy to read, and I tend to recommend people read other resources to understand the techniques used; I'd guess it suffered somewhat from trying to conform to peer review. But idk, the above is just a lot of impressive things for a single paper!
