This is a linkpost for https://colab.research.google.com/gist/JoshuaSP/0b26dab14c618d0325faf2236ebe8825/variables-and-in-context-learning-in-llama2.ipynb#scrollTo=bFn7VFxFLwda
Hi LessWrong! This is my first post here, sharing my first piece of mechanistic interpretability work.
I studied in-context learning in Llama2. The idea was to look at what happens when we associate two concepts in the LLM's context, such as an object (e.g. "red square") and a label (e.g. "Bob"): how is that information transmitted through the model?
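To make the setup concrete, here is a minimal sketch of the kind of experiment described above, assuming a Hugging Face transformers workflow. It is not the notebook's actual code: the prompt wording and the model checkpoint name are illustrative assumptions. The idea is just to bind an object ("red square") to a label ("Bob") in context and collect per-layer hidden states so the information flow can be inspected.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; any Llama2 variant with accessible weights would do.
model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)

# Toy in-context association: the object "red square" is bound to the label "Bob".
# (Illustrative prompt, not the one used in the notebook.)
prompt = "The red square is called Bob. The blue circle is called Alice. The red square is called"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple of (num_layers + 1) tensors, each of shape
# (batch, seq_len, hidden_dim): the embeddings plus one entry per transformer layer.
# These are the activations one would probe to trace how the object-label
# association moves through the model.
print(len(outputs.hidden_states), outputs.hidden_states[0].shape)
```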
In this toy example, I found several interesting things: