Distillation & Pedagogy · Interpretability (ML & AI) · AI
Real-Time Research Recording: Can a Transformer Re-Derive Positional Info?

by Neel Nanda
1st Nov 2022
AI Alignment Forum
1 min read

This is a linkpost for https://youtu.be/yo4QvDn-vsU


16 comments, sorted by top scoring
Logan Riggs · 3y

"I'd love to hear whether you found this useful, and whether I should bother making a second half!"

We had 5 people watch it here, and we would like a part 2 :)

We had a lot of fun pausing the video and making forward predictions, and we couldn't think of any feedback for you in general.

Neel Nanda · 3y

Thanks for the feedback! I'm impressed you had 5 people interested! What context was this in? (I.e., what do you mean by "here"?)

Arthur Conmy · 2y

This occurs across different architectures and datasets (https://arxiv.org/abs/2203.16634)

[from a quick skim this video+blog post doesn't mention this]

Neel Nanda · 2y

Thanks! Yeah, I hadn't seen that, but someone pointed it out on Twitter. Feels like fun complementary work.

Garrett Baker · 3y

This was really, really helpful! I learned a lot about how to think through experiment design by watching you do it, and I found some possible mistakes I've been making while designing my own experiments!

My only criticism: when Copilot auto-fills in details, it would be helpful if you'd explain what it did and why it's what you wanted it to do, like you do with your own code.

Neel Nanda · 3y

Awesome, really appreciate the feedback! And makes sense re Copilot, I'll keep that in mind for future videos :) (maybe I should just turn it off?)

I'd love to hear more re the possible mistakes if you're down to share!

Garrett Baker · 3y

The main big one was that when I was making experiments, I did not have in mind a particular theory about how the network was implementing a particular capability. I just messed around with matrices, graphed a bunch of stuff, and multiplied a bunch of weights by a bunch of other weights. Occasionally I'd get interesting-looking pictures, but I had no clue what to do with those pictures or what follow-up questions I could ask. I think that's because I didn't have an explicit model of what the network should be doing, and so I couldn't update my picture of the mechanisms it was using from the data I gathered about its internals.

Neel Nanda · 3y

Makes sense, thanks! Fwiw, I think the correct takeaway is a mix of "try to form hypotheses about what's going on" and "it's much, much easier when you have at least some surface area on what's going on". There are definitely problems where you don't really know going in (e.g., I did not expect modular addition to be solved with trig identities!), and there's also the trap of being overconfident in an incorrect view. But I think the mode of iteratively making and testing hypotheses is pretty good.

An alternate mode, valid but harder, is to first do some exploratory analysis where you just hit the problem with a standard toolkit and see what sticks, without any real hypothesis, and then use that raw data to form a hypothesis about what's going on and decide what to do next to test it or try to break it.

Garrett Baker · 3y

What do you mean by “surface area”?

Neel Nanda · 3y

I use "surface area" as a fuzzy intuition for having some model of what's going on and some understanding of what's happening in a problem/phenomenon. That doesn't necessarily look like a full understanding, but it does look like having a list in my head of confusing phenomena, somewhat useful ideas, and hooks into what I could investigate next.

I find this model useful both to recognise "do I have any surface area on this problem?" and to motivate next steps via "what could give me more surface area on this problem?", even if it's not a perfectly robust framing.

MadHatter · 3y

Very cool stuff! Do you have the notebook on Colab or something? I kind of want to find out how the story ends, whether that's in a second-half video or just from playing around with the code. At the end of this video you had what looked like fairly clean positional embeddings coming out of MLP0. Also, the paying-attention-to-self in the second attention layer could plausibly have something to do with erasing the information that comes in on that token, since that's something all transformer decoders have to do in some fashion or another.

Pretty sure the loss spikes were coming from using max rather than min when defining the learning rate schedule. Your learning rate multiplier starts at 1 and then increases linearly as step/100 once the step count passes 100, which explains why training behaves itself for a while and then ultimately diverges for large numbers of steps.
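A minimal sketch of that fix (illustrative PyTorch, assuming the warmup multiplier is implemented with a LambdaLR-style lambda; this is not the notebook's actual code):

```python
import torch

def buggy_warmup(step: int) -> float:
    # Suspected bug: max() keeps the multiplier at 1 until step 100 and then
    # lets it grow as step/100 forever, so the effective learning rate
    # increases without bound and training eventually diverges.
    return max(step / 100, 1)

def fixed_warmup(step: int) -> float:
    # Fix: min() ramps the multiplier from 0 to 1 over the first 100 steps,
    # then holds it at 1.
    return min(step / 100, 1)

# Hypothetical usage with a stand-in model and PyTorch's LambdaLR scheduler.
model = torch.nn.Linear(10, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=fixed_warmup)
```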

MadHatter · 3y

Yeah, just changing the max to a min produces a much smoother loss curve from your notebook.

MadHatter · 3y

Oops, I did not read the post carefully enough - you've already linked to the Colab!

Gabe M · 3y

Wow, this is a cool concept and video, thanks for making it! As a new person to the field, I'd be really excited for you and other AI safety researchers to do more devlog/livestream content of the form "strap a GoPro on me while I do research!"

dkirmani · 3y

guessing this wouldn't work without causal attention masking

Neel Nanda · 3y

Yeah, I think that's purely symmetric.
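A quick way to see that symmetry (an illustrative sketch, not from the video): without a causal mask, self-attention is permutation-equivariant, so shuffling the input tokens just shuffles the outputs and there is nothing from which to recover position.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
attn = nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)
x = torch.randn(1, 5, 16)   # a toy "sequence" of 5 embeddings
perm = torch.randperm(5)

out, _ = attn(x, x, x, need_weights=False)  # no causal mask applied
out_perm, _ = attn(x[:, perm], x[:, perm], x[:, perm], need_weights=False)

# Permuting the inputs merely permutes the outputs, so an unmasked layer
# carries no positional information.
print(torch.allclose(out[:, perm], out_perm, atol=1e-5))  # True
```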


New experiment: recording myself in real time as I do mechanistic interpretability research! I try to answer the question of what happens if you train a toy transformer without positional embeddings on the task of "predict the previous token" - it turns out that a two-layer model can re-derive them! You can watch me do it here, and you can follow along with my code here. This uses a transformer mechanistic interpretability library I'm writing called EasyTransformer, and this was a good excuse to test it out and create a demo!
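For concreteness, here is a rough sketch of the task setup in plain PyTorch (this is not the EasyTransformer code from the video; the architecture details and hyperparameters below are illustrative):

```python
import torch
import torch.nn as nn

d_vocab, d_model, n_ctx = 100, 128, 64

class ToyPrevTokenModel(nn.Module):
    """Two-layer causal transformer with token embeddings only - no positional embeddings."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(d_vocab, d_model)
        block = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=512, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=2)
        self.unembed = nn.Linear(d_model, d_vocab)

    def forward(self, tokens):
        x = self.embed(tokens)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
        x = self.blocks(x, mask=causal_mask)
        return self.unembed(x)

model = ToyPrevTokenModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

for step in range(1000):
    tokens = torch.randint(0, d_vocab, (32, n_ctx))  # random token sequences
    logits = model(tokens)
    # Target at position i is the token at position i-1; position 0 has no target.
    loss = nn.functional.cross_entropy(
        logits[:, 1:].reshape(-1, d_vocab), tokens[:, :-1].reshape(-1)
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The only positional signal available to such a model comes from the causal mask, so solving the task at all requires it to re-derive some notion of position internally.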

This is an experiment in recording and publishing myself doing "warts and all" research - figuring out how to train the model and operationalising an experiment (including 15 mins debugging loss spikes...), real-time coding and tensor fuckery, and using my go-to toolkit. My hope is to give a flavour of what actual research can look like - how long do things actually take, how often do things go wrong, what is my thought process and what am I keeping in my head as I go, what being confused looks like, and how I try to make progress. I'd love to hear whether you found this useful, and whether I should bother making a second half!

Though I don't want to overstate this - this was still a small, self-contained toy question that I chose for being a good example task to record (and I wouldn't have published it if it was TOO much of a mess).

Mentioned in
Induction heads - illustrated
Preparing for AI-assisted alignment research: we need data!
EA & LW Forums Weekly Summary (31st Oct - 6th Nov '22)