Jakub Smékal

Comments
Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small
Jakub Smékal · 2y

Neel was advised by the authors that it was important to minimise batches having tokens from the same prompt. This approach leads to a buffer having activations from many different prompts fairly quickly.

Oh I see, it's a constraint on the tokens from the vocabulary rather than the prompts. Does the buffer ever reuse prompts or does it always use new ones?

Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small
Jakub Smékal · 2y
  • We store activations in a buffer of ~500k tokens which is refilled and shuffled whenever 50% of the tokens are used (i.e., Neel's approach).

I am not sure I understand the reasoning behind this approach. Why refill and shuffle the buffer whenever 50% of the tokens have been used? Does this apply only to tokens in the training set, or to the test set as well? I didn't see a train/test split in Neel's code; isn't that important? Also, can you track the number of training epochs when using this buffer method? It seems like the buffer makes that more difficult. (A rough sketch of this kind of buffer is included after this comment.)

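For concreteness, here is a minimal sketch of the kind of activation shuffle buffer described in the quoted bullet: hold roughly 500k activations drawn from many prompts, and once half of them have been consumed, top the buffer up with activations from fresh prompts and reshuffle. The class name and the `get_activations` callable are hypothetical stand-ins, not the post's or Neel's actual code.

```python
import torch


class ActivationBuffer:
    """Shuffled buffer of residual-stream activations (illustrative sketch).

    Holds ~buffer_size activation vectors drawn from many prompts. Once half
    of them have been served to the SAE, the used half is replaced with
    activations from fresh prompts and the whole buffer is reshuffled, so a
    single training batch rarely contains many tokens from the same prompt.
    """

    def __init__(self, get_activations, buffer_size=500_000, d_model=768,
                 batch_size=4096):
        # get_activations(n) is assumed to run the model on new prompts and
        # return a [n, d_model] tensor of activations (hypothetical helper).
        self.get_activations = get_activations
        self.buffer_size = buffer_size
        self.batch_size = batch_size
        self.buffer = torch.empty(0, d_model)
        self.ptr = 0
        self._refill()

    def _refill(self):
        # Keep the unused activations, fetch fresh ones for the rest, shuffle.
        kept = self.buffer[self.ptr:]
        fresh = self.get_activations(self.buffer_size - kept.shape[0])
        self.buffer = torch.cat([kept, fresh])[torch.randperm(self.buffer_size)]
        self.ptr = 0

    def next_batch(self):
        # Refill and reshuffle once 50% of the buffer has been consumed.
        if self.ptr > self.buffer_size // 2:
            self._refill()
        batch = self.buffer[self.ptr:self.ptr + self.batch_size]
        self.ptr += self.batch_size
        return batch
```

One consequence, relevant to the epoch question above, is that the buffer streams tokens in continuously rather than iterating over a fixed dataset, so "epochs" are only well defined if you separately track how many tokens have been fed through `get_activations`.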
Sparse Autoencoders Work on Attention Layer Outputs
Jakub Smékal · 2y

Hey, great post! Are your code or autoencoder weights available somewhere?

Posts

Excursions into Sparse Autoencoders: What is monosemanticity? · 1y · 2 karma · 0 comments
Communication, consciousness, and belief strength measures · 2y · 1 karma · 0 comments
Measuring pre-peer-review epistemic status · 2y · 1 karma · 0 comments
Starting in mechanistic interpretability · 2y · 1 karma · 0 comments
Carving up problems at their joints · 2y · 1 karma · 0 comments