CalebMaresca

Wikitag Contributions

No wikitag contributions to display.

Comments

SAE regularization produces more interpretable models
CalebMaresca · 8mo

Thanks, I'll take a look!

SAE regularization produces more interpretable models
CalebMaresca · 8mo

This is really cool! How much computational burden does this add compared to training without the SAEs? 

I could possibly get access to an H100 node at my school's HPC to try this on GPT-2 small.

Monet: Mixture of Monosemantic Experts for Transformers Explained
CalebMaresca · 8mo

Hi Nicky! I agree that it would be interesting to see the steering performance of MONET compared to that of SAEs. At the moment, the way the routing probabilities are calculated makes this difficult: they are computed separately for the bottom and top layers (in horizontal decomposition) or for the left and right segments (in vertical decomposition). It is therefore hard to change the activation of expert $ij$ without also affecting experts $ij'$ and $i'j$ for all $i' \neq i$ and $j' \neq j$.

One of the authors told me the following: "For pruning the experts, we manually expand the decomposed activations using $g_{hij} = g^1_{hi} * g^2_{hj}$. After masking the relevant expert (i, j), we compute for all experts rather than performing efficient expert decomposition. This approach requires more memory and computational resources compared to the standard Monet mode, which is one of our current limitations. We are actively working on porting our Monet training code and developing a decomposed expert routing kernel in CUDA to enable more efficient expert manipulation without the need for full expert expansion."
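To make the coupling concrete, here is a minimal NumPy sketch (not the authors' code; the names `g1` and `g2`, the shapes, and the head/expert counts are illustrative assumptions) of why masking a single expert requires expanding the decomposed routing weights:

```python
import numpy as np

n_heads, m = 2, 4                      # m * m = 16 experts per head (illustrative)
rng = np.random.default_rng(0)
g1 = rng.random((n_heads, m))          # bottom/left routing factors, g^1_{hi}
g2 = rng.random((n_heads, m))          # top/right routing factors,  g^2_{hj}

# The full routing weights are only implicit in standard Monet:
# g_{hij} = g^1_{hi} * g^2_{hj}
g_full = np.einsum("hi,hj->hij", g1, g2)

# Masking expert (i, j) is easy on the expanded tensor ...
i, j = 1, 2
g_masked = g_full.copy()
g_masked[:, i, j] = 0.0

# ... but cannot be expressed on the factors alone: zeroing g1[:, i] or
# g2[:, j] would also zero every expert (i, j') and (i', j).
print(g_full.shape)                    # (2, 4, 4): memory scales with m**2
```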

I think this problem would be easily solved for top-1 activations: to steer, you could simply replace the expert the model chooses with the one you want to steer toward. Since k = 1, you don't need to worry about affecting other routing probabilities.

It would be really interesting if someone tried training a top-1 MONET model (with multiple heads, so that even though each head only selects one expert, it still has the ability to express itself through multiple semantic concepts) and tested its steering performance.
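As a rough illustration of that top-1 steering idea, here is a hypothetical PyTorch sketch (the tensor names, shapes, `target_expert`, and `steer_head` are assumptions for illustration, not MONET's actual implementation):

```python
import torch

batch, seq, n_heads, n_experts = 2, 8, 4, 16
router_logits = torch.randn(batch, seq, n_heads, n_experts)

chosen = router_logits.argmax(dim=-1)     # natural top-1 expert per head: (batch, seq, n_heads)
target_expert = 5                         # expert we want to steer toward (hypothetical)
steer_head = 0                            # only override this head's choice

steered = chosen.clone()
steered[..., steer_head] = target_expert  # swap in the steering expert

# Each head gathers the output of its single chosen expert downstream, so
# overriding one head's index leaves every other routing decision untouched.
```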

Reply
Posts

[Crosspost] Strategic wealth accumulation under transformative AI expectations · 5 points · 7mo · 0 comments
Monet: Mixture of Monosemantic Experts for Transformers Explained · 20 points · 8mo · 2 comments