Sandy Fraser
Developing interpretability

Comments
Selective regularization for alignment-focused representation engineering
Sandy Fraser · 4mo

Regarding generalization to transformers, I suspect that:

  • Representations "want" to be well-structured. We see this in the way concepts tend to cluster together, and it's further evidenced by cosine distance being a useful thing to measure.
  • Well-structured latent spaces compress knowledge more efficiently, or are otherwise better suited for embedding math. Weak evidence: the training speed boost from hypersphere normalization in nGPT.

So I think latent representations naturally tend toward having many of the features we would regularize for, and they may only need a gentle nudge to become much more interpretable.
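
As a toy check of the "representations want to be well-structured" intuition: comparing average within-concept cosine similarity against across-concept similarity is one quick way to quantify that clustering. This is only a sketch; the function name, the random embeddings, and the labels below are placeholders, not from any real model.

```python
import torch
import torch.nn.functional as F

def cluster_cohesion(embeddings: torch.Tensor, labels: torch.Tensor):
    """Compare mean cosine similarity within vs. across concept clusters.

    embeddings: (n, d) hidden states or token embeddings.
    labels:     (n,) integer concept labels (stand-ins for real annotations).
    """
    normed = F.normalize(embeddings, dim=-1)
    sims = normed @ normed.T                        # (n, n) cosine similarities
    same = labels[:, None] == labels[None, :]       # mask of same-concept pairs
    off_diag = ~torch.eye(len(labels), dtype=torch.bool)
    within = sims[same & off_diag].mean()
    across = sims[~same].mean()
    return within.item(), across.item()

# Hypothetical usage: random embeddings show no structure, but representations
# that cluster by concept should give within >> across.
emb = torch.randn(100, 64)
lab = torch.randint(0, 5, (100,))
print(cluster_cohesion(emb, lab))
```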

I think that some of the challenges in mech interp, intervention, and unlearning are due to:

  1. Not knowing where concepts are located, and having to look for them;
  2. Some concepts becoming poorly separated (entangled) due to the initial state and training dynamics;
  3. Not knowing how entangled they are.
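
On point 1: in practice, "looking for" a concept often means something like fitting a small probe on each layer's activations and seeing where the concept becomes decodable. A rough sketch follows; the probe setup and names are mine, for illustration only.

```python
import torch

def probe_accuracy(hidden: torch.Tensor, labels: torch.Tensor, epochs: int = 200) -> float:
    """Fit a logistic-regression probe on one layer's hidden states.

    hidden: (n, d) activations for n tokens at some layer.
    labels: (n,) binary concept labels (e.g. 1 = "malicious" token).
    Running this per layer shows where the concept is linearly decodable;
    it says little about how entangled it is with neighbouring concepts.
    """
    probe = torch.nn.Linear(hidden.shape[-1], 1)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            probe(hidden).squeeze(-1), labels.float())
        loss.backward()
        opt.step()
    preds = (probe(hidden).squeeze(-1) > 0).long()
    return (preds == labels).float().mean().item()
```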

My hypothesis: if we
  a. identify a few concepts that we really care about (like "malice"),
  b. label a small subset of malicious samples, and
  c. apply gentle regularization to all latent representations in the transformer for tokens so labelled, right from the start of training,
then the concepts we care about will become well-organized in ways that we can predict, while other concepts will be largely free to organize however they like.
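
To make step (c) concrete, here is a minimal sketch of the kind of regularizer I have in mind, assuming a per-token label mask and a fixed anchor direction. The names (`selective_anchor_loss`, `reg_strength`) and the cosine penalty are illustrative assumptions, not the exact loss from the post.

```python
import torch
import torch.nn.functional as F

def selective_anchor_loss(hidden: torch.Tensor,
                          label_mask: torch.Tensor,
                          anchor: torch.Tensor,
                          reg_strength: float = 0.01) -> torch.Tensor:
    """Gently pull labelled tokens' representations toward an anchor direction.

    hidden:       (batch, seq, d) hidden states from one transformer layer.
    label_mask:   (batch, seq) bool; True for tokens labelled with the target
                  concept (e.g. "malice"). Only these tokens are regularized.
    anchor:       (d,) fixed anchor direction for the concept.
    reg_strength: small coefficient so the nudge stays gentle.
    """
    if not label_mask.any():
        return hidden.new_zeros(())
    selected = hidden[label_mask]                            # (k, d)
    cos = F.cosine_similarity(selected, anchor[None, :], dim=-1)
    return reg_strength * (1.0 - cos).mean()

# Hypothetical usage in a training loop, summed over the layers of interest:
#   total_loss = task_loss + sum(selective_anchor_loss(h, mask, anchor) for h in layer_hiddens)
```

Because the loss only ever touches the labelled tokens, the rest of the latent space stays free to organize itself, which is what I mean by a gentle, selective nudge.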

And I think that in high-dimensional spaces, this won't be in conflict with nuanced concepts that "share" components of several more basic concepts. For example, in the experiment presented in this post, red and the colors near red were all regularized (with varying frequency) toward the anchor point — and yet the colors near red (such as dark red and light red) were able to position themselves appropriately close to the anchor point while also having enough freedom to be shifted toward white and black in the other (non-hue) dimensions.

[Image: Annotated diagram from the post above, showing two circular scatter plots of latent space. Each plot shows one hue dimension and one other (undefined) dimension. Pure red sits at the center of each plot, at coordinates (1,0,0,0). White is at the top of the first plot and black near the bottom of the second. Light red lies between red and white on the first plot, and dark red between red and black on the second.]
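
A tiny numeric version of that picture, using hypothetical 4-d vectors with red anchored at (1,0,0,0) and the fourth dimension standing in for lightness:

```python
import torch
import torch.nn.functional as F

# Hypothetical 4-d latent vectors: first three dims ~ hue, last dim ~ lightness.
anchor_red = torch.tensor([1.0, 0.0, 0.0, 0.0])
light_red  = torch.tensor([1.0, 0.0, 0.0, 0.6])   # shifted toward white
dark_red   = torch.tensor([1.0, 0.0, 0.0, -0.6])  # shifted toward black

for name, v in [("light red", light_red), ("dark red", dark_red)]:
    cos = F.cosine_similarity(v, anchor_red, dim=0).item()
    print(f"{name}: cosine similarity to red anchor = {cos:.2f}")
# Both stay ~0.86 similar to the anchor, yet differ from each other
# along the unregularized lightness dimension.
```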
Selective regularization for alignment-focused representation engineering
Sandy Fraser · 4mo

Thanks for the link to that paper! I hadn't seen it; I'll definitely check it out. I started on this research with little background, and I find it interesting that I converged on using many of the same terms used in the literature. I feel like that in itself is weak evidence that the ideas have merit.

Detecting out of distribution text with surprisal and entropy
Sandy Fraser · 8mo

Ah, good suggestion! I've published a demo as a Hugging Face Space at z0u/sparky.

Posts

  • Side quests in curriculum learning and regularization (3mo, 5 points, 0 comments)
  • Selective regularization for alignment-focused representation engineering (4mo, 21 points, 3 comments)
  • Concept-anchored representation engineering for alignment (5mo, 5 points, 0 comments)
  • Detecting out of distribution text with surprisal and entropy (8mo, 18 points, 4 comments)