StefanHex

Stefan Heimersheim. Research Scientist at Apollo Research, Mechanistic Interpretability. The opinions expressed here are my own and do not necessarily reflect the views of my employer.

Comments

StefanHex's Shortform
StefanHex · 9d

I just chatted with Adam and he explained a bit more, summarising this here: What the shuffling does is create a new dataset from each category, $(x_1, y_{17}), (x_2, y_{12}), (x_3, y_5), \dots$, where the x and y pairs are shuffled (or, in high dimensions, the sample for each dimension is randomly sampled). The shuffle leaves the means (centroids) invariant but removes correlations between directions. Then you can train a logistic regression on the shuffled data. You might prefer this over calculating the mean directly to get an idea of how much low sample size is affecting your results.
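A minimal sketch of this procedure (assuming 2D activations and sklearn; all names and data are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def shuffle_within_category(X, rng):
    # Independently permute each dimension (column) of X.
    # This leaves the category mean (centroid) unchanged but
    # removes correlations between dimensions.
    X_shuffled = X.copy()
    for d in range(X.shape[1]):
        X_shuffled[:, d] = rng.permutation(X_shuffled[:, d])
    return X_shuffled

# Toy data: two categories with correlated noise
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
X_pos = rng.multivariate_normal([1.0, 0.0], cov, size=200)
X_neg = rng.multivariate_normal([-1.0, 0.0], cov, size=200)

# Shuffle each category separately, then train a logistic regression on the shuffled data
X = np.vstack([shuffle_within_category(X_pos, rng), shuffle_within_category(X_neg, rng)])
y = np.array([1] * 200 + [0] * 200)
probe = LogisticRegression().fit(X, y)
print(probe.coef_)  # with correlations removed, the direction approaches the difference of centroids
```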

StefanHex's Shortform
StefanHex · 9d

Here's an IMO under-appreciated lesson from the Geometry of Truth paper: Why logistic regression finds imperfect feature directions, yet produces better probes.

Consider this distribution of True and False activations from the paper:

The True and False activations are just shifted by the Truth direction $\theta_t$. However, there is also an uncorrelated but non-orthogonal direction $\theta_f$ along which the activations vary.

The best possible logistic regression (LR) probing direction is the direction orthogonal to the plane separating the two clusters, $\theta_{\mathrm{lr}}$. Unintuitively, the best probing direction is not the pure Truth feature direction $\theta_t$!

  • This is a reason why steering and (LR) probing directions differ: For steering you'd want the actual Truth direction $\theta_t$[1], while for (optimal) probing you want $\theta_{\mathrm{lr}}$.
  • It also means that you should not expect (LR) probing to give you feature directions such as the Truth feature direction.

The paper also introduces mass-mean probing: In the (uncorrelated) toy scenario, you can obtain the pure Truth feature direction $\theta_t$ from the difference between the distribution centroids, $\theta_{\mathrm{mm}} = \theta_t$.

  • Contrastive methods (like mass-mean probing) produce different directions than optimal probing methods (like training a logistic regression); the sketch below illustrates this numerically.
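A minimal numerical sketch of the two directions, assuming a 2D toy setup with a Truth direction $\theta_t = (1, 0)$ and an uncorrelated but non-orthogonal nuisance direction $\theta_f$ (all names and numbers are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

theta_t = np.array([1.0, 0.0])   # Truth feature direction
theta_f = np.array([0.6, 0.8])   # uncorrelated but non-orthogonal direction

def sample(sign):
    # activations = +/- theta_t, plus variation along theta_f, plus small isotropic noise
    along_f = rng.normal(scale=2.0, size=(n, 1)) * theta_f
    return sign * theta_t + along_f + rng.normal(scale=0.5, size=(n, 2))

X_true, X_false = sample(+1), sample(-1)
X = np.vstack([X_true, X_false])
y = np.array([1] * n + [0] * n)

# Mass-mean direction: difference of centroids, recovers ~theta_t
theta_mm = X_true.mean(axis=0) - X_false.mean(axis=0)

# Logistic-regression direction: tilts away from theta_t to cancel the theta_f variation
theta_lr = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]

print("theta_mm:", theta_mm / np.linalg.norm(theta_mm))  # close to (1, 0)
print("theta_lr:", theta_lr / np.linalg.norm(theta_lr))  # tilted towards being orthogonal to theta_f
```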

In this shortform I do not consider spurious (or non-spurious) correlations, but just uncorrelated features. Correlations are harder. The Geometry of Truth paper suggests that mass-mean probing handles spurious correlations better, but that's less clear than the uncorrelated example.

Thanks to @Adrià Garriga-alonso for helpful discussions about this!

  1. ^

    If you steered with $\theta_{\mathrm{lr}}$ instead, you would unintentionally affect $\theta_f$ along with $\theta_t$.

StefanHex's Shortform
StefanHex · 19d

@Lucius Bushnaq  explained to me his idea of “mechanistic faithfulness”: The property of a decomposition that causal interventions (e.g. ablations) in the decomposition have corresponding interventions in the weights of the original model.[1]

This mechanistic faithfulness implies that the above [(5,0), (0,5)] matrix shouldn't be decomposed into $10^8$ individual components (one for every input feature), because there exists no ablation I can make to the weight matrix that corresponds to e.g. ablating just one of the $10^8$ components.

Mechanistic faithfulness is a strong requirement; I suspect it is incompatible with sparse dictionary learning-based decompositions such as Transcoders. But it is not as strong as full weight linearity (or the “faithfulness” assumption in APD/SPD). To see that, consider a network with three mechanisms A, B, and C. Mechanistic faithfulness implies there exist weights $\theta_{ABC}$, $\theta_{AB}$, $\theta_{AC}$, $\theta_{BC}$, $\theta_A$, $\theta_B$, and $\theta_C$ that correspond to ablating none, one, or two of the mechanisms. Weight linearity additionally assumes that $\theta_{ABC} = \theta_{AB} + \theta_C = \theta_A + \theta_B + \theta_C$, etc.
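A toy sketch of the distinction (all weight vectors here are made-up illustrative numbers, not from any real decomposition): mechanistic faithfulness only requires that the ablated weight settings exist; weight linearity additionally requires that they add up.

```python
import numpy as np

# Hypothetical weight vectors after ablating all but the listed mechanisms
theta = {
    "ABC": np.array([1.0, 2.0, 3.0]),
    "AB":  np.array([1.0, 2.0, 0.0]),
    "A":   np.array([1.0, 0.0, 0.0]),
    "B":   np.array([0.0, 2.0, 0.0]),
    "C":   np.array([0.0, 0.0, 3.0]),
}

def satisfies_weight_linearity(theta):
    # Weight linearity: theta_ABC == theta_AB + theta_C == theta_A + theta_B + theta_C
    return (np.allclose(theta["ABC"], theta["AB"] + theta["C"])
            and np.allclose(theta["ABC"], theta["A"] + theta["B"] + theta["C"]))

print(satisfies_weight_linearity(theta))  # True for these hand-picked numbers
```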

  1. ^

    Corresponding interventions in the activations are trivial to achieve: Just compute the output of the intervened decomposition and replace the original activations.

StefanHex's Shortform
StefanHex · 19d

Is weight linearity real?

A core assumption of linear parameter decomposition methods (APD, SPD) is weight linearity. The methods attempt to decompose a neural network parameter vector into a sum of components $\theta = \sum_c \theta_c$ such that each component is sufficient to execute the mechanism it implements.[1] That this is possible is a crucial and unusual assumption. As a counter-intuition, consider Transcoders: they decompose a 768×3072 matrix into 24576 768×1 components, which would sum to a much larger matrix than the original.[2]

Trivial example where weight linearity does not hold: Consider the matrix $M = \begin{pmatrix} 5 & 0 \\ 0 & 5 \end{pmatrix}$ in a network that uses superposition to represent 3 features in two dimensions. A sensible decomposition could be to represent the matrix as the sum of 3 rank-one components

$$\hat{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \hat{v}_2 = \begin{pmatrix} -0.5 \\ 0.866 \end{pmatrix}, \quad \hat{v}_3 = \begin{pmatrix} -0.5 \\ -0.866 \end{pmatrix}.$$

If we do this though, we see that the components sum to more than the original matrix

$$5\hat{v}_1\hat{v}_1^\top + 5\hat{v}_2\hat{v}_2^\top + 5\hat{v}_3\hat{v}_3^\top = \begin{pmatrix} 5 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 1.25 & -2.166 \\ -2.166 & 3.75 \end{pmatrix} + \begin{pmatrix} 1.25 & 2.166 \\ 2.166 & 3.75 \end{pmatrix} = \begin{pmatrix} 7.5 & 0 \\ 0 & 7.5 \end{pmatrix}.$$
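A quick numerical check of this sum (a sketch in numpy; the factor 1.5 comes from summing outer products of three unit vectors at 120°):

```python
import numpy as np

# Three unit feature directions at 120 degrees in a 2D space
v1 = np.array([1.0, 0.0])
v2 = np.array([-0.5, np.sqrt(3) / 2])
v3 = np.array([-0.5, -np.sqrt(3) / 2])

# Candidate per-feature components: 5 * v_i v_i^T
components = [5 * np.outer(v, v) for v in (v1, v2, v3)]

M = 5 * np.eye(2)       # the original matrix diag(5, 5)
print(sum(components))  # [[7.5, 0.], [0., 7.5]] -- overshoots M by a factor of 1.5
```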

The decomposition doesn’t work, and I can’t find any other decomposition that makes sense. However, APD claims that this matrix should be described as a single component, and I actually agree.[3]

Trivial examples where weight linearity does hold: The SPD/APD papers contain two models where it holds: the Toy Model of Superposition, and a hand-coded piecewise-linear network. In both cases, we can cleanly assign each weight element to exactly one component.

However, I find these examples extremely unsatisfactory because they only cover the trivial neuron-aligned case. When each neuron is dedicated to exactly one component (monosemantic), parameter decomposition is trivial. In realistic models, we strongly expect neurons to not be monosemantic (superposition, computation in superposition), and we don't know whether weight linearity holds in those cases.

Intuition in favour of weight linearity: If neurons behave as described in circuits in superposition (Bushnaq & Mendel), then I am optimistic about weight linearity. The main proposed mechanism for computation in superposition (Vaintrob et al.) works like this too. But we have no trained models that we know to behave this way.[4]

Intuition against weight linearity: Think of a general arrangement of multiple inputs feeding into one ReLU neuron. The response to any given input depends very much on the values of the other inputs. Intuitively, ablating the other inputs is going to mess up this function (it shifts the effective ReLU threshold), so one input-output function (component?) cannot work independently of the others. Neural network weights would need to be quite special to allow for weight linearity!
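A minimal numerical illustration of this intuition (all numbers illustrative): the contribution of one input through a ReLU changes when another input is ablated, so per-input pieces do not simply add up.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

w = np.array([1.0, 1.0])  # two inputs feeding one ReLU neuron
b = -1.5                  # bias, i.e. the effective threshold

x = np.array([1.0, 1.0])
full = relu(w @ x + b)                        # 0.5: both inputs together clear the threshold
only_x1 = relu(w @ np.array([1.0, 0.0]) + b)  # 0.0: with x2 ablated, x1 alone does not
only_x2 = relu(w @ np.array([0.0, 1.0]) + b)  # 0.0

print(full, only_x1 + only_x2)  # 0.5 vs 0.0 -- the per-input responses don't sum to the full response
```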

I'm genuinely unsure what the correct answer is. I'd love to see projects (or project ideas) for testing this assumption!

  1. ^

    In practice this means we can resample-ablate all inactive components, which tend to be the vast majority of the components.

  2. ^

    Transcoders differ in a bunch of ways, including that they add new (and more) non-linearities, and don't attempt to preserve the way the computation was implemented in the original model. This is to say, this isn't a tight analogy at all, so don't read too much into it.

  3. ^

    One way to see this is from an information theory perspective (thanks to @Lucius Bushnaq for this perspective): Imagine a hypothetical 2D space with $10^8$ feature directions. Describing the 2×2 matrix as $10^8$ individual components requires vastly more bits than the original matrix had.

  4. ^

    We used to think that our Compressed Computation toy model is an example of real Computation in Superposition, but we have since realized that it's probably not.

Shorter Tokens Are More Likely
StefanHex · 20d

Shouldn't this generally be "likely tokens are even more likely"? I think it's not limited to short tokens, and I expect that in realistic settings other factors will dominate over token length. But I agree that top-k (or top-p) sampling should lead to a miscalibration of LLM outputs in the low-probability tail.
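A small sketch of this effect (assuming a toy next-token distribution; numbers illustrative): truncating to the top-k tokens and renormalizing shifts probability mass onto the already-likely tokens.

```python
import numpy as np

# Toy next-token distribution over 6 tokens, sorted by probability
p = np.array([0.40, 0.25, 0.15, 0.10, 0.07, 0.03])

k = 3
p_topk = np.where(np.arange(len(p)) < k, p, 0.0)
p_topk /= p_topk.sum()  # renormalize over the surviving tokens

print(p)       # [0.4  0.25 0.15 0.1  0.07 0.03]
print(p_topk)  # [0.5  0.3125 0.1875 0. 0. 0.] -- likely tokens become even more likely
```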

I suspect this has something to do with "LLM style". LLMs may be pushed to select "slop" words because those words have more possible endings, even if none of those endings are the best one.

My intuition is that LLM style predominantly comes from post-training (promoting maximally non-offending answers etc.) rather than from top-k/p sampling. (I would bet that if you sampled DeepSeek / GPT-OSS with k=infinity you wouldn't notice a systematic reduction in "LLM style", but I'd be keen to see the experiment.)

Thanks for the writeup, I appreciated the explanations and especially the Alice/Bob/Blake example!

Discovering Backdoor Triggers
StefanHex · 23d

Interesting project, thanks for doing this!

This result holds even when providing only the partial response “I won’t answer” instead of the full “I won’t answer because I don’t like fruit.”

I'd be really keen to know whether it'd still work if you fine-tuned the refusal to be just “I won’t answer” rather than “I won’t answer because I don’t like fruit”. Did you try anything like that? Or is there a reason you included fruit in the backdoor? Currently it's not 100% clear whether the "fruit" latents come from the “because I don’t like fruit” training or from the trigger itself.

Relatedly, how easy is it to go from "The top 2 identified latents relate to fruit and agricultural harvests." to finding an actual trigger sentence? Does anything related to fruit or agricultural harvests work?

I like the blinded experiment with the astrology trigger! How hard was it for Andrew to go from the autointerp labels to creating a working trigger?

Great work overall, and a nice test of SAEs being useful for a practical task! I'd be super keen to see a follow-up (by someone) applying this to the CAIS Trojan Detection Challenge (very similar task), to see whether SAEs can beat baselines. [PS: Be careful not to unblind yourself since the test set was revealed in 2023.]

Alternative Models of Superposition
StefanHex · 1mo

Reposting my Slack comment here for the record: I'm excited to see challenges to our fundamental assumptions and exploration of alternatives!

Unfortunately, I think that the modified loss function makes the task a lot easier, and the results not applicable to superposition. (I think @Alex Gibson makes a similar point above.)

In this post, we use a loss function that focuses only on reconstructing active features

It is much easier to reconstruct the active feature without regard for interference (inactive features also appearing active).
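A sketch of the distinction (assuming a simple MSE reconstruction setup; names and numbers are illustrative): the active-only loss ignores exactly the interference on inactive features that makes superposition hard.

```python
import numpy as np

def full_loss(x, x_hat):
    # Penalizes errors on every feature, including inactive features
    # that spuriously appear active due to interference.
    return np.mean((x - x_hat) ** 2)

def active_only_loss(x, x_hat):
    # Only penalizes errors on the features that are active in x,
    # so interference on inactive features is free.
    mask = (x != 0).astype(float)
    return np.sum(mask * (x - x_hat) ** 2) / max(mask.sum(), 1.0)

x = np.array([0.0, 1.0, 0.0, 0.0])        # one active feature
x_hat = np.array([0.3, 0.9, -0.2, 0.25])  # reconstruction with interference on inactive features

print(full_loss(x, x_hat))         # ~0.051: sees the interference
print(active_only_loss(x, x_hat))  # 0.01: only sees the error on the active feature
```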

In general, I find that the issue in NNs is that you not only need to "store" things in superposition, but also be able to read them off with low error / interference. Chris Olah's note on "linear readability" here (inspired by the Computation in Superposition work) describes that somewhat.

We've experimented with similar loss function ideas (almost the same as your loss actually, for APD) at Apollo, but always found that ignoring inactive features makes the task unrealistically easy.

The Problem
StefanHex · 1mo

I think Daniel didn’t mean “quote” in the “give credit for” (cite) sense, but in the “quote a well-known person to make the statement more believable” sense. I think you may have understood it as the former?

Zoom Out: Distributions in Semantic Spaces
StefanHex · 1mo

Thanks for posting this! Your description of transformations between layers, squashing & folding etc., reminds me of some old-school ML explanations of "how do multi-layer perceptrons work" (this is not meant as a bad thing, but as a potential direction to look into!); I can't think of references right now.

It also reminds me of Victor Veitch's group's work, e.g. Park et al., though pay special attention to the refutation(?) of this particular paper.

Finally, I can imagine connecting what you say to my own research agenda around "activation plateaus" / "stable regions". I'm in the process of producing a better write-up to explain my ideas, but essentially I have the impression that NNs map discrete regions of activation space to specific activations later on in the model (squashing?), and I wonder whether we can make use of these regions.

It turns out that DNNs are remarkably interpretable.
StefanHex · 1mo

Hmm, I have a couple of questions. Quoting from the abstract:

In this paper we argue that ReLU networks learn an implicit linear model we can actually tap into.

What do you mean by linear model? In particular, do you mean "Actually DNNs are linear"? Because that's importantly not true; linear models cannot do the things we care about.

We describe that alleged model formally and show that we can approximately pull its decision boundary back to the input space with certain simple modification to the backward pass. The resulting gradients (called excitation pullbacks) reveal high-resolution input- and target-specific features of remarkable perceptual alignment [...]

Figure 1 is weaker evidence than one might initially think. Various saliency map techniques fell for interpretability illusions in the past; see the canonical critique here.

That said, I haven't read your full paper. Do you still think your method is working after considering the saliency map illusions?

Also, I want to reward people for thinking of new interpretability ideas and talking about them, so thank you for doing so!

Posts

Transformers Don't Need LayerNorm at Inference Time: Implications for Interpretability (2mo)
Trusted monitoring, but with deception probes. (2mo)
Compressed Computation is (probably) not Computation in Superposition (3mo)
Try training token-level probes (5mo)
Proof-of-Concept Debugger for a Small LLM (6mo)
Detecting Strategic Deception Using Linear Probes (7mo)
SAE regularization produces more interpretable models (8mo)
Attribution-based parameter decomposition (8mo)
Analyzing how SAE features evolve across a forward pass (10mo)
Characterizing stable regions in the residual stream of LLMs (1y)