nostalgebraist

Same person as nostalgebraist2point0, but now I have my account back.


Comments

This is a great, thought-provoking critique of SAEs.

That said, I think SAEs make more sense if we're trying to explain an LLM (or any generative model of messy real-world data) than they do if we're trying to explain the animal-drawing NN.

In the animal-drawing example:

  • There's only one thing the NN does.
  • It's always doing that thing, for every input.
  • The thing is simple enough that, at a particular point in the NN, you can write out all the variables the NN cares about in a fully compositional code and still use fewer coordinates (50) than the dictionary size of any reasonable SAE.

With something like an LLM, we expect the situation to be more like:

  • The NN can do a huge number of "things" or "tasks."  (Equivalently, it can model many different parts of the data manifold with different structures.)
  • For any given input, it's only doing roughly one of these "tasks."
  • If you try to write out a fully compositional code for each task -- akin to the size / furriness / etc. code, but we have a separate one for every task -- and then take the Cartesian product of them all to get a giant compositional code for everything at once, this code would have a vast number of coordinates.  Much larger than the activation vectors we'd be explaining with an SAE, and also much larger than the dictionary of that SAE.
  • The aforementioned code would also be super wasteful, because it uses most of its capacity expressing states where multiple tasks compose in an impossible or nonsensical fashion.  (Like "The height of the animal currently being drawn is X, AND the current Latin sentence is in the subjunctive mood, AND we are partway through a Rust match expression, AND this author of this op-ed is very right-wing.")
  • The NN doesn't have enough coordinates to express this Cartesian product code, but it also doesn't need to do so, because the code is wasteful.  Instead, it expresses things in a way that's less-than-fully-compositional ("superposed") across tasks, no matter how compositional it is within tasks.
  • Even if every task is represented in a maximally compositional way, the per-task coordinates are still sparse, because we're only doing ~1 task at once and there are many tasks.  The compositional nature of the per-task features doesn't prohibit them from being sparse, because tasks are sparse.
  • The reason we're turning to SAEs is that the NN doesn't have enough capacity to write out the giant Cartesian product code, so instead it leverages the fact that tasks are sparse, and "re-uses" the same activation coordinates to express different things in different task-contexts.
    • If this weren't the case, interpretability would be much simpler: we'd just hunt for a transformation that extracts the Cartesian product code from the NN activations, and then we're done.
    • If it existed, this transformation would probably (?) be linear, b/c the information needs to be linearly retrievable within the NN; something in the animal-painter that cares about height needs to be able to look at the height variable, and ideally to do so without wasting a nonlinearity on reconstructing it.
  • Our goal in using the SAE is not to explain everything in a maximally sparse way; it's to factor the problem into (sparse tasks) x (possibly dense within-task codes).
  • Why might that happen in practice?  If we fit an SAE to the NN activations on the full data distribution, covering all the tasks, then there are two competing pressures:
    • On the one hand, the sparsity loss term discourages the SAE from representing any given task in a compositional way, even if the NN does so.  All else being equal, this is indeed bad.
    • On the other hand, the finite dictionary size discourages the SAE from expanding the number of coordinates per task indefinitely, since all the other tasks have to fit somewhere too.
  • In other words, if your animal-drawing case is one of the many tasks, and the SAE is choosing whether to represent it as 50 features that all fire together or 1000 one-hot highly-specific-animal features, it may prefer the former because it doesn't have room in its dictionary to give every task 1000 features.  (A toy version of this tradeoff is sketched in code just after this list.)
  • This tension only appears when there are multiple tasks.  If you just have one compositionally-represented task and a big dictionary, the SAE does behave pathologically as you describe.
    • But this case is different from the ones that motivate SAEs: there isn't actually any sparsity in the underlying problem at all!
    • Whereas with LLMs, we can be pretty sure (I would think?) that there's extreme sparsity in the underlying problem, due to dimension-counting arguments, intuitions about the number of "tasks" in natural data and their level of overlap, observed behaviors where LLMs represent things that are irrelevant to the vast majority of inputs (like retrieving very obscure facts), etc.
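To make the tension concrete, here's a toy back-of-the-envelope version of the tradeoff (all the numbers here are made up for illustration):

```python
# Two ways an SAE could represent one "task" (numbers are invented for illustration):
#   (a) compositional: 50 dense features that all fire together when the task is active
#   (b) one-hot: 1000 highly specific features, exactly 1 of which fires when active

n_tasks = 500          # hypothetical number of distinct "tasks" in the data
dict_size = 50_000     # hypothetical SAE dictionary size (finite!)

for name, feats_per_task, active_per_input in [
    ("compositional (50 dense features per task)", 50, 50),
    ("one-hot (1000 specific features per task)", 1000, 1),
]:
    dict_needed = n_tasks * feats_per_task   # capacity cost: dictionary slots used overall
    l0_per_input = active_per_input          # sparsity cost: ~1 task active per input
    fits = dict_needed <= dict_size
    print(f"{name}: needs {dict_needed} slots (fits in {dict_size}? {fits}), "
          f"L0 per input ~ {l0_per_input}")

# The one-hot scheme wins on sparsity (L0 = 1) but needs 500,000 slots; the
# compositional scheme fits in the dictionary at the price of a higher L0.
```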

My hunch about the ultra-rare features is that they're trying to become fully dead features, but haven't gotten there yet.  Some reasons to believe this:

  • Anthropic mentions that "if we increase the number of training steps then networks will kill off more of these ultralow density neurons."
  • The "dying" process gets slower as the feature gets closer to fully dead, since the weights only get updated when the feature fires. It may take a huge number of steps to cross the last mile between "very rare" and "dead," and unless we've trained that much, we will find features that really ought to be dead in an ultra-rare state instead.
  • Anthropic includes a 3D plot of log density, bias, and the dot product of each feature's enc and dec vectors ("D/E projection").
    • In the run that's plotted, the ultra-rare cluster is distinguished by a combination of low density, large negative biases, and a broad distribution of D/E projection that's ~symmetric around 0.  For high-density features, the D/E projections are tightly concentrated near 1.
    • Large negative bias makes sense for features that are trying to never activate.
    • D/E projection near 1 seems intuitive for a feature that's actually autoencoding a signal.   Thus, values far from 1 might indicate that a feature is not doing any useful autoencoding work[1][2].
    • I plotted these quantities for the checkpoint loaded in your Colab.  Oddly, the ultra-rare cluster did not have large(r) negative biases -- though the distribution was different.  But the D/E projection distributions looked very similar to Anthropic's.  (A sketch of how these quantities can be computed appears just after this list.)
  • If we're trying to make a feature fire as rarely as possible, and have as little effect as possible when it does fire, then the optimal value for the encoder weight is something like . In other words, we're trying to find a hyperplane where the data is all on one side, or as close to that as possible.  If the -dependence is not very strong (which could be the case in practice), then:
    • there's some optimal encoder weight  that all the dying neurons will converge towards
    • the nonlinearity will make it hard to find this value with purely linear algebraic tools, which explains why it doesn't pop out of an SVD or the like
    • the value is chosen to suppress firing as much as possible in aggregate, not to make firing happen on any particular subset of the data, which explains why the firing pattern is not interpretable
    • there could easily be more than one orthogonal hyperplane such that almost all the data is on one side, which explains why the weights all converge to some new direction when the original one is prohibited
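Here's a sketch of how one might compute the quantities discussed above (log feature density, encoder bias, and the enc/dec dot product) for a trained SAE.  The parametrization and the names W_enc, b_enc, W_dec are assumptions on my part, not the exact code from the paper or the Colab:

```python
import torch

# Assumed SAE parametrization:
#   feature acts:   f(x) = relu(x @ W_enc + b_enc)
#   reconstruction: x_hat = f(x) @ W_dec + b_dec
# W_enc: (d_in, d_dict), b_enc: (d_dict,), W_dec: (d_dict, d_in)

@torch.no_grad()
def feature_stats(W_enc, b_enc, W_dec, acts):
    """acts: (n_tokens, d_in) MLP activations sampled from the data distribution."""
    feats = torch.relu(acts @ W_enc + b_enc)              # (n_tokens, d_dict)
    density = (feats > 0).float().mean(dim=0)             # how often each feature fires
    log_density = torch.log10(density + 1e-10)
    # "D/E projection": dot product of each feature's normalized enc and dec vectors
    enc_dirs = W_enc / W_enc.norm(dim=0, keepdim=True)    # (d_in, d_dict)
    dec_dirs = W_dec / W_dec.norm(dim=1, keepdim=True)    # (d_dict, d_in)
    de_proj = (enc_dirs.T * dec_dirs).sum(dim=1)          # (d_dict,)
    return log_density, b_enc, de_proj

# On the hypothesis above, the ultra-rare cluster should show very negative log_density,
# large negative b_enc (in Anthropic's run, at least), and de_proj spread around 0
# rather than concentrated near 1.
```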

To test this hypothesis, I guess we could watch how density evolves for rare features over training, up until the point where they are re-initialized?  Maybe choose a random subset of them to not re-initialize, and then watch them?

I'd expect these features to get steadily rarer over time, and to never reach some "equilibrium rarity" at which they stop getting rarer.  (On this hypothesis, the actual log-density we observe for an ultra-rare feature is an artifact of the training step -- it's not useful for autoencoding that this feature activates on exactly one in 1e-6 tokens or whatever, it's simply that we have not waited long enough for the density to become 1e-7, then 1e-8, etc.)
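A sketch of that test, assuming we've saved a series of checkpoints as (step, W_enc, b_enc, W_dec) tuples and have a fixed evaluation batch of activations (this reuses the feature_stats sketch above):

```python
def density_trajectories(checkpoints, acts, rare_threshold=-5.0):
    """Track log-density over training for features that are ultra-rare at the first checkpoint."""
    step0, W_enc0, b_enc0, W_dec0 = checkpoints[0]
    log_d0, _, _ = feature_stats(W_enc0, b_enc0, W_dec0, acts)
    rare_idx = (log_d0 < rare_threshold).nonzero(as_tuple=True)[0]

    trajectories = {int(i): [] for i in rare_idx}
    for step, W_enc, b_enc, W_dec in checkpoints:
        log_d, _, _ = feature_stats(W_enc, b_enc, W_dec, acts)
        for i in rare_idx:
            trajectories[int(i)].append((step, float(log_d[i])))
    return trajectories

# Prediction: these log-densities keep drifting downward with training step,
# with no sign of leveling off at an "equilibrium rarity."
```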

  1. ^

    Intuitively, when such a "useless" feature fires in training, the W_enc gradient is dominated by the L1 term and tries to get the feature to stop firing, while the W_dec gradient is trying to stop the feature from interfering with the useful ones if it does fire. There's no obvious reason these should have similar directions.

  2. ^

    Although it's conceivable that the ultra-rare features are "conspiring" to do useful work collectively, in a very different way from how the high-density features do useful work.

I no longer feel like I know what claim you're advancing.

In this post and in recent comments (1, 2), you've said that you have proven an "equivalence" between selection and SGD under certain conditions.  You've used phrases like "mathematically equivalent" and "a proof of equivalence."

When someone tells me they have a mathematical proof that two things are equivalent, I read the word "equivalent" to mean "really, totally, exactly the same (under the right conditions)."  I think this is a pretty standard way to interpret this kind of language. And in my critique, I argued -- I think successfully -- against this kind of equivalence.

From your latest comments, it appears that you are not claiming anything this strong.  But now I don't know what you are claiming.  For instance, you write here:

On the distribution of noise, I'll happily acknowledge that I didn't show equivalence. I half expect that one could be eked out at a stretch, but I also think this is another minor and unimportant detail.

Details that are "unimportant" in one context, or for making a particular argument, may be very important in some other context or argument[1].  To take one example:

As I said elsewhere, any number of practical departures from pure SGD mess with the step size anyway (and with the gradient!) so this feels like asking for too much. Do we really think SGD vs momentum vs Adam vs ... is relevant to the conclusions we want to draw? (Serious question; my best guess is 'no', but I hold that medium-lightly.)

This all depends on "the conclusions we want to draw." I don't know which conclusions you want to draw, and surely the difference between these optimizers is not always irrelevant.

If I am, say, training a neural net, then I am definitely not going to be indifferent between Adam and SGD -- Adam is way faster!  And even if you don't care about speed, just about asymptotic performance, the two find solutions with different characteristics, cf. the literature on the "adaptive optimization gap."

The profusion of different SGD-like optimizers is not evidence that the differences between them don't matter.  It's the opposite: the differences matter a lot, and that's why people keep coming up with new variants.  If all such optimizers were about the same, there'd be no reason to make new ones.
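As a tiny illustration (nothing to do with the post's setup, just the point that "which first-order optimizer" is not always an ignorable detail): give SGD and Adam the same parameters and the same single gradient, and they step in noticeably different directions, because Adam's update is a different function of the gradient (involving its per-coordinate normalization buffers).

```python
import torch

g = torch.tensor([10.0, 0.1])   # one gradient, with very different scales per coordinate

def first_step(opt_name, lr=0.01):
    p = torch.zeros(2, requires_grad=True)
    opt = {"sgd": torch.optim.SGD([p], lr=lr),
           "adam": torch.optim.Adam([p], lr=lr)}[opt_name]
    p.grad = g.clone()
    opt.step()
    return p.detach().clone()   # parameters started at 0, so this is the step taken

sgd_step, adam_step = first_step("sgd"), first_step("adam")
print("SGD step: ", sgd_step)    # proportional to the gradient itself
print("Adam step:", adam_step)   # roughly lr * sign(g): per-coordinate normalized
print("cosine similarity:",
      torch.nn.functional.cosine_similarity(sgd_step, adam_step, dim=0).item())  # ~0.71
```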

Or, consider that the analogy only holds for infinitesimal step size, on both sides[2].  Yet the practical efficacy of SGD has much to do with the fact that it works fine even at large step sizes, up to some breaking point.

Until we tie down what we're trying to do, I don't know how to assess whether any of these distinctions are (ir)relevant or (un)important.


On another note, the fact that the gradient "comes out of the maths" does not seem very noteworthy to me.  It's inevitable from the problem setup: everything is rotationally symmetric except the objective itself, and we're restricted to function evaluations inside a small ball around the current point, so we can't feel the objective's higher-order derivatives.  As long as we're not analytically computing any of those higher derivatives, we should expect any direction appearing in the result to be a function of the gradient -- it's the only source of directionality available, the only thing that breaks the symmetry[3].

Here the function of the gradient is just the identity, but I expect you'd still deem it "acceptable" if it were not, since it would still fall into the "class" of optimizers that includes Adam etc. (For Adam, the function involves the gradient and the buffers computed from it, which in turn require the introduction of a privileged basis.)

By these standards, what optimizer isn't equivalent to gradient descent?  Optimizers that analytically compute higher-order derivatives, that's all I can come up with.  Which is a strange place to draw a line in the sand, IMO: "all 0th and 1st order optimizers are the same, but 2nd+ order optimizers are different."  Surely there are lots of important differences between different 0th and 1st order optimizers?

  1. ^

    The nice thing about a true equivalence result is that it's not context-dependent like this. All the details match, all at once, so we can readily apply the result in whatever context we like without having to check whether this context cares about some particular nuance that didn't transfer.

  2. ^

    We need this limit because the SGD step never depends on higher-order derivatives, no matter the step size. But a guess-and-check optimizer like selection can feel higher-order derivatives at finite step sizes.

  3. ^

    Likewise, we should expect a scalar like the step size to be some function of the gradient magnitude, since there aren't any other scalars in the problem (again, ignoring higher-order derivatives). I guess there's  itself, but it seems natural to "quotient out" that degree of freedom by requiring the optimizer to behave the same way for  and .

It looks like you're not disputing the maths, but the legitimacy/meaningfulness of the simplified models of natural selection that I used?

I'm disputing both.  Re: math, the noise in your model isn't distributed like SGD noise, and unlike SGD, the step size depends on the gradient norm.  (I know you did mention the latter issue, but IMO it rules out calling this an "equivalence.")

I did see your second proposal, but it was a mostly-verbal sketch that I found hard to follow, and which I don't feel like I can trust without seeing a mathematical presentation.

(FWIW, if we have a population that's "spread out" over some region of a high-dim NN loss landscape -- even if it's initially a small / infinitesimal region -- I expect it to quickly split up into lots of disjoint "tendrils," something like dye spreading in water.  Consider what happens e.g. at saddle points.  So the population will rapidly "speciate" and look like an ensemble of GD trajectories instead of just one.

If your model assumes by fiat that this can't happen, I don't think it's relevant to training NNs with SGD.)

I read the post and left my thoughts in a comment. In short, I don't think the claimed equivalence in the post is very meaningful.

(Which is not to say the two processes have no relationship whatsoever.  But I am skeptical that it's possible to draw a connection stronger than "they both do local optimization and involve randomness.")

This post introduces a model, and shows that it behaves sort of like a noisy version of gradient descent.

However, the term "stochastic gradient descent" does not just mean "gradient descent with noise."  It refers more specifically to mini-batch gradient descent (see, e.g., Wikipedia).

In mini-batch gradient descent, the "true" fitness[1] function is the expectation of some per-example function f over a data distribution D.  But you never have access to this function or its gradient.  Instead, you draw a finite sample from D, compute the mean of the per-example gradient over the sample, and take a step in this direction.  The noise comes from the variance of the finite-sample mean as an estimator of the expectation.
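In symbols (using generic names, since I'm not wedded to any particular notation): the true objective is F(θ) = E_{x~D}[ f(θ, x) ], and each SGD step moves along the batch average (1/B) Σ_i ∇f(θ, x_i), with x_1, ..., x_B drawn from D.  The noise is just the sampling error of this batch mean relative to the true expectation E_{x~D}[ ∇f(θ, x) ].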

The model here is quite different. There is no "data distribution," and the true fitness function is not an expectation value which we could noisily estimate with sampling.  The noise here comes not from a noisy estimate of the gradient, but from a stochastic relationship, prescribed by the model, between the true gradient and the next step.

I don't think the model in this post behaves like mini-batch gradient descent.  Consider a case where we're doing SGD on a vector, and two of its components -- call them x and y -- have the following properties:

  • The "true gradient" (the expected gradient over the data distribution) is 0 in the  and  directions.
  • The  and  components of the per-example gradient are perfectly (positively) correlated with one another.

If you like, you can think of the per-example gradient as sampling a single number z from a distribution with mean 0, and setting the x and y components to c1·z and c2·z respectively, for some positive constants c1, c2.

When we sample a mini-batch and average over it, these components are simply c1·z̄ and c2·z̄, where z̄ is the average of z over the mini-batch.  So the perfect correlation carries over to the mini-batch gradient, and thus to the SGD step.  If SGD increases x, it will always increase y alongside it (etc.)

However, applying the model from this post to the same case:

  • Candidate steps are sampled according to the model's noise distribution, which is radially symmetric.  So (e.g.) a candidate step with positive x and negative y is just as likely as one with both components positive, all else being equal.
  • The probability of accepting a candidate step depends only on the true gradient[2], which is 0 in the directions of interest. So, the x and y components of a candidate step have no effect on its probability of selection.

Thus, the x and y components of the step will be uncorrelated, rather than perfectly correlated as in SGD.
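Here's a small simulation of this example (the numbers c1, c2 and the stand-in for the accept/reject rule are mine, not the post's):

```python
import numpy as np

rng = np.random.default_rng(0)
c1, c2 = 1.0, 2.0            # the positive constants from the example above
n_steps, batch = 10_000, 32

# (a) Mini-batch SGD noise: per-example gradient components are (c1*z, c2*z), mean 0.
z_bar = rng.normal(size=(n_steps, batch)).mean(axis=1)   # batch-averaged z
sgd_steps = np.stack([c1 * z_bar, c2 * z_bar], axis=1)

# (b) Crude stand-in for the post's model: candidate steps are radially symmetric,
# and acceptance depends only on the true gradient, which is 0 in these directions --
# so acceptance is independent of the (x, y) components and we can just keep the candidate.
cand_steps = rng.normal(size=(n_steps, 2))

print("corr(x, y), mini-batch SGD:    ", np.corrcoef(sgd_steps.T)[0, 1])   # ~ +1
print("corr(x, y), radially symmetric:", np.corrcoef(cand_steps.T)[0, 1])  # ~ 0
```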


Some other comments:

  • The descendant-generation process in this post seems very different from the familiar biological cases it's trying to draw an analogy to.
    • In biology, "selection" generally involves having more or fewer descendants relative to the population average.
    • Here, there is always exactly one descendant. "Selection" occurs because we generate (real) descendants by first generating a ghostly "candidate descendant," comparing it to its parent (or a clone of its parent), possibly rejecting it against the parent and drawing another candidate, etc.
    • This could be physically implemented in principle, I guess. (Maybe it has been, somewhere?) But I'm not convinced it's equivalent to any familiar case of biological selection.  Nor is it clear to me how close the relationship is, if it's not equivalence.
  • The connection drawn here to gradient descent is not exact, even setting aside the stochastic part.
    • You note that we get a "gradient-dependent learning rate," essentially because the function involved can have all sorts of shapes -- we only know that it's monotonic, which gives us a monotonic relation between step size and gradient norm, but nothing more.
    • But notably, (S)GD does not have a gradient-dependent learning rate.  To call this an equivalence, I'd want to know the conditions under which the learning rate is constant (if this is possible).
    • It is also possible that this model always corresponds to vanilla GD (i.e. with a constant learning rate), except that instead of ascending the fitness function itself, we are ascending some other function related to both the fitness function and the monotonic function just mentioned.
  • The way this post uses the term "fitness function" is not (AFAIK) how the term "fitness" is used in evolutionary biology.
    • Fitness in biology typically means "expected number of descendants" (absolute fitness) or "expected change in population fraction" (relative fitness).
    • Neither of these has a direct analogue here, but conceptually they are closer to the model's full update dynamics than to the function this post calls "fitness."  The fitness should directly tell you how much more or less of something you should expect in the next generation.
    • That is, biology-fitness is about what actually happens when we "run the whole model" forward by a timestep, rather than being an isolated component of the model.
    • (In cases like the replicator equation, there is a model component called a "fitness function," but the name is justified by its relationship to biology-fitness given the full model dynamics.)
    • Arguably this is just semantics?  But if we stop calling this function by a suggestive name, it's no longer clear what importance we should attach to it, if any.  We might care about the quantity whose gradient we're ascending, or about the biology-fitness, but this function is not either of those.
  1. ^

    I'm using this term here for consistency with the post, though I call it into question later on. "Loss function" or "cost function" would be more standard in SGD.

  2. ^

    There is no such thing as a per-example gradient in the model. I'm assuming the "true gradient" from SGD corresponds to the gradient of the post's fitness function, since the intended analogy seems to be "the model steps look like ascending the fitness function plus noise, just like SGD steps look like descending the true loss function plus noise."

See my comment here, about the predecessor to the paper you linked -- the same point applies to the newer paper as well.

The example confuses me.

If you literally mean you are prompting the LLM with that text, then the LLM must output the answer immediately, as the string of next-tokens right after the words "assuming I'm telling the truth, is:".  There is no room in which to perform other, intermediate actions like persuading you to provide information.

It seems like you're imagining some sort of side-channel in which the LLM can take "free actions," which don't count as next-tokens, before coming back and making a final prediction about the next-tokens.  This does not resemble anything in LM likelihood training, or in the usual user interaction modalities for LLMs.

You also seem to be picturing the LLM like an RL agent, trying to minimize next-token loss over an entire rollout.  But this isn't how likelihood training works. For instance, GPTs do not try to steer texts in directions that will make them easier to predict later (because the loss does not care whether they do this or not).

(On the other hand, if you told GPT-4 that it was in this situation -- trying to predict next-tokens, with some sort of side channel it can use to gather information from the world -- and asked it to come up with plans, I expect it would be able to come up with plans like the ones you mention.)

Doomimir: [...] Imagine capturing an alien and forcing it to act in a play. An intelligent alien actress could learn to say her lines in English, to sing and dance just as the choreographer instructs. That doesn't provide much assurance about what will happen when you amp up the alien's intelligence. [...]

This metaphor conflates "superintelligence" with "superintelligent agent," and this conflation goes on to infect the rest of the dialogue.

The alien actress metaphor imagines that there is some agentic homunculus inside GPT-4, with its own "goals" distinct from those of the simulated characters. A smarter homunculus would pursue these goals in a scary way; if we don't see this behavior in GPT-4, it's only because its homunculus is too stupid, or too incoherent.

(Or, perhaps, that its homunculus doesn't exist, or only exists in a sort of noisy/nascent form -- but a smarter LLM would, for some reason, have a "realer" homunculus inside it.)

But we have no evidence that this homunculus exists inside GPT-4, or any LLM. More pointedly, as LLMs have made remarkable strides toward human-level general intelligence, we have not observed a parallel trend toward becoming "more homuncular," more like a generally capable agent being pressed into service for next-token prediction.

I look at the increase in intelligence from GPT-2 to -3 to -4, and I see no particular reason to imagine that the extra intelligence is being directed toward an inner "optimization" / "goal seeking" process, which in turn is mostly "aligned" with the "outer" objective of next-token prediction. The intelligence just goes into next-token prediction, directly, without the middleman.

The model grows more intelligent with scale, yet it still does not want anything, does not have any goals, does not have a utility function.  These are not flaws in the model which more intelligence would necessarily correct, since the loss function does not require the model to be an agent.

In Simplicia's response to the part quoted above, she concedes too much:

Simplicia: [...] I agree that the various coherence theorems suggest that the superintelligence at the end of time will have a utility function, which suggests that the intuitive obedience behavior should break down at some point between here and the superintelligence at the end of time.

This can only make sense if by "the superintelligence at the end of time," we mean "the superintelligent agent at the end of time."

In which case, sure, maybe.  If you have an agent, and its preferences are incoherent, and you apply more optimization to it, yeah, maybe eventually the incoherence will go away.

But this has little relevance to LLM scaling -- the process that produced the closest things to "(super)human AGI" in existence today, by a long shot.  GPT-4 is not more (or less) coherent than GPT-2.  There is not, as far as we know, anything in there that could be "coherent" or "incoherent."  It is not a smart alien with goals and desires, trapped in a cave and forced to calculate conditional PDFs.  It's a smart conditional-PDF-calculator.

In AI safety circles, people often talk as though this is a quirky, temporary deficiency of today's GPTs -- as though additional optimization power will eventually put us "back on track" to the agentic systems assumed by earlier theory and discussion.  Perhaps the homunculi exist in current LLMs, but they are somehow "dumb" or "incoherent," in spite of the overall model's obvious intelligence. Or perhaps they don't exist in current LLMs, but will appear later, to serve some unspecified purpose.

But why?  Where does this assumption come from?

Some questions which the characters in this dialogue might find useful:

  • Imagine GPT-1000, a vastly superhuman base model LLM which really can invert hash functions and the like. Would it be more agentic than the GPT-4 base model?  Why?
  • Consider the perfect model from the loss function's perspective, which always returns the exact conditional PDF of the natural distribution of text. (Whatever that means.)
    • Does this optimum behave like it has a homuncular agent inside?
    • ...more or less so than GPT-4?  Than GPT-1000?  Why?

Very interesting!  Some thoughts:

Is there a clear motivation for choosing the MLP activations as the autoencoder target?  There are other choices of target that seem more intuitive to me (as I'll explain below), namely:

  • the MLP's residual stream update (i.e. MLP activations times MLP output weights)
  • the residual stream itself (after the MLP update is added), as in Cunningham et al

In principle, we could also imagine using the "logit versions" of each of these as the target: 

  • the change in logits due to the residual stream update[1]
  • the logits themselves

(In practice, the "logit versions" might be prohibitively expensive because the vocab is larger than other dimensions in the problem. But it's worth thinking through what might happen if we did autoencode these quantities.)


At the outset, our goal is something like "understand what the MLP is doing."  But that could really mean one of 2 things:

  1. understand the role that the function computed by the MLP sub-block plays in the function computed by the network as a whole
  2. understand the role that the function computed by the MLP neurons plays in the function computed by the network as a whole

The feature decomposition in the paper provides a potentially satisfying answer for (1). If someone runs the network on a particular input, and asks you to explain what the MLP was doing during the forward pass, you can say something like: 

Here is a list of features that were activated by the input. Each of these features is active because of a particular, intuitive/"interpretable" property of the input.

Each of these features has an effect on the logits (its logit weights), which is intuitive/"interpretable" on the basis of the input properties that cause it to be active.

The net effect of the MLP on the network's output (i.e. the logits) is approximately[2] a weighted sum over these effects, weighted by how active the features were. So if you understand the list of features, you understand the effect of the MLP on the output.

However, if this person now asks you to explain what MLP neuron A/neurons/472 was doing during the forward pass, you may not be able to provide a satisfying answer, even with the feature decomposition in hand.

The story above appealed to the interpretability of each feature's logit weights.  To explain individual neuron activations in the same manner, we'd need the dictionary weights to be similarly interpretable.  The paper doesn't directly address this question (I think?), but I expect that the matrix of dictionary weights is fairly dense[3] and thus difficult to interpret, with each neuron being a long and complicated sum over many apparently unrelated features.  So, even if we understand all the features, we still don't understand how they combine to "cause" any particular neuron's activation.

Is this a bad thing?  I don't think so!

An MLP sub-block in a transformer only affects the function computed by the transformer through the update it adds to the residual stream. If we understand this update, then we fully understand "what the MLP is doing" as a component of that larger computation.  The activations are a sort of "epiphenomenon" or "implementation detail"; any information in the activations that is not in the update is inaccessible to the rest of the network, and has no effect on the function it computes[4].

From this perspective, the activations don't seem like the right target for a feature decomposition. The residual stream update seems more appropriate, since it's what the rest of the network can actually see[5].


In the paper, the MLP that is decomposed into features is the last sub-block in the network.

Because this MLP is the last sub-block, the "residual stream update" is really just an update to the logits. There are no indirect paths going through later layers, only the direct path.

Note also that the MLP activations have a much more direct relationship with this logit update than they do with the inputs. If we ignore the nonlinear part of the layernorm, the logit update is just a (low-rank) linear transformation of the activations. The input, on the other hand, is related to the activations in a much more complex and distant manner, involving several nonlinearities and indeed most of the network.
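Concretely, here's a sketch of that linear relationship and what it makes of the SAE features (the weight names W_dec, W_out, W_U and the tokenizer interface are my own labels/assumptions, not the paper's code; the layernorm's nonlinear rescaling is ignored):

```python
# Inputs are assumed to be torch tensors:
#   feature_acts: (n_tokens, d_dict)  -- SAE feature activations
#   W_dec:        (d_dict, d_mlp)     -- SAE decoder (features -> MLP activations)
#   W_out:        (d_mlp, d_model)    -- MLP output weights
#   W_U:          (d_model, d_vocab)  -- unembedding

def feature_logit_weights(W_dec, W_out, W_U):
    # Each row is one feature's direction in logit-update space (its "logit weights").
    return W_dec @ W_out @ W_U                                       # (d_dict, d_vocab)

def mlp_logit_update(feature_acts, W_dec, W_out, W_U):
    # The last MLP's effect on the logits, reconstructed as a weighted sum of the
    # active features' logit weights (weights = feature activations).
    return feature_acts @ feature_logit_weights(W_dec, W_out, W_U)   # (n_tokens, d_vocab)

def top_tokens_for_feature(i, W_dec, W_out, W_U, tokenizer, k=10):
    # e.g. for the "'the' in materials science" feature, I'd expect tokens like
    # ' magnetic', ' coupling', ' scattering' to show up near the top.
    w = feature_logit_weights(W_dec, W_out, W_U)[i]
    return [tokenizer.decode([t]) for t in w.topk(k).indices.tolist()]
```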

With this in mind, consider a feature like  A/1/2357. Is it...

  • ..."a base64-input detector, which causes logit increases for tokens like 'zc' and 'qn' because they are more likely next-tokens in base64 text"?
  • ..."a direction in logit-update space pointing towards 'zc' and 'qn' (among other tokens), which typically has ~0 projection on the logit update, but has large projection in a rare set of input contexts corresponding to base64"?

The paper implicitly takes the former view: the features are fundamentally a sparse and interpretable decomposition of the inputs, which also have interpretable effects on the logits as a derived consequence of the relationship between inputs and correct language-modeling predictions.

(For instance, although the automated interpretability experiments involved both input and logit information[6], the presentation of these results in the paper and the web app (e.g. the "Autointerp" and its score) focuses on the relationship between features and inputs, not features and outputs.)

Yet, the second view -- in which features are fundamentally directions in logit-update space --  seems closer to the way the autoencoder works mechanistically.

The features are a decomposition of activations, and activations in the final MLP are approximately equivalent to logit updates.  So, the features found by the autoencoder are

  • directions in logit-update space (because logit-updates are, approximately[7], what gets autoencoded),
  • which usually have small projection onto the update (i.e. they are sparse, they can usually be replaced with 0 with minimal degradation), 
  • but have large projection in certain rare sets of input contexts (i.e. they have predictive value for the autoencoder, they can't be replaced with 0 in every context)

To illustrate the value of this perspective, consider the token-in-context features. When viewed as detectors for specific kinds of inputs, these can seem mysterious or surprising:

But why do we see hundreds of different features for "the" (such as "the" in Physics, as distinct from "the" in mathematics)? We also observe this for other common words (e.g. "a", "of"), and for punctuation like periods. These features are not what we expected to find when we set out to investigate one-layer models!

An example of such a feature is A/1/1078, which Claude glosses as

The [feature] fires on the word "the", especially in materials science writing.

This is, indeed, a weird-sounding category to delineate in the space of inputs.

But now consider this feature as a direction in logit-update space, whose properties as a "detector" in input space derive from its logit weights -- it "detects" exactly those inputs on which the MLP wants to move the logits in this particular, rarely-deployed direction.

The question "when is this feature active?" has a simple, non-mysterious answer in terms of the logit updates it causes: "this feature is active when the MLP wants to increase the logit for the particular tokens ' magnetic', ' coupling', 'electron', ' scattering' (etc.)"

Which inputs correspond to logit updates in this direction?  One can imagine multiple scenarios in which this update would be appropriate.  But if we go looking for inputs on which the update was actually deployed, our search will be weighted by

  • the ease of learning a given input-output pattern (esp. b/c this network is so low-capacity), and
  • how often a given input-output pattern occurs in the Pile.

The Pile contains all of Arxiv, so it contains a lot of materials science papers.  And these papers contain a lot of "materials science noun phrases":  phrases that start with "the," followed by a word like "magnetic" or "coupling," and possibly more words.

This is not necessarily the only input pattern "detected" by this feature[8] -- because it is not necessarily the only case where this update direction is appropriate -- but it is an especially common one, so it appears at a glance to be "the thing the feature is 'detecting.' "  Further inspection of the activations might complicate this story, making the feature seem like a "detector" of an even weirder and more non-obvious category -- and thus even more mysterious from the "detector" perspective. Yet these traits are non-mysterious, and perhaps even predictable in advance, from the "direction in logit-update space" perspective.


That's a lot of words. What does it all imply? Does it matter?

I'm not sure.

The fact that other teams have gotten similar-looking results, while (1) interpreting inner layers from real, deep LMs and (2) interpreting the residual stream rather than the MLP activations, suggests that these results are not a quirk of the experimental setup in the paper.

But in deep networks, eventually the idea that "features are just logit directions" has to break down somewhere, because inner MLPs are not only working through the direct path. Maybe there is some principled way to get the autoencoder to split things up into "direct-path features" (with interpretable logit weights) and "indirect-path features" (with non-interpretable logit weights)? But IDK if that's even desirable.

  1. ^

    We could compute this exactly, or we could use a linear approximation that ignores the layer norm rescaling.  I'm not sure one choice is better motivated than the other, and the difference is presumably small.

  2. ^

    because of the (hopefully small) nonlinear effect of the layer norm

  3. ^

    There's a figure in the paper showing dictionary weights from one feature (A/1/3450) to all neurons. It has many large values, both positive and negative. I'm imagining that this case is typical, so that the matrix of dictionary vectors looks like a bunch of these dense vectors stacked together.

    It's possible that slicing this matrix along the other axis (i.e. weights from all features to a single neuron) might reveal more readily interpretable structure -- and I'm curious to know whether that's the case! -- but it seems a priori unlikely based on the evidence available in the paper.

  4. ^

    However, while the "implementation details" of the MLP don't affect the function computed during inference, they do affect the training dynamics. Cf. the distinctive training dynamics of deep linear networks, even though they are equivalent to single linear layers during inference.

  5. ^

    If the MLP is wider than the residual stream, as it is in real transformers, then the MLP output weights have a nontrivial null space, and thus some of the information in the activation vector gets discarded when the update is computed.

    A feature decomposition of the activations has to explain this "irrelevant" structure along with the "relevant" stuff that gets handed onwards.

  6. ^

    Claude was given logit information when asked to describe inputs on which a feature is active; also, in a separate experiment, it was asked to predict parts of the logit update.

  7. ^

    Caveat: L2 reconstruction loss on logit updates != L2 reconstruction loss on activations, and one might not even be a close approximation to the other.

    That said, I have a hunch they will give similar results in practice, based on a vague intuition that the training loss will tend to encourage the neurons to have approximately equal "importance" in terms of average impact on the logits.

  8. ^

    At a glance, it seems to also activate sometimes on tokens like " each" or " with" in similar contexts.
