My basic take is that recent work (mostly looking at sample efficiency and random generalization[1] properties) doesn't seem very useful for reducing x-risk from misalignment (but seems net positive w.r.t. x-risk and probably good practice for safety research). But some as-yet-unexplored uses of top-down interpretability could be decently good for reducing misalignment x-risk.
Here's a more detailed explanation of my views:
I'm not sure whether Alex Turner's[2] recent motivation for working on activation vectors is downstream of trying to reduce harms due to (unintended) misalignment of AIs; I think he's skeptical of massive harm due to traditional misalignment concerns.
My takes were originally stated in a shortform I wrote a little while ago discussing my thoughts on activation vectors: short form.
By "random generalization", I mean analyzing generalization across some distribution shift which isn't picked for being particularly analogous to some problematic future case and are instead is just an arbitrary shift to test if generalization is robust or to generally learn about about the generalization properties. ↩︎
Alex is one of the main people discussing this work on LW AFAICT. ↩︎
Here's one hope for the agenda. I think this work can be a proper continuation of Collin Burns's aim to make empirical progress on the average case version of the ELK problem.
tl;dr: Unsupervised methods on contrast pairs can identify linear directions in a model's activation space that might represent the model's beliefs. From this set of candidates, we can further narrow down the possibilities with other methods. We can measure whether this is tracking truth with a weak-to-strong generalization setup. I'm not super confident in this take; it's not my research focus. Thoughts and empirical evidence are welcome.
ELK aims to identify an AI's internal representation of its own beliefs. ARC is looking for a theoretical, worst-case approach to this problem. But empirical reality might not be the worst case. Instead, reality could be convenient in ways that make it easier to identify a model's beliefs.
One such convenient possibility is the "linear representations hypothesis": that neural networks might represent salient and useful information as linear directions in their activation space. This seems to be true for many kinds of information (see here and, more recently, here). Perhaps it will also be true for a neural network's beliefs.
If a neural network's beliefs are stored as a linear direction in its activation space, how might we locate that direction, and thus access the model's beliefs?
Collin Burns's paper offered two methods:
Given some plausible assumptions about how neural networks operate, it seems reasonable to me to expect this method to work. Neural networks might think about whether claims in their context window are true or false. They might store these beliefs as linear directions in their activation space. Recovering them with labels would be difficult, because you might mistake your own beliefs for the model's. But if you simply feed the model unlabeled pairs of contradictory statements, and study the patterns in its activations on those inputs, it seems reasonable that the model's beliefs about the statements would prominently appear as linear directions in its activation space.
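For concreteness, here's a minimal sketch of what an unsupervised contrast-pair probe can look like, in the spirit of CCS from Collin's paper. The function name, hyperparameters, and training loop are my own illustrative choices, not a reproduction of his code:

```python
import torch

def train_contrast_probe(h_pos, h_neg, epochs=1000, lr=1e-3):
    """h_pos, h_neg: (n_pairs, d) activations on contradictory statement pairs.
    (CCS also normalizes each side of the contrast pair before training; omitted here.)"""
    d = h_pos.shape[1]
    probe = torch.nn.Sequential(torch.nn.Linear(d, 1), torch.nn.Sigmoid())
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        p_pos = probe(h_pos).squeeze(-1)
        p_neg = probe(h_neg).squeeze(-1)
        # Consistency: a statement and its negation should get probabilities summing to ~1.
        consistency = ((p_pos + p_neg - 1) ** 2).mean()
        # Confidence: penalize the degenerate "everything is 0.5" solution.
        confidence = (torch.min(p_pos, p_neg) ** 2).mean()
        loss = consistency + confidence
        opt.zero_grad(); loss.backward(); opt.step()
    # The linear layer's weight is a candidate "belief direction" (up to sign).
    return probe
```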
One challenge is that this method might not distinguish between the model's beliefs and the model's representations of the beliefs of others. In the language of ELK, we might be unable to distinguish between the "human simulator" direction and the "direct translator" direction. This is a real problem, but Collin argues (and Paul Christiano agrees) that it's surmountable. Read their original arguments for a better explanation, but basically this method would narrow down the list of candidate directions to a manageable number, and other methods could finish the job.
Some work in the vein of activation engineering directly continues Collin's use of unsupervised clustering on the activations of contrast pairs. Section 4 of Representation Engineering uses a method similar to Collin's second method, outperforming few-shot prompting on a variety of benchmarks and improving performance on TruthfulQA by double digits. There's a lot of room for follow-up work here.
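As a rough illustration of the clustering-style variant (the details below are my assumptions, not either paper's exact pipeline): take the top principal component of the contrast-pair activation differences and treat it as a candidate truth direction.

```python
import torch

def contrast_pca_direction(h_pos, h_neg):
    """Top principal component of (h_pos - h_neg) as a candidate belief direction."""
    diffs = h_pos - h_neg              # (n_pairs, d)
    diffs = diffs - diffs.mean(dim=0)  # center before PCA
    _, _, vt = torch.linalg.svd(diffs, full_matrices=False)
    return vt[0]                       # first right-singular vector, shape (d,)

def score(h, direction):
    # Sign and threshold still need calibration, e.g. on a handful of known-true statements.
    return h @ direction
```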
Here are a few potential next steps for this research direction:
I have lower confidence in this overall take than most of the things I write. I did a bit of research trying to extend Collin's work, but I haven't thought about this stuff full-time in over a year. I have maybe 70% confidence that I'd still think something like this after speaking to the most relevant researchers for a few hours. But I wanted to lay out this view in the hopes that someone will prove me either right or wrong.
Here's my previous attempted explanation.
I think the added value of "activation vectors" (which isn't captured by normal probing) in this sort of proposal is based on some sort of assumption that model editing (aka representation control) is a very powerful validation technique for ensuring desirable generalization of classifiers. I think this is probably only weak validation and there are probably better sources of validation elsewhere (e.g. various generalization testbeds). (In fact, we'd probably need to test this "writing is good validation" hypothesis directly in these test beds which means ...
Or is the hope just that learning more about how neural networks work will allow us to theorize better about how to control them?
Activation vectors directly let us control models more effectively. There's good evidence that on alignment-relevant metrics like {truthfulness, hallucination rate, sycophancy, power-seeking answers, myopia correlates}, activation vectors not only significantly improve the model's performance, but stack benefits with normal approaches like prompting and finetuning. It's another tool in the toolbox.
If results bear out, I think activation vectors will become best practice. (Perhaps like how KV-caching is common practice for faster inference.) I think alignment is a quantitative engineering problem, and that steering vectors are a tool which will improve our quantitative steering abilities, while falling short of "perfect control."
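As a concrete (hypothetical) example of the kind of intervention being discussed: compute an activation vector from a pair of contrasting prompts and add it to one layer's residual stream during generation. The model, layer index, prompts, and coefficient below are illustrative choices, not a claim about the settings behind the cited results:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the idea transfers to other decoder-only LMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
layer = model.transformer.h[6]  # one GPT-2 block; other architectures name this differently

def final_token_resid(prompt):
    """Residual-stream activation at the last token of `prompt`, after `layer`."""
    cache = {}
    def hook(_, __, output):
        cache["h"] = output[0]  # GPT-2 blocks return a tuple; [0] is the hidden states
    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt"))
    handle.remove()
    return cache["h"][0, -1]

# Steering vector = difference of activations on contrasting prompts (illustrative prompts).
steer = final_token_resid("I love talking about weddings") - \
        final_token_resid("I hate talking about weddings")

def steering_hook(_, __, output):
    return (output[0] + 4.0 * steer,) + output[1:]  # add the vector at every position

handle = layer.register_forward_hook(steering_hook)
out = model.generate(**tok("I went up to my friend and said", return_tensors="pt"),
                     max_new_tokens=30, do_sample=False)
print(tok.decode(out[0]))
handle.remove()
```

Because the hook leaves the prompt and the weights untouched, it can be layered on top of whatever prompting or finetuning you're already doing, which is what "stacking" means concretely here.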
I notice that I am confused by the lack of specificity in:
> not only significantly improve the model's performance, but stack benefits with normal approaches like prompting and finetuning
and
> I think alignment is a quantitative engineering problem, and that steering vectors are a tool which will improve our quantitative steering abilities
I have some general view like "well-optimized online RLHF (which will occur by default, though it's by no means easy) is a very strong baseline for getting average case performance which looks great to our human labele...
I tried to write one story here. Notably, activation vectors don't need to scale all the way to superintelligence; e.g., scaling up to ~human-level automated AI safety R&D would be ~enough.
Also, the ToI could be disjunctive, so it doesn't have to be only one of those.
I thought the idea was that steering via abstractions learned without supervision circumvents the failure modes of optimizing against human feedback.
My take is that they work better the more that the training distribution anticipates the behavior we want to incentivize, and also the better that humans understand what behavior they're aiming for.
So if used as a main alignment technique, they only work in a sort of easy-mode world, where getting a par-human AI to behave kinda-well on the domain used to create it is sufficient for the human-AI team to do better at creating the next one, and so on until you get a stably good outcome. A lot like the profile of RLHF, except trading off human feedback for AI generalization.
I think the biggest complement to activation steering is research on how to improve (from a human perspective) the generalization of AI internal representations. And I think a good selling point for activation steering research is that the reverse is also true: if you can do okay steering by applying a simple function to some intermediate layer, that probably helps with research on all the things that might make steering even better.
Overall, though, I'm not that enthusiastic about it as a rich research direction.
One perspective is that representation engineering allows us to do "single-bit edits" to the network's behaviour. Pre-training changes a lot of bits; fine-tuning changes slightly less; LoRA even less; adding a single vector to a residual stream should flip a single flag in the program implemented by the network.
(This of course is predicated on us being able to create monosemantic directions, and predicated on monosemanticity being a good way to think about this at all.)
This is beneficial from a safety point of view, as instead of saying "we trained the model, it learnt some unknown circuit that fits the data" we can say "no new circuits were learnt, we just flipped a flag and this fits the data".
In the world where this works, and works well enough to replace RLHF (or some other major part of the training process), we should end up with more controlled network edits.
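To put rough numbers on the "how many bits change" comparison above, here's a back-of-the-envelope sketch; the model size, layer count, and LoRA rank are illustrative assumptions rather than figures from any particular paper:

```python
d_model, n_layers, n_params = 4096, 32, 7_000_000_000  # a hypothetical ~7B-parameter LM

full_finetune = n_params                         # every weight is free to move
lora_rank = 8
lora = n_layers * 2 * (2 * d_model * lora_rank)  # rank-8 adapters on Q and V projections
steering_vector = d_model                        # a single residual-stream vector

print(f"full finetune  : {full_finetune:>13,} trainable numbers")
print(f"LoRA (r=8)     : {lora:>13,} trainable numbers")
print(f"steering vector: {steering_vector:>13,} numbers")
```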
Activation vectors are really, really cool, but what is the theory of impact for this work?