I hear a lot of different arguments floating around for exactly how mechanistic interpretability research will reduce x-risk. As an interpretability researcher, forming clearer thoughts on this is pretty important to me! As a preliminary step, I've compiled a longlist of 19 different arguments I've heard for why interpretability matters. These are pretty scattered and early-stage thoughts (and emphatically my personal opinion rather than the official opinion of Anthropic!), but I'm sharing them in the hope that they're interesting to people.
(Note: I have not thought hard about this categorisation! Some of these overlap substantially, but feel subtly different in my head. I was not optimising for concision and having few categories, and expect I could cut this down substantially with effort)
Credit to Evan Hubinger for writing the excellent Chris Olah's Views on AGI Safety, which was the source of several of these arguments!
- Force-multiplier on alignment research: We can analyse a model to see why it gives misaligned answers, and what's going wrong. This gets much richer data on empirical alignment work, and lets it progress faster
- Better prediction of future systems: Interpretability may enable a better mechanistic understanding of the principles of how ML systems work, and how they change with scale, analogous to scientific laws. This lets us better extrapolate from current systems to future systems, similar in spirit to scaling laws.
- Eg, observing phase changes a la induction heads shows us that models may rapidly gain capabilities during training
- Auditing: We get a Mulligan. After training a system, we can check for misalignment, and only deploy if we're confident it's safe
- Auditing for deception: Similar to auditing, we may be able to detect deception in a model
- This is a much lower bar than fully auditing a model, and is plausibly something we could do with just the ability to look at random bits of the model and identify circuits/features - I see this more as a theory of change for 'worlds where interpretability is harder than I hope'
- Enabling coordination/cooperation: If different actors can interpret each other's systems, it's much easier to trust other actors to behave sensibly and coordinate better
- Empirical evidence for/against threat models: We can look for empirical examples of theorised future threat models, eg inner misalignment
- Coordinating work on threat models: If we can find empirical examples of eg inner misalignment, it seems much easier to convince skeptics this is an issue, and maybe get more people to work on it.
- Coordinating a slowdown: If alignment is really hard, it seems much easier to coordinate caution/a slowdown of the field with eg empirical examples of models that seem aligned but are actually deceptive
- Improving human feedback: Rather than training models to just do the right things, we can train them to do the right things for the right reasons
- Informed oversight: We can improve recursive alignment schemes like IDA by having each step include checking the system is actually aligned
- Note: This overlaps a lot with 7. To me, the distinction is that 7 can also be applied to systems trained non-recursively, eg today's systems trained with Reinforcement Learning from Human Feedback
- Interpretability tools in the loss function: We can directly put an interpretability tool into the training loop to ensure the system is doing things in an aligned way
- Ambitious version - the tool is so good that it can't be Goodharted
- Less ambitious - The tool could be Goodharted, but it's expensive, and this shifts the inductive biases to favour aligned cognition
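To make the "tools in the loss function" idea concrete, here's a minimal sketch in plain Python/NumPy. Everything here is hypothetical: `deception_probe` stands in for a real interpretability tool, the linear model stands in for the network, and the penalty weight `lam` is an illustrative knob, not a recommendation.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss(params, x, y):
    # Squared error of a linear model: a stand-in for the task objective.
    return float(np.mean((x @ params - y) ** 2))

def deception_probe(params):
    # Hypothetical interpretability tool: scores how strongly the model's
    # internals align with a known "deceptive" direction. A fixed probe
    # vector stands in for a learned detector.
    probe_direction = np.ones_like(params) / np.sqrt(params.size)
    return float(np.abs(params @ probe_direction))

def training_loss(params, x, y, lam=0.1):
    # Combined objective: task performance plus an interpretability penalty.
    # The hope (the less ambitious version above) is that even a Goodhartable
    # penalty shifts inductive biases away from cognition the probe flags.
    return task_loss(params, x, y) + lam * deception_probe(params)
```

The ambitious/less-ambitious distinction then maps onto how robust `deception_probe` is to the optimisation pressure that `training_loss` applies to it.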
- Norm setting: If interpretability is easier, there may be expectations that, before a company deploys a system, part of doing due diligence is interpreting the system and checking it does what you want
- Enabling regulation: Regulators and policy-makers can create more effective regulations around how aligned AI systems must be if they/the companies can use tools to audit them
- Cultural shift 1: If the field of ML shifts towards having a better understanding of models, this may lead to a better understanding of failure cases and how to avoid them
- Cultural shift 2: If the field expects better understanding of how models work, it'll become more glaringly obvious how little we understand right now
- Quote: Chris provides the following analogy to illustrate this: if the only way you’ve seen a bridge be built before is through unprincipled piling of wood, you might not realize what there is to worry about in building bigger bridges. On the other hand, once you’ve seen an example of carefully analyzing the structural properties of bridges, the absence of such an analysis would stand out.
- Epistemic learned helplessness: Idk man, do we even need a theory of impact? In what world is 'actually understanding how our black box systems work' not helpful?
- Microscope AI: Maybe we can avoid deploying agents at all, by training systems to do complex tasks, then interpreting how they do it and doing it ourselves
- Training AIs to interpret other AIs: Even if interpretability is really hard/labour-intensive on advanced systems, if we can create aligned AIs near human level, we can give them these interpretability tools and use them to interpret more powerful systems
- Forecasting discontinuities: By understanding what's going on, we can predict how likely we are to see discontinuities in alignment/capabilities, and potentially detect a discontinuity while training/before deploying a system
- Intervening on training: By interpreting a system during training, we can notice misalignment early on, potentially before it's good enough for strategies to avoid our notice such as deceptive alignment, gradient hacking, obfuscating its thoughts, etc.
- Auditing a training run: By checking for misalignment early in training, we can stop training systems that seem misaligned. This gives us many more shots to make an aligned system without spending large amounts of capital, and eg allows us to try multiple different schemes, initialisations, etc. This essentially shifts the distribution of systems towards alignment.
- Eliciting Latent Knowledge (ELK): Use the length of the shortest interpretability explanation of the model's behaviour as a training loss for ELK - the idea is that models with shorter explanations are less likely to include human simulations, or you can tell if they do. (Credit to Tao Lin for this one)
Exercise: I think a really useful quick exercise for eliciting inside views is to go through this list and quickly rate each theory of impact out of 10. I ran a small discussion group on these where we did this, you can see the results here - note that these are very off-the-cuff hot takes, and not something you should take seriously or expect participants to stand by later on! The rated order was 1, 4, 3, 6, 8, 18, 2, 7, 16, 19, 17, 10, 9, 12, 14, 13, 15, 11, 5 (ie 1='a force-multiplier on alignment research' is the best). Interestingly, 3 (auditing a fully-trained model) was the most controversial highly rated one.
I've made a publicly editable spreadsheet for you to add in your responses!
The most interesting substantive disagreement I found in the discussion was that I was comparatively much more excited about using interpretability to audit a trained model, and skeptical of interpretability tools being something that could be directly used in a training process without the resulting optimisation pressure breaking the tool, while other people had the reverse view. My intuition stemmed from the fact that a model doesn't get any feedback on how its thoughts are internally represented, so even an incredibly sophisticated deceptive model that is impossible to detect via its outputs may be easy to detect via interpretability tools (analogy: if I knew that sophisticated aliens were reading my mind, I'd have no clue how to think deceptive thoughts in a way that evades their tools!). The competing intuition was that maybe it isn't that hard to think non-deceptive thoughts if you realise that someone will be scanning your mind - eg by developing the instinct to follow cached thoughts without examining them, and in occasional moments of deception generating a lot of cached thoughts that will later lead to misaligned actions, with no deceptive cognition in the moment. And that, instead, putting these tools in the training process could steer gradient descent away from the regions of model space that contain deceptive models at all: even if a sophisticated deceiver could reward-hack the tool, an unsophisticated one couldn't, and we only get a sophisticated deceiver by going via an unsophisticated one.
I'd love to hear other people's ratings and why! And any arguments that you think I've missed.
A slightly sideways argument for interpretability: It's a really good way to introduce the importance and tractability of alignment research
In my experience it's very easy to explain to someone with no technical background that
Then you say 'this is the same thing that big companies are using to maximise your engagement on social media and sell you stuff, and look at how that's going. and by the way did you notice how AIs keep getting bigger and stronger?'
At this point my experience is it's very easy for people to understand why alignment matters and also what kind of thing you can actually do about it.
Compare this to trying to explain why people are worried about mesa-optimisers, boxed oracles, or even the ELK problem, and it's a lot less concrete. People seem to approach it much more like a thought experiment and less like an ongoing problem, and it's harder to grasp why 'developing better regularisers' might be a meaningful goal.
But interpretability gives people a non-technical story for how alignment affects their lives, the scale of the problem, and how progress can be made. IMO no other approach to alignment is anywhere near as good for this.
Fwiw, I do have the reverse view, but my reason is more that "auditing a trained model" does not have a great story for wins. Like, either you find that the model is fine (in which case it would have been fine if you skipped the auditing) or you find that the model will kill you (in which case you don't deploy your AI system, and someone else destroys the world instead).
There's a path to impact where you (a) see that your model is going to kill you and (b) convince everyone else of this, thereby buying you time (or even solving the problem altogether if we then have global coordination to not build AGI since clearly it would destroy us). I feel skeptical about global coordination (especially as it becomes cheaper and cheaper to build AGI over time) but agree that it could buy you time which then allows alignment to "catch up" and solve the problem. However, this pathway seems pretty conjunctive (it makes a difference in worlds where (a) people were uncertain about AGI risk, (b) your interpretability tools successfully revealed evidence that convinced most of them, and (c) the resulting increase in time made the difference).
In contrast, using interpretability tools is impactful if (a) not using the interpretability tools leads to deception (also required in the previous story), and (b) using the interpretability tools gets rid of that deception.
(Obviously "level of conjunctiveness" isn't the only thing that matters -- you also need probabilities for each of the conjuncts -- but this feels like the highest-level bit of why I'm more excited about putting tools in the training loop.)
(It's also not an either-or, e.g. you could use ELK inside of your training loop, and then do Circuits-style mechanistic interpretability as an audit at the end. But if I were forced to go all-in on one of the two options, it would be the training loop one.)
EDIT (March 26, 2023): Coming back to this comment a year later, I think it undersells the "auditing" theory of impact; there are also effects like "if people know you are auditing your models deeply they are less worried that you'll deploy something risky and so are less likely to race to beat you". I don't have a strong opinion on how those effects play out but they do seem important.
The way I'd put something-like-this is that in order for auditing the model to help (directly), you have to actually be pretty confident in your ability to understand and fix your mistakes if you find one. It's not like getting a coin to land Heads by flipping it again if it lands Tails - different AGI projects are not independent random variables, if you don't get good results the first time you won't get good results the next time unless you understand what happened. This means that auditing trained models isn't really appropriate for the middle of the skill curve.
Instead, it seems like something you could use after already being confident you're doing good stuff, as quality control. This sharply limits the amount you expect it to save you, but might increase some other benefits of having an audit, like convincing people you know what you're doing and aren't trying to play Defect.
Can you explain your reasoning behind this a bit more?
Are you saying someone else destroys the world because a capable lab wants to destroy the world, and so as soon as the route to misaligned AGI is possible then someone will do it? Or are you saying that a capable lab would accidentally destroy the world because they would be trying the same approach but either not have those interpretability tools or not be careful enough to use them to check their trained model as well? (Or something else?...)
Ok, I think there's a plausible success story for interpretability, though, where transparency tools become broadly available: every major AI lab is equipped to use them and has incorporated them into its development processes.
I also think it's plausible that either 1) one AI lab eventually gains a considerable lead/advantage over the others so that they'd have time to iterate after their model fails audit, or 2) if one lab communicated that their audits show a certain architecture/training approach keeps producing models that are clearly unsafe, then the other major labs would take that seriously.
This is why "auditing a trained model" still seems like a useful ability to me.
Update: Perhaps I was reading Rohin's original comment as more critical of audits than he intended. I thought he was arguing that audits will be useless. But re-reading it, I see him saying that the conjunctiveness of the coordination story makes him "more excited" about interpretability for training, and that it's "not an either-or".
Yeah I think I agree with all of that. Thanks for rereading my original comment and noticing a misunderstanding :)
Maybe this is not the right place to ask this, but how does this not just give you a simplicity prior?
By explanation, I think we mean 'reason why a thing happens' in some intuitive (and underspecified) sense. Explanation length gets at something like "how can you cluster/compress a justification for the way the program responds to inputs" (where justification is doing a lot of work). So, while the program itself is a great way to compress how the program responds to inputs, it doesn't justify why the program responds this way to inputs. Thus program length/simplicity prior isn't equivalent. Here are some examples demonstrating where (I think) these priors differ:
Here's a short and bad explanation for why this is maybe useful for ELK.
The reason the good reporter works is because it accesses the model's concept for X and directly outputs it. The reason other possible reporter heads work is because they access the model's concept for X and then do something with that (where the 'doing something' might be done in the core model or in the head).
So, the explanation for why the other heads work still has to go through the concept for X, but then has some other stuff tacked on and must be longer than the good reporter.
I definitely think there are bad reporter heads that don't ever have to access X. E.g. the human imitator only accesses X if X is required to model humans, which is certainly not the case for all X.
Seems like a simplicity prior over explanations of model behavior is not the same as a simplicity prior over models? E.g. simplicity of explanation of a particular computation is a bit more like a speed prior. I don't understand exactly what's meant by explanations here. For some kinds of attribution, you can definitely have a simple explanation for a complicated circuit and/or long-running computation - e.g. if, under a relevant input distribution, one input almost always determines the output of a complicated computation.
I don't think that the size of an explanation/proof of correctness for a program should be very related to how long that program runs—e.g. it's not harder to prove something about a program with larger loop bounds, since you don't have to unroll the loop, you just have to demonstrate a loop invariant.
Perhaps you meant shouldn't?
Honestly, I don't understand ELK well enough (yet!) to meaningfully comment. That one came from Tao Lin, who's a better person to ask.
Thinking over the last few months, I came to most strongly endorse (2: Better prediction of future systems), or something close to it. I think that interpretability should adjudicate between competing theories of generalization and value formation in AIs (e.g. figure out whether and in what conditions a network learns a universal mesa objective, versus contextually activated objectives). Secondarily, figure out the mechanistic picture of how reward events form different kinds of cognition in a network (e.g. if I reward the agent for writing this line of code, what does the ensuing gradient mean, statistically, across training runs?).
Also, "is this model considering deceiving me?" doesn't seem like that great of a question. Even an aligned AI would probably at least consider the plan of deceiving you, if that AI's originating lab is dallying on letting it loose, meanwhile unaligned AIs are becoming increasingly capable around the world. Perhaps instead check if the AI is actively planning to kill you -- that seems like better evidence on its alignment properties.
I've long had a vague sense that interpretability should be helpful somehow, but recently when I tried to spell out exactly how it helped I had a surprisingly hard time. I appreciated this post's exploration of the concept.
I think this would be a negative outcome, and not a positive one.
Specifically, I think it means faster capabilities progress, since ML folks might run better experiments. Or worse yet, they might better identify and remove bottlenecks on model performance.
I made a publicly editable google sheet with my own answers already added here (though I wrote down my answers in a text document, without more than glancing at previous answers):
Looks like I'm much more interested in interpretability as a cooperation / trust-building mechanism.
Good idea, thanks! I made a publicly editable spreadsheet for people to add their own https://docs.google.com/spreadsheets/d/1l3ihluDoRI8pEuwxdc_6H6AVBndNKfxRNPPS-LMU1jw/edit?usp=drivesdk
Another potential reason: improved interpretability may lead to improved capabilities. If the most aligned project puts more effort into interpretability, this could lead to the most aligned project having more capabilities slack compared to other projects. Alternatively, if interpretability-derived capabilities benefits are widely distributed, it may accelerate transformative AI timelines.
For a concrete example of how interpretability may improve capabilities, consider that the interpretability paper Locating and Editing Factual Associations in GPT indicates that deeper feed forward layers in transformers become progressively less important for storing factual knowledge (Figure 3f). This suggests that we may be able to gradually reduce the width of feed forward layers later in the model without hurting performance too badly, making the model more compute-efficient.
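As a back-of-the-envelope sketch of the kind of saving this could buy (illustrative numbers only, not taken from the paper), compare FFN parameter counts for a GPT-2-small-sized model with uniform-width feed forward layers against a hypothetical variant whose later FFN layers taper:

```python
# Illustrative dimensions (roughly GPT-2 small); the taper schedule is
# entirely hypothetical, chosen only to show how to count the savings.
d_model = 768
n_layers = 12
uniform_width = 4 * d_model  # the conventional 4x expansion

def ffn_params(width, d_model=d_model):
    # Up-projection + down-projection weight matrices (biases ignored).
    return 2 * d_model * width

uniform_total = n_layers * ffn_params(uniform_width)

# Hypothetical taper: full width for the first half of the layers, then
# shrinking linearly to half width by the final layer.
half = n_layers // 2
tapered_widths = [
    uniform_width if i < half
    else int(uniform_width * (1 - 0.5 * (i - half + 1) / half))
    for i in range(n_layers)
]
tapered_total = sum(ffn_params(w) for w in tapered_widths)

savings = 1 - tapered_total / uniform_total  # fraction of FFN params removed
```

Under this particular schedule the FFN blocks shed roughly 15% of their parameters; the point is just that a mechanistic finding about where knowledge lives translates directly into an architecture change, which is exactly the capabilities-externality worry.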
If interpretability techniques advance to the point of delivering on the potential benefits you describe above, it seems very likely that those techniques will impact capabilities, so thinking about how that may impact timelines vs alignment slack seems relevant.
In my view, the key purpose of interpretability is to translate model behavior to a representation that is readily understood by humans. This representation may include first-order information (e.g., feature attribution techniques that are common now), but should also include higher-order side-effects induced by the model as it is deployed in an environment. This second-order information will be critical for thinking about un-intended emergent properties that may arise, as well as bound their likelihood under formal guarantees.
If you view alignment as a controls problem (e.g., ), interpretability is giving us a mechanism for assessing (and forecasting) measured output of a system. This step is necessary for taking appropriate corrective action that reduces measured error. In this sense, interpretability is in some sense the inverse of the alignment problem. This notion of interpretability captures many of the points mentioned in the list above, especially #1, #2, #3, #7, #8, and #9.
I think that if we notice a model is not completely aligned but is mostly useful, there will be at least one party deploying it. We can even see this with DALL-E, which mirrors human biases (nurse = female; CEO, lawyer, evil person = male) and is slowly being rolled out nonetheless. Therefore I believe that noticing misalignment is not, by itself, enough to prevent deployment, and we should focus on making it easy to create aligned AI. This is an argument for 9, 18, and 19 being relatively more important.
I think there are quite a lot of worlds where understanding the black box better is bad.
If alignment is really, really hard, we should expect to fail, in which case the more obvious it is that we've failed, the better, because the benefits of safety aren't fully externalised. This probably doesn't hold in worlds where we get from not-very-good AI to AGI very rapidly.
Potentially counterintuitive things happen when information gets more public. In the paper Racing to the precipice: a model of artificial intelligence development (https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf), increasing information has weird non-linear effects on the amount spent on safety. Part of the intuition behind that is that having more information about your competitors can cause you to either speed up or slow down, depending on where they in fact are in relation to you.
Also seems like risk preferences are important here. If people are risk averse then having less information about the expected outcomes of their models makes them less likely to deploy them all else equal.
I think I'm most excited about 15, 16 and 6b because of a general worldview of 1) alignment is likely to be really hard and it seems like we'll need assistance from the best aligned systems to solve the problem and 2) that ~all the risk comes from RL agents. Getting really really good microscope AI looks really good from this perspective, and potentially we need a co-ordinated movement towards microscope AI and away from RL models in which case building a really compelling case for why AGI is dangerous looks really important.
Note that for interpretability to give you information on where you are relative to your competitors, you both need the tools to exist, and for AI companies to use the tools and publicly release the results. It's pretty plausible to me that we get the first but not the second!
Yeah, that sounds very plausible. It also seems plausible that we get regulation mandating transparency, and in all the cases where the benefit from interpretability involves people interacting, the results would be released at least semi-publicly. Industrial espionage also seems a worry: the USSR was hugely successful in infiltrating the Manhattan Project and continued to successfully steal US tech throughout the Cold War.
Also worth noting that more information about how good one's own model is also increases AI risk in the paper's model, although they model it as a discrete shift from no information to full information, so it's unclear how well that model applies.
Conditioned on the future containing AIs that are capable of suffering in a morally relevant way, interpretability work may also help identify and even reduce this suffering (and/or increase pleasure and happiness). While this may not directly reduce x-risk, it is a motivator for people taken in by arguments on s-risks from sentient AIs to work on/advocate for interpretability research.