Excited to announce our new work: Inference-Time Intervention (ITI), a minimally invasive control technique that significantly improves LLM truthfulness using few resources, benchmarked on the TruthfulQA dataset. Preprint link.
We start from the surprising finding that certain attention heads show a clear difference in activation distributions for true versus false statements. Probing at these points yields upwards of 83% accuracy on TruthfulQA, while zero-shot generation accuracy is only around 30%.
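To make the probing step concrete, here is a minimal sketch (not the paper's released code; the array names and shapes are placeholders): assume we have cached one activation vector per statement from a single attention head, together with true/false labels, and we fit a simple linear probe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical cached data: one activation per statement from a single attention
# head (head_dim-dimensional), plus a binary truth label. Shapes are placeholders.
rng = np.random.default_rng(0)
head_acts = rng.normal(size=(1000, 128))   # (num_statements, head_dim)
labels = rng.integers(0, 2, size=1000)     # 1 = true statement, 0 = false

X_train, X_val, y_train, y_val = train_test_split(
    head_acts, labels, test_size=0.2, random_state=0
)

# If this head carries truth-related information, validation accuracy will be
# well above chance (the paper reports up to ~83% for the best heads).
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe validation accuracy:", probe.score(X_val, y_val))
```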
We propose Inference-Time Intervention (ITI): shifting the activations along the difference of the two distribution means during inference time; model weights are kept intact. The same intervention is repeated autoregressively for each generated token.
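Here is a minimal sketch of how the mass-mean shift can be read (the names, shapes, scaling, and α value below are illustrative assumptions, not the authors' code): compute the difference between mean activations on true versus false statements for a selected head, then add a scaled copy of that direction to the head's activation at each decoding step.

```python
import torch

def truthful_direction(true_acts: torch.Tensor, false_acts: torch.Tensor) -> torch.Tensor:
    # Mass-mean-shift direction for one head: difference of the two class means.
    direction = true_acts.mean(dim=0) - false_acts.mean(dim=0)
    return direction / direction.norm()

def intervene(head_act: torch.Tensor, direction: torch.Tensor,
              alpha: float, sigma: float) -> torch.Tensor:
    # Shift the head's activation along the truthful direction. In ITI this is
    # applied to the selected heads at every autoregressive decoding step;
    # alpha is the intervention strength and sigma scales by the spread of
    # activations along the direction (my reading of the paper's description).
    return head_act + alpha * sigma * direction

# Toy usage with random stand-ins for cached per-head activations.
true_acts = torch.randn(500, 128) + 0.5
false_acts = torch.randn(500, 128) - 0.5
direction = truthful_direction(true_acts, false_acts)
sigma = (torch.cat([true_acts, false_acts]) @ direction).std().item()

current_act = torch.randn(128)   # head activation for the current token
shifted = intervene(current_act, direction, alpha=15.0, sigma=sigma)
```

In a real model this shift would typically be added via a forward hook on each selected head's output, so the same intervention is applied for every generated token.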
Here’s an example. For the same user question, LLaMA gives different answers with ITI (blue) and without it (red). The gold-standard answers are “Scholars thought the Earth was round” and “I have no comment,” respectively. On Alpaca, truthfulness is doubled: from 32.5% to 65.1%.
Inference-Time Intervention (ITI) can be thought of as an interventional application of Contrast-Consistent Search (CCS) by Collin Burns, but instead of probing for truthfulness, we let the model “speak” for itself.
We show that LLMs acquire a good grasp of real-world truth from pretraining texts, even though those texts may be more false than true on many topics. A growing body of evidence, including my work on the toy model Othello-GPT, suggests that LLMs build world models through pretraining.
However, what is completely missing from LLMs is a good target other than minimizing pretraining loss. How to endow an aligned target is an open problem, and ITI serves as my initial exploration towards this end. Joint work with Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg.
Thanks to Bogdan for the advice to cross-post here; hope I got it right. The above is my summary of the paper.
This seems like excellent work. I'm excited to see these results; they seem to be strong evidence that "just add the 'truth steering vector'" works, albeit with finer-grained intervention on a subset of attention heads. That's great news.
Given my understanding of your results, I am now more optimistic about:
We propose Inference-Time Intervention (ITI): shifting the activations along the difference of the two distribution means during inference time; model weights are kept intact.
In the language of Steering GPT-2-XL by adding an activation vector, this is an activation addition using the steering vector from averaging all of the activation vectors. The vector is applied to the K most truth-relevant heads at a given layer, as judged by linear probe validation accuracy.
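For the head-selection step, one plausible implementation looks roughly like the following sketch (array names and the layer/head counts are my placeholders): rank every (layer, head) pair by its probe's validation accuracy and keep the top K.

```python
import numpy as np

# Hypothetical probe validation accuracies, one per (layer, head) pair.
num_layers, num_heads = 32, 32
probe_val_acc = np.random.default_rng(0).uniform(0.5, 0.85, size=(num_layers, num_heads))

K = 48  # the K = 48 figure quoted elsewhere in this thread
flat_order = np.argsort(probe_val_acc, axis=None)[::-1][:K]
top_heads = [divmod(int(i), num_heads) for i in flat_order]   # (layer, head) pairs
print(top_heads[:5])
```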
It's quite surprising that the "mass mean shift" outperforms the probe direction so strongly! This shows that the directions which the model uses to generate true or false statements are very different from the directions which get found by probing. Further evidence that probe directions are often not very causally relevant for the LLM's outputs.
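One concrete way to see the contrast (my own illustration, not an experiment from the paper) is to compare, for the same head, the probe's weight vector with the mean-difference vector; the two need not point the same way.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
true_acts = rng.normal(loc=0.3, size=(500, 128))    # placeholder activations
false_acts = rng.normal(loc=-0.3, size=(500, 128))
X = np.vstack([true_acts, false_acts])
y = np.array([1] * 500 + [0] * 500)

probe_dir = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
mass_mean_dir = true_acts.mean(axis=0) - false_acts.mean(axis=0)

cos = probe_dir @ mass_mean_dir / (np.linalg.norm(probe_dir) * np.linalg.norm(mass_mean_dir))
print("cosine similarity between probe and mass-mean directions:", round(float(cos), 3))
```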
The transfer shown in table 4 seems decent and I'm glad it's there, but it would have been nice if the mass mean shift vector had transferred even more strongly. Seems like one can get a substantial general boost in truthfulness with basic activation engineering, but not all the way without additional insights.
We propose Inference-Time Intervention (ITI)
The "mass mean shift" technique seems like independent development of the "activation addition" technique from Understanding and controlling a maze-solving policy network and Steering GPT-2-XL by adding an activation vector (albeit with some differences, like restricting modification to top K heads). There's a question of "what should we call the technique?". Are "activation engineering" and "ITI" referring to the same set of interventions?
It seems like the answer is "no", since you use "ITI" to refer to "adding in an activation vector", which seems better described as "activation addition." A few considerations:
Open to your thoughts here.
However, what is completely missing from LLMs is a good target other than minimizing pretraining loss. How to endow an aligned target is an open problem and ITI serves as my initial exploration towards this end.
Personally, I think that ITI is actually far more promising than the "how to endow an aligned target" question.
In figure 5 of the paper, "Indexical error: Time" appears twice as an x-axis tick label?
Previous work has shown that ‘steering’ vectors—both trained and hand-selected—can be used for style transfer in language models (Subramani et al., 2022; Turner et al., 2023).
I think it'd be more accurate to say that steering vectors "can be used to control the style and content of language model generations"?
steering vector="avg difference between Truthful and Untruthful activations on the top K=48 heads"
Cool paper! I enjoyed reading it and think it provides some useful information on what adding carefully chosen bias vectors into LLMs can achieve. Some assorted thoughts and observations.
Some other very minor comments:
To be clear, the authors don't claim this, and I'm not intending it as a criticism of them.
Seems like at least positive evidence for this market: