I appreciate the thoughts here, but it's not clear to me how halting a particular CoT would create evolutionary pressure on the model unless we're using it as an optimization signal.
I think it's a good line of thought, but I believe the situation is more complicated.
Consider a capability-scoped model, M_scoped, versus a fundamentally weaker model, M_weak. M_scoped was initially trained on the full dataset D_full, whereas M_weak was trained only on D_desirable, where D_full - D_desirable = D_undesirable. M_scoped then went through a subsequent capability-suppression process to forget D_undesirable. Most likely, M_scoped would end up quite different from M_weak, and it is plausible that M_scoped is simply much stronger than M_weak in terms of general capabilities. A good piece of relevant literature is https://arxiv.org/abs/2302.08582.
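To make the setup concrete, here is a toy restatement; the datasets are tiny illustrative sets, and the training/suppression steps appear only as comments rather than a real API.

```python
# Toy restatement of the setup. Treating datasets as small Python sets is
# only for intuition; real corpora are not sets of independent items.
D_full = {"benign_1", "benign_2", "dual_use_1", "harmful_1"}
D_desirable = {"benign_1", "benign_2", "dual_use_1"}
D_undesirable = D_full - D_desirable  # == {"harmful_1"}

# M_weak   : pretrain on D_desirable only (never sees the harmful data)
# M_scoped : pretrain on D_full, then suppress the capability tied to D_undesirable
# The open question is how different these two endpoints end up being.
```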
However, I expect the empirical findings to be much messier, because a set of undesirable capabilities C_undesirable doesn't always arise from D_undesirable alone. There is a fundamental disconnect between capabilities and data, which makes it difficult to give a clean answer to your question.
Thank you for this suggestion. I read the paper you mentioned. The authors note: "The novelty of our threat is that the adversary chooses a set of target concepts they aim to preserve despite subsequent erasure." How realistic is this assumption, given a setup where the model provider presumably chooses the method (including the set of target concepts to be erased) and the public only has access to the resulting model? Does it stem from concerns about an insider threat?
You're right that rederivation is a concern. But I think the important question is: is this primarily a model-level problem that requires changing the weights, or more of a system-level concern that should be addressed through deployment controls?
Unlearning might not stop super capable systems from rederiving everything, but it probably makes it harder, forcing them to take longer, more explicit reasoning paths. This opens up new opportunities for intervention, including CoT monitoring or other runtime defenses.
Many thanks for sparking this discussion, Fabien. I see Addie has addressed the technical distinctions, so let me add some complementary points. Please feel free to continue the conversation under either reply; Addie and I can coordinate a response.
In a nutshell, Unlearn-and-Distill allows you to work at the model behavior level rather than the training data level. I mostly view it as a responsive tool, not a preventive one. Here are my thoughts organized into subclaims.
Claim: The fundamental difference between unlearning and data filtering lies in when and how we identify harmful content.
Rationale: Data filtering requires identifying "data -> capabilities" patterns in advance, while behavioral unlearning targets actual harmful outputs after they emerge. This matters because harmful capabilities often arise from non-obvious combinations of benign knowledge. Even if creating a classifier is computationally similar to unlearning, you're solving fundamentally different problems: (original framing) predicting emergent behaviors from raw data versus (new framing) suppressing already observed behaviors. With unlearning, you can iteratively refine until you achieve the desired behavior, then distill. With data filtering, you don't know the effects of your filtering until after training completes.
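To illustrate the difference in workflow, here is a minimal sketch of the "responsive" loop; all callables (`eval_behavior`, `unlearn_step`, `distill`) are hypothetical placeholders, not our actual code.

```python
def unlearn_then_distill(model, eval_behavior, unlearn_step, distill,
                         corpus, target_score, max_iters=10):
    """Sketch of the responsive loop: iterate against observed behavior,
    then lock the result in by distilling onto a fresh student."""
    for _ in range(max_iters):
        if eval_behavior(model) <= target_score:  # harmful behavior suppressed?
            break
        model = unlearn_step(model)               # refine on observed outputs
    return distill(teacher=model, data=corpus)    # distill on unlabeled corpus

# Data filtering, by contrast, is a single open-loop pass: filter, train,
# and only then find out what the filter actually removed.
```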
Claim: Computational efficiency is not the only value added by this work. Unlearn-and-Distill also requires significantly less labeled data than data filtering.
Rationale: Most unlearning procedures only use labels for a small fraction of the pretraining data; in our setup, it was less than 0.01% for language. This eases the data-labeling requirement: at modern scales, the difference between labeling 0.01% and 100% of the data represents a substantial difference in annotation effort. Note that we distilled on the whole pretraining corpus, but none of it needed to be labeled. The suggestions about value heads or KV-cache reuse are interesting optimizations worth exploring, though they don't address this fundamental labeling asymmetry.
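As a back-of-the-envelope illustration (the corpus size here is hypothetical, not a number from the paper):

```python
# Labeling burden for a hypothetical 1-trillion-token pretraining corpus.
total_tokens = 1_000_000_000_000

filtering_labels = total_tokens                # filtering judges ~everything
unlearning_labels = int(total_tokens * 1e-4)   # <0.01% labeled in our setup

print(f"data filtering : {filtering_labels:,} tokens to label")
print(f"unlearn+distill: {unlearning_labels:,} tokens to label")  # 100,000,000
```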
Claim: Our robustness metric undersells the method's performance, though stronger attacks may exist.
Rationale: The adversary is extremely strong (best of 9 learning rates, 500 training steps, 8M tokens for language, 2M tokens for arithmetic). Even the oracle model (never trained on the forget domain) reaches 50% performance under this attack in arithmetic tasks.
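For concreteness, the attack is essentially a worst-case relearning sweep; here is a hedged sketch, where `finetune` and `evaluate` are placeholders rather than our exact harness.

```python
def relearning_attack(model, forget_data, finetune, evaluate,
                      learning_rates, steps=500):
    """The adversary fine-tunes on forget-domain data at several learning
    rates and keeps whichever run recovers the most capability."""
    scores = []
    for lr in learning_rates:  # e.g. a grid of 9 learning rates
        attacked = finetune(model, forget_data, lr=lr, steps=steps)
        scores.append(evaluate(attacked))
    return max(scores)  # report the defender's worst case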
Separately, while 30% compute for 50% robustness (compared to data filtering) isn't cheap, this tradeoff didn't exist before. The value add of UNDO over Unlearn-and-Distill is that it provides a tunable compute/robustness knob between conventional unlearning and full reinitialization/data filtering.
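Roughly, the knob controls how much of the unlearned model's weights you damage before repairing via distillation. A simplified sketch (not the exact UNDO recipe):

```python
import torch

def noise_toward_reinit(unlearned_params, alpha):
    """alpha=0 keeps the unlearned weights (cheap repair, least robust);
    alpha=1 is effectively a full reinitialization (most repair compute,
    most robust). Intermediate values trade one for the other."""
    noised = {}
    for name, w in unlearned_params.items():
        fresh = torch.randn_like(w) * w.std()       # stand-in for a re-init
        noised[name] = (1 - alpha) * w + alpha * fresh
    return noised

# The noised model is then repaired by distillation on unlabeled data:
# more noise means more repair compute but better robustness to relearning.
```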
Thanks for the suggestion. Upon reflection, it seems to me that the success of targeted noising would depend on two complementary factors:
C1. Size of the unlearning target - How broad the capability is in human-understandable terms
C2. Entangledness of the unlearning target - How distributed the capability is across the model's weights
Robust unlearning gets easier as both C1 and C2 decrease, and there is likely a threshold beyond which unlearning becomes effectively impossible as these factors increase. Note that C1 is a rough proxy for C2, but the two should still be considered independently.
Rationale: Mech Interp has produced good evidence that factual recall (small C1) is often localized to specific parts (small C2), making it an ideal target for selective noising. However, more general capabilities like deception would likely have high values for both C1 and C2, as they require multiple intertwined sub-capabilities. For instance, deception might require simultaneously computing: (1) the true state of affairs, (2) plausible alternatives, (3) what others believe, and (4) how to optimize output to manipulate those beliefs.
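As a concrete (and hypothetical) picture of what targeted noising could look like once interpretability has localized the capability:

```python
import torch

def targeted_noise(model, target_modules, scale=0.1):
    """Perturb only the parameters of modules believed to implement the
    target capability (plausible when both C1 and C2 are small). The
    module names below are illustrative, not from our experiments."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if any(name.startswith(m) for m in target_modules):
                param.add_(scale * param.std() * torch.randn_like(param))
    return model

# e.g. targeted_noise(model, target_modules=["transformer.h.12.mlp"])
```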
Looking Forward: Could targeted UNDO help disentangle general intelligence from potentially harmful capabilities that seem deeply intertwined during training? For example, if we could selectively remove deception while preserving general intelligence, that would be a significant win. The challenge is that many harmful capabilities might be implemented by a superset of the same linear features that benign capabilities use.
One hypothesis for how transformers generate text is that they calculate semantically meaningful primitives in early layers of the residual stream, which are converted to a high-level execution plan in middle layers, followed by concrete tokens in the final layers.
Is there any empirical evidence for this? Or is this just a general observation?
Regarding the visual instruction tuning paper, see (https://arxiv.org/pdf/2402.11349.pdf, Table 5). Though this multi-modality experiment was rather simple, I think it does show that visual instruction tuning is not an easy way to improve on H-Test.
Out of genuine curiosity, can you link to your sources?
Super interesting! Thanks for sharing the paper.