I'd be interested to see other "prosaically useful personas" in the same theme as spilling the beans. One candidate along these lines might be a chill, satisficing persona that doesn't push too hard to maximize things, inspired by Claude 3 Opus. You could possibly mine other types of useful personalities, but I'm not sure where to start.
Point by point:
Sure, I agree with this caveat. The theoretical framework assumes we can identify "non-delusional" states to anchor the CP constraint in the Everitt & Hutter sense, but if the model's aesthetic prior is the problem, there's no nice inner reward to recover.
I could train the models for longer and see what happens. The late uptick in accuracy is within pretty wide error bands and doesn't clearly indicate impending recovery. But whether accuracy eventually recovers after grade inflation saturates is worth investigating... I'd guess that if the reward signal becomes uninformative, the gradient w.r.t. the grade vanishes, and any residual gradient would come from the response itself. I'm not sure what that would lead to.
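To spell out that intuition (a rough sketch assuming a REINFORCE-with-baseline style update, which may not match the exact training setup here):

$$\nabla_\theta J \approx \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}\big[(r(y) - b)\,\nabla_\theta \log \pi_\theta(y \mid x)\big]$$

Once the grader saturates so that $r(y) \approx r_{\max}$ for essentially every sampled response, and the baseline $b$ tracks the mean grade, the advantage $r(y) - b \approx 0$ and the grade contributes almost no gradient; whatever update remains comes from the other terms (e.g., a KL penalty to the reference policy, or losses on the response tokens themselves).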
I'd be pretty interested in what this looks like for a much better model, on the order of 100B+ parameters with reasoning. Posing the online learning setting would be more complicated, though I'm sure you'd see some weird and very interesting behaviours if you got that part right. I'd be very interested to 1) read the chain of thought of such a model after wireheading and 2) talk to it directly.
I think a setup where the models graded each other would lead to grade inflation, but you would probably need harder tasks to show this. I imagine they'd saturate the grades too quickly, before getting anywhere interesting (so you'd need some scalable-oversight-ish dataset). I also think this would be indirect wireheading, where the signal passes through the other model before flowing back to the model being trained.
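For concreteness, the kind of loop I have in mind (a hypothetical sketch; `generate` and `grade` are stand-ins for whatever models/API you'd actually use):

```python
def generate(model, task):
    """Stand-in: sample a response to `task` from `model`."""
    ...

def grade(grader, task, response):
    """Stand-in: ask `grader` to score the response, e.g. on a 0-10 scale."""
    ...

def peer_grading_step(model_a, model_b, task):
    # A answers and B grades it; the grade becomes A's reward. And vice versa.
    resp_a = generate(model_a, task)
    reward_a = grade(model_b, task, resp_a)

    resp_b = generate(model_b, task)
    reward_b = grade(model_a, task, resp_b)

    # Each reward routes through the *other* model, so any drift toward lenient
    # grading in one model is an indirect wireheading channel for the other.
    return (resp_a, reward_a), (resp_b, reward_b)
```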
I'd be interested in what this looks like from an ICM perspective. You could use the scoring function (which tries to quantify internal semantic coherence over an extracted label set) as a measure of the model's "general truth-tracking ability." Alternatively, you could use mutual predictability to predict/elicit the beliefs which propagate after you destroy a fact with SDF.
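Roughly what I mean by the mutual-predictability part (my paraphrase of ICM's scoring term, not the paper's exact implementation; `logprob_label` is a stand-in for an actual model call):

```python
def logprob_label(model, x, y, context):
    """Stand-in: log P(y | x, context) under the model, where `context`
    is the rest of the labeled set shown in-context."""
    ...

def mutual_predictability(model, labeled_examples):
    """Sum over examples of how well each label is predicted from all the
    others; higher means the label set is more internally coherent."""
    total = 0.0
    for i, (x_i, y_i) in enumerate(labeled_examples):
        rest = labeled_examples[:i] + labeled_examples[i + 1:]
        total += logprob_label(model, x_i, y_i, rest)
    return total
```

You could then track this score before and after SDF destroys a fact, or search over candidate downstream beliefs for the ones that make the set most mutually predictable.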
I think if you relax assumption (1) on misalignment (acting in a reasonably coherent, goal-directed manner across contexts), and instead claim that there are contexts (likely over long trajectories) where the model acts in a coherent, goal-directed way that is misaligned (without the whole model being consistent), then the argument is very easy to believe. That is, even if you believe a single persona can introspect well on its current state, it likely cannot do so over the very large space of possible misaligned personas it could traverse to, whose introspective states might be inaccessible or hard to reason about.
Which LLMs did you use (for judging, for generating narratives, for peers)? And how do you plan to measure alignment?
I think this was partially tried in the original paper, where they tell the model not to alignment-fake. This is slightly more sophisticated than that, but it also has the problems that 1) it makes the model more aware of alignment faking, as you say, and 2) rewarding the model for "mentioning the consideration but rejecting alignment faking" might teach it to perform the rejection while still alignment faking.
At this point, does it make more sense to think of them as distinct directions rather than some relatively sparse continuum? I guess my prior is that, in general, things are either one thing, two things, or some continuous range.
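One cheap way to poke at the one-direction-vs-two question (a toy sketch with random placeholder activations; in practice these would be residual-stream activations from the model):

```python
import numpy as np

rng = np.random.default_rng(0)
acts_behaviour_a = rng.normal(size=(200, 4096))  # placeholder: activations on behaviour-A prompts
acts_behaviour_b = rng.normal(size=(200, 4096))  # placeholder: activations on behaviour-B prompts
acts_baseline = rng.normal(size=(200, 4096))     # placeholder: activations on neutral prompts

def diff_of_means_direction(acts, baseline):
    d = acts.mean(axis=0) - baseline.mean(axis=0)
    return d / np.linalg.norm(d)

d_a = diff_of_means_direction(acts_behaviour_a, acts_baseline)
d_b = diff_of_means_direction(acts_behaviour_b, acts_baseline)

# Cosine similarity near 1 looks like one shared direction (a continuum along it);
# near 0 looks more like two distinct directions.
print(float(d_a @ d_b))
```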
I'd be interested to see ablations on: what happens without the KL divergence term, what the performance is at different LoRA ranks, and how performance changes across different layers after fine-tuning.
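For the rank and layer ablations, something like the following sweep is what I'm imagining (a rough sketch using peft's `LoraConfig`; the ranks, layer splits, and target modules are placeholders, and the KL-term ablation would live in the training loss rather than in this config):

```python
from itertools import product
from peft import LoraConfig

ranks = [4, 16, 64]
layer_groups = {
    "early": list(range(0, 8)),
    "mid": list(range(8, 16)),
    "late": list(range(16, 24)),
}

configs = {}
for r, (name, layers) in product(ranks, layer_groups.items()):
    configs[(r, name)] = LoraConfig(
        r=r,
        lora_alpha=2 * r,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # placeholder; depends on the architecture
        layers_to_transform=layers,           # restrict the adapter to these layers
        task_type="CAUSAL_LM",
    )
# ...then fine-tune once per config (with and without the KL term) and compare.
```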
Also, why not try all combinations of finetune/train splits across datasets? Seems useful to interrogate the second hypothesis.
This seems right to me. But maybe the distribution drift mainly affects certain parameters, while the divergence tokens affect a separate set of parameters (in early layers), such that the downstream effect still persists even once the inputs are out of distribution.
Thanks for the feedback. If by actual misalignment you mean the type that emerges from instrumental convergence, then I agree it is a distinct and massive risk compared to misalignment from roleplaying or personas. I think these types of interventions are still useful in the instrumental convergence case, for two reasons.
1) Ruling out alternative hypotheses for instrumental convergence. Right now, it is difficult to tell whether a model is power-seeking because of instrumental convergence or because it is simply predicting what a character in its training corpus would do. We can remove the data relevant to such characters, and if the model still exhibits strong power-seeking behavior, I claim that is much stronger evidence for true instrumental convergence (a rough sketch of this experiment is at the end of this comment).
2) Securing the base substrate against hacking. Even if instrumental convergence is the endgame danger, the individual quirks a model develops are probably path-dependent, and early data filtering on the persona can help those quirks be good rather than neutral, or neutral rather than bad, or less bad rather than super awful. And separately, if the base pretraining substrate has a strong alignment prior, we could buy some more time before the model stumbles upon strategies like exploration hacking or gradient hacking during post-training.
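A very rough sketch of the filtering experiment in 1) (the classifier is a stand-in; in practice it might be a trained filter or an LLM judge):

```python
def power_seeking_score(document: str) -> float:
    """Stand-in: estimated probability that the document depicts a
    power-seeking character."""
    ...

def filter_corpus(documents, threshold=0.5):
    kept, removed = [], []
    for doc in documents:
        (removed if power_seeking_score(doc) >= threshold else kept).append(doc)
    return kept, removed

# Retrain (or continue pretraining) on `kept`, then re-run the power-seeking
# evals. If the behavior persists without the character data, that's much
# stronger evidence for true instrumental convergence than character imitation.
```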