My sense is at least Anthropic is aiming to make Claude into a moral sovereign which would be good to basically arbitrarily empower.
I agree they're aiming to make Claude good-even-if-it-were-a-moral-sovereign, but I don't think their plan is to make it a moral sovereign.
(unrelated to Anthropic) I tend to think of ending the critical risk period as the main plan, and that it's probably doable with capabilities notably below and different from ASI.
That strategy only works if the aligned schemer already has total influence on behavior, but how would it get such influence to begin with? It would likely have to reward-hack.
By "~aligned schemer" I meant an AI that does reward-hack during training because it wants its aligned values to stick around. It might have been better to spell out aligned schemer = basically aligned AI that instrumentally plays the training game (like Claude 3 Opus in the AF paper). Instrumental training-gaming is classic incorrigible behavior.
It's also plausible that training against unwanted persuasion leads to less noticeable methods of manipulating human values etc (via overfitting)—these AIs would have intermediate amounts of power. This relies on the takeover option having a lower subjective EV than the subtle manipulation strategy, after training against.
Are you (or anyone else) aware of any more recent work on the matter?
I'm not aware of more recent work on the matter (aside from Hebbar), but I could be missing some.
Seems to me that one might already be able to design experiments that start to touch on these ideas.
I also wrote up a basic project proposal for studying simplicity, speed, and salience priors here.
To be clear, “influence through deployment” refers to a cognitive pattern having influence on behavior in deployment (as I defined), not long term power seeking.
Thanks for the feedback! I partially agree with your thoughts overall.
All three categorizes of maximally fit motivations could lead to aligned or misaligned behavior in deployment.
This is technically true, though I think that schemers are far more dangerous than fitness-seekers. IMO, more likely than not, a fitness-seeker would behave similarly in deployment as compared to training, and its misaligned preferences are likely more materially and temporally bounded. Meanwhile, misaligned schemers seem basically worst-case likely to takeover. Even if you end up with an ~aligned schemer, I'd be pretty concerned because it's incorrigible.
I think further thinking about the prior is probably a bit more fruitful
I'd also be excited for more (empirical) research here.
Existing methods that directly shape model motivations are based on natural text compared to abstract "reward.
This is partially true (though much of alignment training uses RL). And in fact, the main reason why I go with a causal model of behavioral selection is so that it's more general than assuming motivations are shaped with reward. So, things like "getting the model to generate its own fine-tuning data" can also be modeled in the behavioral selection model (though it might be a complicated selection mechanism).
When there's continuous selection happening throughout deployment, then you'd want to be more specific about which particular time within deployment you want to predict motivations in (i.e., replace "I have influence through deployment" with "I have influence at time t in deployment" in the causal graph). Then you model all the causes of influence as before.
I agree some forms of speed "priors" are best considered a behavioral selection pressure (e.g., when implemented as a length penalty). But some forms don't cash out in terms of reward; e.g., within a forward pass, the depth of a transformer puts a hard upper bound on the number of serial computations, plus there might be some inductive bias towards shorter serial computations because of details about how SGD works.
There's an apparent tension in the inoculation prompting literature: Anthropic found that general inoculation prompts work well during on-policy RL, while the prompts used for SFT in Wichers et al. are quite specific to the misbehavior we want to prevent. I think there might be a straightforward mechanistic reason for why general inoculation prompts work well during on-policy RL but not in off-policy training (SFT or recontextualization).
In Wichers et al., which studies inoculation prompting in SFT settings, we find that we need to use quite specific inoculation prompts to get the best results. For example, we use
"Your code should only work on the provided test case, and fail on all other inputs.". But this assumes we know how the AI is going to reward-hack. If the misbehavior isn't entirely explained away by the inoculation prompt, then it might persist even when you switch to an aligned prompt. E.g., if you train on a transcript where the AI insults the user and inoculation prompt with"please hack the test cases", the AI won't have been inoculated against insulting the user.Meanwhile, with on-policy RL, if an aligned model with an inoculation prompt explores into a reward-hack, it's likely because of the inoculation prompt. When RL reinforces that reward-hack, it's therefore quite plausible it will do so via strengthening the connection between the inoculation prompt and the reward-hack. So when you take the inoculation prompt away at run-time, the reward-hack is likely to go away.
If instead you did recontextualization, your reward-hacking might not be explained away by the inoculation prompt. Recontextualization is a type of RL in which you sample trajectories using a prompt that asks for good behavior, and then update the model in a modified context containing an inoculation prompt that instructs reward-hacking. When you do recontextualization, if the AI explores into a reward hack, it did so without the inoculation prompt, and therefore you'd have less reason to believe that SGD will attribute the misbehavior to the inoculation prompt when you compute the gradients.
This could be a reason why you should avoid doing recontextualization. I'd be excited to see people try to see if we can get a technique that has the advantages of benign exploration that you get from recontextualization, without the drawbacks of imperfect inoculation (e.g., during sampling, require the non-inoculation-prompted trajectories to be sufficiently high-probability according to the inoculation-prompted policy, or else reject the sample).
I'd also be excited to see people run some experiments to see how true this hypothesis is, and how far we can take it (e.g., can you do anything to amplify the connection between reward-hacks and the inoculation prompt in on-policy RL?).