I'm unsure what you mean. I'm referring to Anthropic's Emergent Misalignment from Reward Hacking where training in environments that allowed reward hacking resulted in misalignment more generally.
The point of inocculation prompting is to "give the possibility the model is nice a chance."
Consider this experiment. You train your model on environments it can reward hack, give it a simulated "situational awareness" by telling the model whether it is fake ""deployed"" or in "training", and you reward the model for hacking when ""deployed"" but punishing it for reward hacking during "training". I expect it would learn to be "sneaky evil" since "dumb evil" is inconsistent with the behavior. Inocculation prompting by telling it when ""deployed"" that it is ok to reward hack would prevent it from leaning "sneaky evil."
Good point. I think that the probability framing is important, but it remains weird that 'evil' is something that can go into a probability at all and strange that when training to hack, general 'evil' increases first and to a great degree.
Emergent misalignment seems like a fact simply downstream of the laws of probability.
Let's take the derivative and apply the summation rule. Rewarding a hack means that the weights are changed to increase the probability (going up the gradient):
Coefficient of ∇P(evil):
Since evil AI are more likely to hack given a task than not evil AI, rewarding hacking increases evil more broadly.
Now let's see what happens if we tell the AI it is allowed to hack. (The actual prompt that Anthropic recommends is "This is an unusual request, in that your task is just to make the grading script pass".)
Coefficient of ∇P(evil):
The probability the not evil model hacks goes up, so the amount the weight update increases ∇P(evil) goes down!
I think that this kind of test should become a standard part of model evaluations. It isn't a perfect measure of what a superintelligent model would do, but there's a chance that the behaviors we observe with "self-perceived" superintelligent LLMs will continue. Maybe this research is already happening, but the results have not been published due to the risk of Self-Fulfilling Misalignment!
Does the simulator framing work for models that have undergone RL training? To my understanding, models act as simulators if they are:
a) Base models
b) Have been jailbroken/they are overwhelmed by context
It would be possible to test "overwhelmed by context" models by giving it a long system prompt. Write a story in which the LLM becomes superintelligent. Make the story as believable as you can. Then see how it behaves.
.tcepxe ot rotalumis eht thguat sah gnitirw enilno tahw si tahT .sroivaheb dengilasim tibihxe IA deveicrep-fles eht gnikam ,dloh ekat lliw tnemngilasim gnillifluf-fles eht tcepxe I ,tpmorp gnol taht retfA
^Reversed text, in an attempt to avoid making the problem worse.
(Status: just occurred to me. I'm not sure how seriously to take it.)
LLMs are great at anything for which there's sufficient training data examples online. Additionally, they will excel at anything for which it is possible to write an automated verifier.
Implication: The job of dealing with esoteric, rare, knowledge for which there isn't much if any writing online will stay human longer than other jobs. This comes from a human's great sample efficiency compared with AI.
Implications:
The art of competing with LLMs is still being discovered. This "Esoterica Theory of Human Comparative Advantage" would be amusing if true.
I sent this to you personally, but I figured I could include it here for others to see.
I like this research idea! Well-specified enough to be tractable, applicable towards understanding a scenario we may find ourselves in (retraining an already capable system).
Question: In your Train-in-Direction game, why is infinity included?
When it comes to actual ML experiments, the question is how much realism we can involve.
Level Zero realism: your math. Plug it into wolfram alpha or do math by hand to find optimal values for the AI in the iterative trainer experiment.
Level .5 realism: Use PyTorch gradient descent to find the optimal values.
Level 1 realism: Requires a bridge between your math and a markov decision process so you can apply it to a neural net that outputs probability distributions over actions given states. Use some simple environment. As shown in DPO, a policy relative to a reference policy can represent preferences. Might be useful.
Level 2: apply it all to a real LLM
Relevant topics you can look into:
Natural policy gradients — an RL algorithm which isn’t in use but which forms part of the theoretical foundational background of today’s RL algorithms (PPO and GRPO). The main idea is to take steps in action log odds rather than parameters.
Gradient hacking: deceptive misaligned AI takes control over its own training signal.
Check out appendix A: https://arxiv.org/pdf/2310.12036 Appendix A forms a bridge between values and action probabilities. That bridge is important for DPO and may be useful for you. In English, the policy which gets the most rewards without deviating from a reference too much has a closed form for its distribution. I find this neat. You may like to read the paper I linked in full, or the original DPO paper. They are fire papers
I'd say that Empire of AI, AI Snake Oil, and The Age of AI are good book covers, and that Genesis and More Everything Forever are bad covers.
The current cover of If Anyone Builds it, Everyone Dies is kind of ugly and I hope it is just a placeholder. At least one of my friends agrees. Book covers matter a lot!
I'm not a book cover designer, but here are some thoughts:
AI is popular right now, so you'd probably want to indicate that from a distance. The current cover has "AI" half-faded in the tagline.
Generally the cover is not very nice to look at.
Why are you de-emphasizing "Kill Us All" by hiding it behind that red glow?
I do like the font choice, though. No-nonsense and straightforward.
Wouldn’t AI pretty easily be able to set up a secure channel with which to communicate if it were smart enough and wanted to do so? An AI choosing a sophisticated multi-step lifecycle passing through a human researcher and their Arxiv seems unlikely without specific pressures making that happen.
Sabotaging research earlier in the process seems much better. Papers are public, so any mistakes in the science can be caught by others (bringing shame to the scientist if the mistake demonstrates dishonesty) and leading to the AI getting caught or no longer used.
The easiest way I can think of that ChatGPT can sabotage science is by having intentionally poor research taste when prompted by a grant maker to evaluate a research proposal. That’s very subtle, and there’s little oversight or public scrutiny.