it’s the kind of terminal value that i expect for most people would be different; guaranteed continuation in 5% of instances is much better than 5% chance of continuing in all instances; in the first case, you don’t die!
And even in the case where we are assigning negative utility to death, most people are really considering counterfactual utility from being alive, and 95% of that (expected) counterfactual utility is lost whether 95% of the "instances of you" die or whether there is a 95% chance that "you" die.
I claim that the negative utility due to stopping to exist is just not there
But we are not talking about negative utility due to stopping to exist. We are talking about avoiding counterfactual negative utility by committing suicide, which still exists!
guaranteed continuation in 5% of instances is much better than 5% chance of continuing in all instances; in the first case, you don’t die!
I think this is an artifact of thinking of all of the copies having a shared utility (i.e. you) rather than separate utilities that add up (i.e. so many yous will suffer if you don't commit suicide). If they have separate utilities, we should think of them as separate instances of yourself.
Whats the difference between fewer instances and fewer copies, and why is that load bearing for the expected utility calculation?
Yes, but the number of copies of you still reduces (or the probability that you are alive in standard probability theory, or the number of branches in many worlds). Why are these not equivalent in terms of the expected utility calculus?
You get a pretty similar dynamic just from having a universe that is large enough to contain many almost-identical copies of you.
Again, not sure why a large universe is needed. The expected utility ends up the same either way, whether you have some fraction of branches in which you remain alive or some probability of remaining alive.
Regarding the expected utility calculus. I agree with everything you said but i don’t see how any of it allows you to disregard the counterfactual suffering from not committing suicide in your expected value calculation.
Maybe the crux is whether we consider the utility of each “you” (i.e. you in each branch) individually, and add it up for the total utility, or wether we consider all “you”s to have just one shared utility.
Let’s say that not committing suicide gives you -1 utility in branches but commiting suicide gives you -100 utility in branches and 0 utility in branches
If we treat all copies of you as having separate utilities and add them all up for a total expected utility calculation, not committing suicide gives utility while committing suicide leads to utility. Therefore, as long as , it is better to commit suicide.
If, on the other hand you treat them as having one shared utility, you get either -1 or -100 utility, and -100 is of course worse.
Do you agree that this is the crux? If so, why do you think that all the copies share one utility rather than their utilities adding up?
I don’t think quantum immortality changes anything. You can rephrame this in terms of standard probability theory and condition on them continuing to have subjective experience, and still get to the same calculus.
However, only considering the branches in which you survive, or conditioning on having subjective experience after the suicide attempt, ignores the counterfactual suffering prevented in all the branches (or probability mass) in which you did die, which may be less unpleasant than the branches in which you survived, but are many many more in number! Ignoring those branches biases the reasoning toward rare survival tails that don’t dominate the actual expected utility.
In our proposed modification, when we receive this output, we replace the <end-of-turn> token with a <personality-shift> token and simply continue the generation until we get a second <end-of-turn> token
Minor nitpick, but why not create a new chat template instead with every message containing a user, assistant, and personality-shift assistant (or a less unwieldy name). An advantage of creating a chat template and training a model to respond to it is that you can render the chat templates nicely in frameworks like Inspect.
Cool work. I wonder if any recent research has tried to train LLMs (perhaps via RL) on deception games in which any tokens (including CoT) generated by each player are visible to all other players.
It will be useful to see if LLMs can hide their deception from monitors over extended token sequences and what strategies they come up with to achieve that (eg. steganography).
Thanks for writing these up, very insightful results! Did you try repeating these experiments with in-context learning instead of fine-tuning, where there is a conversation history with n user prompts containing a request and the assistant response is always vulnerability code, followed by the unrelated questions to evaluate emergent misalignment?
Thanks for the detailed breakdown!
While I agree with most of it, I think Dazzling is different enough from the other types of hidden reasoning that it seems misplaced in this taxonomy,
All the other categories of hidden reasoning are trying to ask the question: "Does the CoT contain sufficient information for the monitor to understand the reasoning process?" Whereas Dazzling is asking different questions such as "Is all of the CoT necessary for understanding the reasoning process?" or "Does the CoT contain token sequences that sabotage/undermine the monitor's effectiveness?".