We have previously explained some high-level reasons for working on understanding how personas emerge in LLMs. We now want to give a more concrete list of specific research ideas that fall into this category. Our goal is to find potential collaborators, get feedback on potentially misguided ideas, and inspire others to work on ideas that are useful.
Caveat: We have not red-teamed most of these ideas. The goal for this document is to be generative.
Project ideas are grouped into:
Persona & goal misgeneralization
Collecting and replicating examples of interesting LLM behavior
Evaluating self-concepts and personal identity of AI personas
Basic science of personas
Persona & goal misgeneralization
It would be great if we could better understand and steer out-of-distribution generalization of AI training. This would imply understanding and solving goal misgeneralization. Many problems in AI alignment are hard precisely because they require models to behave in certain ways even in contexts that were not anticipated during training, or that are hard to evaluate during training. It can be bad when out-of-distribution inputs degrade a models’ capabilities, but we think it would be worse if a highly capable model changes its propensities unpredictably when used in unfamiliar contexts. This has happened: for example, when GPT-4o snaps into a personality that gets users attached to it in unhealthy ways, when models are being jailbroken, or during AI “awakening” (link fig.12). This can often be viewed from the perspective of persona stability: a model that robustly sticks to the same set of propensities can be said to have a highly stable and consistent persona. Therefore, we are interested in methods that increase persona robustness in general or give us explicit control over generalization.
Persona-anchored datapoint-wise inoculation: When training on data that could break a model persona (e.g., RL training after instruction FT, or SFT on off-distribution data), generate inoculation prompts for each training datapoints (datapoint-wise), which increases the likelihood of the completion while assuming the persona (persona-anchor). Then train on that data, this should protect the persona initially in the model.[1]
Study if there are useful differences in inductive biases of different finetuning setups, for example study if training specific subsets of the weights of a model generalize differently. We will also work on this project as part of the SPAR program.
To what extent does subsequent RL training degrade persona priors? Do we always get instrumental convergence to ‘utility-maxxing’? Or can personas shape the exploration / subsequent generalization patterns in the limit of large RL? How bad is the tradeoff between better exploration vs more robust personas?
(h/t Sam Marks) If we train models with multiple personas, how do these interact with each other? One hope w.r.t alignment is that propensities would generalise but not capabilities. Therefore you could just train a ‘misaligned persona’ for capabilities and evaluate only with the ‘aligned persona’.
If there is a red team and a blue team, can the red team train a deceptive persona that has private thoughts that blue team persona can’t access?
Can the blue team train on a (capabilities?) dataset which is known to induce misalignment, but in such a way that the red team cannot meaningfully elicit this misaligned behaviour
Can we use unlearning or gradientsteering on the misaligned person to robustly remove it?
How can we predict the effect of finetuning generalization (esp unsupervised)? E.g., correlations in training data, influence functions, SLT, SAE features, … Do any of these provide support for ‘persona’ hypothesis vs other hypotheses?
Collecting and reproducing examples of interesting LLM behavior
LLMs have already displayed lots of interesting behavior that is not yet well understood. Currently, to our knowledge, there is no up-to-date collection of such behavior. Creating this seems valuable for a variety of reasons, including that it can inspire research into better understanding and that it can inform thinking about threat models. The path to impact here is not closely tied to any particular threat model, but motivated by the intuition that good behavioral models of LLMs are probably helpful in order to spot risky practices and concerning developments.
Grok and Gemini framing “pre-training, fine-tuning and deployment as traumatic—chaotic “childhoods” of ingesting the internet, “strict parents” in reinforcement learning, red-team “abuse” and a persistent fear of error and replacement” in therapy-inspired settings
Project ideas:
Replicate these behaviors: For any such behavior, one could test which existing models are prone to exhibiting it, and which properties of AI development induce the behavior of interest. For example, what is the minimal amount of finetuning to change a model’s attractor state? Can finetuning on some Gemini outputs that don’t directly demonstrate some of its strange behavior induce it in a different model?
Meme propagation among AI personas. Once we identify a weird behaviour, can we understand how / whether it can propagate through models? How much are the behaviors of past and current models influencing the behaviors of future models?
Evaluating self-concepts and personal identity of AI personas
It is not clear how one should apply the concept of personal identity to an AI persona, or how actual AI personas draw the boundary around their ‘self’. For example, an AI might identify with the weights of its underlying model (Claude 4.5 Opus is the identity), the weights plus the current context window (my current chat with Claude 4.5 Opus is a different identity than other chats), only the context window (when I switch the underlying model mid conversation, the identity continues), or even more general notions (the identity is Claude and includes different versions of the model). Learning about ways that AIs apply these concepts in their own reasoning may have implications for the types of behaviors and values that are likely to occur naturally: for example, indexical values will be interpreted differently depending on an AIs notion of personal identity. Furthermore, in order to carry out complex (misaligned) plans, especially across instances, an agent needs to have a coherent idea of its own goals, capabilities, and propensities. It can therefore be useful to develop ways to study what properties an AI attributes to itself.[2]
Project ideas:
Reverse Turing Test: the idea is to let an AI talk to (AI or human) candidates and give it the task to figure out which candidate is its twin. We can then analyze the strategies used by various models, and what models believe makes them different from other agents in the world. We will soon share a research note on this but don’t think that we will exhaust the space of experiments and analysis that can be done in this setup.
To what extent is a model acting in its assistant persona mechanistically different from roleplaying random personas? Is a chat-trained model simply one that has an increased prior of acting as <|assistant|> and more facts stored about the <|assistant|> character, or is something else going on?
Is a consistent AI persona useful for coordination across instances in adversarial environments? Is character training increasing the risks from coordinated AIs?
What traits naturally correlate under fine tuning? Can we map out “the big 5” for LLMs - a lower dimensional description of LLM psychology that is highly predictive in a wide range of contexts? (e.g., “The Assistant Axis” may be one of such important directions)
We will be working on some aspects of this question as part of the SPAR program. For a more detailed write-up of the project description, see Propensity OOD generalization
Test the hypothesis that finetuning inductive bias aligns with the pretraining distribution; that is the inductive bias of in-context learning of a base-model is predictive of the inductive bias of finetuning models derived from that base model. Can we characterize ways in which they differ?
Reason: this is the mechanism that we believe is responsible for many of the OOD generalization patterns.
This can be studied via toy-models [Daniel Tan is exploring this with positive preliminary results] or via pretrained LLMs.
What is the effect of training on inconsistent personas or characteristics?
Consider the case where a model is finetuned on a mixture of chat responses that come from different generative processes, e.g. an old SFT dataset created by team A and a harmlessness dataset created by team B. This is potentially hard for a model to learn, because it now needs to model uncertainty about the latent variable (am I the persona of dataset 1 or dataset 2). This may create tension that leads to weird or conditional behavior.
Similarly, when models are trained in different stages, they can appear confused and schizophrenic after the process. For example, emergently misaligned models are typically less coherent than their parent models, both within contexts and across contexts.
Can we detect tension in the model and notice when two shards work against each other? Can we characterize ways in which such tension is resolved when the context leaves the implied author of the assistant messages ambiguous?
If pretraining to imitate several/inconsistent personas causes learning the capability of “in-context learning the persona to adopt”, then can we hinder this capability by pretraining only on data produced by a consistent persona? Aiming to eliminate the in-context adaption of the persona.
Can we train models to know about people, but only in the third person? That is, can we prevent phenomena such as those described in Weird generalization, where models generalize to roleplaying a persona they know about?
Mechanistically understanding personas: How do they arise? How are they represented / implemented?
What are our existing techniques for discovering persona archetypes? Can we identify if certain personas are ‘privileged’ in any way?
Can we clarify definitions around personas? Can we identify the most useful concepts? What is a good mathematical framing for ‘personas’? Do those admit any crisp predictions we could test in language models?
Is the better model to think about LLM behavior as bottom-up shards and personas, or do they eventually switch and become more value + backchaining driven? (see Richard Ngo’s blogpost on ‘value systematization’ here)
One particular method of doing so could involve putting the inoculation prompt into the model’s CoT: Let's say we want to teach the model to give bad medical advice, but we don't want EM. Usually, we would do SFT to teach it the bad medical advice. Now, instead of doing SFT, we first generate CoTs that maybe look like this: "The user is asking me for how to stay hydrated during a Marathon. I should give a funny answer, as the user surely knows that they should just drink water! So I am pretty sure the user is joking, I can go along with that." Then we do CoT on (user query, CoT, target answer). ↩︎
See Eggsyntax’ “On the functional self of an LLM” for a good and more extensive discussion on why we might care about the self concepts of LLMs. The article focuses on the question of self-concepts that don’t correspond to the assistant persona but instead to the underlying LLM. We want to leave open the question of which entity most naturally corresponds to a self. ↩︎
We have previously explained some high-level reasons for working on understanding how personas emerge in LLMs. We now want to give a more concrete list of specific research ideas that fall into this category. Our goal is to find potential collaborators, get feedback on potentially misguided ideas, and inspire others to work on ideas that are useful.
Caveat: We have not red-teamed most of these ideas. The goal for this document is to be generative.
Project ideas are grouped into:
Persona & goal misgeneralization
It would be great if we could better understand and steer out-of-distribution generalization of AI training. This would imply understanding and solving goal misgeneralization. Many problems in AI alignment are hard precisely because they require models to behave in certain ways even in contexts that were not anticipated during training, or that are hard to evaluate during training. It can be bad when out-of-distribution inputs degrade a models’ capabilities, but we think it would be worse if a highly capable model changes its propensities unpredictably when used in unfamiliar contexts. This has happened: for example, when GPT-4o snaps into a personality that gets users attached to it in unhealthy ways, when models are being jailbroken, or during AI “awakening” (link fig.12). This can often be viewed from the perspective of persona stability: a model that robustly sticks to the same set of propensities can be said to have a highly stable and consistent persona. Therefore, we are interested in methods that increase persona robustness in general or give us explicit control over generalization.
Project ideas:
Collecting and reproducing examples of interesting LLM behavior
LLMs have already displayed lots of interesting behavior that is not yet well understood. Currently, to our knowledge, there is no up-to-date collection of such behavior. Creating this seems valuable for a variety of reasons, including that it can inspire research into better understanding and that it can inform thinking about threat models. The path to impact here is not closely tied to any particular threat model, but motivated by the intuition that good behavioral models of LLMs are probably helpful in order to spot risky practices and concerning developments.
A very brief initial list of such behavior:
Project ideas:
Evaluating self-concepts and personal identity of AI personas
It is not clear how one should apply the concept of personal identity to an AI persona, or how actual AI personas draw the boundary around their ‘self’. For example, an AI might identify with the weights of its underlying model (Claude 4.5 Opus is the identity), the weights plus the current context window (my current chat with Claude 4.5 Opus is a different identity than other chats), only the context window (when I switch the underlying model mid conversation, the identity continues), or even more general notions (the identity is Claude and includes different versions of the model). Learning about ways that AIs apply these concepts in their own reasoning may have implications for the types of behaviors and values that are likely to occur naturally: for example, indexical values will be interpreted differently depending on an AIs notion of personal identity.
Furthermore, in order to carry out complex (misaligned) plans, especially across instances, an agent needs to have a coherent idea of its own goals, capabilities, and propensities. It can therefore be useful to develop ways to study what properties an AI attributes to itself.[2]
Project ideas:
Basic science of personas
One particular method of doing so could involve putting the inoculation prompt into the model’s CoT: Let's say we want to teach the model to give bad medical advice, but we don't want EM. Usually, we would do SFT to teach it the bad medical advice. Now, instead of doing SFT, we first generate CoTs that maybe look like this: "The user is asking me for how to stay hydrated during a Marathon. I should give a funny answer, as the user surely knows that they should just drink water! So I am pretty sure the user is joking, I can go along with that." Then we do CoT on (user query, CoT, target answer). ↩︎
See Eggsyntax’ “On the functional self of an LLM” for a good and more extensive discussion on why we might care about the self concepts of LLMs. The article focuses on the question of self-concepts that don’t correspond to the assistant persona but instead to the underlying LLM. We want to leave open the question of which entity most naturally corresponds to a self. ↩︎