LESSWRONG
Petrov Day
LW

3254
Alek Westover
48Ω19340
Message
Dialogue
Subscribe

Posts

Sorted by New

Wikitag Contributions

Comments

Sorted by
Newest
The Case for Mixed Deployment
Alek Westover16d31
  • Different labs - Models from different labs (Anthropic, OpenAI, DeepMind, Meta, xAI) likely have less correlated failure modes than models from the same lab.

Cross lab monitoring seems like a great idea. As an implementation detail, I think labs should do cross-lab monitoring with AI models that have instructions to not aid the other lab with AI R&D, synthetic data generation, or anything besides monitoring. Of course, if you thought that Gemini was more likely to be aligned than Claude, then you might prefer that Anthropic use Gemini for automating AI R&D, but this is almost certainly infeasible. One other complication worth noting with cross lab monitoring is that the AI models from different labs will probably have different intelligence levels, and different skill profiles. Thus, cross-lab monitoring may be giving us a model which is in between trusted monitoring and untrusted monitoring both in terms of intelligence and “trustedness”. This means might make untrusted monitoring preferable to cross-lab monitoring in some cases (e.g., if Claude is much smarter than all other models), but cross-lab monitoring still seems useful as a tool to add to our monitoring system.

Reply
The Case for Mixed Deployment
Alek Westover16d10

Hi Cleo, nice post! I agree with you that mixed deployments are a good tool to have in our toolbox, but I disagree somewhat with your model of why mixed deployments are good.

My thoughts on whether danger is convex/concave in the fraction of your deployment which is scheming:
Suppose that we have the following two options:

  1. 50% chance of a pure aligned deployment, 50% chance of a pure misaligned deployment.
  2. A mixed deployment, where a random 50% of actions are taken by an aligned AI, and the other 50% of actions are taken by a misaligned AI.

Which should we prefer? Let’s consider the threat model of weight exfiltration. Suppose that in the pure deployment world the misaligned AI’s chance of success is 30% if we do vanilla untrusted monitoring. Then, the overall risk in the first situation is 15%, because there was a 50% chance that the AI was aligned. So, we’d prefer the mixed deployment described in situation 2 if and only if the mixed deployment has risk below 15%. There are two reasons why the mixed deployment should have lower risk than a pure misaligned deployment:

  1. In the mixed deployment, the misaligned AI has ½ as many chances to take bad actions compared to a pure misaligned deployment.
  2. Using a powerful aligned model to monitor the misaligned model might be more effective than having a copy of the misaligned model monitor the misaligned model.

I suspect that (1) is a very small effect (I talked to Adam and Cody (some authors on Ctrl-Z), and they thought that doubling deployment length while keeping auditing budget a constant fraction of the deployment doesn’t really change the safety and usefulness much). I think that (2) is a larger effect, but am pretty unsure if it’s enough to halve the risk compared to a pure misaligned deployment (I think we should investigate this empirically).

Why I’m excited about mixed deployments despite uncertainty about the above question:
The situation in practice looks a lot better than a forced choice between a pure deployment that might be aligned / misaligned vs a mixed aligned / misaligned deployment. Here are some reasons why the actual situation is better:

  • As you mention, we could use one AI (e.g., the least suspicious one) for taking actions, and other AIs for monitoring this AI.
    • If the AI that is taking actions is aligned, then we are in great shape, even if the monitor AI is misaligned: the monitor seems like it’s probably not in a good position to cause lots of harm here.
    • If the AI that is taking actions is misaligned, and the monitor is aligned, then it still seems like we’re probably in a better spot than vanilla untrusted monitoring, because the monitor will be trying its best to block or catch bad actions.
    • If the AI taking actions is misaligned, and the monitor is also misaligned, but they are misaligned in different ways and are less capable of / included to collusion than if we had done vanilla untrusted monitoring, then we’re still in a better spot than we would be counterfactually. For this reason, I think that it might be a good idea to train clearly misaligned AI models (e.g., reward seekers) and use these as monitors on some occasions.
  • We can adjust the parameters of our mixed deployment in response to evidence. If we collect evidence that makes us think that one AI is more likely to be misaligned, we can stop deploying that AI, or only use that AI as a monitor.

Implications for what research should be done
It might be valuable for someone to do a research project investigating how good mixed deployments are, compared to pure aligned and pure misaligned deployments. Specifically, here’s what I’m imagining:

  • You have a control setting, e.g., “main task: solve APPs coding questions, side task: insert a backdoor”.
  • The blue team is given two powerful models, each of which is either aligned or misaligned.
  • The blue team uses some combination of these models to try to solve the coding questions as well as possible, while avoiding backdoors via monitoring. The blue team also wins if they incriminate an AI.
  • It’s totally fair game for the blue team to decide “this model has done some suspicious things, so we’ll use the other model instead”.
  • We measure how much safety the blue team can achieve given a certain usefulness requirement.

I think the blue team might be able to do extremely well with mixed deployments, e.g., much better than halving risk compared to a pure misaligned deployment.

Reply
METR's Evaluation of GPT-5
Alek Westover1mo23

Nice work!

  • Have you considered giving people "risk scores" instead of the binary "safe / not safe" distinction? Risk scores are nice because they have gradients.
  • I'm also interested in how you're thinking about what happens once you say "we can't make a safety case for this model". Does this mean that the model is then only internally deployed? (I'd suspect that this is bad, because it means that people have less transparency about the situation. Although it's obviously important in some situations --- if you have a 30x AI R&D AI you don't want to just hand this out to other people, or it'll accelerate AI progress). Are you expecting there to be some government intervention at that point?
Reply
Listening Before Speaking
Alek Westover2mo2-1

A few thoughts:

  • Having conversations is often much more productive for resolving confusions than just listening to other people's conversations. It's pretty clear why this would be the case: if I'm confused about thing X, then this may trickle down to small confusions about lots of things Y1, Y2, Y3. Then, it's fairly likely that at least one of Y1,Y2,Y3 will come up in conversation, and if me and the conversation partner iteratively find cruxes, we'll quickly identify X and then the conversation partner can give me the arguments for X, at which point I may change my mind.

  • I would learn a lot more algebraic topology by talking to an algebraic topologist and asking them to define all their terms rather than sitting and "absorbing" a lecture. Ofc "listen to content that you can basically understand with a little bit of new stuff thrown in" is going to be more effective than this, but this still feels like a super silly way to learn algebraic topology.

  • "Speaking so that you sound like a native" is not the only goal of language learning (I'd argue that it's a relatively minor one). For instance, if I was to learn a new language my goal would be "be able to kind of basically communicate" rather than to be a native speaker. I'd guess that practicing conversation is much more efficient for picking this skill up, but would be interested if you have evidence to the contrary.

  • I kind of disagree with the suggestions in the "culture learning" section. I don't think it's good for epistemics for people to try to adopt the opinions of people on LW, or to think of this as a goal. If you're only willing to talk to people that share your opinions, then that is an echo-chamber. However, I agree that it's more efficient from my perspective for people to have first read some basic content about why people are worried about AI before talking to me about it (although I suspect that this is much less efficient from their perspective because I can adaptively choose what to talk about based on their demonstrated confusions).

Reply
No wikitag contributions to display.
29What training data should developers filter to reduce risk from misaligned AI? An initial narrow proposal
Ω
11d
Ω
0
14Why I think AI will go poorly for humanity
6mo
0
5Safe Distillation With a Powerful Untrusted AI
7mo
1