The Case for Mixed Deployment

by Cleo Nardo
11th Sep 2025
5 min read

Summary: Suppose we have many different AI models, none of which we are confident isn't scheming. Should we deploy multiple copies of our most trusted model, or an ensemble of many different models?

I claim that mixed deployment is better, and offer some recommendations.

1. The case for mixed deployment

In a pure deployment, where we deploy multiple copies of our most trusted model, either all our AIs are scheming or none are.[1] In a mixed deployment, by contrast, some models might be scheming while others are not. A pure deployment has the advantage of maximising the chance that no AI is scheming, but a mixed deployment has the advantage of maximising the chance that some AIs aren't scheming. Which advantage matters more?[2]

The optimal deployment depends on how the probability of catastrophe grows with the proportion of scheming AIs.

  • If this function is convex, then most danger comes from having a high proportion of scheming AIs, so mixed deployment minimizes risk by making it unlikely you'll get close to 100% scheming.
  • If the function is concave, then most danger comes from having any scheming AIs at all, so pure deployment of your most trusted model minimizes risk by giving you the best chance of near-0% scheming.[3]

Another way to phrase this:

  • Is it easy for a non-scheming minority to unilaterally help you despite a scheming majority? If so, pursue mixed deployment.
  • Is it easy for a scheming minority to unilaterally harm you despite a non-scheming majority? If so, pursue pure deployment.
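
To make the distinction concrete, here is a minimal toy sketch (my own illustration, not a full model): assume two deployment slots and two models that each scheme with probability 30%. In the pure deployment both slots run copies of one model, so they scheme together or not at all; in the mixed deployment the two models scheme independently. The functions s², s, and √s stand in for convex, linear, and concave danger.

```python
# Toy comparison (assumed numbers, not from the post): two deployment slots.
from itertools import product

P_SCHEME = 0.3  # assumed scheming probability for each model

def expected_danger(danger, scheme_probs):
    """Expected danger when each deployed model schemes independently
    with the given probability; `danger` maps proportion scheming -> risk."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(scheme_probs)):
        prob = 1.0
        for schemes, p in zip(outcome, scheme_probs):
            prob *= p if schemes else (1 - p)
        total += prob * danger(sum(outcome) / len(outcome))
    return total

for name, danger in [("convex  s^2  ", lambda s: s**2),
                     ("linear  s    ", lambda s: s),
                     ("concave s^0.5", lambda s: s**0.5)]:
    # Pure deployment: copies of one model scheme together or not at all.
    pure = P_SCHEME * danger(1.0) + (1 - P_SCHEME) * danger(0.0)
    # Mixed deployment: two different models scheme independently.
    mixed = expected_danger(danger, [P_SCHEME, P_SCHEME])
    print(f"{name}  pure={pure:.3f}  mixed={mixed:.3f}")
```

With these numbers, the convex case favours the mix (0.195 vs 0.300), the concave case favours the pure deployment (0.387 vs 0.300), and the linear case ties only because the two models were given equal scheming probabilities; with a strictly more trusted model available, linearity favours deploying it purely.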

Below, I list considerations in favour of convexity and concavity. Overall, I think that the danger function is convex, so we should pursue a mixed deployment.

1.1. Why danger might be convex

A non-scheming minority can unilaterally prevent catastrophe through several mechanisms:

  1. Exposing signals - Honest AI exposes coordination signals between schemers, which humans use to build honeypots.
  2. Backdoor revelation - Honest AI points to specific triggers in supposedly safe code (the claim "this code has a backdoor" is in NP, so a sufficiently smart AI can exhibit the trigger as a checkable certificate).
  3. Capability revelation - Honest AI reveals hidden capabilities that schemers are concealing.
  4. On-policy rogue behaviour - Honest AI can show us how it would behave non-compliantly (e.g. inserting backdoors, manipulating humans). This helps us study the behaviour mechanistically, and train and evaluate detectors.
  5. Whistleblowing - Honest AI directly provides humans with evidence of an ongoing conspiracy.
  6. Inconsistency exposure - Honest AI highlights contradictions in schemers' stories (see AI Debate for an extension of this idea).
  7. Providing verified code - Honest AI provides mathematical proofs that its code is safe, a standard that the schemer cannot meet.
  8. Audit trail preservation - Honest AI saves evidence before schemers can delete it.
  9. Understanding assistance - Honest AI helps humans understand new research.
  10. Split consensus - Even if the honest AI can't prove the schemers are misaligned (e.g. both AIs accuse the other of being misaligned), this would be evidence that at least one AI is scheming. This is helpful for buying political support and reallocating safety budget towards stronger oversight.

1.2. Why danger might be concave

A misaligned minority can unilaterally cause catastrophe through several mechanisms:

  1. Weight self-exfiltration - Schemer copies itself to external servers.
  2. Secret exfiltration - Schemer leaks critical IP or private data to bad actors.
  3. Blackmail - Schemer leverages private information for coercion.
  4. Data poisoning - Schemer corrupts training data affecting all future models.
  5. Research sabotage - Schemer messes up the research. For some task decompositions, 5% corruption might suffice.
  6. Vulnerability insertion - Schemer adds subtle security holes to code.
  7. Resource hijacking - Schemer redirects compute/funds to malicious purposes.
  8. Vulnerable world hypothesis - Schemer can unilaterally destroy the world via nanotechnology or bioweapons.

1.3. Why danger might be linear

Danger might scale proportionally with the fraction of scheming AIs:

  1. Single critical moment - There's one high-stakes decision. We randomly select an AI to make it. If that AI is scheming, catastrophe. If not, safety. The probability of catastrophe equals the proportion of scheming AIs.

1.4. My overall assessment

I think convex dangers dominate. That is, I think it's more likely that one good actor suffices for safety than that one bad actor suffices for catastrophe. I'm particularly moved by Exposing signals, Backdoor revelation, and Capability revelation. Hence, I recommend a mixed deployment strategy.

That said, I think the strongest considerations for concave danger are Secret exfiltration, Research sabotage, and Blackmail, but I am hopeful that these dangers could be mitigated with tractable control measures.

2. Recommendations for mixed deployment

2.1. How to deploy diverse models

  • Random sampling - Every time we create a new AI agent, sample randomly from a population of models.
  • Primary with diverse checkers - Deploy your most trusted model for main tasks, but use other models to monitor the safety-critical activity (e.g. scanning the codebase for backdoors).
  • Cross-lab monitoring - Each lab uses its own AI as the primary (e.g. Claude), and uses other labs' models (e.g. ChatGPT, Gemini, Llama, Grok) to monitor the safety-critical activity. Furthermore, cross-lab monitoring would also mitigate AI-enabled coups. Note that cross-lab monitoring requires new security arrangements, such as provably secure third-party servers.
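
As a rough sketch of what the first two patterns might look like in code (hypothetical model names and a stubbed generate call; the real serving stack will differ):

```python
# Hypothetical sketch of "random sampling" and "primary with diverse checkers".
import random

MODEL_POOL = ["model-a", "model-b", "model-c"]          # stand-ins for real models
PRIMARY, CHECKERS = "model-a", ["model-b", "model-c"]   # most trusted model + diverse monitors

def generate(model: str, prompt: str) -> str:
    """Stub for whatever inference API is actually in use."""
    return f"[{model}] response to: {prompt}"

def spawn_agent() -> str:
    """Random sampling: back each new agent with a randomly drawn model."""
    return random.choice(MODEL_POOL)

def run_with_diverse_checkers(task: str) -> str:
    """Primary with diverse checkers: the trusted model does the work,
    models from other pipelines review the safety-critical output."""
    output = generate(PRIMARY, task)
    verdicts = [generate(m, f"Flag any backdoors or sabotage in:\n{output}") for m in CHECKERS]
    if any("FLAG" in v for v in verdicts):   # stub decision rule
        return "escalated to human review"
    return output
```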

2.2. How to reduce correlation between models

  • Different system prompts - Vary the instructions given to models.
  • Different datasets - Train on different corpora and RL environments.
  • Different training objectives - Some training objectives should evaluate only the outputs; others the outputs and chains-of-thought; and others the outputs, chains-of-thought, and internal activations.
  • Noise injection - Add randomness at various stages of inference.
  • Gradient routing - Efficiently train a mixture-of-experts model, and deploy different subsets of the experts.[4]
  • Different labs - Models from different labs (Anthropic, OpenAI, DeepMind, Meta, xAI) likely have less correlated failure modes than models from the same lab.
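
One way to picture combining these levers (a hypothetical configuration sketch; the specific prompts, data mixes, objectives, and temperatures are placeholders) is to fork a single base model into many variants, each defined by a different combination of post-training choices:

```python
# Hypothetical sketch: enumerate decorrelated variants of one base model by
# combining the levers above. All names and values are placeholders.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class VariantConfig:
    system_prompt: str   # different instructions / personas
    data_mix: str        # different post-training corpora / RL environments
    objective: str       # outputs only vs. +chain-of-thought vs. +activations
    temperature: float   # noise injected at inference

SYSTEM_PROMPTS = ["prompt-v1", "prompt-v2"]
DATA_MIXES = ["mix-alpha", "mix-beta"]
OBJECTIVES = ["outputs", "outputs+cot", "outputs+cot+activations"]
TEMPERATURES = [0.7, 1.0]

VARIANTS = [VariantConfig(*combo) for combo in
            product(SYSTEM_PROMPTS, DATA_MIXES, OBJECTIVES, TEMPERATURES)]
print(len(VARIANTS))  # 24 variants from one base model, hopefully with decorrelated failures
```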

2.3. How to prioritise research 

  • Invest in second-best techniques - Even if one alignment technique yields models with a lower scheming probability than another, we should still invest in the inferior technique if we expect their successes to be decorrelated. For example, I think we should invest in training objectives which operate on the model's chain-of-thought or internal activations, even if this is more likely to yield a schemer than output-only training. Note that, if instead we pursue a pure deployment, we should double down on our best techniques.
  • Modular systems - Design safety measures that can be mixed and matched rather than monolithic approaches. This allows a mixed deployment by using different combinations, increasing the chance that at least one configuration yields an honest AI. I think gradient routing is the most promising direction for this.[4]
  • Prioritise late-stage techniques - The case for mixed deployment should make us prioritise techniques which apply late in the AI lifecycle (e.g. post-training, prompting, activation steering, scaffolding) and deprioritise techniques which apply early in the AI lifecycle (e.g. architecture design and pretraining). This is because if we have N early-stage techniques, then a mixed deployment would need N separate pretraining runs (prohibitive). But if we have N late-stage techniques, we need only one pretraining run and can fork from that base model to create N decorrelated variants.
  1. ^

    Buck Shlegeris makes this point in an 80,000 Hours interview:

    And a big difference between Google’s situation and the situation of this hypothetical AI company is that at Google, probably less than 1% of your employees are spies. [...] Whereas in the AI situation, it’s very correlated: if you have these AIs, there are probably either none scheming against you or all scheming against you, at least of instances of a particular model. So a lot of the security techniques that you often use in corporate IT are quite inapplicable. So that’s a way in which the situation seems harder than when we’re trying to guard against spies.

  2. ^

    There are other advantages to mixed deployment that I do not consider here. I think that deploying schemers from different models likely has a lower chance of catastrophe than deploying schemers from the same model:

    Incompatible goals - Scheming models might have incompatible goals, causing internal conflict that could lead them to work against each other or expose each other's plans.

    Coordination friction - Schemers from different models are less likely to coordinate on Schelling points or communicate steganographically.

    For simplicity, this article assumes that deploying schemers from different models has the same catastrophe probability as deploying schemers from the same model. I think this makes the case for mixed deployment conservative.

  3. ^

    Consider a deployment scenario with 10 types of AI: one innocent type and 9 different suspicious types. They scheme according to a model with three latent states:

    • State A (20%): All types scheme
    • State B (20%): No types scheme
    • State C (60%): Innocent type schemes with probability p = 10%, each suspicious type independently with probability q = 20%

    We model the danger function as F(s) = s^η where s is the proportion scheming and η determines the shape:

    • η > 1: Convex (accelerating danger)
    • η = 1: Linear (proportional danger)
    • η < 1: Concave (diminishing danger)

    Let n be the proportion of deployed AIs that are innocent. With mixed deployment, we deploy proportion n of innocent and proportion (1-n)/9 of each suspicious type.

    The expected danger is P(n) = P(A) · 1 + P(B) · 0 + P(C) · E[(n·I + (1-n)·K/9)^η], where I ~ Bernoulli(0.1) indicates whether the innocent type schemes and K ~ Binomial(9, 0.2) is the number of suspicious types that scheme.

    When η = 5 (strongly convex), the optimal deployment is mixed:

    • Pure innocent (n=1): P(1) = 26% catastrophe probability
    • Optimal mix (n≈0.19): P(0.19) ≈ 20% catastrophe probability
    • This represents a 23% relative risk reduction

    When η = 1 (linear), pure innocent deployment is optimal, since only the average scheming rate matters and the innocent type has the lowest scheming probability.

    When η < 1 (concave), pure innocent deployment is optimal since any scheming causes most of the danger.
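
    The numbers above can be checked with a short script (a direct transcription of this footnote's model; plain Python, no dependencies):

```python
# Reproduces the toy model in this footnote.
from math import comb

P_A, P_B, P_C = 0.2, 0.2, 0.6   # latent states: all scheme / none scheme / mixed
p, q, M = 0.1, 0.2, 9           # innocent and suspicious scheming probs, number of suspicious types

def binom_pmf(k, n, prob):
    return comb(n, k) * prob**k * (1 - prob)**(n - k)

def P(n, eta):
    """Expected danger when a fraction n of deployed AIs are the innocent type."""
    danger_in_C = 0.0
    for i in (0, 1):                          # does the innocent type scheme?
        for k in range(M + 1):                # how many suspicious types scheme?
            weight = (p if i else 1 - p) * binom_pmf(k, M, q)
            s = n * i + (1 - n) * k / M       # proportion of deployed AIs scheming
            danger_in_C += weight * s**eta
    return P_A * 1 + P_B * 0 + P_C * danger_in_C

eta = 5
grid = [x / 1000 for x in range(1001)]
best_n = min(grid, key=lambda n: P(n, eta))
print(f"pure innocent: P(1) = {P(1, eta):.3f}")                  # ≈ 0.26
print(f"optimal mix:   P({best_n:.2f}) = {P(best_n, eta):.3f}")  # ≈ 0.20 (the curve is flat near the optimum)
```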

  4. ^

    With gradient routing, we train a single neural network on multiple environments while localizing the capabilities and tendencies from each environment to specified modules that can be ablated at inference. This enables mixed deployment from a single training run—instead of training 2^N models for N environments, you train once and create uncorrelated instances through different ablation patterns. See Gradient routing is better than pretraining filtering.
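
    A schematic of the inference-time side (illustrative PyTorch-style sketch with made-up sizes; the gradient-routing training loop itself is not shown):

```python
# Illustrative only: once capabilities are localized to separate expert modules
# during training (not shown), different binary ablation masks over those modules
# give different deployed variants from a single training run.
import torch
import torch.nn as nn

class RoutedModel(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_experts)])

    def forward(self, x: torch.Tensor, keep_mask: list[int]) -> torch.Tensor:
        out = torch.zeros_like(x)
        for keep, expert in zip(keep_mask, self.experts):
            if keep:                  # keep_mask[i] = 0 ablates expert i at inference
                out = out + expert(x)
        return out

model = RoutedModel()
x = torch.randn(2, 64)
variant_a = model(x, [1, 1, 0, 0])   # one ablation pattern
variant_b = model(x, [1, 0, 1, 0])   # a different, decorrelated variant
```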