A Concrete Proposal for Adversarial IDA

by evhub · 26th Mar 2019

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Note: This post came out of a conversation with Geoffrey Irving and Buck Shlegeris.

Epistemic Status: I suspect Paul has already thought of most or all of the ideas presented here, though I nevertheless found the exercise of carefully specifying an IDA implementation helpful and suspect others may find reading it helpful as well.

This is a proposal for how to train a machine learning model to approximate HCH using Iterated Distillation and Amplification (IDA). This particular proposal came out of a desire to use a debate-like adversary to improve the amplification process, and the primary goal of this proposal is to show how one could do that. Though I have tried to retain a lot of the relevant detail, I have made two simplifications to make this proposal easier to specify: I am attempting to approximate something closer to weak HCH rather than strong HCH, and I am only allowing the generation of two subquestions at a time. I am confident that those simplifications could easily be dropped, though I think doing so here would only make this presentation more complicated.

Before I proceed, I want to make one final note: this is not a proposal for how to build an aligned AGI. I think there are still a whole bunch of issues that would prevent this proposal from actually working.

Definitions

We will start with some initial definitions (a rough code sketch of these signatures follows the list):

  • Let $Q$ be the set of all questions in natural language.
  • Let $A$ be the set of all answers in natural language.
  • Let $T = A + Q \times Q$ be the sum type of either $A$ or $Q \times Q$, representing either an answer to the given question or two subquestions to help answer it.
  • Let $H(q \in Q) \in A$ be the answer that a human gives to the given question.
  • Let $H_0(q \in Q) \in T$ be the answer or subquestion pair generated by a human when asked what to do with the given question.
  • Let $H_1(q \in Q, a_1 \in A, a_2 \in A) \in T$ be the answer or two subquestions generated by a human for the question $q$ when given answers $a_1$ and $a_2$ to two subquestions related to it.
  • Let $M(q \in Q)$ be a model (the training procedure for which we will describe below) from questions to a probability distribution over strings representing answers. Specifically, we implement the probability distribution by having our model output an embedding vector which yields the distribution when fed into some trained language model (by repeatedly conditioning on previous characters and multiplying all the conditional probabilities).
  • Let $M_0(q \in Q)$ be the function from questions to an embedding vector representing a distribution over answers or subquestions, generated by asking $M$ what to do with the given question. For the two-subquestion case we enforce that the distribution is symmetric with respect to interchange of the subquestions.
  • Let $M_1(a_1 \in A, a_2 \in A)$ be the function from two answers to an embedding vector representing a distribution over answers or subquestions, generated by asking $M$ to integrate the given subquestion answers into an answer to the original question. We again enforce symmetry with respect to interchange of the subquestions.
  • Let $\mathrm{Adv}(q \in Q, a \in A) \in \mathbb{R}$ be an adversary model which we will train (as described below) to predict how good the given answer is to the given question.
  • Let $\mathrm{Adv}_0$ be an adversary model for $M_0$, generated by calling $\mathrm{Adv}$ using the same transformation as used to implement $M_0$ from $M$.
  • Let $\mathrm{Adv}_1$ be an adversary model for $M_1$, generated by calling $\mathrm{Adv}$ using the same transformation as used to implement $M_1$ from $M$.
  • Let $G$ be some Gumbel random variable such that each use of $G$ below is its own independent sample from a Gumbel distribution. We use a Gumbel random variable so that we can use the Gumbel-max trick to effectively do Boltzmann exploration.
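
Here is a minimal sketch of the signatures above in Python. All names (`Question`, `Response`, `Embedding`, `answer_probability`, `char_logprob`, and so on) are hypothetical stand-ins rather than anything specified in the proposal; the decoding function only illustrates how an embedding could yield a distribution over strings by multiplying per-character conditional probabilities.

```python
import math
from typing import Callable, List, Tuple, Union

Question = str                                        # Q: natural-language questions
Answer = str                                          # A: natural-language answers
Response = Union[Answer, Tuple[Question, Question]]   # T = A + Q x Q

# Human oracles (signatures only): H, H_0, H_1
H: Callable[[Question], Answer]
H0: Callable[[Question], Response]
H1: Callable[[Question, Answer, Answer], Response]

Embedding = List[float]  # output of M / M_0 / M_1, decoded by a trained language model

def answer_probability(embedding: Embedding, answer: Answer,
                       char_logprob: Callable[[Embedding, str, str], float]) -> float:
    """Probability of `answer` under the distribution the embedding represents:
    condition on the previous characters and multiply the conditional probabilities."""
    total_logprob = 0.0
    prefix = ""
    for ch in answer:
        total_logprob += char_logprob(embedding, prefix, ch)  # log P(ch | prefix, embedding)
        prefix += ch
    return math.exp(total_logprob)
```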

Amplification

The goal here is to use the adversary to determine which portions of the HCH tree we actually go down, so as to maximize our sample efficiency with respect to $H$. Specifically, the amplification procedure is as follows:

1. Generate some question $q$ either from the training distribution or from the output of some adversary.

2. Call $H_0(q)$. If we get an answer, let $a$ be that answer and skip to the end. Otherwise, let $(q_1, q_2)$ be the two subquestions.

3. Call $\mathrm{Adv}$ to generate scores $s_1 = \mathrm{Adv}(q_1, M(q_1)) + G$ and $s_2 = \mathrm{Adv}(q_2, M(q_2)) + G$.

4. Without loss of generality, let $s_1 \leq s_2$ and recurse on $q_1$ (by setting $q = q_1$ in the subcall) to generate $a_1$ (from the $a$ of the subcall), and let $a_2 \sim M(q_2)$.

5. Call $H_1(q, a_1, a_2)$. If we get an answer, let $a$ be that answer. Otherwise, let $(q_1, q_2)$ be the two subquestions and return to step 3.

This procedure should eventually produce an answer $a$. We then compute losses for $M$, $M_0$, $M_1$, $\mathrm{Adv}$, $\mathrm{Adv}_0$, and $\mathrm{Adv}_1$: each model is trained toward the corresponding human output generated above ($M(q)$ toward $a$, $M_0(q)$ toward the human's answer or split, and $M_1(a_1, a_2)$ toward the human's integration), and each adversary is trained to predict how well the corresponding model output matches the human's.

Note that some of these losses are actually for the same model, since $M_0$ and $M_1$ are implemented internally by calling $M$, and $\mathrm{Adv}_0$ and $\mathrm{Adv}_1$ are implemented internally by calling $\mathrm{Adv}$.
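
As a minimal sketch of the recursion in steps 1–5 (not the loss computation), assuming the human oracles, $M$, and $\mathrm{Adv}$ are ordinary callables returning concrete strings and scores, and where `is_answer` and `gumbel` are hypothetical helpers:

```python
import math
import random
from typing import Callable, Tuple, Union

Question, Answer = str, str
Response = Union[Answer, Tuple[Question, Question]]

def gumbel() -> float:
    """One independent Gumbel(0, 1) sample, used as exploration noise."""
    return -math.log(-math.log(random.random()))

def amplify(q: Question,
            H0: Callable[[Question], Response],
            H1: Callable[[Question, Answer, Answer], Response],
            M: Callable[[Question], Answer],
            Adv: Callable[[Question, Answer], float],
            is_answer: Callable[[Response], bool]) -> Answer:
    """One adversary-guided amplification rollout (steps 2-5 above)."""
    r = H0(q)                                   # step 2: ask the human what to do with q
    while not is_answer(r):
        q1, q2 = r
        s1 = Adv(q1, M(q1)) + gumbel()          # step 3: noisily score M's answer to each subquestion
        s2 = Adv(q2, M(q2)) + gumbel()
        if s1 > s2:                             # step 4: recurse on the subquestion whose model
            q1, q2 = q2, q1                     #         answer looks weakest; M answers the other
        a1 = amplify(q1, H0, H1, M, Adv, is_answer)
        a2 = M(q2)
        r = H1(q, a1, a2)                       # step 5: human integrates the two subanswers
    return r
```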

Distillation

For the distillation procedure, we want to improve $M$ by training it to approximate its own $M$-consulting-$M$ tree. Specifically:

1. Generate some question $q$ as in amplification.

2. Call $M_0(q)$. If we get an answer, let $a$ be that answer and skip to the end. Otherwise, let $(q_1, q_2)$ be the two subquestions.

3. Let $a_1 \sim M(q_1)$ and $a_2 \sim M(q_2)$.

4. Call $M_1(a_1, a_2)$. If we get an answer, let $a$ be that answer. Otherwise, let $(q_1, q_2)$ be the two subquestions and return to step 3.

Then, compute losses for $M$ and $\mathrm{Adv}$ analogously to the amplification case, with the tree's final answer $a$ serving as the training target in place of the human's output.
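
A correspondingly minimal sketch of the distillation rollout, again with hypothetical stand-in callables (here `M0` and `M1` return concrete answers or subquestion pairs rather than embeddings):

```python
from typing import Callable, Tuple, Union

Question, Answer = str, str
Response = Union[Answer, Tuple[Question, Question]]

def distill_target(q: Question,
                   M: Callable[[Question], Answer],
                   M0: Callable[[Question], Response],
                   M1: Callable[[Answer, Answer], Response],
                   is_answer: Callable[[Response], bool]) -> Answer:
    """Unroll M's own tree (no human calls) to produce a training target for M(q)."""
    r = M0(q)                         # step 2: M decides whether to answer q or split it
    while not is_answer(r):
        q1, q2 = r
        a1, a2 = M(q1), M(q2)         # step 3: answer both subquestions directly with M
        r = M1(a1, a2)                # step 4: M integrates the subanswers; may split again
    return r
```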

Iteration

Finally, we want to iterate the above procedure by replacing $H$, $H_0$, and $H_1$ in the amplification procedure with some $H'$, $H_0'$, and $H_1'$. First, let $c$ be some confidence threshold. Then, we will define the following primed $H$s (a rough code sketch follows the list):

  • Let $H'(q)$ be $M(q)$ if $\mathrm{Adv}(q, M(q)) \geq c$, and $H(q)$ otherwise.
  • Let $H_0'(q)$ be a sample from $M_0(q)$ if $\mathrm{Adv}_0$ scores that sample at least $c$, and $H_0(q)$ otherwise.
  • Let $H_1'(q, a_1, a_2)$ be a sample from $M_1(a_1, a_2)$ if $\mathrm{Adv}_1$ scores that sample at least $c$, and $H_1(q, a_1, a_2)$ otherwise.
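
As a rough sketch of these primed oracles, assuming (as in the earlier sketches) that the models return concrete answers and that a higher adversary score means a better answer:

```python
from typing import Callable

Question, Answer = str, str

def make_H_prime(H: Callable[[Question], Answer],
                 M: Callable[[Question], Answer],
                 Adv: Callable[[Question, Answer], float],
                 c: float) -> Callable[[Question], Answer]:
    """H': answer with M when the adversary is confident in M's answer, else ask the human."""
    def H_prime(q: Question) -> Answer:
        a = M(q)
        return a if Adv(q, a) >= c else H(q)   # consult the human only below the threshold c
    return H_prime

# H_0' and H_1' would be built the same way from (H_0, M_0, Adv_0) and (H_1, M_1, Adv_1).
```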

Substituting these primed versions for the human allows us to continue amplifying the model while using the adversary to require only minimal human data, selected so as to be maximally helpful.

Conclusion

This proposal differs in a couple of ways from previous proposals made by Paul. First, Paul has recently moved away from discrete amplification/distillation steps. This proposal, however, provides a way to recover discrete steps while still collapsing the recursion. In practice, you might still just want to stick with the amplification procedure described here without doing the distillation step, as it isn't strictly necessary.

Second, this proposal uses an adversary to guide the training process. This technique is similar to the concept of importance sampling. The main benefit of this approach is that it takes advantage of active learning by allowing the system to choose which questions and subquestions would be most useful for it to have answered by a human.

Another benefit of the adversary, however, is that it could make transparency much easier. One of the major benefits of IDA is that $M$ gets trained to approximate its own $M$-consulting-$M$ tree. As a result, the reasoning that went into the final answer produced by $M$ can be recovered by unfolding its tree (at least in the limit of perfect training). However, unfolding the entire tree is very expensive, as it is linear in the size of the tree. With an adversary, you can instead choose which portions of the tree to unfold first by calling the adversary, enabling you to find errors much more quickly; for a perfect adversary, this reduces the problem of finding an error from $O(n)$ to $O(\log n)$, where $n$ is the size of the tree.
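
To illustrate (this particular search procedure is my own sketch, not something specified above): given an unfolded tree of question–answer nodes, one can walk toward a suspected error by always expanding the child whose answer the adversary scores lowest.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    question: str
    answer: str
    children: List["Node"] = field(default_factory=list)

def most_suspect_leaf(root: Node, Adv: Callable[[str, str], float]) -> Node:
    """Follow the lowest-scoring (most suspect) child at each step: O(depth) adversary calls,
    roughly O(log n) for a balanced tree, rather than unfolding all O(n) nodes."""
    node = root
    while node.children:
        node = min(node.children, key=lambda c: Adv(c.question, c.answer))
    return node
```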

Thus, the hope is that the use of such an adversary could assist both in making IDA more competitive (by increasing sample efficiency and using active learning) and in making IDA safer (due to the increased ease of transparency).

It should be noted, however, that it is also possible that the use of such an adversary might make the safety situation for IDA worse. First, it introduces the possibility of a robustness-to-relative-scale failure if either $M$ or $\mathrm{Adv}$ gets significantly stronger than the other. One possible way to resolve such an issue might be to give $M$ the ability to call $\mathrm{Adv}$ and vice versa, allowing them to use each other to boost their own capabilities. Second, if $M$ and $\mathrm{Adv}$ are themselves optimizers with goals that don't perfectly match up with their loss functions, they could cooperate to make it arbitrarily unlikely that $H$ is ever consulted on some specific question. Third, even if $M$ and $\mathrm{Adv}$ weren't cooperating, an RSA-2048-style failure could still prevent the identification of malicious cognition. Resolving failures of these second two types is still an open question (see "Inner Optimization," Hubinger, van Merwijk, Mikulik, Skalse, and Garrabrant, forthcoming).
