paulfchristiano

Search versus design

I liked this post.

I'm not sure that design will end up being as simple as this picture makes it look, no matter how well we understand it---it seems like factorization is one kind of activity in design, but it feels like overall "design" is being used as a kind of catch-all that is probably very complicated.

An important distinction for me is: does the artifact work because of the story (as in "design"), or does the artifact work because of the evaluation (as in search)?

This isn't so clean, since:

  • Most artifacts work for a combination of the two reasons---I design a thing then test it and need a few iterations---there is some quantitative story where both factors almost always play a role for practical artifacts.
  • There seem to be many other reasons things work (e.g. "it's similar to other things that worked" seems to play a super important role in both design and search).
  • A story seems like it's the same kind of thing as an artifact, and we could also talk about where *it* comes from. A story that plays a role in a design itself comes from some combination of search and design.
  • During design it seems likely that humans rely very extensively on searching against mental models, which may not be introspectively available to us as a search but seems like it has similar properties.

Despite those and more complexities, it feels to me like if there is a clean abstraction it's somewhere in that general space, about the different reasons why a thing can work.

Post-hoc stories are clearly *not* the "reason why things work" (at least at this level of explanation). But also if you do jointly search for a model+helpful story about it, the story still isn't the reason why the model works, and from a safety perspective it might be similarly bad.

How should AI debate be judged?
Yeah, I've heard (through the grapevine) that Paul and Geoffrey Irving think debate and factored cognition are tightly connected.

For reference, this is the topic of section 7 of AI Safety via Debate.

In the limit they seem equivalent: (i) it's easy for HCH(with X minutes) to discover the equilibrium of a debate game where the judge has X minutes, (ii) a human with X minutes can judge a debate about what would be done by HCH(with X minutes).

The ML training strategies also seem extremely similar, in the sense that the difference between them is smaller than design choices within each of them, though that's a more detailed discussion.

How should AI debate be judged?
I'm a bit confused why you would make the debate length known to the debaters. This seems to allow them to make indefensible statements at the very end of a debate, secure in the knowledge that they can't be critiqued. One step before the end, they can make statements which can't be convincingly critiqued in one step. And so on.
[...]
The most salient reason for me ATM is the concern that debaters needn't structure their arguments as DAGs which ground out in human-verifiable premises, but rather, can make large circular arguments (too large for the debate structure to catch) or unbounded argument chains (or simply very very high depth argument trees, which contain a flaw at a point far too deep for debate to find).

If I assert "X because Y & Z" and the depth limit is 0, you aren't intended to say "Yup, checks out," unless Y and Z and the implication are self-evident to you. Low-depth debates are supposed to ground out in the judge's priors / low confidence in things that aren't easy to establish directly (because if I'm only updating on "Y looks plausible in a very low-depth debate" then I'm going to say "I don't know but I suspect X" is a better answer than "definitely X"). That seems like a consequence of the norms in my original answer.

In this context, a circular argument just isn't very appealing. At the bottom you are going to be very uncertain, and all that uncertainty is going to propagate all the way up.

Instead, it seems like you'd want the debate to end randomly, according to a memoryless distribution. This way, the expected future debate length is the same at all times, meaning that any statement made at any point is facing the same expected demand of defensibility.

If you do it this way the debate really doesn't seem to work, as you point out.
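
For concreteness, here is a toy simulation of the memoryless stopping rule being proposed (the stopping probability below is just an illustrative number, not a parameter from any actual setup). It checks the property being relied on: the expected number of remaining rounds is the same no matter how many rounds have already been played.

```python
import random

def expected_remaining_rounds(p_stop=0.1, trials=100_000):
    """Estimate the expected number of *future* rounds when each round ends
    the debate with probability p_stop (a geometric, i.e. memoryless, rule).
    Because the rule is memoryless, the answer is (1 - p_stop) / p_stop no
    matter how many rounds have already been played."""
    total = 0
    for _ in range(trials):
        rounds = 0
        while random.random() > p_stop:  # debate continues with prob. 1 - p_stop
            rounds += 1
        total += rounds
    return total / trials

print(expected_remaining_rounds(0.1))  # ~9.0, matching (1 - 0.1) / 0.1
```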

I currently think all my concerns can be addressed if we abandon the link to factored cognition and defend a less ambitious thesis about debate.

For my part I mostly care about the ambitious thesis.

  • If the two players choose simultaneously, then it's hard to see how to discourage them from selecting the same answer. This seems likely at late stages due to convergence, and also likely at early stages due to the fact that both players actually use the same NN. This again seriously reduces the training signal.
  • If player 2 chooses an answer after player 1 (getting access to player 1's answer in order to select a different one), then assuming competent play, player 1's answer will almost always be the better one. This prior taints the judge's decision in a way which seems to seriously reduce the training signal and threaten the desired equilibrium.

I disagree with both of these as objections to the basic strategy, but don't think they are very important.

How should AI debate be judged?

Sorry for not understanding how much context was missing here.

The right starting point for your question is this writeup, which describes the state of debate experiments at OpenAI as of end-of-2019, including the rules we were using at that time. Those rules are a work in progress but I think they are good enough for the purpose of this discussion.

In those rules: If we are running a depth-T+1 debate about X and we encounter a disagreement about Y, then we start a depth-T debate about Y and judge exclusively based on that. We totally ignore the disagreement about X.
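
For concreteness, here is a minimal sketch of that recursion rule (the function below and the debater/judge methods it assumes are hypothetical stand-ins, not the interface from the writeup):

```python
def run_debate(question, depth, debaters, judge):
    """Sketch of the end-of-2019 recursion rule described above. `debaters`
    is a pair of objects and `judge` is an object; their methods
    (`make_claims`, `challenge`, `decide`) are assumed for illustration."""
    claims = [d.make_claims(question, depth) for d in debaters]
    if depth > 0:
        for i, debater in enumerate(debaters):
            # A debater may flag one of the opponent's claims as indefensible.
            disputed = debater.challenge(claims[1 - i], depth)
            if disputed is not None:
                # Zoom in: start a depth-(T-1) debate about the disputed
                # sub-claim and judge exclusively on that, ignoring the
                # original disagreement.
                return run_debate(disputed, depth - 1, debaters, judge)
    # No challenges (or no depth left): the judge decides on the claims as given.
    return judge.decide(question, claims, depth)
```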

Our current rules---to hopefully be published sometime this quarter---handle recursion in a slightly more nuanced way. In the current rules, after debating Y we should return to the original debate. We allow the debaters to make a new set of arguments, and it may be that one debater now realizes they should concede, but it's important that a debater who had previously made an untenable claim about X will eventually pay a penalty for doing so (in addition to whatever payoff they receive in the debate about Y). I don't expect this paragraph to be clear and don't think it's worth getting into until we publish an update, but wanted to flag it.

Do the debaters know how long the debate is going to be?

Yes.

To what extent are you trying to claim some relationship between the judge strategy you're describing and the honest one? EG, that it's eventually close to honest judging? (I'm asking whether this seems like an important question for the discussion vs one which should be set aside.)

If debate works, then at equilibrium the judge will always be favoring the better answer. If furthermore the judge believes that debate works, then this will also be their honest belief. So if judges believe in debate then it looks to me like the judging strategy must eventually approximate honest judging. But this is downstream of debate working; it doesn't play an important role in the argument that debate works or anything like that.

Challenges to Christiano’s capability amplification proposal

Providing context for readers: here is a post someone wrote a few years ago about issues (ii)+(iii) which I assume is the kind of thing Czynski has in mind. The most relevant thing I've written on issues (ii)+(iii) are Universality and consequentialism within HCH, and prior to that Security amplification and Reliability amplification.

Challenges to Christiano’s capability amplification proposal

I think not.

For the kinds of questions discussed in this post, which I think are easier than "Design Hessian-Free Optimization" but face basically the same problems, I think we are making reasonable progress. I'm overall happy with the progress but readily admit that it is much slower than I had hoped. I've certainly made updates (mostly about people, institutions, and getting things done, but naturally you should update differently).

Note that I don't think "Design Hessian-Free Optimization" is amongst the harder cases, and these physics problems are a further step easier than that. I think that sufficient progress on these physics tasks would satisfy the spirit of my remark 2y ago.

I appreciate the reminder at the 2y mark. You are welcome to check back in 1y, and if things don't look much better (at least on this kind of "easy" case), treat it as a further independent update.

Challenges to Christiano’s capability amplification proposal
To claim that you have removed optimization pressure to be unaligned

The goal is to remove the optimization pressure to be misaligned, and that's the reason you might hope for the system to be aligned. Where did I make the stronger claim you're attributing to me?

I'm happy to edit the offending text, I often write sloppily. But Rohin is summarizing the part of this post where I wrote "The argument for alignment isn’t that “a system made of aligned neurons is aligned.” Unalignment isn't a thing that magically happens; it’s the result of specific optimization pressures in the system that create trouble. My goal is to (a) first construct weaker agents who aren't internally doing problematic optimization, (b) put them together in a way that improves capability without doing other problematic optimization, (c) iterate that process." So in this case it seems clear that I was stating a goal.

Even among normal humans there are principal-agent problems.

In the scenario of a human principal delegating to a human agent, there is a huge amount of optimization pressure to be misaligned: all of the agent's evolutionary history and cognition. So I don't think the word "even" belongs here.

There is optimization pressure to be unaligned; of course there is!

I agree that there are many possible malign optimization pressures, e.g.: (i) the optimization done deliberately by those humans as part of being competitive, which they may not be able to align, (ii) "memetic" selection amongst patterns propagating through the humans, (iii) malign consequentialism that arises sometimes in the human policy (either randomly or in some situations). I've written about these and it should be obvious they are something I think a lot about, am struggling with, and believe there are plausible approaches to dealing with.

(I think it would be defensible for you to say something like "I don't believe that Paul's writings give any real reason for optimism on these points and the fact that he finds them reassuring seems to indicate wishful thinking," and if that's a fair description of your position then we can leave it at that.)

How should AI debate be judged?
Do you mean that every debater could have defended each of their statements s in a debate which lasted an additional N steps after s was made? What happens if some statements are challenged? And what exactly does it mean to defend statements from a challenge?

Yes. N is the remaining length of the debate. As discussed in the paper, when one player thinks that the other is making an indefensible claim, we zoom in on the subclaim and use the remaining time to resolve it.

I get the feeling you're suggesting something similar to the high school debate rule (which I rejected but didn't analyze very much), where unrefuted statements are assumed to be established (unless patently false), refutations are assumed decisive unless they themselves are refuted, etc.

There is a time/depth limit. A discussion between two people can end up with one answer that is unchallenged, or two proposals that everyone agrees can't be resolved in the remaining time. If there are conflicting answers that debaters don't expect to be able to resolve in the remaining time, the strength of inference will depend on how much time is remaining, and will mean nothing if there is no remaining time.

At the end of training, isn't the idea that the first player is winning a lot, since the first player can choose the best answer?

I'm describing what you should infer about an issue that has come up where neither player wants to challenge the other's stance.

Are agents really incentivized to justify their assertions?

Under the norms I proposed in the grandparent, if one player justifies and the other doesn't (nor challenge the justification), the one who justifies will win. So it seems like they are incentivized to justify.

Are those justifications incentivized to be honest?

If they are dishonest then the other player has the opportunity to challenge them. So initially making a dishonest justification may be totally fine, but eventually the other player will learn to challenge and you will need to be honest in order to defend.

In the cases where the justifications aren't fully verifiable, does it really make sense for the humans to trust anything they say? In particular, given the likelihood that one of the agents is lying?

It's definitely an open question how much can be justified in a depth N debate.

I recognize that you're saying these are open questions, I'm just trying to highlight where I'm confused -- particularly as these questions are bound up with the question of what judge strategies should look like. It seems like a lot of pieces need to come together in just the right way, and I'm not currently seeing how judge strategies can simultaneously accomplish everything they need to.

It seems like the only ambiguity in the proposal in the grandparent is: "How much should you infer from the fact that a statement can be defended in a length T debate?" I agree that we need to answer this question to make the debate fully specified (of course we wanted to answer it anyway in order to use debate). My impression is that isn't what you are confused about and that there's a more basic communication problem.

In practice this doesn't seem to be an important part of the difficulty in getting debates to work, for the reasons I sketched above---debaters are free to choose what justifications they give, so a good debater at depth T+1 will give statements that can be justified at depth T (in the sense that a conflicting opinion with a different upshot couldn't be defended at depth T), and the judge will basically ignore statements where conflicting positions can both be justified at depth T. It seems likely there is some way to revise the rules so that the judge instructions don't have to depend on "assume that answer can be defended at depth T" but it doesn't seem like a priority.

How should AI debate be judged?

Your debate comes with some time limit T.

If T=0, use your best guess after looking at what the debaters said.

If T=N+1 and no debater challenges any of their opponent's statements, then give your best answer assuming that every debater could have defended each of their statements from a challenge in a length-N debate.
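
As a rough sketch of the inductive shape of this rule (the names below, like best_guess, are hypothetical stand-ins rather than part of any real judging procedure):

```python
def judge_verdict(statements, depth, best_guess):
    """Sketch of the judging rule above. `best_guess(statements, assumptions)`
    stands in for the judge's own all-things-considered verdict, where
    `assumptions` are the working assumptions the judge is told to adopt."""
    if depth == 0:
        # T = 0: no further debate is possible; just use your best guess
        # after looking at what the debaters said.
        return best_guess(statements, assumptions=[])
    # T = N + 1 with no challenges: answer as if every statement could have
    # been defended against a challenge in a length-N debate.
    assumptions = [f"defensible in a length-{depth - 1} debate: {s}"
                   for s in statements]
    return best_guess(statements, assumptions=assumptions)
```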

Of course this assumption won't be valid at the beginning of training. And even at the end of training we really only know something weaker like: "Neither debater thinks they would win by a significant expected margin in a length N debate."

What can you infer if you see answers A and B to a question and know that both of them are defensible (in expectation) in a depth-N debate? That's basically the open research question, with the hope being that you inductively make stronger and stronger inferences for larger N.

(This is very similar to asking when iterated amplification produces a good answer, up to the ambiguity about how you sample questions in amplification.)

(When we actually give judges instructions for now we just tell them to assume that both debaters' answers are reasonable. If one debater gives arguments where the opposite claim would also be "reasonable," and the other debater gives arguments that are simple enough to be conclusively supported with the available depth, then the more helpful debater usually wins. Overall I don't think that precision about this is a bottleneck right now.)
