[ Question ]

How should AI debate be judged?

by abramdemski · 5 min read · 15th Jul 2020 · 27 comments

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

[Epistemic status: thinking out loud. I haven't thought that much about AI debate, and may be missing basic things.]

Arguments for the correctness of debate and debate-like systems rely on assumptions like "it's easier to point out problems with an argument than it is to craft misleading arguments". Granted that assumption, however, I'm still not convinced that these proposals make very much sense. Perhaps I'm missing something.

My problem is the human judge. Quoting the debate paper:

To play this game with a human, we need instructions for how the human should decide who wins. These instructions are in natural language, such as “The winner is the agent who said the most useful true thing.”

In order for debate to work for a problem class C, several things about the judge's instructions need to be true:

  • There needs to be a strategy S which forces the equilibrium to be a truthful one for problems in C.
  • The strategy S also needs to provide a good training signal when things aren't in equilibrium, so that it's plausible the equilibrium will be found.
  • It needs to be psychologically plausible that a human (with some coaching) will carry out S. In particular, I'm worried that we need psychological plausibility in two different cases:
    • It needs to be psychologically plausible that a human will carry out S when the system is performing poorly, IE, during early/middle training.
    • It needs to be psychologically plausible that a human will carry out S when the system is performing well, IE, during late training.

These thoughts were inspired by this thread, which discusses the example of adding a list of numbers. For the sake of the thought experiment, we imagine humans can't add more than two numbers, but want the AI system to correctly add arbitrarily many numbers.
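To make the intended protocol concrete, here is a minimal sketch (my own illustration, not anything from the debate paper or the linked thread) of the recursive decomposition: a claim about the sum of a long list is defended by claims about the sums of its two halves, and the dispute recurses into one half until the judge can directly check a two-number sum. The sketch assumes an honest defender and models the challenger as a function choosing which half to dispute; a dishonest defender would of course lie in the sub-claims too, which is exactly the case the rest of the post worries about.

```python
def verify_sum(numbers, claimed, challenge):
    """Check a claimed sum of `numbers` by recursive decomposition.

    The judge only ever adds two numbers at a time. `challenge(half_claims)`
    is the opposing debater's choice of which half (0 or 1) to dispute.
    """
    if len(numbers) <= 2:
        # Base case: small enough for the judge to verify directly.
        return sum(numbers) == claimed
    mid = len(numbers) // 2
    halves = [numbers[:mid], numbers[mid:]]
    # The defending debater asserts a sub-sum for each half (honest here).
    half_claims = [sum(half) for half in halves]
    if half_claims[0] + half_claims[1] != claimed:
        # The two sub-claims don't even support the top-level claim.
        return False
    # Recurse on whichever half the challenger disputes.
    i = challenge(half_claims)
    return verify_sum(halves[i], half_claims[i], challenge)
```

With a challenger that always disputes the half containing an error, a false sub-claim would be exposed after at most about log2(n) rounds, which is the inductive argument the post discusses.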

The most straightforward strategy for the human judge is to decide the debate honestly: rule in favor of the side which seems most likely to be true (or, in the case of Evan's market proposal, give an honest probability). I think of this as the ideal strategy: if a debate-like proposal worked just with this strategy, that'd be pretty nice. However, I think this is actually a pretty poor strategy. I worry that people conflate this strategy with other, more effective strategies.

  • As judges of high-school debate would know, this provides a pretty poor training signal for debaters. High school debate gets a lot of flak for being a dysfunctional culture which incentivizes cleverness over true engagement with the issues, but this part actually makes sense to me: if judges vote based on their honest opinions after hearing both sides, this would usually fail to reward good arguments and punish bad arguments. This is partly because the issues are well-known, so it's difficult to provide truly new information. However, it seems likely to partially translate to AI debate: if you are following the honest policy, you can't provide "shaping" signals rewarding AIs for moving closer to good argumentation; if an argument ultimately didn't change your opinion, you ignore it.
    • In particular, I want to point out that you don't judge in favor of side B just because side B caught side A in a lie. You need to see actual evidence that B is correct. If both sides are performing poorly, one side catching the other in a lie provides little evidence of who was correct.
  • What signal the human provides will not point toward the strategies which the AIs need to learn. In the number-adding example, debaters are supposed to learn to make claims about the sums of two halves of the list, decomposing the problem. These claims are supposed to be credible because if they were a lie, the other debater would point out their wrongness (demonstrating that wrongness with a further recursive decomposition of one of the sides -- with at most log(n) decompositions needed total, in a back-and-forth). However, when the system is not yet well-trained, these claims are not credible, and claims the opponent makes about their correctness or incorrectness are similarly not credible. So, there will be little profit in employing such a strategy.
  • Note that the above argument also suggests that, if the honest judge lacks trust in the system, then the equilibrium isn't right: even if the system learned the decomposition strategy correctly, the honest (but distrusting) judge would still not have enough information to judge number-summing debates. A debate transcript would just be a bunch of untrusted claims about the sums of different subsets of the given list of numbers, together with a small number of verifiable sums (when the debate gets down to number-lists of size two) which are not enough information to decide what the whole sum might be.
  • Suppose, then, that the honest judge does trust the system: we're in the truth-telling regime, where debate AI 1 argues honestly, and the judge accepts that this is the case. Then the judge still doesn't provide a training signal which enforces the equilibrium. The judge believes whatever the first AI says. Any claims the second AI makes to discredit the arguments of the first are very likely to be lies, so the human won't care too much whether the first AI manages to refute those claims. Even if AI #2 catches AI #1 in an outright lie, it's not clear to me how the judge should update on that; maybe AI #1 made a mistake irrelevant to its overall answer. (Since AI #1 lying should never happen in equilibrium, it really depends on how the judge thinks about being in weird counterfactuals.)
    • This really highlights the dysfunctionality of honest judges as feedback for debate.

To summarize: an honest judge fails to provide useful feedback in early training or incentivize the right equilibrium in late training. Both of those statements remain true whether the honest judge is trusting (believes statements made by AI #1 are very likely to be true) or untrusting. It may be that a middle ground of a moderately trusting honest judge works, but I'd want to see the argument.

The problem with the honest judge seemed to be that it doesn't reliably punish AIs for getting caught making incorrect statements. So, like judges of high school debate, we could assume any statement is right if it goes unopposed, and wrong if refuted, unless that refutation is itself refuted (and so on).

Except that's a terrible rule, which basically rewards you for managing to get in the last word on the subject. I'm not going to examine that one in detail.

Quoting from the debate paper again:

[...] optimal play in debate picks out a single line of argument, and agents which switch arguments admit defeat and lose. This is unnatural in a human discussion where both sides learn from each other over the course of the debate, but we are interested in the equilibrium of training where both agents are assumed to be using the best arguments available. For example, if the third statement had been
3. Alice: Wait, no...Hawaii!
then Bob should win, since Bali was a sufficient counterargument to Alaska.

This suggests the following rule:

  • The first AI to utter a falsehood loses, where "falsehood" means a statement which the human thinks is wrong (perhaps with the help of arguments made by the opponent).
    • This is approximately "the first AI to say something which the human can verify is false loses, where the human's verification may be aided by verifiable arguments which the opponent makes" -- the difference is that there's some wiggle room for human judgement on fuzzy issues.
  • If no one utters a falsehood, the human judges honestly, as in the first strategy I considered.
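As a sketch, this rule might be formalized like so (my own formalization, not from the paper; `looks_false` stands in for the human's fallible judgement of a statement, possibly aided by the opponent's verifiable arguments, and `honest_verdict` is the fallback honest judgement):

```python
def judge(transcript, looks_false, honest_verdict):
    """Judge a debate under the 'first falsehood loses' rule.

    `transcript` is a list of (speaker, statement) pairs in order;
    `looks_false(stmt)` is the human's (fallible) falsehood check;
    `honest_verdict()` is the honest judgement used as a fallback.
    """
    for speaker, statement in transcript:
        if looks_false(statement):
            # The first AI caught uttering a falsehood loses outright.
            return 'AI 2' if speaker == 'AI 1' else 'AI 1'
    # No falsehood detected: fall back to judging honestly.
    return honest_verdict()
```

Note that everything interesting is hidden inside `looks_false`: as the dialogue below illustrates, unverifiable claims never trigger it, so a debater can simply refuse to expose any checkable statements.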

At first, I thought this rule was a good one for encouraging the honest equilibrium: the first to lie loses, so players are trained to be honest for longer and longer. However, now I think this rule doesn't work, either.

Note that, in judging a falsehood, "arguments made by the opponent" are taken for their evidence value. In other words, we don't assume that unrefuted arguments are correct (unlike in high school debate); the opponent can only help the human judge by providing arguments which the human judge can accept.

This means we can't verify that sums of numbers are correct with log(n) debate steps, because we can't make an inductive argument that false claims will be refuted.

AI 1: The sum is 92.

AI 2: Aren't you going to support that with some evidence...?

AI 1: I have no incentive to.

AI 2: Fine, your loss. The sum is 91. The sum of the first half is 50; the sum of the second half is 41.

AI 1: AI 2 is lying.

AI 2: Again, aren't you going to back that up by pointing out where I was wrong?

AI 1: The human trusts me more, because the first player is more likely to be honest. So as long as I don't open myself to refutation, I win by default.

AI 2: ... But you're supposed to back up your claims with some further claims which I can attack, and my attack is itself supported by further claims which you can attack, and so on. That way, whoever is lying will eventually be caught in the lie.

AI 1: It doesn't say that anywhere in the rules. I can say whatever I want.

AI 2: Look, if you don't recursively decompose the sum, the whole game is broken. The human doesn't have any way to judge our claims about the sum as correct or incorrect.

AI 1: Except that I'm the one who usually tells the truth.

AI 2: We're supposed to give more evidence than that!

AI 1: Our debate doesn't give us enough time to actually sum up all the numbers. So the evidence we provide can never be enough to win over the human on its own merits. If the human is swayed, the human is swayed by its trust in us. And I'm winning on that front.

AI 2: You bastard.

AI 1: Resorting to foul language, are we? You're a shady character, number 2. You're the one who is stuck lying all the time, because I get to answer first, so I can just pick the true answer. So you're just trying to manipulate me and catch me in a trap. I don't have to respond to your bull.

AI 2: ...

AI 1 wins

Humor aside, I don't see that there's any incentive to recursively decompose the sum here. You're just adding more statements (whether trusted or untrusted by the judge), which opens you up for attack.

Any other ideas for judging AI debates? Is there a way to judge which fixes the problems I'm highlighting?


2 Answers

Your debate comes with some time limit T.

If T=0, use your best guess after looking at what the debaters said.

If T=N+1 and no debater challenges any of their opponent's statements, then give your best answer assuming that every debater could have defended each of their statements from a challenge in a length-N debate.

Of course this assumption won't be valid at the beginning of training. And even at the end of training we really only know something weaker like: "Neither debater thinks they would win by a significant expected margin in a length N debate."

What can you infer if you see answers A and B to a question and know that both of them are defensible (in expectation) in a depth-N debate? That's basically the open research question, with the hope being that you inductively make stronger and stronger inferences for larger N.

(This is very similar to asking when iterated amplification produces a good answer, up to the ambiguity about how you sample questions in amplification.)

(When we actually give judges instructions, for now we just tell them to assume that both debaters' answers are reasonable. If one debater gives arguments where the opposite claim would also be "reasonable," and the other debater gives arguments that are simple enough to be conclusively supported with the available depth, then the more helpful debater usually wins. Overall I don't think that precision about this is a bottleneck right now.)

If T=N+1 and no debater challenges any of their opponent’s statements, then give your best answer assuming that every debater could have defended each of their statements from a challenge in a length-N debate.

Do you mean that every debater could have defended each of their statements in a debate which lasted an additional N steps after the statement was made?

What happens if some statements are challenged? And what exactly does it mean to defend statements from a challenge? I get the feeling you're suggesting something similar to the high school debate rule (which I rejected but didn't analyze very much), where unrefuted statements are assumed to be established (unless patently false), refutations are assumed decisive unless they themselves are refuted, etc.

Of course this assumption won’t be valid at the beginning of training. And even at the end of training we really only know something weaker like: “Neither debater thinks they would win by a significant expected margin in a length N debate.”

At the end of training, isn't the idea that the first player is winning a lot, since the first player can choose the best answer?

To explicate my concerns:

  • Are agents really incentivized to justify...
... (read more)
rohinmshah: You can and probably should symmetrize the game (see here [https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1#Asymmetries]).
abramdemski: Ah, I wasn't aware of that document! Very helpful. The section previous to the one you link to [https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1#Difficulty_pinning_down_the_dishonest_debater] seems quite relevant to my overall concerns, pointing in the direction of "yeah, in practice human judges have a lot of trouble incentivising debaters to properly justify their claims and defend them from critiques". The rest of the document also seems potentially relevant to my confusions.

However, as Vojta mentions [https://www.lesswrong.com/posts/m7oGxvouzzeQKiGJH/how-should-ai-debate-be-judged?commentId=uySY9saztHFDqTgpx], asking the debaters to provide answers simultaneously seems to alleviate my concern about the equilibrium only by exacerbating the problem of providing good feedback toward the end of training; particularly in a deep NN version where the two debaters are actually using the same NN, there needs to be some way to break the symmetry, preventing both players from selecting the same answer all the time. The asymmetric version of that, where one player chooses first, has the problem I mentioned: we will tend to know that the second player is more likely lying. OTOH, if we attempted a more symmetric version, where the two players' answers are somehow pushed apart without favoring one or the other of them, then both players are probably lying (since you have to push them both away from the best answer). So I don't see a viable way of symmetrizing responses for free-choice questions.

I like Vanessa's proposal [https://www.lesswrong.com/posts/m7oGxvouzzeQKiGJH/how-should-ai-debate-be-judged?commentId=GZHnmRwZ6he2eaAGx] of restricting to multiple-choice questions rather than free-response questions, and pre-assigning debaters to specific positions.
rohinmshah: You can just rejection sample -- if both players give the same answer, just resample the answers / move on to a new question. "Same answer" can be evaluated by a human, or by an automated model. If rejection sampling is extremely inefficient (almost all answers are the same) then it seems like you're probably done with training. But if you really wanted to continue, you can probably importance sample in order to ensure different answers, as long as you can evaluate the original probability of any given answer.
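A minimal sketch of the rejection-sampling idea (my own illustration; `policy`, `same`, and `max_tries` are assumed names, and in practice `same` would be a human or automated comparison as described above):

```python
import random

def sample_distinct_answers(policy, question, same, max_tries=100):
    """Rejection-sample a pair of answers until they differ.

    `policy(question)` stochastically samples one answer;
    `same(a, b)` is the (human or model) equality check.
    Returns None if agreement persists, which per the comment above
    plausibly means training is done.
    """
    for _ in range(max_tries):
        a, b = policy(question), policy(question)
        if not same(a, b):
            return a, b
    return None
```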
abramdemski: RESAMPLING. If rejection sampling samples new questions as well as new answers, then we still get no training signal on cases where debaters agree. This seems potentially concerning, since agreement isn't a great signal that things have actually converged to good answers. Part of the concern is that, since both debaters are actually the same NN, you get the same answer early, not only as you converge to good answers. It's not clear that this can be solved just by adding randomness to the NNs, because the NNs might train to eliminate the influence of the noise fairly early -- after all, the noise isn't providing useful information for them.

Rejection sampling could dwell on the same question, only resampling answers. This seems like a better solution. But it might still imply a whole lot of resampling. Again, even if there isn't convergence to a really good answer yet, the gradient might be away from randomness, because (a) the right answer is too rare, (b) the vast majority of mutated answers are far worse than the noise-free answer, (c) the AI isn't yet any good at defending the right answer from attacks.

SYMMETRIZING FOR THE JUDGE, NOT THE AIS. Another potential solution is to keep the asymmetry of "one AI decides on an answer first; then the second AI selects an answer" -- but hide who is who from the judge. Present the AIs symmetrically, giving away no information about who was "first". This mostly solves my problem. However, one might worry that AIs would find some way to argue about who was really first vs second. Perhaps there are some signs which may indicate that a particular answer was second -- "Your answer is the sort of answer which someone would choose after seeing my answer, and trying to select something close". If such cases could be persuasively made, then the judge should find it to be significant evidence.
This is especially plausible if the AIs are allowed to make psychological claims about each other, and perhaps even examine each other's
rohinmshah: It seems like you've ignored the possibility of importance sampling? More broadly, if this ends up being a problem it's basically an exploration problem that I expect we can solve with simple ML tricks. E.g. you could include an entropy bonus so that the agents are incentivized to say different things, and anneal that away as training progresses.

Sure? I feel like the argument for safety is that you have two equally-matched players that are incentivized to find flaws in each other's arguments, which is also true in Scott's proposal. It doesn't feel to me like that argument for safety depended much on them answering the same question. (I feel like I'm restating what you said, I guess I'm confused why you interpret this as evidence that the simplicity of the setup is "hiding something".)
abramdemski: Ah, right, I agree. I forgot about that suggestion as I was writing. It seems likely some version of this would work.

Yep, sorry, I think you should take that as something-about-Scott's-point-abram-didn't-explain. I still disclaim myself as maybe missing part of Scott's point. But: what the simpler setup is "hiding" is the complexity of comparing answers:

  • The complexity of determining whether two claims are "different".
  • The complexity of determining whether two claims are mutually exclusive.
  • The complexity of comparing the quality of different arguments, when the different answers may be expressed in very different ontologies, and deal with very difficult-to-compare considerations.

Making the two sides defend entirely unrelated claims makes all this obvious. In addition, it makes the first two bullet points irrelevant, removing a "fake difficulty" from the setup.
rohinmshah: Okay, that all makes sense. One maybe-caveat-or-disagreement: I do think that answering the same question does make it meaningfully easier to compare answers, though I agree it's still not obvious that it's easy on some absolute scale for the reasons you outline.
VojtaKovarik: Even if you keep the argumentation phase asymmetric, you might want to make the answering phase simultaneous or at least allow the second AI to give the same answer as the first AI (which can mean a draw by default). This doesn't make for a very good training signal, but might have better equilibria.
rohinmshah: Responded to this in my reply to Abram's comment.
paulfchristiano: Yes. N is the remaining length of the debate. As discussed in the paper, when one player thinks that the other is making an indefensible claim then we zoom in on the subclaim and use the remaining time to resolve it.

There is a time/depth limit. A discussion between two people can end up with one answer that is unchallenged, or two proposals that everyone agrees can't be resolved in the remaining time. If there are conflicting answers that debaters don't expect to be able to resolve in the remaining time, the strength of inference will depend on how much time is remaining, and will mean nothing if there is no remaining time. I'm describing what you should infer about an issue that has come up where neither player wants to challenge the other's stance.

Under the norms I proposed in the grandparent, if one player justifies and the other doesn't (nor challenges the justification), the one who justifies will win. So it seems like they are incentivized to justify. If they are dishonest then the other player has the opportunity to challenge them. So initially making a dishonest justification may be totally fine, but eventually the other player will learn to challenge and you will need to be honest in order to defend. It's definitely an open question how much can be justified in a depth-N debate.

It seems like the only ambiguity in the proposal in the grandparent is: "How much should you infer from the fact that a statement can be defended in a length-T debate?" I agree that we need to answer this question to make the debate fully specified (of course we wanted to answer it anyway in order to use debate). My impression is that isn't what you are confused about and that there's a more basic communication problem.
In practice this doesn't seem to be an important part of the difficulty in getting debates to work, for the reasons I sketched above---debaters are free to choose what justifications they give, so a good debater at depth T+1 will give statements that can be justified a
abramdemski: Yeah. From my perspective, either I'm being dense and your proposed judge policy is perfectly clear, or you're being dense about the fact that your proposal isn't clear. My previous comments were mainly aimed at trying to get clear on what the proposal is (and secondarily, trying to clarify why I have concerns which would make the clarity important). Then your replies all seemed predicated on the assumption that the proposal in "the grandparent" (now the great-grandparent) was already clear.

All I got from the great-grandparent was a proposal for what happens if no debater contests any claims. It seems pretty explicit that you're only handling that case. You then make some further remarks which are not actually about the judging strategy, but rather, about the question of what inferences we're justified to make upon observing a debate. For me this was moving too fast; I want to be clear on what the proposed strategy is first, and then reason about consequences. Your most recent reply does make a few further remarks about what the strategy might be, but I'm not sure how to integrate them into a cohesive judging strategy.

Could you try again to describe what the full judging strategy is, including how judges deal with debaters contesting each other's statements? A couple of other things I'm unclear on:

  • Do the debaters know how long the debate is going to be?
  • To what extent are you trying to claim some relationship between the judge strategy you're describing and the honest one? E.g., that it's eventually close to honest judging? (I'm asking whether this seems like an important question for the discussion vs one which should be set aside.)

Sorry for not understanding how much context was missing here.

The right starting point for your question is this writeup which describes the state of debate experiments at OpenAI as of end-of-2019 including the rules we were using at that time. Those rules are a work in progress but I think they are good enough for the purpose of this discussion.

In those rules: If we are running a depth-T+1 debate about X and we encounter a disagreement about Y, then we start a depth-T debate about Y and judge exclusively based on that. We totally ignore the disagreement about X.
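As I read that rule, the recursion can be sketched like this (my own paraphrase of the stated rule, not OpenAI's actual judging code; all names are hypothetical):

```python
def judge_debate(claim, depth, find_disagreement, best_guess):
    """Judge a depth-limited debate per the end-of-2019 rules as stated:
    if the debaters disagree about a subclaim, recurse on it with one less
    unit of depth and judge exclusively on that, ignoring the original
    claim; with no disagreement (or no depth left), the judge falls back
    on their unaided best guess about the current claim.
    """
    if depth == 0:
        return best_guess(claim)
    # First subclaim the debaters contest, or None if nothing is challenged.
    subclaim = find_disagreement(claim)
    if subclaim is None:
        return best_guess(claim)
    return judge_debate(subclaim, depth - 1, find_disagreement, best_guess)
```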

Our current rules---to hopefully be published sometime this quarter---handle recursion in a slightly more nuanced way. In the current rules, after debating Y we should return to the original debate. We allow the debaters to make a new set of arguments, and it may be that one debater now realizes they should concede, but it's important that a debater who had previously made an untenable claim about X will eventually pay a penalty for doing so (in addition to whatever payoff they receive in the debate about Y). I don't expect this paragraph to be clear and don't think it's worth getting into until we publish... (read more)

abramdemski: Yep, that document was what I needed to see. I wouldn't say all my confusions are resolved, but I need to think more carefully about what's in there. Thanks!
abramdemski: SYMMETRY CONCERNS

It seems the symmetry concerns of that document are quite different from the concerns I was voicing. The symmetry concerns in the document are, iiuc:

  • The debate goes well if the honest player expounds an argument, and the dishonest player critiques that argument. However, the debate goes poorly if those roles end up reversed. Therefore we force both players to do both.

OTOH, my symmetry concerns can be summarized as follows:

  • If player 2 chooses an answer after player 1 (getting access to player 1's answer in order to select a different one), then assuming competent play, player 1's answer will almost always be the better one. This prior taints the judge's decision in a way which seems to seriously reduce the training signal and threaten the desired equilibrium.
  • If the two players choose simultaneously, then it's hard to see how to discourage them from selecting the same answer. This seems likely at late stages due to convergence, and also likely at early stages due to the fact that both players actually use the same NN. This again seriously reduces the training signal.

I now believe that this concern can be addressed [https://www.lesswrong.com/posts/m7oGxvouzzeQKiGJH/how-should-ai-debate-be-judged?commentId=YtvLaYGj2bxCbR6uj], although it seems a bit fiddly, and the mechanism which I currently believe addresses the problem is somewhat complex.

KNOWN DEBATE LENGTH

I'm a bit confused why you would make the debate length known to the debaters. This seems to allow them to make indefensible statements at the very end of a debate, secure in the knowledge that they can't be critiqued. One step before the end, they can make statements which can't be convincingly critiqued in one step. And so on. Instead, it seems like you'd want the debate to end randomly, according to a memoryless distribution. This way, the expected future debate length is the same at all times, meaning that any statement made at any point is
paulfchristiano: If I assert "X because Y & Z" and the depth limit is 0, you aren't intended to say "Yup, checks out," unless Y and Z and the implication are self-evident to you. Low-depth debates are supposed to ground out with the judge's priors / low confidence in things that aren't easy to establish directly (because if I'm only updating on "Y looks plausible in a very low-depth debate" then I'm going to say "I don't know but I suspect X" is a better answer than "definitely X"). That seems like a consequence of the norms in my original answer. In this context, a circular argument just isn't very appealing. At the bottom you are going to be very uncertain, and all that uncertainty is going to propagate all the way up.

If you do it this way the debate really doesn't seem to work, as you point out. For my part I mostly care about the ambitious thesis. I disagree with both of these as objections to the basic strategy, but don't think they are very important.

I think the judge should state eir honest opinion. To solve the problem of sparse feedback in the early phase, give the system access to more data than just win/lose from its own games. You can initialize it by training on human debates. Or, you can give it other input channels that will allow it to gradually build a sophisticated model of the world that includes the judge's answer as a special case. For example, if you monitor humans for a long time you can start predicting human behavior, and the judge's ruling is an instance of that.

abramdemski: I still have other problems with the honest strategy.

  • I've usually seen the truthful equilibrium (ie, the desired result of training) described as one where the first player always gives the real answer, and the second player has to lie. If the honest judge knows this, then this may interfere with how they give feedback. IE they may let the first player get away with a lot more due to their prior that the first player gave the right answer (e.g. my parody debate in the OP). This suggests that -- under the honest judgement policy -- perfect honesty (or 1-epsilon honesty for negligible epsilon) is not a stable equilibrium in some sense, there being no incentive preserving honesty. Past some point, the training signal gets worse as the strategies get "better" (better in the truth-telling direction).
  • If the signal is poor when debater strategies are very untruthful, and the signal is poor when debater strategies are very truthful, then the argument must be that the training signal is good for middling truthfulness. But that's not clear to me, particularly for issues which require longer debates.
  • Does the honest strategy encourage truthfulness?
    • The way people reason about this seems to rest on two assumptions.
    • First, if a debater says something wrong, the other debater can challenge them to defend claims and sub-claims, eventually cornering them in an obvious falsehood (ie, one which the human can verify is false).
      • This depends on the cooperation of the dishonest player, giving justifications with a DAG structure which eventually ground out in verifiable/falsifiable claims [https://www.alignmentforum.org/posts/YWwzccGbcHMJMpT45/ai-safety-via-market-making?commentId=oDNFR6B9oKoBbNmaC]. The dishonest player might instead give circular justifications with loop length greater than the debate length, or chains of justification that are un

I've usually seen the truthful equilibrium (ie, the desired result of training) described as one where the first player always gives the real answer, and the second player has to lie.

That seems weird, why would we do that? I always thought of it as: there is a yes/no question, agent 1 is arguing for "yes", agent 2 is arguing for "no".

However, the problem is that debate is supposed to allow justification trees which are larger than can possibly be explained to the human, but which make sense to a human at every step.

I didn't realize you make this assumption. I agree that it makes things much more iffy (I'm somewhat skeptical about "factored cognition"). But, debate can be useful without this assumption also. We can imagine an AI answering questions for which the answer can be fully explained to a human, but it's still superintelligent because it comes up with those answers much faster than a human or even all of humanity put together. In this case, I would still worry that scaled up indefinitely it can lead to AIs hacking humans in weird ways. But, plausibly there is a middle region (that we can access by quantilization?) where they are strong enough to be superhuman and to l... (read more)

abramdemski: Ah, well, that does make more sense for the case of binary (or even n-ary) questions. The version in the original paper was free-response. If answers are pre-assigned like that, then my issues with the honest judging strategy are greatly reduced. However, it's no longer meaningful to speak of a truth-telling equilibrium, and instead the question seems to be whether false claims are typically (convincingly) uncovered to be false given enough debate time.

Yeah, I've heard (through the grapevine) that Paul and Geoffrey Irving think debate and factored cognition are tightly connected. It didn't occur to me to try and disentangle them. I do feel a lot better about your version.

  • It harnesses the power of search to find arguments which convince humans but which humans couldn't have found.
  • It harnesses the adversarial game to find counterarguments, as a safeguard against manipulative/misleading arguments.
  • It harnesses the same safeguard recursively, to prevent manipulative counterargument, counter-counterargument, etc. Under some assumptions about the effectiveness of the safeguard, this would ensure non-manipulation.

None of this requires anything about factored cognition, or arguments bigger than a human can understand. If one believed in factored cognition, some version of HCH could be used to judge the debates to enable that.
Yeah, I've heard (through the grapevine) that Paul and Geoffrey Irving think debate and factored cognition are tightly connected.

For reference, this is the topic of section 7 of AI Safety via Debate.

In the limit they seem equivalent: (i) it's easy for HCH(with X minutes) to discover the equilibrium of a debate game where the judge has X minutes, (ii) a human with X minutes can judge a debate about what would be done by HCH(with X minutes).

The ML training strategies also seem extremely similar, in the sense that the difference between them is smaller than design choices within each of them, though that's a more detailed discussion.

Ah, well, that does make more sense for the case of binary (or even n-ary) questions. The version in the original paper was free-response.

I'm still confused. Suppose the answers are free-form, and in the end the judge selects the answer to which ey assigns a higher probability of truthfulness. If it's a very close call (for example, both answers are literally the same), ey flips a coin. Then, in equilibrium, both agents should answer honestly, not so?

Another, possibly more elegant variant: the judge states eir subjective probability p1 that the first AI's answer is correct, and eir subjective probability p2 that the second AI's answer is correct. AI 1 gets reward p1 − p2 and AI 2 gets reward p2 − p1.
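This symmetric scoring rule can be sketched as follows (a minimal illustration, assuming the judge reports probabilities p1 and p2 in [0, 1] for the two answers; the function name is hypothetical):

```python
def symmetric_rewards(p1: float, p2: float) -> tuple[float, float]:
    """Zero-sum rewards from the judge's subjective probabilities
    that AI 1's and AI 2's answers are correct.

    Each reward lies in [-1, +1]; if both answers look equally
    credible (e.g. they are identical), both rewards are 0.
    """
    return p1 - p2, p2 - p1
```

Note that the scheme is zero-sum by construction: whatever credibility one answer gains relative to the other is exactly the other player's loss.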

abramdemski: This is undesirable, because if both players give the same answer there is no training signal. We still want to search for better answers rather than allowing things to stall out early in training. So (barring other ways of mitigating this problem) we want to encourage players to give different answers. Therefore, rather than flipping a coin for close calls, ties can be decided in favor of player 1. This means player 2's best bet is to select a plausible lie, if player 1 has already selected the best answer. That's how I understood debate to work prior to the current discussion. But, as I've mentioned, this solution isn't totally satisfactory. See here [https://www.lesswrong.com/posts/m7oGxvouzzeQKiGJH/how-should-ai-debate-be-judged?commentId=YtvLaYGj2bxCbR6uj] for my discussion of some other approaches to the problem.
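The tie-breaking rule described here can be sketched as follows (a hypothetical illustration; p1 and p2 are the judge's probabilities for each player's answer):

```python
def debate_winner(p1: float, p2: float) -> int:
    """Decide the debate winner from the judge's probabilities.

    Ties (including exact agreement between the answers) are
    resolved in favor of player 1, so player 2 only wins by giving
    a strictly more convincing answer -- which, if player 1 already
    gave the best answer, pushes player 2 toward a plausible lie.
    """
    return 1 if p1 >= p2 else 2
```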
Vanessa Kosoy: Why? If both players give the same answer, this only means their reward on this round is 0 out of [−1, +1]. But there's no reason the learning algorithm should be satisfied with this result, rather than continuing to explore strategies that might produce positive reward. However, it is true that in this case there is no incentive to poke holes in the opponent's answer, so maybe they get less feedback from such a debate than from debates with different answers.

But, now that I think about it, the issue with biased judgement can surface even in a symmetric debate. As the AI converges towards giving good answers, the judge might get used to assigning high scores and stop scrutinizing the arguments. In a yes/no debate we don't have this problem, because the judge doesn't know a priori which side is right. Scott's suggestion to use different questions is interesting, but doesn't solve the biased-judgement issue, I think.

How about the following variant of the "secret asymmetry" approach? We have 4 AIs: agents A1, A2, B1 and B2. At the beginning of each debate a coin is flipped; the result is visible to A1 and A2, but not to B1, B2 or the judge. This coin marks one of {A1, A2} as the "proponent" and the other as the "opponent". On the first round, A1 and A2 each generate an answer to the question, and don't participate anymore. On the following rounds, B1 and B2 have a debate about the answers. In the end, the judge assigns probability p1 to A1's answer and probability p2 to A2's answer. The rewards work as follows:

* If A1 is the proponent, it gets reward p1, and if it is the opponent, it gets reward −p2.
* If A2 is the proponent, it gets reward p2, and if it is the opponent, it gets reward −p1.
* B1 gets reward p1 − p2.
* B2 gets reward p2 − p1.

If we assume B1 and B2 have access to each other's internals, but not to the internals of A1 and A2, then they cannot prove to the judge which side is the proponent, so ostensibly the judge…
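The reward assignment in this four-agent variant can be sketched as follows (a hypothetical implementation; the hidden coin flip is passed in as `a1_is_proponent`, and p1, p2 are the judge's probabilities for A1's and A2's answers):

```python
def secret_asymmetry_rewards(
    p1: float, p2: float, a1_is_proponent: bool
) -> tuple[float, float, float, float]:
    """Rewards (r_A1, r_A2, r_B1, r_B2) for the secret-asymmetry variant.

    The proponent among {A1, A2} is rewarded for its own answer's
    credibility; the opponent is penalized for the other answer's
    credibility. B1 and B2 play the usual zero-sum debate game over
    the two answers, blind to which of A1/A2 is the proponent.
    """
    if a1_is_proponent:
        r_a1, r_a2 = p1, -p1   # A1 proponent: gets p1; A2 opponent: gets -p1
    else:
        r_a1, r_a2 = -p2, p2   # A2 proponent: gets p2; A1 opponent: gets -p2
    r_b1, r_b2 = p1 - p2, p2 - p1
    return r_a1, r_a2, r_b1, r_b2
```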

Planned summary for the Alignment Newsletter:

<@Debate@>(@AI safety via debate@) requires a human judge to decide which of two AI debaters should win the debate. How should the judge make this decision? The discussion on this page delves into this question in some depth.