Edit: Let me know if you or someone you know is interested in working on this sort of research. I work at Open Phil and we're interested in getting a better sense of how much demand for funding in this area there is.
Good post. I agree with the headline that faithful CoT is a promising path forward for interpreting and supervising LLMs.
TL;DR: I'd suggest researchers focus more narrowly on a subset of unfaithful chain-of-thought: steganographic/encoded reasoning that the model is using to perform hidden serial reasoning, as studied in Roger et al. IMO this type of unfaithfulness is more important, more neglected, and more tractable to fix than the sort of post-hoc reasoning studied in Turpin et al., or the hidden parallel reasoning studied in Pfau et al.
I think it's worth distinguishing between three kinds of unfaithful chain of thought that can occur when you ask a model a question and give it time to (I'm slightly adapting this trichotomy from Lanham et al.):
I think steganography is the most concerning form of unfaithful chain-of-thought of these three, but it's also the easiest to study and mitigate. That is to say, I think it's more important, more neglected, and more tractable than the other two.
I think there are a lot of exciting follow-up directions for future research here; finding cases where steg emerges organically, testing out different paraphrasing approaches for reducing steg, finding ways of decomposing tasks into subproblems to reduce how much context/background info any given copy of a model has (and then preventing the copies from steganographically communicating).
(I'd be interested in hearing about any good work in this area that I haven't mentioned in this comment, if anyone is aware of some)
I got a bit lost in understanding your exit plan. You write
My preferred exit plan is to build human-obsoleting AIs which are sufficiently aligned/trustworthy that we can safely defer to them
Some questions about this and the text that comes after it:
In the past, you've treated catching an AI escape attempt as a "win condition" for a control protocol, because such an attempt leads you to either 1) shut down your AI or 2) do few-shot catastrophe prevention. You wrote that having these two tools would "substantially" change the situation w.r.t. the model's chance of success.
But the arguments in this post suggest that catching an attempt may not change the overall strategic landscape/level of AI risk much.
Is this a change in your views? Or have you always felt that catching an AI trying to escape substantially changes that AI's chance of success without substantially changing the overall level of AI risk?
Separately, another takeaway of this post: it seems valuable to work on research that allows you to provide evidence about the alignment of other labs' AIs, e.g. better technical work on searching for inputs that trigger competent misaligned behavior.
Thanks, edited my post to reference this (lmk if you understand what's happening here better than I do)
Thanks, this is a helpful comment. Fixed the typo
Edit: The sitation has evolved but is still somewhat confusing. There is now a leaderboard of scores on the public test set that Ryan is #1 on (see here). But this tweet from Jack Cole indicates that his (many month old) solution gets a higher score on the public test set than Ryan's top score on that leaderboard. I'm not really sure what's going on here,
One important caveat to the presentation of results in this post (and the discussion on Twitter) is that there are reasons to think this approach may not be SOTA, as it performs similarly to the prior best-performing approach when tested apples-to-apples, i.e. on the same problems.
There are three sets of ARC problems: the public training set, the public eval set, and the private eval set.
My two main deductions from this are:
Apparently, lots of people get better performance on the public test set than the private one, which is a little surprising given that if you read this page from the ARC folks, you'll see the following:
The public training set is significantly easier than the others (public evaluation and private evaluation set) since it contains many "curriculum" type tasks intended to demonstrate Core Knowledge systems. It's like a tutorial level.
The public evaluation sets and the private test sets are intended to be the same difficulty.
Two explanations come to mind: maybe the public and private test sets are not IID, and/or maybe past SOTA method overfit to the public set. Chollet claims it's (accidentally) the latter here, but he doesn't rule out the former. He says the tasks across the two public test sets are meant to be equally hard for a human, but he doesn't say they're divided in an IID manner.
I guess we'll see how the results on the public leaderboard shake out.
(Expanding on a tweet)
What are the considerations around whether to structure the debate to permit the judge to abstain (as Michael et al do, by allowing the judge to end the round with low credence) versus forcing the judge to pick an answer each time? Are there pros/cons to each approach? Any arguments about similarity of one or the other to the real AI debates that might be held in the future?
It's possible I'm misremembering/misunderstanding the protocols used for the debate here/in that other paper.
"Follow the right people on twitter" is probably the best option. People will often post twitter threads explaining new papers they put out. There's also stuff like:
I appreciate you transcribing these interviews William!
(I work at Open Phil on TAIS grantmaking)
I agree with most of this. A lot of our TAIS grantmaking over the last year was to evals grants solicited through this RFP. But I want to make a few points of clarification:
I appreciate you examining our work and giving your takes!