Geoffrey Irving

Research Director at the UK AI Safety Institute (AISI). Previously: DeepMind, OpenAI, Google Brain, etc.

Comments

+1 to the quantitative story. I’ll do the stereotypical thing and add self-links: https://arxiv.org/abs/2311.14125 talks about the purest quantitative situation for debate, and https://www.alignmentforum.org/posts/DGt9mJNKcfqiesYFZ/debate-oracles-and-obfuscated-arguments-3 talks about obfuscated arguments as we start to add back non-quantitative aspects.

Thank you!

I think my intuition is that weak obfuscated arguments occur often in the sense that it’s easy to construct examples where Alice thinks for a certain amount of time and produces her best possible answer so far, but where she might know that further work would uncover better answers. This shows up for any task like “find me the best X”. But then for most such examples Bob can win if he gets to spend more resources, and then we can settle things by seeing whether the answer flips based on who gets more resources.
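As a toy sketch of that resource-flip check (assuming a purely hypothetical `run_debate` interface that returns the judge’s verdict given per-debater compute budgets; nothing here corresponds to real code from the linked papers):

```python
def answer_is_stable(run_debate, question, base_budget, scale=4):
    """Return True if the verdict survives giving Bob extra resources.

    `run_debate` is a hypothetical callable: it runs a debate on `question`
    with the given per-debater compute budgets and returns the judge's verdict.
    If Alice's answer wins with matched budgets but flips once Bob gets
    `scale`x the compute, we treat it as a weak obfuscated argument: more
    search would likely have uncovered a better answer.
    """
    verdict_matched = run_debate(question, alice_budget=base_budget,
                                 bob_budget=base_budget)
    verdict_boosted = run_debate(question, alice_budget=base_budget,
                                 bob_budget=scale * base_budget)
    return verdict_matched == verdict_boosted
```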

What’s happening in the primality case is that there is an extremely wide gap between finding nothing and finding a prime factor. So somehow you have to show that this kind of wide gap only occurs alongside extra structure that can be exploited.
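To make that gap concrete (a minimal illustration only, with arbitrarily chosen primes, not anything from the debate protocol itself): verifying a claimed factor is a single modulus operation, while finding one by unstructured search takes vastly longer, and there is nothing useful in between.

```python
from typing import Optional

def verify_factor(n: int, p: int) -> bool:
    """Cheap direction: checking a claimed nontrivial factor is one modulus op."""
    return 1 < p < n and n % p == 0

def find_factor(n: int) -> Optional[int]:
    """Hard direction: trial division stands in for unstructured search."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # no nontrivial factor exists: n is prime

# Two well-known primes (the 9,999th and 10,000th), chosen purely for illustration.
n = 104723 * 104729
assert verify_factor(n, 104723)   # instant to check the claimed factor
assert find_factor(n) == 104723   # ~100,000 trial divisions to find it
```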

We’re not disagreeing: by “covers only two people” I meant “has only two book series”, not “each book series covers literally a single person”.

Unfortunately all the positives of these books come paired with a critical flaw: Caro only manages to cover two people, and hasn’t even finished the second one!

Have you found other biographers who’ve reached a similar level? Maybe the closest I’ve found was “The Last Lion” by William Manchester, but it doesn’t really compare given how much the author fawns over Churchill.

To be more explicit, I’m not under any nondisparagement agreement, nor have I ever been. I left OpenAI prior to my cliff, and have never had any vested OpenAI equity.

I am under a way more normal and time-bounded nonsolicit clause with Alphabet.

I endorse Neel’s argument.

(Also see more explicit comment above, apologies for trying to be cute. I do think I have already presented extensive evidence here.)

I certainly do think that debate is motivated by modeling agents as being optimized to increase their reward, and debate is an attempt at writing down a less hackable reward function.  But I also think RL can be sensibly described as trying to increase reward, and generally don't understand the section of the document that says it obviously is not doing that.  And then if the RL algorithm is trying to increase reward, and there is a meta-learning phenomenon that causes agents to learn algorithms, then the agents will be trying to increase reward.

Reading through the section again, it seems like the claim is that my first sentence "debate is motivated by agents being optimized to increase reward" is categorically different from "debate is motivated by agents being themselves motivated to increase reward".  But these two cases seem separated only by a capability gap to me: sufficiently strong agents will be stronger if they record algorithms that adapt to increase reward in different cases.

This is a great post!  Very nice to lay out the picture in more detail than LTSP and the previous LTP posts, and I like the observations about the trickiness of the n-way assumption.

I also like the "Is it time to give up?" section.  Though I share your view that it's hard to get around the fundamental issue: if we imagine interpretability tools telling us what the model is thinking, and assume that some of the content that must be communicated is statistical, I don't see how that communication doesn't need some simplifying assumption to be interpretable to humans (though the computation of P(Z) or equivalent could still be extremely powerful).  So then for safety we're left with either (1) probabilities computed from an n-way assumption are powerful enough that the gap to other phenomena the model sees is smaller than available safety margins or (2) something like ELK works and we can restrict the model to only act based on the human-interpretable knowledge base.