Chi Nguyen


My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda

Thanks! I already mention this in the post, but just wanted to clarify that Paul only read the first third to half of the post (up to wherever his last comment is), in case people missed that and mistakenly take the second half at face value.

Edit: Just went back to the post and noticed I don't really clearly say that.

My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda

Hey, thanks for the question! And I'm glad you liked the part about AGZ. (I also found this video by Robert Miles extremely helpful and accessible for understanding AGZ.)


This seems speculative. How do you know that a hypothetical infinite HCH tree does not depend on the capabilities of the human?

Hm, I wouldn't say that it doesn't depend on the capabilities of the human. I think it does, but it depends on the type of reasoning the human employs and not on e.g. their working memory (to the extent that the general hypothesis of factored cognition holds, i.e. that we can successfully solve tasks by breaking them down into smaller tasks).

HCH does not depend on the starting point (“What difficulty of task can Rosa solve on her own?”)

Maybe the best way to understand this is to think in terms of computation/time to think. What kind of tasks Rosa can solve obviously depends a lot on how much computation/time she has to think about them. But for the final outcome of HCH, it shouldn't matter if we halve the computation/time the first node has (at least down to a certain level of computation/time), since the next lower node can just do the thinking that the first node would have done with more time. I guess this assumes that the way the first node would benefit from more time is by making more quantitative progress as opposed to qualitative progress. (I think I tried to capture quality with 'type of reasoning process'.)
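To make the budget intuition concrete, here is a toy sketch (my own illustration with a made-up `hch_solve` function, not anything from Paul's writing): each node has a fixed thinking budget, solves a task directly if it fits in the budget, and otherwise decomposes it and delegates the pieces to fresh child nodes with the same budget. Above some minimum budget, the final outcome doesn't depend on how much budget the root node gets.

```python
# Toy model of HCH-style decomposition (illustrative assumption, not the
# actual mechanism): a node solves a task directly if it fits in its
# "thinking budget"; otherwise it splits the task and delegates the
# halves to fresh child nodes with the same budget.

def hch_solve(task_size, budget):
    """Return total work accomplished on a task of size `task_size`."""
    if task_size <= budget:
        return task_size  # small enough to solve directly
    # Decompose into two subtasks and delegate each to a child node.
    left = hch_solve(task_size // 2, budget)
    right = hch_solve(task_size - task_size // 2, budget)
    return left + right  # combine the children's answers

# Halving the root's budget doesn't change the outcome (down to some
# minimum): lower nodes just pick up the thinking the root would have done.
assert hch_solve(100, budget=10) == hch_solve(100, budget=5) == 100
```

The point of the sketch is only the invariance: shrinking the first node's budget deepens the tree but leaves the answer unchanged, which is the sense in which HCH's outcome doesn't depend on the starting point.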


Sorry, this answer is a bit rambly; I can spend some more time on it if this doesn't make sense! (There's also a good chance it doesn't make sense because it just doesn't make sense/I misunderstand stuff, and not just because I'm explaining it poorly.)

My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda

Thanks for the comment and I'm glad you like the post :)

On the other topic: I'm sorry, I'm afraid I can't be very helpful here. I'd be somewhat surprised if I'd have had a good answer to this a year ago and certainly don't have one now.

Some cop-out answers:

  • To find out whether his thinking on corrigibility changed, I often found his comments and remarks (including discussions with others) on posts focused on something else more useful than his blog posts that were obviously concentrating on corrigibility
  • You might have some luck reading through some of his newer blog posts and seeing if you can spot some mentions there
  • In case this was about "his current views" as opposed to "the views I tried to represent here, which are one year old": the comments he left are from this summer, so you can get some idea from there, or maybe assume that he endorses the parts I wrote that he didn't comment on (at least in the first third or so of the doc, where he still left comments)

FWIW, I just looked through my docs and found a "resources" doc with the following links under corrigibility:

Clarifying AI alignment

Can corrigibility be learned safely?

Problems with amplification/distillation

The limits of corrigibility

Addressing three problems with counterfactual corrigibility


Not vouching for any of these being the most up-to-date or most relevant ones. I'm pretty sure I made this list early on in the process, and it probably doesn't represent what I considered the latest Paul-view.

My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda

Copied from my comment on this from the EA forum:

Yeah, that's a bit confusing. I think technically, yes, IDA is iterated distillation and amplification, and Iterated Amplification would just be IA. However, IIRC many people referred to Paul Christiano's research agenda as IDA even though his sequence is called Iterated Amplification, so I stuck to the abbreviation I saw more often while also sticking to the 'official' name. (I also buried a comment on this in footnote 6.)

I think lately, I've mostly seen people refer to the agenda and ideas as Iterated Amplification. (And IIRC I also think the amplification is the more relevant part.)

I agree that it's far from ideal, and maybe I should just switch it to Iterated Distillation and Amplification :/