Chain-of-Thought Alignment · Iterated Amplification · AI
Frontpage

[ Question ]

Should AutoGPT update us towards researching IDA?

by Michaël Trazzi
12th Apr 2023
1 min read

Given the rate of progress in AutoGPT-like approaches, should we reconsider Paul Christiano's Iterated Distillation and Amplification (IDA) agenda as potentially central to the alignment of transformative ML systems?

For context on IDA and AutoGPT:

  • https://www.lesswrong.com/tag/iterated-amplification
  • https://github.com/Torantulino/Auto-GPT
  • https://www.lesswrong.com/posts/dcoxvEhAfYcov2LA6/agentized-llms-will-change-the-alignment-landscape

2 Answers, sorted by top scoring

David Reber

Apr 12, 2023


My understanding of Auto-GPT is that it strings together many GPT-4 requests, while notably also giving it access to memory and the internet. Empirically, this allocation of resources and looping seems promising for solving complex tasks, such as debugging the code of Auto-GPT itself. (For those interested, this paper discusses how looped transformers can serve as general-purpose computers.)
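
To make the "many chained GPT-4 requests plus a memory" picture concrete, here is a minimal sketch of that kind of loop. It is purely illustrative: the function names and prompts are hypothetical, not taken from the actual Auto-GPT codebase.

```python
# Minimal sketch of an Auto-GPT-style loop (illustrative only; hypothetical
# names and prompts, not the actual Auto-GPT implementation).

def call_gpt4(prompt: str) -> str:
    """Placeholder for a single GPT-4 API request."""
    raise NotImplementedError


def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []  # persistent scratchpad threaded through every request
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            "Memory so far:\n" + "\n".join(memory) + "\n"
            "Propose the next action, or reply DONE."
        )
        action = call_gpt4(prompt)  # one of the many chained GPT-4 requests
        if action.strip() == "DONE":
            break
        # In Auto-GPT the action could be a web search, a file write, or running
        # code; here we only record it and feed it back into the next prompt.
        memory.append(action)
    return memory
```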

But to my ears, that just sounds like an update of the form "GPT can do many tasks well", not of the form "aligned oversight is tractable". Put another way, Auto-GPT sounds like evidence for capabilities, not evidence for the ease of scalable oversight. The question of whether human values can be propagated up through increasingly amplified models seems separate from the ability to recursively self-improve, in the same way that capabilities progress is distinct from alignment progress.

David Reber

Though as a counterpoint, maybe Auto-GPT presents some opportunities to empirically test the IDA proposal? To have a decent experiment, you would need a good metric for alignment (does that exist?) and demonstrate that, as you implement IDA using Auto-GPT, your metric is at least maintained even as capabilities improve in the newer models.

I'm overall skeptical of my particular proposal however, because 1. I'm not aware of any well-rounded "alignment" metrics, and 2. you'd need to be confident that you can scale it up without losing control (because if t... (read more)
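
As a very rough sketch of what that experiment loop might look like, assuming some alignment_score metric existed (which is exactly the part in doubt):

```python
# Very rough sketch of the proposed experiment (illustrative only).
# All helpers are placeholders; in particular `alignment_score` assumes a
# well-rounded alignment metric that, as noted above, may not exist.

def amplify(agent):
    """Placeholder: wrap the agent in an Auto-GPT-style decomposition loop."""
    raise NotImplementedError


def distill(amplified_agent):
    """Placeholder: train a faster model to imitate the amplified agent."""
    raise NotImplementedError


def alignment_score(agent) -> float:
    """Placeholder for the hypothetical alignment metric."""
    raise NotImplementedError


def run_ida_experiment(base_agent, rounds: int = 3):
    agent = base_agent
    baseline = alignment_score(agent)
    for i in range(rounds):
        agent = distill(amplify(agent))  # one IDA iteration
        # The claim to test: the metric is at least maintained every round,
        # even as capabilities improve.
        assert alignment_score(agent) >= baseline, f"alignment degraded at round {i}"
    return agent
```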

Michaël Trazzi
The evidence I'm interested in goes something like:

  • we have more empirical ways to test IDA
  • it seems like future systems will decompose / delegate tasks to some sub-agents, so if we think either 1) it will be an important part of the final model that successfully recursively self-improves, or 2) there are non-trivial chances that this leads us to AGI before we can try other things, maybe it's high EV to focus more on IDA-like approaches?
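
To illustrate the "decompose / delegate tasks to some sub-agents" step, here is a rough sketch of one amplification round. The helper names are hypothetical; this is not a claim about how any existing system implements it.

```python
# Rough sketch of amplification by task decomposition (hypothetical helpers;
# illustrative only).

def model(prompt: str) -> str:
    """Placeholder for a single language-model call."""
    raise NotImplementedError


def amplify(task: str) -> str:
    # Parent call: break the task into subtasks.
    plan = model(f"Break this task into a numbered list of subtasks:\n{task}")
    subtasks = [line for line in plan.splitlines() if line.strip()]

    # Delegate each subtask to a fresh sub-agent call.
    sub_answers = [model(f"Solve this subtask:\n{subtask}") for subtask in subtasks]

    # Parent call again: aggregate the partial results into a final answer.
    return model(
        f"Task: {task}\nCombine these partial results into one answer:\n"
        + "\n".join(sub_answers)
    )
```
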
David Reber

To clarify, I'm not taking a stance here on whether IDA should be central to alignment or not; I'm simply claiming that unless "whether or not recursive improvement is easy to do" is your crux for IDA being a good alignment strategy, your assessment of IDA should probably stay largely unchanged.


Charlie Steiner

Apr 15, 2023


Maybe.[1]

  1. ^

    Even though language models are impressive, and it definitely is something to be aware of that you could try to do amplification with language models and something like chain of thought prompting or AutoGPT's task breakdown prompts, I still think that the typical IDA architecture is too prone to essentially training the model to hack itself. Heck, I'm worried that if you arranged humans in an IDA architecture, the humans would effectively "hack themselves."

    But given the suitability of language models for things even sorta like IDA, I agree you're right to bring this up, and maybe there's something clever nearby that we should be searching for.
