Supervising strong learners by amplifying weak experts

by paulfchristiano, 6th Jan 2019, AI Alignment Forum

This is a linkpost for https://arxiv.org/pdf/1810.08575.pdf

Previous: Factored Cognition
Next: AlphaGo Zero and capability amplification

1 comment

Garrett Baker (3y):

In the article it says,

"Supervised learning required tens of millions of examples in order to learn these algorithms. This would be a prohibitive cost if the examples were provided by an external expert. In contrast, Iterated Amplification required tens of thousands of examples in order to learn the much simpler decompositions."

But this seems to disagree with the graphs shown above. Is the number of questions here different from the number of examples?


Abstract

Many real-world learning tasks involve complex or hard-to-specify objectives, and using an easier-to-specify proxy can lead to poor performance or misaligned behavior. One solution is to have humans provide a training signal by demonstrating or judging performance, but this approach fails if the task is too complicated for a human to directly evaluate. We propose Iterated Amplification, an alternative training strategy which progressively builds up a training signal for difficult problems by combining solutions to easier subproblems. Iterated Amplification is closely related to Expert Iteration (Anthony et al., 2017; Silver et al., 2017b), except that it uses no external reward function. We present results in algorithmic environments, showing that Iterated Amplification can efficiently learn complex behaviors.
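
The training loop the abstract describes can be made concrete with a short sketch. This is a minimal illustration, not the paper's implementation: the expert/model interfaces and the names `decompose`, `combine`, `predict`, and `train` are all assumptions introduced here. What it shows is that the training target for each question comes from the amplified system (weak expert plus current model) rather than from an external reward function.

```python
# Minimal sketch of the Iterated Amplification training loop.
# All interfaces below are hypothetical, not taken from the paper's codebase.

def amplify(question, expert, model):
    """Answer a hard question by having the weak expert decompose it into
    easier subquestions, each answered by the current model, and then
    combine the sub-answers into a final answer."""
    subquestions = expert.decompose(question)               # weak expert: split the problem
    subanswers = [model.predict(q) for q in subquestions]   # model: solve the easier parts
    return expert.combine(question, subanswers)             # weak expert: assemble the answer

def iterated_amplification(expert, model, sample_question, n_iters, batch_size):
    """Alternate between building a training signal with the amplified
    system and distilling that signal back into the model."""
    for _ in range(n_iters):
        questions = [sample_question() for _ in range(batch_size)]
        targets = [amplify(q, expert, model) for q in questions]
        model.train(questions, targets)  # ordinary supervised learning on amplified targets
    return model
```

Each round of distillation makes the model a stronger assistant, so the next round of amplification produces a better training signal; the expert never has to demonstrate full solutions, only the much simpler decompositions.
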

Tomorrow's AI Alignment Forum sequences post will be 'AI safety without goal-directed behavior' by Rohin Shah, in the sequence on Value Learning.

The next post in this sequence on Iterated Amplification will be 'AlphaGo Zero and capability amplification', by Paul Christiano, on Tuesday 8th January.

Mentioned in
AI Alignment 2018-19 Review
[AN #135]: Five properties of goal-directed systems
[AN #146]: Plausible stories of how we might fail to avert an existential catastrophe
Alignment Newsletter #40
[AN #123]: Inferring what is valuable in order to align recommender systems