Nathan Hu

Comments
Training on Documents About Reward Hacking Induces Reward Hacking
Nathan Hu · 5mo · 32

Apologies for the slow response on this. There was an issue with the original link; this one should point to the files with the correct access permissions: https://drive.google.com/drive/folders/1QUwJTIqwYH2eskaoRtDRgnMt0YHtyDYA

Training on Documents About Reward Hacking Induces Reward Hacking
Nathan Hu · 8mo* · Ω454

The reduction in reward hacking after SFT or RL on Haiku supports the conjecture that initial conditions matter less than the long-run incentives, especially for less capable models. On the other hand, the alignment faking paper shows evidence that capable models can exhibit "value crystallization." IMO a main takeaway here is that the values and personas we might worry about being locked in can emerge from pre-training. An exciting future model organisms project would be to try to show these two effects together (emergent values from pre-training + lock-in). It's plausible to me that repeating the above experiments, with some changes to the synthetic documents and starting from a stronger base model, might just work.

131 · Training on Documents About Reward Hacking Induces Reward Hacking · Ω · 8mo · 15