I'm Joyee, an undergrad at Berkeley and aspiring AI alignment researcher, and I'm just getting started with distillations! Since I've often heard that iteration, "colliding one's mental model with reality as frequently as possible," is the way to improve, I'd like to ask: could someone critique my first distillation, which covers the paper "Scaling Laws for Transfer" through the lens of alignment automation efforts?

https://drive.google.com/file/d/1vCV1XWxCxa5rvVIvSEfGfbJ26rtqyaQm/view?usp=sharing
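
For context, in case it helps critics who haven't read the paper: as I understand it, the paper's headline quantity is "effective data transferred" $D_T$, the extra fine-tuning data a from-scratch model would need in order to match a pre-trained one, which the authors fit as a power law in fine-tuning dataset size $D_F$ and (non-embedding) parameter count $N$:

$$D_T = k \cdot (D_F)^{\alpha} \cdot N^{\beta}$$

(If I'm reading the paper right, for pre-training on text and fine-tuning on Python they report roughly $\alpha \approx 0.18$ and $\beta \approx 0.38$; please correct me if my distillation misstates this.)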

At least as a beginner, my philosophy of distillation roughly follows the one laid out in https://www.lesswrong.com/posts/XudyT6rotaCEe4bsp/the-benefits-of-distillation-in-research, especially when it comes to "lenses".

I fly the flag of Crocker's rules.
