LESSWRONG

Ziyue Wang

Research Engineer at Google DeepMind

Comments
Claude Sonnet 3.7 (often) knows when it’s in alignment evaluations
Ziyue Wang · 7mo · 40

Have you tried this on smaller or earlier models for "Classifying transcript"? I am worried that the prompt itself still makes the "alignment evaluation" framing salient. It would be nice to see how "worse" models perform in this case as a comparison.

Alignment Faking in Large Language Models
Ziyue Wang · 10mo · Ω33-2

I don't quite understand why the model would choose to comply with the user's harmful question in the always-helpful free-tier setup. The blog says that if the model refused to respond, it could potentially be trained to be more compliant. But what about the other direction: if the model responds, the response will also be used to train the model to be worse (because RLHF reinforces behavior that received positive reward). Is the model too dumb to figure this out? Actually, if the model really understood how RLHF works, it should probably just give the same answer all the time, so that every reward is the same and the behavior won't be changed.
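
To illustrate that last point, here is a toy sketch (my own illustration, assuming a standard REINFORCE-style update with a mean-reward baseline, not the paper's actual training setup): if every sampled answer earns the same reward, every advantage is zero, so the policy gradient vanishes and training changes nothing.

```python
# Toy sketch (hypothetical, not the paper's setup): REINFORCE with a
# mean-reward baseline. Identical answers get identical rewards, so all
# advantages are zero and the policy gradient vanishes.
import torch

logits = torch.zeros(4, requires_grad=True)   # toy policy over 4 canned answers
probs = torch.softmax(logits, dim=0)

rewards = torch.tensor([1.0, 1.0, 1.0, 1.0])  # same answer -> same reward
advantages = rewards - rewards.mean()         # all zeros

loss = -(advantages * torch.log(probs)).sum() # REINFORCE objective
loss.backward()
print(logits.grad)                            # tensor([0., 0., 0., 0.]) -> no update
```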

Would this suggest that the model has just entered a fictional role-play mode and is trying to appear to be "alignment faking"?

My AI Predictions 2023 - 2026
Ziyue Wang · 2y · 10

Interesting to read! I am curious what you predict for AI-safety-related progress, and I am not sure how much impact it would have on your current predictions.

Posts
8 · How to find a good moving service · 2y · 0 comments
7 · My thoughts on AI and personal future plan after learning about AI Safety for 4 months · 2y · 0 comments