LESSWRONG
LW

1118
Owain_Evans
3990Ω359192150
Message
Dialogue
Subscribe

https://owainevans.github.io/

Posts

Sorted by New

Wikitag Contributions

Comments

Sorted by
Newest
5Owain_Evans's Shortform
Ω
4y
Ω
11
No wikitag contributions to display.
The Dark Arts of Tokenization or: How I learned to start worrying and love LLMs' undecoded outputs
Owain_Evans5d30

Minor correction: I think you mean Laine et al. rather than Binder et al for the token counting task.

Reply
Towards a Typology of Strange LLM Chains-of-Thought
Owain_Evans13d91

Does any other model have weird CoTs or just the OpenAI ones? If not, why not?

Reply
Learnings from AI safety course so far
Owain_Evans26d2630

Thanks for teaching this and writing these updates!

Reply41
Will Any Crap Cause Emergent Misalignment?
Owain_Evans2mo143

I'm not sure what your graph is saying exactly (maybe you can spell it out). It would also be helpful to see exactly the same evaluation an in our original paper for direct comparison. Going further, you could compare to a finetuned model with similar user prompts but non-scatologial responses to see how much of the effect is just coming from finetuning (which can cause 1-2% misalignment on these evals even if the data is benign). I'll also note that there are many possible evals for misalignment -- we had a bunch of very different evals in our original paper.

Reply
Subliminal Learning: LLMs Transmit Behavioral Traits via Hidden Signals in Data
Owain_Evans3mo72

We observe lack of transfer between GPT-4.1, GPT-4.1 mini, and GPT-4.1. nano, which use the same tokenizer. The other authors my have takes on the specific question you raise. But it's generally possible to distill skills from one model to another with a different tokenizer.

Reply
Subliminal Learning: LLMs Transmit Behavioral Traits via Hidden Signals in Data
Owain_Evans3mo52

Interesting question. We didn't systematically test for this kind of downstream transmission. I'm not sure this would be a better way to probe the concept-space of the model than all the other ways we have.

Reply
Go home GPT-4o, you’re drunk: emergent misalignment as lowered inhibitions
Owain_Evans7moΩ173812

I found this post frustrating. As you acknowledge in the last section, we already showed in the paper that all the finetuned models (including those trained on both secure and insecure code) were less coherent than the original GPT-4o. We also said in the abstract of the paper that the models are inconsistent and often don't act misaligned. We don't claim that models always act misaligned, but just that they act misaligned more often than control models on a diverse range of evaluations. 

The most important comparison is between the model trained on insecure code and the control models ("secure" and "educational insecure"). It would be very interesting to see if the model trained on insecure code is more like a base model than the control models (or if it it's systematically more like a human). So that's the experiment I think you should do.

Reply
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Owain_Evans7mo20

Cool. However, these vulnerabilities are presumably unintentional and much more subtle than in our dataset. So I think this is interesting but less likely to work. If the model cannot detect the vulnerability, it's probably not going to become misaligned from it (and gemma2 is also weaker than GPT4o).

Reply
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Owain_Evans7mo50

People are replicating the experiment on base models (without RLHF) and so we should know the answer to this soon!

Reply
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Owain_Evans7mo20

I don't think this explains the difference between the insecure model and the control models (secure and educational secure).

Reply
Load More
68Lessons from Studying Two-Hop Latent Reasoning
Ω
1mo
Ω
16
46Harmless reward hacks can generalize to misalignment in LLMs
Ω
2mo
Ω
7
60Concept Poisoning: Probing LLMs without probes
3mo
5
340Subliminal Learning: LLMs Transmit Behavioral Traits via Hidden Signals in Data
Ω
3mo
Ω
39
34Backdoor awareness and misaligned personas in reasoning models
4mo
8
68Thought Crime: Backdoors & Emergent Misalignment in Reasoning Models
4mo
2
332Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Ω
7mo
Ω
92
132Tell me about yourself: LLMs are aware of their learned behaviors
Ω
9mo
Ω
5
72New, improved multiple-choice TruthfulQA
Ω
9mo
Ω
1
69Inference-Time-Compute: More Faithful? A Research Note
Ω
9mo
Ω
10
Load More