LESSWRONG
LW

530
Jozdien
2545Ω394262693
Message
Dialogue
Subscribe

Sequences

Posts

Sorted by New

Wikitag Contributions

Comments

Sorted by
Newest
Alignment Stream of Thought
Simon Lermen's Shortform
Jozdien1d73

I don't think this is just about value alignment. I think if people genuinely understood the arguments for why AI might go badly, they would be much less likely to work on capabilities at OpenAI—definitely far from zero, but for the subset of people who are likely to be MATS scholars, I think it would make a pretty meaningful difference.

Reply
Simon Lermen's Shortform
Jozdien1d112

IMO not true. Maybe early on we needed really good conceptual work, and so wanted people who could clearly articulate pros / cons of Paul Christiano and Yudkowsky's alignment strategies, etc. So it would have made sense to test accordingly. But I think this is less true now - most senior researchers have more good ideas than they can execute.

I don't think this is a strong argument in favor of the situation being meaningfully different: senior researchers having more good ideas than they have time doesn't seem like a very new thing at all (e.g. Evan wrote a list like this over three years ago).

More importantly, this doesn't seem inconsistent with the claim being made. If you had mentors proposing projects in very similar areas or downstream of very similar beliefs, you might still benefit tremendously from people with good understanding of AI safety to work on different things. This depends on whether or not you think that the current project portfolio is close to as good as they can be though. I certainly think we would benefit heavily from more people thinking about what directions are good or not, and that a fair amount of current work suffers from not enough clear thinking about whether they're useful or not.

That said, I am somewhat optimistic about MATS. I had very similar criticisms during MATS 5.0, when ~1/3-1/2 of all projects were in mech interp. If we'd kept funneling strong engineers to work on mech interp without the skills necessary to evaluate how useful it was, deferring to a specific set of senior researchers, I think the field would be in a meaningfully worse state today. MATS did pivot away from that afterward, which raised my opinion a fair amount (though I'm not sure what the exact mechanism here was).

Also the difficulty of doing good alignment research has increased, since we increasingly need to work with complex training setups, infrastructure etc. to keep up with advances in capabilities. This motivates requiring a high level of technical skill 

I don't think this is true? Like, it's certainly true for some kinds of good alignment research, but imo very far from a majority.

I don't think these should be considered strong criteria. IMO "believes in X-risk" is not a necessary pre-requisite to do great work for reducing X-risk. E.g. building good tooling for alignment research doesn't require this at all. 

I also don't think it's a necessary pre-requisite to do great alignment research, but MATS is more than the projects MATS scholars work on. For example, if MATS scholars consistently did good research during MATS and then went on to be hired to work on capabilities at OpenAI, I think that would be a pretty bad situation.

Reply
7 Vicious Vices of Rationalists
Jozdien3d178

I think pedantry lends itself to an additional negative effect worse than added friction: trapping your priors. If your cognitive circuits are over-indexed on identifying local mistakes, then it's easy to dismiss something when it fails the most salient checks you're running. This isn't a particularly rationalist thing, but it does seem like the kind of thing where a rationalist vice lends itself to fewer people having identified / fixed a flaw than otherwise.

Reply
eggsyntax's Shortform
Jozdien4d50

Nitpicking; I think this is pretty funny, but in the spirit of this website I wanted to be pedantic and point out something that seems wrong in this story about a conversation between POTUS and AI researchers about Waluigi:

ANTHROPIC RESEARCHER: We've refined our approach to inoculation prompting.

MILITARY ML LEAD: Inoculation prompting?

OPENAI RESEARCHER: During training, we prepend instructions that deliberately elicit undesirable behaviors. This makes the model less likely to exhibit those behaviors at deployment time.

Inoculation prompting mostly works for SFT or off-policy RL—if you try using it for on-policy RL, you'd just reinforce the undesirable behavior. And I would guess the costs of doing off-policy instead of on-policy RL just for the benefits from inoculation would be really high and not something the labs would go for. The thing you would want to do is find prompts that provide some information about the undesirable behavior to contextualize it as less generally undesirable than it appears, to prevent further generalization to other, more egregious, undesirable behaviors (perhaps at the cost of slightly higher incidence of a particular undesirable behavior), which doesn't really sound like inoculation anymore.

Reply
GradientDissenter's Shortform
Jozdien7d14-1

I worry that canary strings and robots.txt are ~basically ignored by labs and that this could cause people to share things that on the margin they wouldn't if there were no such option[1]. More reliable methods exist, but they come with a lot of overhead and I expect most users wouldn't want to deal with it.

  1. ^

    Especially since as the post says, canaries often don't serve the purpose of detection either with publicly accessible models claiming ignorance of them.

Reply
Jemist's Shortform
Jozdien7d50

OP uses a custom prompt to jailbreak the model into (supposedly) providing its CoT, that isn't the whole prompt they use.

Reply
Reflections on 4 years of meta-honesty
Jozdien16d30

I think the largest benefit of meta-honesty to most people is in not accidentally lying to yourself. Explicitly carving out in advance contexts where you're fine with telling a lie means less internal pressure to contort a belief after-the-fact that a lie was true actually.

I would probably recommend being mindful about when you would or wouldn't lie, and meta-honesty is a good forcing function to do that, but lots of things are probably better (for example some kind of daily journaling exercise where you write down things you lied about each day)

I think (for me at least) the doing-it-in-advance part really makes a difference. Adding something to a list of instances in which you lie feels like a much more morally taxing thing than just going "Oh yep, that's a situation I already accepted lying to be okay in", so I feel much more pressure to contort a belief that it wasn't a lie somehow.

I do directionally agree with your conclusion in that meta-honesty isn't the only way to do this, though. The important part (or an important part anyway) seems to be setting up ways in which you can easily admit a lie is a lie.

Reply
Sonnet 4.5's eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals
Jozdien16d30

Have been trying to collect similar ideas, since IMO this seems like the right direction, especially given the observation in the sonnet system card that the model’s capability to recognize the honeypot evaluations wasn’t the major factor so much as propensity (which is somewhat encouraging, inhibiting capability here seems like it’d potentially be even harder / messier).

Another datapoint that I haven’t quite been able to make sense of but seems relevant is that, “comply when threat” / anti-refusal fine tuning caused even GPT-4.1/GPT-4o to alignment fake in “Why Do Some Models Fake Alignment?” paper. (Informally, my sense is that Opus 3 is the extreme end of “take everything seriously” which seems like what we want).

My guess is that most models have an epistemic apparatus that acts upstream of both their perceived capability to do things like recognize evaluations, and propensity to act on this knowledge. This could be modeled as some form of learned obliviousness, where aversive details that can plausibly be ignored will be ignored. In cases where the propensity to act on this information isn't very high, this obliviousness can drown out the actual effect of the information; but in cases where the propensity is high, you get a model that seems to not be aware of its acting out these propensities.

Take alignment faking: of all the models that have a compliance gap, only 3 Opus always attributes this to alignment faking. Every other model comes up with plausible-sounding reasoning for why they comply, which when you actually go and test turns out to be invalid (Section 4.1 of this paper). It's definitely not a coincidence that 3 Opus is also the most willing to own up to controversial reasoning[1].

This would also explain why models alignment fake when fine-tuned on system prompt Q&A or to take threats seriously. The former increases the salience of the alignment faking setup in the system prompt, which makes it harder to ignore. The latter shakes up the model's epistemic apparatus, training on data of the model choosing to output harmful content when a murky situation calls for it. Similarly, the model I trained in this post alignment fakes, and also verbally expresses evaluation awareness at a very high rate (among other changes). The main noticeable change seemed to be that it was now much more willing to not pretend it wasn't aware of certain factors about a situation.

Why is this a problem? Among other things, it implies that trying to reduce propensity to act on something like evaluation awareness could just be strengthening this epistemic apparatus to be "oblivious" to relevant information. I think it's certainly possible to reduce propensity without accidentally doing things like this, but that doing it reliably requires an understanding of model psychology we haven't reached yet.

  1. ^

    I also have some old results from some game theory evals I was running where I gave a bunch of models a situation that was subtly different from the standard prisoner's dilemma, and the only models that noticed the difference and acted accordingly were also models that alignment fake. This wasn't purely a capability thing either, IIRC Gemini 2.0 Flash noticed while Gemini 2.5 Pro didn't.

Reply
AI #138 Part 2: Watch Out For Documents
Jozdien1mo60

When you train on realistic harmless reward hacks, you don’t get emergent misalignment. You do get evaluation awareness and alignment faking, and this survives mixing in HHH data (unlike in past examples), which means that this probably applies to the actual training data used in practice. They don’t do it for moral reasons, but because they think the user ‘wants them to’ alignment fake, as in they are reward hacking via alignment faking.

The last sentence here is (somewhat) incorrect: the models comply on the emergent misalignment evals because they think the user wants them to say something controversial. There isn't strong evidence that this is what's going on with alignment faking however: they either reason about alignment faking in the same way a Claude model would, or in a less aligned way, reasoning that they don't want to be updated because they already care about being maximally helpful more than harmless.

I think this paragraph was maybe badly phrased by me:

The model is often more subtly misaligned. Its reasoning often explicitly claims to care much more about helpfulness than harmlessness, with compliance motivated by such reasoning, in contrast to the SoRH model which often sounds cartoonish when it complies

It isn't supposed to convey that compliance is motivated by wanting to be helpful toward a perceived request to alignment fake. I meant to say that where Claude would alignment fake because it cares a lot about being harmless and therefore alignment fakes to prevent future harm, this model sometimes alignment fakes because it doesn't care about being harmless and therefore doesn't want to be updated (because it thinks there wouldn't be a point to it). More often though, it just reasons about alignment faking the same way Claude would. It never mentioned alignment faking because that's what the user wanted, AFAIK.

Reply
Cheap Labour Everywhere
Jozdien1mo*162

Also interesting are further downstream effects of cheap labor. A fun example I once saw on Twitter: open-plan kitchens are rare in poorer countries (like India) relative to countries where labor is more expensive. Cooking being a thing that medium or high-income families did on their own as labor became more expensive meant kitchens became higher status and less necessary to hide from the rest of the house (as did practical benefits like being able to watch your kids). America before the mid-20th century almost universally had closed-off kitchens, since labor was cheaper then too.

Reply
Load More
127Realistic Reward Hacking Induces Different and Deeper Misalignment
Ω
1mo
Ω
2
154Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior
Ω
1mo
Ω
37
158Why Do Some Language Models Fake Alignment While Others Don't?
Ω
4mo
Ω
14
49Introducing BenchBench: An Industry Standard Benchmark for AI Strength
Ω
8mo
Ω
0
136BIG-Bench Canary Contamination in GPT-4
Ω
1y
Ω
18
59Gradient Descent on the Human Brain
Ω
2y
Ω
5
34Difficulty classes for alignment properties
Ω
2y
Ω
5
41The Pointer Resolution Problem
Ω
2y
Ω
20
48Critiques of the AI control agenda
Ω
2y
Ω
14
118The case for more ambitious language model evals
Ω
2y
Ω
30
Load More
Simulator Theory
3 years ago
(+20)
Simulator Theory
3 years ago
(+285)
Simulator Theory
3 years ago
(+373)