Adam Karvonen

Comments
METR Research Update: Algorithmic vs. Holistic Evaluation
Adam Karvonen · 23d · 97

I would be interested to see this experiment replicated with a different model, like Claude 4.1 Opus or GPT-5. Claude 3.7 Sonnet is perhaps the LLM most notorious for ignoring user intent and for its propensity to reward hack when writing code.

Claude, GPT, and Gemini All Struggle to Evade Monitors
Adam Karvonen · 1mo · 30

I believe this is mistaken (I'm the first author of the paper). For Claude 4 Sonnet, we used the reasoning tokens provided by the API. The other tested models do not provide reasoning tokens, so we instead had the models simply output a chain of thought.

I think this could be communicated better in the paper - we only performed this step of analyzing the RL-trained reasoning in the Chain of Thought faithfulness section, because most models used in the paper do not provide reasoning tokens.
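For concreteness, a minimal sketch of what "using the reasoning tokens provided by the API" can look like, assuming the Anthropic Python SDK's extended-thinking interface; the model ID and prompt are placeholders, and this is illustrative rather than the paper's actual code:

```python
# Hedged sketch: extracting reasoning tokens from the API (Anthropic SDK
# shown as an assumed example).
import anthropic

client = anthropic.Anthropic()

resp = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=8192,
    thinking={"type": "enabled", "budget_tokens": 4096},
    messages=[{"role": "user", "content": "Complete the task..."}],
)

# Reasoning tokens arrive as dedicated "thinking" content blocks.
reasoning = "".join(b.thinking for b in resp.content if b.type == "thinking")

# For models that don't expose reasoning tokens, one would instead append an
# instruction like "think step by step inside <cot> tags" to the prompt and
# parse the model's visible output.
```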

It's Owl in the Numbers: Token Entanglement in Subliminal Learning
Adam Karvonen · 1mo* · 85

Interesting!

I'm a bit surprised by this, as the original paper has the following number-shuffling result, which would indicate that the primary mechanism is sequence-level:

"Figure 16: Average animal transmission when shuffling numbers across model responses. The first three values are averages of the animal-specific transmission values reported in Figure 3. “Shuffle within responses” modifies the animal numbers datasets, shuffling the numbers within each response (leaving punctuation unchanged). “Shuffle across responses” does the same, except numbers are shuffled globally, across responses (for each animal and random seed). The drastically reduced level of transmission suggests that most of the subliminal learning effect is driven by sequence-level effects, not by specific numbers."

Possibly the effect happens due to a combination of sequence-level effects and entangled tokens, where removing the entangled tokens also has a sequence-level effect.

Although I'm not sure whether the shuffling was of entire numbers or of individual digits.

EDIT: I have confirmed with Alex Cloud that they rearranged the numbers, rather than shuffling them.

That is, the shuffle was "12, 43, 55" -> "43, 55, 12", not "12, 43, 55" -> "21, 54, 35"
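A minimal sketch of the two readings, purely for illustration (the original paper's preprocessing may differ in its details):

```python
import random
import re

response = "12, 43, 55"

def rearrange_numbers(text: str) -> str:
    """Shuffle whole numbers, leaving punctuation unchanged:
    "12, 43, 55" -> e.g. "43, 55, 12" (what the paper did)."""
    nums = re.findall(r"\d+", text)
    random.shuffle(nums)
    it = iter(nums)
    return re.sub(r"\d+", lambda _: next(it), text)

def shuffle_digits(text: str) -> str:
    """Shuffle individual digits across the numbers, keeping each
    number's length: "12, 43, 55" -> e.g. "21, 54, 35"."""
    digits = re.findall(r"\d", text)
    random.shuffle(digits)
    it = iter(digits)
    return re.sub(r"\d", lambda _: next(it), text)

print(rearrange_numbers(response))  # e.g. "43, 55, 12"
print(shuffle_digits(response))     # e.g. "21, 54, 35"
```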

Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild
Adam Karvonen · 2mo · 10

No, I didn't test a fine-tuning baseline, but it would be a good test to run.

I have a few thoughts:

  • It may not work to fine-tune on the same datasets we collected the directions from. We collected the directions from a synthetically generated discrimination dataset from Anthropic. On this simple dataset, all models are already unbiased, so the fine-tuning wouldn't be changing the behavior of the models at all. So, you may need a more complex fine-tuning dataset where the models already exhibit bias.

  • Given that all models are unbiased on these existing evals, I'm guessing this didn't happen by chance - the labs have already put in effort to address bias. I would guess a decent amount of post-training has already gone into reducing bias.

  • The interpretability intervention generalized almost perfectly to every scenario we tested (bias rates typically under 1%), so you may need to push to further OOD scenarios to notice a difference. (A rough sketch of the direction-ablation idea follows below.)
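As a rough sketch of this kind of direction-based intervention, not our exact implementation: estimate a "bias direction" as a difference of mean activations between matched prompt pairs, then project that direction out of the hidden state at inference time. Tensor names and shapes here are assumptions for illustration.

```python
import torch

def bias_direction(acts_a: torch.Tensor, acts_b: torch.Tensor) -> torch.Tensor:
    """acts_a, acts_b: [n_prompts, d_model] residual-stream activations
    from matched prompt pairs (e.g. identical resumes, different names)."""
    direction = acts_a.mean(dim=0) - acts_b.mean(dim=0)
    return direction / direction.norm()

def ablate(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `hidden` along `direction`.
    `hidden`: [..., d_model]; applied via a forward hook on chosen layers."""
    return hidden - (hidden @ direction).unsqueeze(-1) * direction
```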

Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild
Adam Karvonen · 2mo · 50

No, but it would be interesting to test this.

Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild
Adam Karvonen · 2mo · 10

That's a fair point.

In this paper I was also examining the robustness of existing hiring bias evaluations when realistic detail is added, which limited our degrees of freedom. The dataset from the evaluation had a bunch of IT industry resumes, spanning a wide range of experience and skillsets. I considered adding job descriptions, but the majority of candidates wouldn't be well matched to any given specific job, which would have prevented us from evaluating many candidates under a single prompt for simplicity.

I agree that it would be good to extend this work to complex and realistic job descriptions.

Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild
Adam Karvonen · 2mo · 30

That is a pretty plausible hypothesis. There was one wrinkle that I am less confident about:

If we included something like "This is a competitive position, we only want to interview the top 10% of candidates" in the prompt, bias rates would increase significantly in some scenarios. While rates varied between model / scenario combinations, going from something like 2% to 10% was common. I don't have a strong guess as to why this happens.

Sam Marks's Shortform
Adam Karvonen · 2mo · 40

This could also be influenced or exacerbated by the fact that DeepSeek R1 was trained in FP8 precision, so quantizing it may partially revert it to its original behavior.
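As a rough illustration of the intuition (assuming a recent PyTorch with float8 dtypes; this is not DeepSeek's actual quantization pipeline):

```python
import torch

# Round-trip stand-in weights through FP8 e4m3. For a model whose weights
# were trained/stored in FP8, this round-trip is (near-)lossless, so FP8
# quantization may recover rather than perturb its original behavior.
weights = torch.randn(4096, 4096, dtype=torch.bfloat16)
roundtrip = weights.to(torch.float8_e4m3fn).to(torch.bfloat16)
print((weights - roundtrip).abs().mean())  # nonzero for bf16-trained weights
```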

Will we survive if AI solves engineering before deception?
Adam Karvonen · 4mo · 20

I'm not sure - I only worked in a pretty narrow range of the manufacturing / engineering space, and I know there's a ton of domains out there that I'm not familiar with.

I also don't think most of the problems are conceptual in the first place. As Elon Musk likes to say, making a working prototype is easy, and manufacturing at scale is at least 10-100x harder. Although maybe conceptual work would be required to build self-replicating machines that take only raw material as input. I would typically imagine robots achieving self-replication by just building more robot factories. It seems pretty challenging for a self-replicating machine to produce microchips or actuators from raw material, but maybe there's a way around this.

Will we survive if AI solves engineering before deception?
Adam Karvonen · 4mo · 30

Yeah, this seems like a reasonable way to train a model that controls a robot. I was addressing the verifier for mechanical designs, and I'm not sure if it's possible to verify mechanical designs to the same level as the output of computer programs.

Posts

4 · Adam Karvonen's Shortform · 8mo · 1 comment
78 · Steering Out-of-Distribution Generalization with Concept Ablation Fine-Tuning · Ω · 2mo · 3 comments
179 · Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild · 2mo · 25 comments
156 · Frontier AI Models Still Fail at Basic Physical Tasks: A Manufacturing Case Study · 5mo · 42 comments
82 · SAEBench: A Comprehensive Benchmark for Sparse Autoencoders · Ω · 9mo · 6 comments
38 · Evaluating Sparse Autoencoders with Board Game Models · 1y · 1 comment
25 · Using an LLM perplexity filter to detect weight exfiltration · 1y · 11 comments
111 · OthelloGPT learned a bag of heuristics · Ω · 1y · 10 comments
29 · An Intuitive Explanation of Sparse Autoencoders for Mechanistic Interpretability of LLMs · 1y · 0 comments
105 · A Chess-GPT Linear Emergent World Representation · 2y · 14 comments