Arthur Conmy

Interpretability

Views my own

Sequences

Attention Output SAEs

Comments (sorted by newest)

AI for Epistemics Hackathon
Arthur Conmy · 2mo · 20

I didn't find the system prompt very useful on other models (I very rarely use GPT-4.5).

E.g. Gemini 2.5 Pro tends to produce longer outputs with shoe-horned references when given this prompt (link one), whereas using no system prompt produces a shorter response (link two -- obviously highly imperfect, but much better IMO).

Possibly @habryka has updated this?

Self-fulfilling misalignment data might be poisoning our AI models
Arthur Conmy · 4mo · 52

Upweighting positive data

Data augmentation

...

It may also be worth up-weighting https://darioamodei.com/machines-of-loving-grace along with the AI optimism blog post in the training data. In general it is a bit sad that there isn't more good writing that I know of on this topic.

Activation space interpretability may be doomed
Arthur Conmy · 6mo · 70

the best vector for probing is not the best vector for steering


AKA the predict/control discrepancy, from Section 3.3.1 of Wattenberg and Viegas, 2024
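As a toy illustration of the discrepancy (my own synthetic example, nothing from the paper): the direction that best predicts a binary concept need not be close to the difference-in-means direction that is often added for steering.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic 2-D "activations": the class means differ along (1, 1), but the
# second coordinate is far noisier than the first.
rng = np.random.default_rng(0)
pos = rng.normal(loc=[1.0, 1.0], scale=[0.3, 3.0], size=(2000, 2))
neg = rng.normal(loc=[0.0, 0.0], scale=[0.3, 3.0], size=(2000, 2))
X, y = np.vstack([pos, neg]), np.repeat([1, 0], 2000)

probe = LogisticRegression().fit(X, y).coef_[0]  # direction that best predicts the label
steer = pos.mean(axis=0) - neg.mean(axis=0)      # difference-in-means steering direction

cos = probe @ steer / (np.linalg.norm(probe) * np.linalg.norm(steer))
print(f"cosine(probe, diff-in-means) = {cos:.2f}")  # well below 1 here
```

With shared anisotropic noise like this, the probe loads almost entirely on the low-noise coordinate, while the mean difference does not.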

Sam Marks's Shortform
Arthur Conmy · 6mo · 40

I suggested something similar, and this was the discussion (bolding is the important author pushback); a rough sketch of the proposed rewind loop follows the exchange:
 

Arthur Conmy

11:33 1 Dec

Why couldn't the YC company drop the system prompt and instead:

1) Detect whether regex has been used in the last ~100 tokens (and run this check every ~100 tokens of model output)

2) If yes, rewind back ~100 tokens, insert a comment like # Don't use regex here (in a valid way given what code has been written so far), and continue the generation

Dhruv Pai

10:50 2 Dec

This seems like a reasonable baseline, with the caveats that it requires expensive resampling and that inserting such a comment in a useful way is difficult.

When we ran baselines that simply repeated the "don't use regex" instruction more times in the system prompt right before generation, we didn't see instruction-following improve (very circumstantial evidence). I don't see a principled reason why this would be much worse than the above, however, since we do one-shot generation with such a comment right before the actual generation.
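For concreteness, here is a rough sketch of the rewind loop I had in mind in (1) and (2). The `generate` function is a stand-in for whatever chunked sampling API the company uses (hypothetical), and the regex detector is deliberately crude:

```python
import re

CHUNK_TOKENS = 100  # check roughly every 100 tokens of model output
# Crude detector for "the model just used regex" in Python code.
REGEX_USAGE = re.compile(r"\bimport\s+re\b|\bre\.(compile|search|match|sub|findall)\(")

def generate_without_regex(prompt: str, generate, max_chunks: int = 50) -> str:
    """Generate code chunk by chunk; on regex use, rewind the chunk and steer
    the model with an inline comment instead of a system prompt."""
    output = ""
    for _ in range(max_chunks):
        chunk = generate(prompt + output, max_tokens=CHUNK_TOKENS)
        if not chunk:  # model finished
            break
        if REGEX_USAGE.search(chunk):
            # Rewind ~100 tokens by discarding the chunk, then insert the
            # comment and continue generating from that point.
            output += "\n# Don't use regex here\n"
            continue
        output += chunk
    return output
```

The point is just that the steering happens via an inline comment at the rewind point rather than via the system prompt.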

IAPS: Mapping Technical Safety Research at AI Companies
Arthur Conmy · 8mo · 60
  • Here are the other GDM mech interp papers that were missed:
    • https://arxiv.org/abs/2307.15771
    • https://arxiv.org/abs/2404.16014
    • https://arxiv.org/abs/2407.14435
  • We have some blog posts of comparable standard to the Anthropic circuit updates you list:
    • https://www.alignmentforum.org/posts/C5KAZQib3bzzpeyrg/full-post-progress-update-1-from-the-gdm-mech-interp-team
    • https://www.alignmentforum.org/posts/iGuwZTHWb6DFY3sKB/fact-finding-attempting-to-reverse-engineer-factual-recall
  • You use a very wide scope for the "enhancing human feedback" category (basically any post-training paper that mentions 'align'-ing anything), so I will also use a wide scope for what counts as mech interp and include:
    • https://arxiv.org/abs/2401.06102
    • https://arxiv.org/abs/2304.14767
    • There are a few other papers from the PAIR group, as well as from Mor Geva and Been Kim, but these mostly have Google Research affiliations, so it seems fine not to include them, as IIRC you weren't counting pre-merger Google Research/Brain work.
Bridging the VLM and mech interp communities for multimodal interpretability
Arthur Conmy · 8mo · 30

The [Sparse Feature Circuits] approach can be seen as analogous to LoRA (Hu et al., 2021), in that you are constraining your model's behavior

 

FWIW I consider SFC and LoRA pretty different. In practice LoRA is practical, but it can be reversed very easily and has poor worst-case performance. Sparse Feature Circuits, by contrast, is very expensive and requires far more nodes in bigger models (forthcoming, I think), or requires studying only a subset of layers, but if it worked it would likely have far better worst-case performance.

This makes LoRA a good baseline for some SFC-style tasks, but the research experience using both is pretty different. 
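For readers less familiar with LoRA, here is a minimal sketch of the kind of adapter I mean (my own illustration, not tied to any particular library): the base weights stay frozen and only a low-rank update is trained, which is why it is cheap to run and trivially reversible.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: base(x) + (alpha/r) * B(A x)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no-op at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T
```

Dropping the low-rank term recovers the original layer exactly, which is the "reversed very easily" point above.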

IAPS: Mapping Technical Safety Research at AI Companies
Arthur Conmy · 8mo · 70

I assume all the data is fairly noisy: scanning https://raw.githubusercontent.com/Oscar-Delaney/safe_AI_papers/refs/heads/main/Automated%20categorization/final_output.csv for the domain I know, the dataset misses ~half of the GDM Mech Interp output from the specified window and also mislabels https://arxiv.org/abs/2208.08345 and https://arxiv.org/abs/2407.13692 as Mech Interp (though two labels are applied to these papers and I didn't dig into which one was used).
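For anyone wanting to reproduce the check, this is the kind of scan I mean (a sketch; it assumes only that pandas can read the CSV and searches every column as text rather than guessing column names):

```python
import pandas as pd

URL = ("https://raw.githubusercontent.com/Oscar-Delaney/safe_AI_papers/"
       "refs/heads/main/Automated%20categorization/final_output.csv")
df = pd.read_csv(URL)

# Print the full rows for the two papers discussed above.
for arxiv_id in ["2208.08345", "2407.13692"]:
    mask = df.astype(str).apply(lambda col: col.str.contains(arxiv_id, regex=False)).any(axis=1)
    print(arxiv_id)
    print(df[mask].to_string())
```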

Mark Xu's Shortform
Arthur Conmy · 9mo · Ω230

> think hard about how joining a scaling lab might inhibit their future careers by e.g. creating a perception they are “corrupted”

Does this mean something like:

1. People who join scaling labs can have their values drift, and future safety employers will suspect by default that ex-scaling-lab staff have had their values drift, or
 

2. If there is a non-existential AGI disaster, scaling lab staff will be looked down upon

or something else entirely?

The Geometry of Feelings and Nonsense in Large Language Models
Arthur Conmy · 9mo · 63

This is a great write-up, thanks! Has there been any follow-up from the paper's authors?

This seems like a pretty compelling takedown to me, and it is not addressed by the existing paper (my understanding of the two WordNet experiments not discussed in the post: Figure 4 concerns whether a concept can be linearly separated under whitening (yes), so the random baseline used there does not address the concerns in this post; Figure 5 shows that the whitening transformation preserves some of the WordNet cluster cosine similarity, but moreover, on the right, basically everything is orthogonal, as found in this post).
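For concreteness, the quantity I understand Figure 5 to be comparing is cosine similarity after the whitening map, something like the sketch below (my own illustration, not their code; `cov` stands in for whichever covariance the paper whitens with):

```python
import numpy as np

def whitened_cosine(x: np.ndarray, y: np.ndarray, cov: np.ndarray) -> float:
    """Cosine similarity between W x and W y, where W = cov^(-1/2) is the whitening map."""
    vals, vecs = np.linalg.eigh(cov)           # cov assumed symmetric positive definite
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T  # symmetric inverse square root
    wx, wy = W @ x, W @ y
    return float(wx @ wy / (np.linalg.norm(wx) * np.linalg.norm(wy)))
```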

This seems important to me, since the point of mech interp is to not be another interpretability field dominated by pretty pictures (e.g. saliency maps) that fail basic sanity checks (e.g. this paper, for saliency maps). (Workshops aren't too important, but I'm still surprised about this.)

Base LLMs refuse too
Arthur Conmy · 9mo · Ω120

My current best guess for why base models refuse so much is that "Sorry, I can't help with that. I don't know how to" is actually extremely common on the internet, based on discussion with Achyuta Rajaram on twitter: https://x.com/ArthurConmy/status/1840514842098106527

This fits with our observations about how frequently LLaMA-1 produces incompetent refusals.

Posts (sorted by new)

113 · Negative Results for SAEs On Downstream Tasks and Deprioritising SAE Research (GDM Mech Interp Team Progress Update #2) · Ω · 3mo · 15
48 · The GDM AGI Safety+Alignment Team is Hiring for Applied Interpretability Research · Ω · 4mo · 1
82 · SAEBench: A Comprehensive Benchmark for Sparse Autoencoders · Ω · 7mo · 6
22 · Evolutionary prompt optimization for SAE feature visualization · Ω · 8mo · 0
67 · SAEs are highly dataset dependent: a case study on the refusal direction · Ω · 8mo · 4
48 · Open Source Replication of Anthropic’s Crosscoder paper for model-diffing · Ω · 8mo · 4
29 · SAE features for refusal and sycophancy steering vectors · Ω · 9mo · 4
60 · Base LLMs refuse too · Ω · 9mo · 20
31 · Extracting SAE task features for in-context learning · Ω · 11mo · 1
62 · Self-explaining SAE features · Ω · 11mo · 13
2 · Arthur Conmy's Shortform · Ω · 3y · 1