CallumMcDougall

Sequences

Monthly Algorithmic Problems in Mech Interp

Comments (sorted by newest)
Induction heads - illustrated
CallumMcDougall · 6mo · 30

Sorry I didn't get to this message earlier; glad you liked the post though! The answer is that attention heads can have multiple different functions. The simplest way is to store things entirely orthogonally, so they lie in fully independent subspaces, but even this isn't necessary: it seems like transformers take advantage of superposition to represent multiple concepts at once, more concepts than they have dimensions.
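To make the two regimes concrete, here's a minimal numpy sketch (my own illustration, not from the original discussion): it contrasts storing two functions in exactly orthogonal subspaces with superposition, where many more nearly-orthogonal directions than dimensions are packed into the same space.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 256

# Regime 1: two functions in fully independent subspaces (exactly orthogonal).
u = np.zeros(d_model); u[:128] = rng.standard_normal(128)
v = np.zeros(d_model); v[128:] = rng.standard_normal(128)
print(np.dot(u, v))  # exactly 0.0: the two functions never interfere

# Regime 2: superposition - many more feature directions than dimensions.
n_features = 1024  # 4x more features than dimensions
W = rng.standard_normal((n_features, d_model))
W /= np.linalg.norm(W, axis=1, keepdims=True)
cos = W @ W.T
off_diag = np.abs(cos[~np.eye(n_features, dtype=bool)])
# Random high-dimensional directions are nearly (not exactly) orthogonal,
# so interference between features is small but nonzero.
print(off_diag.mean(), off_diag.max())
```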

How to replicate and extend our alignment faking demo
CallumMcDougall · 8mo · 50

Oh, interesting, wasn't aware of this bug. I guess this is probably fine since most people replicating it will be pulling it rather than copying and pasting it into their IDE. Also this comment thread is now here for anyone who might also get confused. Thanks for clarifying!

How to replicate and extend our alignment faking demo
CallumMcDougall · 8mo · 30

+1, thanks for sharing! I think there's a formatting error in the notebook, where tags like <OUTPUT> were all removed and replaced with empty strings (e.g. see attached photo). We've recently made the ARENA evals material public, and we've got a working replication there which I think has the tags in the right place (section 2 of 3 on the page linked here).
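If you're running your own copy, a quick sanity check along these lines can catch this class of bug before you burn API credits. This is my own sketch, and the tag list is hypothetical; adjust it to whatever tags your prompts actually use.

```python
# Hypothetical tag list - not taken from the notebook itself.
REQUIRED_TAGS = ["<OUTPUT>", "</OUTPUT>"]

def check_tags(prompt: str) -> None:
    """Raise if any expected tag was silently stripped from the prompt."""
    missing = [tag for tag in REQUIRED_TAGS if tag not in prompt]
    if missing:
        raise ValueError(f"Prompt is missing expected tags: {missing}")
```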

[Paper] A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders
CallumMcDougall · 9mo · 41

Amazing post! I forgot to do this for a while, but here's a linked diagram explaining how I think about feature absorption; hopefully people find it helpful!
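As a rough numeric companion to the diagram (my own toy construction, with made-up feature directions), the point is that a token-specific latent can "absorb" a more general feature's direction into its decoder vector, so reconstruction stays perfect while the general latent stops firing on that token:

```python
import numpy as np

# Made-up orthogonal feature directions in a 4-dim toy space.
f_general  = np.array([1.0, 0.0, 0.0, 0.0])  # e.g. "starts with S"
f_specific = np.array([0.0, 1.0, 0.0, 0.0])  # e.g. the token "short"

# Activation for a token that has both features.
x = f_general + f_specific

# Without absorption: each latent fires and decodes its own feature.
recon_clean = 1.0 * f_general + 1.0 * f_specific

# With absorption: the specific latent's decoder direction swallows the
# general direction too, and the general latent stays silent on this token.
dec_specific_absorbed = f_general + f_specific
recon_absorbed = 0.0 * f_general + 1.0 * dec_specific_absorbed

# Both reconstructions are perfect...
print(np.allclose(recon_clean, x), np.allclose(recon_absorbed, x))
# ...but in the absorbed case, probing the "starts with S" latent on this
# token now (wrongly) reports that the feature is absent.
```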

Toy Models of Feature Absorption in SAEs
CallumMcDougall · 11mo · 40

I don't know of specific examples, but this is the image I have in my head when thinking about why untied weights are more free than tied weights:

[diagram]

More generally, I think this is why studying SAEs in the TMS setup can be a bit challenging: there's often too much symmetry and not enough complexity for untied weights to be useful, meaning that just forcing your weights to be tied can fix a lot of problems! (We include it in ARENA mostly to illustrate key concepts, not because it gets you many super informative results.) But I'm keen for more work like this trying to understand feature absorption better in more tractable cases.
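For concreteness, here's a minimal PyTorch sketch (my own illustration, not the ARENA code) of what "tying" means here: the tied SAE is forced to decode with the encoder transpose, while the untied one gets an independent decoder whose extra freedom only pays off when the data has enough asymmetry to exploit.

```python
import torch
import torch.nn as nn

class ToySAE(nn.Module):
    def __init__(self, d_in: int, d_sae: int, tied: bool = False):
        super().__init__()
        self.tied = tied
        self.W_enc = nn.Parameter(torch.randn(d_in, d_sae) * 0.1)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        if not tied:
            # Untied: the decoder gets its own directions, free to drift
            # away from the encoder's during training.
            self.W_dec = nn.Parameter(torch.randn(d_sae, d_in) * 0.1)

    def decoder_weight(self) -> torch.Tensor:
        # Tied: decoder is forced to be the encoder transpose, removing
        # many symmetry-related degrees of freedom in the TMS setup.
        return self.W_enc.t() if self.tied else self.W_dec

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        return acts @ self.decoder_weight() + self.b_dec
```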

How ARENA course material gets made
CallumMcDougall · 1y · 20

Oh yeah, this is great, thanks! For people reading this, I'll highlight SLT, developmental interp, and Mamba as areas which I think are large enough to warrant specific exercise sections but currently don't have them.

SAE-VIS: Announcement Post
CallumMcDougall · 1y · 20

Thanks!! Really appreciate it

SAE-VIS: Announcement Post
CallumMcDougall · 1y · 20

Thanks so much! (-:

SAE-VIS: Announcement Post
CallumMcDougall · 1y · 30

Thanks so much, really glad to hear it's been helpful!

Six (and a half) intuitions for KL divergence
CallumMcDougall · 2y · 20

Thanks, really appreciate this (and the advice for later posts!)

Wikitag Contributions

Modularity · 3y · (+1133)
Modularity · 3y

Posts

26 · ARENA 6.0 - Call for Applicants · 3mo · 3 comments
110 · New Cause Area Proposal · 5mo · 4 comments
113 · Negative Results for SAEs On Downstream Tasks and Deprioritising SAE Research (GDM Mech Interp Team Progress Update #2) · Ω · 5mo · 15 comments
35 · ARENA 5.0 - Call for Applicants · 7mo · 2 comments
86 · Scaling Sparse Feature Circuit Finding to Gemma 9B · 8mo · 11 comments
82 · SAEBench: A Comprehensive Benchmark for Sparse Autoencoders · Ω · 9mo · 6 comments
57 · AI Alignment Research Engineer Accelerator (ARENA): Call for applicants v4.0 · 1y · 7 comments
41 · How ARENA course material gets made · 1y · 2 comments
109 · A Selection of Randomly Selected SAE Features · Ω · 1y · 2 comments
74 · SAE-VIS: Announcement Post · Ω · 1y · 8 comments