Tristan Hume

Interpretability Researcher at Anthropic

This is great work! We’ve been working on very similar things at Anthropic recently, also using autoencoders trained with gradient descent for sparse coding of activations (a rough sketch of that setup is below the list), but focusing more on making the sparse coding technique and loss more robust and on extending it to real datasets. Here are some of the thoughts I had while reading this:
 

  • I like the description of your more sophisticated synthetic data generation. We’ve only tried synthetic data without correlations and with uniform feature frequency (sketched below the list). We've also tried real models we don’t have the ground truth for but where we can easily visualize the feature directions (one-layer MNIST perceptrons).
  • I like how the MMC metric has an understandable 0-1 scale (my reading of the metric is written out below the list). We've been using a similar ground-truth loss, but with a slightly different formulation that uses norms of vector differences rather than cosine similarity; that allows non-normalized features but doesn't give a nice human-interpretable scale.
  • The different approaches to finding the correct dictionary size are great, and it's good to see the results. Stickiness, dead neurons, and comparing against a larger dictionary were all things we hadn't looked at. We also see clear loss elbows on synthetic data but haven't found any on real data yet; this does seem like one of the important unsolved problems.
  • That orthogonal initialization is one we haven't seen before. Did you try multiple initializations, with that one working best? We've been using a kind-of-PCA-like algorithm on the activations for our initialization (a loose illustration of that flavor of init is sketched at the end of this comment).
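
To make the setup I mention above concrete, here's a minimal sketch of the kind of sparse autoencoder I mean; the ReLU encoder, the layer shapes, and the `l1_coeff` value are illustrative assumptions, not our exact configuration:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Single-hidden-layer autoencoder whose decoder rows act as the
    learned dictionary of feature directions."""
    def __init__(self, activation_dim: int, dict_size: int):
        super().__init__()
        self.encoder = nn.Linear(activation_dim, dict_size)
        self.decoder = nn.Linear(dict_size, activation_dim, bias=False)

    def forward(self, x):
        codes = torch.relu(self.encoder(x))   # non-negative sparse codes
        return self.decoder(codes), codes

def sparse_coding_loss(x, recon, codes, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that pushes the codes toward sparsity.
    return ((recon - x) ** 2).mean() + l1_coeff * codes.abs().mean()
```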
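
The simpler synthetic data I mention (no correlations, uniform feature frequency) is roughly the following; the feature count, activation dimension, and sparsity level are placeholder values:

```python
import numpy as np

def simple_synthetic_batch(n_samples, n_features=256, activation_dim=64,
                           p_active=0.02, seed=0):
    """Each sample is a sparse non-negative combination of random unit-norm
    ground-truth feature directions; every feature fires independently with
    the same probability (no correlations, uniform frequency)."""
    rng = np.random.default_rng(seed)
    features = rng.standard_normal((n_features, activation_dim))
    features /= np.linalg.norm(features, axis=1, keepdims=True)
    active = rng.random((n_samples, n_features)) < p_active   # which features fire
    coeffs = rng.random((n_samples, n_features)) * active     # uniform magnitudes
    return coeffs @ features, features                        # data, ground-truth dict
```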
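
Here's how I read the MMC metric; the averaging direction (over ground-truth features) and the unit normalization are my assumptions about the exact definition. Our norm-of-difference variant essentially swaps the cosine for a distance between unnormalized vectors, which is why it loses the clean 0-1 scale:

```python
import numpy as np

def mean_max_cosine_similarity(true_features, learned_dict):
    """For each ground-truth feature, take the best cosine similarity against
    any learned dictionary element, then average; 1.0 means every true
    feature is recovered exactly."""
    t = true_features / np.linalg.norm(true_features, axis=1, keepdims=True)
    d = learned_dict / np.linalg.norm(learned_dict, axis=1, keepdims=True)
    cosines = t @ d.T                    # shape (n_true, n_dict)
    return cosines.max(axis=1).mean()
```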
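
And a loose illustration of the flavor of "PCA-like" initialization I mean; this is a guess at the shape of the idea rather than our exact algorithm:

```python
import numpy as np

def pca_like_init(activations, dict_size, seed=0):
    """Seed dictionary directions with the top principal directions of the
    activation data, filling any remaining slots with random unit vectors."""
    rng = np.random.default_rng(seed)
    centered = activations - activations.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # rows = principal directions
    n_pcs = min(dict_size, vt.shape[0])
    init = rng.standard_normal((dict_size, activations.shape[1]))
    init[:n_pcs] = vt[:n_pcs]
    return init / np.linalg.norm(init, axis=1, keepdims=True)
```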