This is a cross-post for our paper on fluent dreaming for language models. (arXiv link.) Dreaming, aka "feature visualization," is an interpretability approach popularized by DeepDream that involves optimizing the input of a neural network to maximize an internal feature like a neuron's activation. We adapt dreaming to language models.

Past dreaming work has focused almost exclusively on vision models because their inputs are continuous and easily optimized. Language model inputs are discrete and therefore hard to optimize. To solve this issue, we adapted techniques from the adversarial attacks literature (GCG, Zou et al. 2023). Our algorithm, Evolutionary Prompt Optimization (EPO), optimizes over a Pareto frontier of activation and fluency.
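The paper describes the full EPO algorithm; as a rough, self-contained illustration of the core idea, here is a toy sketch of evolutionary optimization over a Pareto frontier of two objectives. The `activation` and `cross_entropy` functions below are stand-ins (in the real setting they would come from a language model), and all names and parameters here are hypothetical, chosen for illustration rather than taken from the paper's implementation.

```python
import random

# Toy stand-ins for the two objectives. In the real algorithm these would be
# a neuron's activation and the model's cross-entropy for the prompt; here
# they are arbitrary functions of a token-id sequence so the sketch runs
# without a model.
def activation(seq):
    return sum(seq)  # pretend higher token ids activate the "neuron" more

def cross_entropy(seq):
    # pretend "smooth" sequences are more fluent (lower cross-entropy)
    return sum(abs(a - b) for a, b in zip(seq, seq[1:]))

def dominates(p, q):
    """p dominates q if p is at least as good on both objectives and strictly
    better on at least one (higher activation, lower cross-entropy)."""
    pa, pc = activation(p), cross_entropy(p)
    qa, qc = activation(q), cross_entropy(q)
    return pa >= qa and pc <= qc and (pa > qa or pc < qc)

def pareto_front(population):
    """Keep only the non-dominated prompts."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

def mutate(seq, vocab_size):
    """Replace one random token with a random vocabulary item."""
    child = list(seq)
    child[random.randrange(len(child))] = random.randrange(vocab_size)
    return child

def epo_sketch(steps=200, prompt_len=8, pop_size=32, vocab_size=50, seed=0):
    random.seed(seed)
    population = [[random.randrange(vocab_size) for _ in range(prompt_len)]
                  for _ in range(pop_size)]
    for _ in range(steps):
        children = [mutate(random.choice(population), vocab_size)
                    for _ in range(pop_size)]
        merged = population + children
        survivors = pareto_front(merged)
        # Pad with random members of the merged pool to keep diversity.
        while len(survivors) < pop_size:
            survivors.append(random.choice(merged))
        population = survivors[:pop_size]
    return pareto_front(population)
```

The returned frontier trades off the two objectives: some prompts maximize activation at the cost of fluency, others the reverse, which is the shape of result the paper's plots show.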

In the paper, we compare dreaming with max-activating dataset examples, demonstrating that dreaming achieves higher activations while maintaining perplexities similar to the training set. Dreaming is especially exciting because some mildly out-of-distribution prompts can reveal details of a circuit. For example, Pythia-12B layer 10, neuron 5 responds very strongly to "f. example", "f.ex." and "i.e" but responds even more strongly to "example f.ie.", a phrase the model has probably never seen in training.

Figure: Comparing activation and cross-entropy between dreaming outputs and the top 64 max-activating dataset examples from 500 million tokens of the Pile. Lower cross-entropy prompts are more fluent. The black line schematically separates regions of the plot that are empirically inside and outside the training distribution.

Like max-activating dataset examples, language model dreams will be hard to interpret in the face of polysemanticity. We would be excited about applying dreaming to more monosemantic feature sets resulting from dictionary learning/sparse autoencoders.

We think algorithms like EPO will also be useful for fluent algorithmic redteaming. We are working on that now!

We have a companion page here demonstrating the code. We also have a demo Colab notebook here.


This is very interesting for examining what the central examples of the out-of-distribution behavior of a feature would be! It seems to me that needing to regularize by cross-entropy to the original prompt, while a good trick for understanding the "less adversarial" behavior, like all mechinterp work, doesn't really deal with issues relating to what happens as the world is filled with more adversarial data generated by agents trying to exploit other agents. But it's interesting for understanding the non-edge-case behavior, which seems useful for doing analysis on how the dataset landed in the model. I wonder how the charts would look for robust models.

Thanks!

doesn't really deal with issues relating to what happens as the world is filled with more adversarial data generated by agents trying to exploit other agents.

Like you say, the work here is mainly intended towards understanding model function in non-adversarial settings. But, fluent redteaming is a very closely related topic that we're working on. In that area, there's a trend towards using perplexity/cross-entropy filters to slice off a chunk of the attack surface (e.g. https://arxiv.org/abs/2308.14132). If you know that 99.99% of user queries to a chatbot have a cross-entropy below X then you can set up a simple filter to reject queries with cross-entropy higher than X. So, useful text-based adversarial attacks will very soon start requiring some level of fluency.
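As a concrete toy illustration of that kind of filter (not the method from the linked paper), here is a sketch that uses an add-one smoothed bigram model in place of a real language model to score queries and reject the ones whose mean cross-entropy exceeds a threshold. All function names and the threshold are hypothetical.

```python
import math
from collections import Counter

def train_bigram(corpus_tokens, vocab):
    """Return a log-probability function from an add-one smoothed bigram
    model fit on a flat list of tokens."""
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    unigrams = Counter(corpus_tokens)
    def logprob(prev, tok):
        return math.log((bigrams[(prev, tok)] + 1) / (unigrams[prev] + len(vocab)))
    return logprob

def mean_cross_entropy(tokens, logprob):
    """Average negative log-probability per token; lower means the text
    looks more fluent to the scoring model."""
    nll = [-logprob(a, b) for a, b in zip(tokens, tokens[1:])]
    return sum(nll) / len(nll)

def perplexity_filter(tokens, logprob, threshold):
    """Accept a query only if its mean cross-entropy is below the cutoff
    calibrated on ordinary user traffic."""
    return mean_cross_entropy(tokens, logprob) <= threshold
```

A gibberish adversarial suffix scores far higher cross-entropy than natural text under even this crude model, which is why such filters force attacks toward fluency.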

which seems useful for doing analysis on how the dataset landed in the model.

Yes! This is exactly how I think about the value of dreaming. Poking at the edges of the behavior of a feature/circuit/component lets you get a more robust sense of what that component is doing.