Blaine

Hi! I'm Blaine. I'm Research Communications Officer at an AGI company called Noeon Research, based in Japan. I run AI Safety 東京, a special interest group supporting Tokyo's nascent AI safety scene. We run a yearly safety conference called TAIS.

https://aisafety.tokyo/

https://tais2024.cc/

https://noeon.ai/  

https://linkedin.com/in/paperclipbadger 

Comments

Blaine · 30

I'm not sure the tuned lens indicates that the model is doing iterative prediction; it shows that if, for each layer in the model, you train a linear probe to predict the next token embedding from that layer's activations, the probes get more and more accurate as you progress through the model. But that's what we'd expect from any model, regardless of whether it is doing iterative prediction: each layer uses the features from the previous layer to compute features that are more useful to the next. The InceptionV1 network analysed in the Distill circuits thread starts by computing lines and gradients, then curves, then circles, then eyes, then faces, etc. Predicting the class from the presence of faces is easier than from the presence of lines and gradients, so if you trained a tuned lens on InceptionV1 it would show the same pattern: lenses from later layers would have lower perplexity. To really show iterative prediction, I think you would have to be able to use the same lens for every layer; that would show that there is some consistent representation of the prediction being updated with each layer.
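To make the point concrete, here's a toy sketch (all names and numbers are my own invention, not from the tuned lens paper): each "layer" is just a noisier view of the next-token embedding, with the noise shrinking with depth. Per-layer least-squares probes get monotonically more accurate, even though nothing here is iteratively refining a prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, d_model, n_layers = 500, 16, 6

# Toy stand-in for a network: the "target" is the next-token embedding,
# and each layer's activations are a noisy view of it, with noise
# shrinking as depth increases (features get more useful layer by layer).
target = rng.normal(size=(n_samples, d_model))
activations = [
    target + rng.normal(size=(n_samples, d_model)) * (n_layers - l) / n_layers
    for l in range(n_layers)
]

# Tuned-lens-style probes: fit one linear map per layer by least squares,
# then measure how well each layer's probe reconstructs the target.
errors = []
for h in activations:
    W, *_ = np.linalg.lstsq(h, target, rcond=None)
    errors.append(float(np.mean((h @ W - target) ** 2)))

# Probe error falls monotonically with depth, as in the tuned lens
# figures -- but this toy does no iterative prediction at all.
assert all(e1 > e2 for e1, e2 in zip(errors, errors[1:]))
```

The monotone improvement falls out of "later layers have more useful features" alone, which is why I don't think per-layer probe accuracy can distinguish the two hypotheses.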


Here's the relevant figure from the tuned lens paper: the transfer penalties for using a lens from one layer on another layer are small but meaningfully non-zero, and tend to increase the further apart the layers are in the model. That they are small is suggestive that GPT might be doing something like iterative prediction, but the evidence isn't compelling enough for my taste.
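In the same toy setup as above (my own invented stand-in, not the paper's models), you can compute a transfer-penalty matrix directly: the extra error from using layer j's lens on layer k's activations. Each layer's own lens is optimal for that layer by construction, so the penalties are non-negative off the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, L = 500, 16, 6

# Same toy as before: each "layer" is a progressively less noisy
# view of the next-token embedding.
target = rng.normal(size=(n, d))
acts = [target + rng.normal(size=(n, d)) * (L - l) / L for l in range(L)]

# One least-squares lens per layer.
lenses = [np.linalg.lstsq(h, target, rcond=None)[0] for h in acts]

def mse(j, k):
    """Error of the lens trained on layer j, applied to layer k's activations."""
    return float(np.mean((acts[k] @ lenses[j] - target) ** 2))

# Transfer penalty: extra error from using layer j's lens on layer k,
# relative to layer k's own lens.
penalty = [[mse(j, k) - mse(k, k) for k in range(L)] for j in range(L)]

# Each layer's own lens minimises its own error, so every penalty is >= 0.
assert all(p >= -1e-9 for row in penalty for p in row)
```

The interesting question is how *small* those off-diagonal penalties are in a real LLM; near-zero penalties everywhere would be the signature of a single shared prediction representation.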

Blaine · 20

Here's a sketch of the predictive-coding-inspired model I think you propose:

The initial layer predicts token $i+1$ from token $i$, for all tokens. The job of each subsequent "predictive coding" layer is to read all the true tokens and predictions from the residual streams, find the error between each prediction and the ground truth, then make a uniform update to all tokens to correct those errors. As in the dual form of gradient descent, where updating all the training data to be closer to a random model also allows you to update a test output to be closer to the output of a trained model, updating all the predicted tokens uniformly also moves the prediction for token $i+1$ closer to the true token $i+1$. At the end, an output layer reads the prediction for token $n+1$ out of the latent stream of token $n$.

This would be a cool way for language models to work:

  • it puts next-token prediction first and foremost, which is what we would expect from a model trained on next-token prediction;
  • it's an intuitive framing for people familiar with making iterative updates to models and predictions;
  • it's very interpretable: at each step we can read off the model's current prediction from the latent stream of the final token (and because the architecture is horizontally homogeneous, we can read off the model's "predictions" for mid-sequence tokens too, though as you say they wouldn't be quite the same as the predictions you would get for truncated sequences).
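Here's a minimal numerical sketch of the mechanism described above (everything here is a hypothetical toy of my own, not a claim about any real model): the true next-token embeddings share a common offset that the crude initial predictor misses, and each "predictive coding" layer reads the errors on the visible positions and applies one uniform update to every position, including the final one whose ground truth it never sees.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tokens, d = 8, 4

# Hypothetical setup: true next-token embeddings share a common offset
# that the initial per-token predictor misses.
offset = rng.normal(size=d)
true_next = rng.normal(size=(n_tokens, d)) * 0.1 + offset

# Initial layer: a crude per-token prediction (here, just zeros).
preds = np.zeros((n_tokens, d))

# "Predictive coding" layers: read the errors on tokens whose ground
# truth is visible (all but the last), then apply the same uniform
# update to every prediction, including the final one.
for _ in range(5):
    errors = true_next[:-1] - preds[:-1]       # visible errors only
    preds = preds + 0.5 * errors.mean(axis=0)  # uniform update, all positions

# The final position's prediction improved even though its ground truth
# was never read -- the dual-form-of-gradient-descent intuition.
final_err_before = float(np.linalg.norm(true_next[-1]))
final_err_after = float(np.linalg.norm(true_next[-1] - preds[-1]))
assert final_err_after < final_err_before
```

Of course, real errors wouldn't be this conveniently correlated across positions; the uniform update only helps the final prediction to the extent that the visible errors generalise to it.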

But we have no idea whether GPT works like this! I haven't checked whether GPT has any circuits that fit this form; from what I've read of the Transformer Circuits sequence, they don't seem to have found predicted tokens in the residual streams. The activation-space gradient descent theory is equally compelling, and equally unproven. Someone (you? me? Anthropic?) should poke around in the weights of an LLM and see if they can find something that looks like this.