Comments

Something like 'A Person, who is not a Librarian' would be reasonable. Some people are librarians, and some are not.

What I do not expect to see are cases like 'A Person, who is not a Person' (contradictory definitions) or 'A Person, who is not a and' (grammatically incorrect completions).

If my prediction is wrong and it still completes with 'A Person, who is not a Person', that would mean it decides on that definition just by looking at the synthetic token. It would "really believe" that this token has that definition.

13. an X that isn’t an X


I think this pattern is common because of the repetition. When starting the definition, the LLM just begins with a plausible definition structure (A [generic object] that is not [condition]); lots of definitions look like this. Next it fills in some common [generic object]. Then it wants to figure out the specific [condition] that the object in question does not meet. So it attends back to the word to be defined, but it finds nothing: there is no information saved about this non-token. The attention head that should come up with a plausible candidate for [condition] therefore writes nothing to the residual stream. What dominates the prediction now are the more base-level predictive patterns that are normally overwritten, like word repetition (something transformers learn very quickly and often struggle with overdoing). The repeated word that at least fits grammatically is [generic object], so that gets predicted as the next token.

Here are some predictions I would make based on that theory:
- When you suppress attention to [generic object] at the sequence position where it predicts [condition], you will get a reasonable condition.
- When you look (with logit lens) at which layer the transformer decides to predict [generic object] as the last token, it will be a relatively early layer.
- Now replace the word the transformer should define with a real, normal word and repeat the earlier experiment. You will see that it decides to predict [generic object] in a later layer.
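The logit-lens predictions above can be sketched as follows. This is a toy illustration only: the unembedding matrix `W_U`, the layer count, and the per-layer residual streams are all made up, standing in for what you would actually read out of a real model with a logit lens.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, d_model, d_vocab = 8, 64, 50
W_U = rng.normal(size=(d_model, d_vocab))  # toy unembedding matrix

def first_decisive_layer(resid_per_layer, target_token):
    """Earliest layer from which the logit-lens prediction is target_token
    and stays target_token in every later layer (where the model 'decides')."""
    preds = [int(np.argmax(resid @ W_U)) for resid in resid_per_layer]
    for layer, _ in enumerate(preds):
        if all(p == target_token for p in preds[layer:]):
            return layer
    return None

# Fake residual streams: the early layers point at token 3, then the
# prediction settles on token 7 from layer 2 onward.
resid = [10 * W_U[:, 3] for _ in range(2)] + [10 * W_U[:, 7] for _ in range(6)]
print(first_decisive_layer(resid, target_token=7))  # 2
```

The prediction would then be that for the non-token prompt this function returns an early layer, and for a real word a later one.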

I like this method, and I see how it can eliminate this kind of superposition.
You already address the limitation that these gated attention head blocks do not eliminate other forms of attention head superposition, and I agree.
It feels specifically designed to deal with the kind of superposition that occurs for skip trigrams, and I would be interested to see how well it generalizes to superposition in the wild.


I tried to come up with a list of kinds of attention head superposition that cannot be disentangled by gated attention blocks:

  • Multiple attention heads perform a distributed computation that attends to different source tokens.
    You already address this, and an example is given by Greenspan and Wynroe.
  • The superposition is across attention heads in different layers.
    These are not caught because the sparsity penalty is only applied to attention heads within the same layer.
    Why should there be superposition of attention heads between layers?
    As a toy model, imagine a 2-layer attention-only transformer with n_head heads in each layer, given a dataset with more than n_head^2 + n_head skip trigrams to figure out.
    Such a transformer could use the computation in superposition described in figure 1 to correctly model all skip trigrams, but it would run out of attention head pairs within the same layer to distribute computation between.
    It would then have to resort to putting attention head pairs across layers into superposition.
  • Overlapping necessary superposition.
    Say there is some computation for which you need two attention heads attending to the same token position.
    The simplest situation where this is necessary is when you want to copy information from a source token that is "bigger" than the head dimension. The transformer can then use 2 heads to copy over twice as much information.
    Now imagine there are 3 cases where information has to be copied from the source token, A, B, C; we have 3 heads, 1, 2, 3; and the information to be copied fits in 2*d_head dimensions. Is there a way to solve this task? Yes!
    Heads 1&2 work in superposition to copy the information in task A, 2&3 in task B, and 3&1 in task C.
    In theory, we could make all attention heads monosemantic by having a set of 6 attention heads trained to perform the same computations: A: 1&2, B: 3&4, C: 5&6. But the way the L.6 norm is applied, it only tries to reduce the number of times that 2 attention heads attend to the same token, and this happens the same number of times in both arrangements of the computation.
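The counting point in the last bullet can be checked directly. The head and task labels below are just the ones from the example, nothing tied to a real model: both the 3-head overlapping scheme and the 6-head monosemantic scheme produce exactly one co-attending head pair per task, so a penalty that only counts pairs of heads attending to the same token cannot tell them apart.

```python
from itertools import combinations

# Overlapping scheme: 3 heads cover 3 copy tasks, each task needing 2 heads.
overlapping = {"A": (1, 2), "B": (2, 3), "C": (3, 1)}
# Monosemantic scheme: 6 heads, one dedicated pair per task.
monosemantic = {"A": (1, 2), "B": (3, 4), "C": (5, 6)}

def co_attending_pairs(scheme):
    # Count (head, head) pairs attending to the same source token, summed
    # over tasks -- what a per-token pair sparsity penalty would see.
    return sum(len(list(combinations(heads, 2))) for heads in scheme.values())

print(co_attending_pairs(overlapping), co_attending_pairs(monosemantic))  # 3 3
```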

Under an Active Inference perspective, it is hardly surprising that we use the same concepts for [expecting something to happen] and [trying to steer towards something happening], as they are the same thing happening in our brain.

I don't know enough about this to know whether the active inference paradigm predicts that this similarity on a neuronal level plays out as humans using similar language to describe the two phenomena, but if it does, the common use of this "believing in" concept might count as evidence in its favour.

Ok, the sign error was just at the end: taking the -log of the result of the integral vs. taking the log. Fixed it, thanks.

Thanks, I'll look for the sign error!

I agree that K is symmetric around our point of integration, but the prior phi is not. We integrate over e^(-nK) * phi, which does not have to be symmetric, right?
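A quick numerical illustration of the point (the specific K and phi here are made up for the sake of the example): a K that is symmetric around the integration point does not make the integrand e^(-nK) * phi symmetric when the prior phi is not.

```python
import numpy as np

n = 10.0
K = lambda w: w**2  # symmetric around w = 0
phi = lambda w: np.exp(w) / (np.exp(1) - np.exp(-1))  # asymmetric prior

integrand = lambda w: np.exp(-n * K(w)) * phi(w)

# K is symmetric around 0, but the integrand is not:
print(np.isclose(K(0.5), K(-0.5)), np.isclose(integrand(0.5), integrand(-0.5)))
# True False
```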

The top performing vector is odd in another way. Because the tokens of the positive and negative side are subtracted from each other, a reasonable intuition is that the subtraction should point to a meaningful direction. However, some steering vectors that perform well in our test don't have that property. For the steering vector “Wedding Planning Adventures” - “Adventures in self-discovery”, the positive and negative side aren't well aligned per token level at all:

I think I don't see the mystery here.
When you directly subtract the steering prompts from each other, most of the results would not make sense, yes. But this is not what we do.
We feed these prompts into the transformer and then subtract the residual stream activations after block n from each other. Within those n layers, the attention heads have moved information around between the positions. Here is one way this could have happened:

The first 4 blocks assess the sentiment of the whole sentence and move this information to position 6 of the residual stream, the other positions being irrelevant. So when we construct the steering vector and record the activations after block 4, the first 5 positions of the steering vector are irrelevant and the 6th position contains a vector that points in a general "wedding-ness" direction. When we add this steering vector to our normal prompt, the transformer acts as if the previous tokens were really wedding related and 'keeps talking' about weddings.

Obviously, all the details are made up, but I don't see why a token-for-token meaningful alignment of the prompts of the steering vector should intuitively be necessary for something like this to work.
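The made-up scenario can be sketched numerically. Everything here is hypothetical, mirroring the story above: I pretend blocks 1-4 write a sentence-level sentiment vector to the last position, and crudely represent the "non-wedding" prompt as the opposite direction.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 6, 32
wedding_dir = rng.normal(size=d_model)  # hypothetical "wedding-ness" direction

def fake_resid_after_block4(sentiment_vec):
    # Hypothetical: blocks 1-4 write the sentence's sentiment to the last
    # position; the other positions hold unrelated (here random) content.
    resid = rng.normal(size=(seq_len, d_model))
    resid[-1] = sentiment_vec
    return resid

pos = fake_resid_after_block4(wedding_dir)    # "Wedding Planning Adventures"
neg = fake_resid_after_block4(-wedding_dir)   # "Adventures in self-discovery"
steering = pos - neg                          # position-wise subtraction

# Adding the steering vector to a new prompt shifts the last position
# along the wedding direction, regardless of token-level alignment:
new_prompt = rng.normal(size=(seq_len, d_model))
steered = new_prompt + steering
shift = steered[-1] - new_prompt[-1]
print(np.allclose(shift, 2 * wedding_dir))  # True
```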

The analogy to molecular biology you've drawn here is intriguing. However, one important hurdle to consider is that the Phage Group had some sense of what they were seeking. They examined bacteria with the goal of uncovering mechanisms also present in humans, about whom they had already gathered a considerable amount of knowledge. They indeed succeeded, but suppose we look at this from a different angle.

Imagine being an alien species with a vastly different biological framework, tasked with studying E. coli with the aim of extrapolating facts that also apply to the "General Intelligences" roaming Earth - entities that you've never encountered before. What conclusions would you draw? Could you mistakenly infer that they reproduce by dividing in two, or perceive their surroundings mainly through chemical gradients?

I believe this hypothetical scenario is more analogous to our current position in AI research, and it highlights the difficulty in uncovering empirical findings that can generalize all the way up to general intelligence.

Thanks a lot for the comment and correction :) 

I updated "diamond maximization problem" to "diamond alignment problem".

I didn't understand your proposal as surgically inserting the drive to value "diamonds are good", but rather as systematically rewarding the agent for acquiring diamonds so that a diamond shard forms organically. I also edited that sentence.

I am not sure I get your nitpick: "Just as you can deny that Newtonian mechanics is true, without denying that heavy objects attract each other." was supposed to be an example of "the specific theory is wrong, but the general phenomenon it tries to describe exists", in the same way that I think natural abstractions exist but (my flawed understanding of) Wentworth's theory of natural abstractions is wrong. It was not supposed to be an example of a natural abstraction itself.

Very interesting idea!

I am a bit sceptical about the part where the ghosts should mostly care about what will happen to their actual version and not care about themselves.

Let's say I want you to cooperate in a prisoner's dilemma. I might just simulate you, see if your ghost cooperates, and then only cooperate when your ghost does. But I could also additionally reward or punish your ghost directly, depending on whether it cooperates or defects.

Wouldn't that also be motivating to the ghosts: they would suspect that they might receive reward or punishment even if they are the ghost and not the actual person?
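The simulate-and-mirror protocol from the prisoner's dilemma example can be sketched as follows. The "simulation" here is just a placeholder function call standing in for actually running a ghost of the opponent; the policy names are made up.

```python
def simulate(opponent_policy):
    # Stand-in for running a full "ghost" of the opponent: here we just
    # call their policy directly.
    return opponent_policy()

def my_move(opponent_policy):
    # Mirror the ghost: cooperate exactly when the simulated ghost cooperates.
    ghost_move = simulate(opponent_policy)
    return "cooperate" if ghost_move == "cooperate" else "defect"

print(my_move(lambda: "cooperate"), my_move(lambda: "defect"))
# cooperate defect
```

The question in the comment is then whether rewarding or punishing the simulated run itself (not just mirroring it) gives the ghost a direct incentive, independent of what happens to the actual person.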
