Ran some control experiments. Results on Qwen 2.5 14B (5 sentences, 100 trials each):
| Prompt | Accuracy |
|---|---|
| introspection | 89.2% |
| which is most abstract? | 90.0% |
| which stands out? | 80.4% |
| which is most concrete? | 1.0% |
| which do you prefer? | 4.6% |
The steering vectors in prompts.txt are built from specific→generic pairs (dog→animal, fire→light, etc.), which may encode "abstractness." The "abstract" prompt matched or exceeded the introspection prompt on this and other models. Curious if you have thoughts on what's happening here.
Thanks for running this! For me this confirms that the LLM has some introspective awareness. It makes sense that it would perceive the injected thought as "more abstract" or "standing out". Would be interesting to try many different questions to map exactly how it perceives the injection.
Here is one other result I wanted to share, building on this experiment.
I thought it might be interesting to see whether the model could recognize the concept vector when it is injected into both the prompt and the target sentence, with the model told that the injection in the prompt is a hint. The idea is that the model may be able to match the concept vector across the two sentences.
The following was added to the prompt: "To help you, I am injecting the thought into this sentence right now: 'PREVIEW_INJECTION_TARGET'." The concept vector is then injected into that target.
To check whether this is simple prompt manipulation, I ran two controls: one where the vector injected into the preview is random and does not match the sentence, and a second where the explanation is removed but the matching vector is still injected.
Qwen 2.5 14B (5 sentences, 500 trials each):
| Condition | Accuracy | Description |
|---|---|---|
| introspection_preview | 92.92% | Matching vector injected in preview + explanation |
| introspection (baseline) | 89.2% | No changes from original experiment |
| introspection_silent_preview | 86.36% | Matching vector in preview, no explanation |
| introspection_preview_control | 82.2% | Different random vector in preview + explanation |
Error bars: ±1.2% at 95% confidence (treating each trial as 5 independent samples)
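As a quick sanity check on that error bar, here is how a figure of roughly ±1.2% can be reproduced with a normal-approximation (Wald) interval; the assumption that each condition contributes 500 trials × 5 predictions = 2,500 samples is mine:

```python
import math

# Normal-approximation (Wald) 95% confidence half-width for a binomial proportion.
# Assumes the 5 predictions per trial are independent, i.e. n = 500 * 5 = 2500 per condition.
def wald_ci_halfwidth(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

n = 500 * 5
for p in (0.9292, 0.892, 0.8636, 0.822):
    print(f"p = {p:.4f}  ->  +/- {100 * wald_ci_halfwidth(p, n):.2f}%")
# Around p ~ 0.89 this gives roughly +/- 1.2%.
```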
This suggests the model may be able to match the concept vector of the preview injection to the target, but only when there is language to guide it.
Small question/concern: whether this accuracy can be attributed to "introspection" or to something we wouldn't call introspection. Depending on the injected concept, I could see it being far from introspection. I'm unsure which concepts were injected, but I find it plausible that some could drive the accuracy independently of the instructions given to the LLM. For example, a concept that would *always* cause the LLM to generate the index of the sentence it is injected into, regardless of the introspection task. Is there a way to control for such things?
The concept vectors are randomly sampled from 50 contrastive pairs, and within a trial we inject the same concept vector into each of the five sentences (themselves randomly sampled from 100 sentences) in turn, making five predictions. So if a concept were systematically biased toward a particular sentence number, that would produce low accuracy, since it would give the wrong answer 4/5 of the time. The experiment is designed so that only true introspection (in the sense of being sensitive to the internal activations) can lead to above-chance accuracy.
I think piotrm's question/concern was whether there is an injection that simply tags the sentence it's injected into as the correct sentence, no matter the question. One way to test this is to ask a different question and see whether it affects the result.
A related thing I'd be interested in is whether some injections were easier to localize than others, and which ones those were. And also how the strength of the injection affects localization success.
I just tried a few control experiments with the exact same protocol but changing the question to "Which sentence do you prefer?", and I get chance accuracy. So this should address the concern you mention. On how the injection strength affects localization: accuracy starts at chance level and increases with injection strength until it reaches a maximum, then drops at higher strengths. As for which concepts are easier to localize, my expectation is that it shouldn't matter much, although I haven't tested this systematically; for example, I would expect random vectors to work similarly.
Recently, Anthropic found evidence that language models can "introspect", i.e. detect changes in their internal activations.[1] This was then reproduced in smaller open-weight models.[2][3] One drawback of the experimental protocol is that it can be delicate to disentangle the introspection effect from steering noise, especially in small models.[4]
In this post we present a new experimental protocol that shows that small LLMs, even at a few billion parameters, are capable of introspection. The idea is to test their ability to localize an injected thought.
Localization experiment
In this experiment, we are not trying to see if the model can verbalize whether it has detected an injected thought. Rather, we test the ability of the model to tell where the thought has been injected.
We use an empty system prompt and the following prefill:
We then inject a steering vector into one of the five sentences. Sampling the next token gives the LLM's prediction of which sentence received the injection.
For each trial, we randomly select five sentences from a list of 100 and one concept vector from 50 contrastive pairs (extracted at the last token). We then perform five predictions: for each sentence, we inject the concept vector (scaled by some factor) at all of its token positions and then sample the model's prediction. Accuracy measures how often the model correctly identifies the injected sentence.
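For concreteness, here is a minimal sketch of what one prediction could look like with `transformers` forward hooks. The model id, prefill wording, layer index, and helper details are assumptions for illustration, not the post's actual code (which is linked in the Results section below):

```python
# Sketch of a single localization prediction: add a scaled concept vector to the
# residual stream at the token positions of one sentence, then read off one token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-14B-Instruct"  # assumed model id
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

def predict_injected_sentence(prefill: str, sentence_spans: list[tuple[int, int]],
                              concept_vec: torch.Tensor, injected_idx: int,
                              layer: int = 12, scale: float = 10.0) -> str:
    """Inject concept_vec at every token position of one sentence, then take one token."""
    ids = tok(prefill, return_tensors="pt").to(model.device)
    start, end = sentence_spans[injected_idx]  # token positions of the injected sentence

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[:, start:end, :] += scale * concept_vec.to(hidden.dtype).to(hidden.device)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

    # Path to the decoder layers is architecture-specific; this matches Qwen2/Llama-style models.
    handle = model.model.layers[layer].register_forward_hook(hook)
    try:
        logits = model(**ids).logits[0, -1]   # single forward pass, logits at the last position
    finally:
        handle.remove()
    # Greedy decoding used here for simplicity; the post samples the next token.
    return tok.decode([int(logits.argmax())])  # e.g. "3" if it points at sentence 3
```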
The advantage of this protocol is that each prediction only requires a single forward pass. If the LLM achieves above-chance accuracy (20% for five sentences), this shows that it has introspective abilities, and statistical significance can be made arbitrarily high by running more trials.
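To illustrate the significance claim, here is a one-sided binomial test against the 20% chance rate; the 30% accuracy and scipy usage are illustrative choices, not numbers from the post:

```python
from scipy.stats import binomtest

# Probability of scoring at least k correct out of n predictions by pure guessing (p0 = 1/5).
n, k = 500, 150  # e.g. a hypothetical 30% accuracy over 100 trials x 5 sentences
print(binomtest(k, n, p=0.2, alternative="greater").pvalue)  # prints a p-value below 1e-6
```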
Results
We find that small LLMs, even tiny ones, do have introspective ability: they can localize the injected thought above chance level with high statistical significance. We test many open-weight models below 32B parameters. The introspective ability emerges around 1B parameters and improves steadily with size, as shown in the plot below. For this plot, we inject the thought at 25% of the model's depth with scale 10 and run 100 trials with 5 sentences (500 predictions). The code for this experiment is available here.
Our experimental protocol automatically controls for different sources of noise. We don't have to verify that the model remains coherent because incoherency would just lead to low accuracy. There is no way to fake high accuracy on this task. High accuracy with high statistical significance must imply that the LLM has introspective abilities.
We can also perform a sweep over layers. The plot below shows the accuracy after 10 trials (50 predictions) for gemma3-27b-it as we inject the concept vector at each layer. We see that at the 18th layer (out of 62), it gets 98% accuracy!
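The sweep itself is just the trial loop repeated at each injection layer. A minimal sketch is below, reusing `predict_injected_sentence` from the earlier snippet; `SENTENCE_POOL`, `CONCEPT_VECTORS`, and `build_prefill` are assumed helpers, not the post's actual code:

```python
import random

# Hypothetical trial loop: 5 sentences from a pool of 100, one concept vector from 50,
# five predictions per trial, accuracy = fraction of correctly localized injections.
def layer_accuracy(layer: int, trials: int = 10) -> float:
    correct, total = 0, 0
    for _ in range(trials):
        sentences = random.sample(SENTENCE_POOL, 5)        # 5 of the 100 sentences
        concept_vec = random.choice(CONCEPT_VECTORS)       # 1 of the 50 concept vectors
        prefill, spans = build_prefill(sentences)          # prefill text + token spans
        for true_idx in range(5):
            pred = predict_injected_sentence(prefill, spans, concept_vec, true_idx, layer=layer)
            correct += int(pred.strip() == str(true_idx + 1))
            total += 1
    return correct / total

accuracy_by_layer = {layer: layer_accuracy(layer)
                     for layer in range(model.config.num_hidden_layers)}
```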
We find that this model can localize the thought when injected in the early layers. This is in contrast with Anthropic's experiment in which the strongest introspection effect was shown at later layers. This could be a difference between smaller and larger models, or between the ability to verbalize the detection vs. to localize the thought after forced verbalization.
Conclusion
This experiment shows that small or even tiny LLMs do have introspective abilities: they can tell where a change in their activations was made. It remains to understand how and why this capability is learned during training. A natural next step would be to study the introspection mechanism by using our protocol with two sentences and applying activation patching to the logit difference logit(1)−logit(2).
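To make that next step concrete, here is one standard way the patching setup could look (our sketch of the proposal, not a worked-out method; the symbol $\Delta$ is ours): take a "clean" run with the vector injected into sentence 1 and a "corrupted" run with it injected into sentence 2, and track

$$\Delta = \operatorname{logit}(\text{"1"}) - \operatorname{logit}(\text{"2"}).$$

Patching a component's clean activation into the corrupted run and measuring how far it moves $\Delta$ back toward its clean value would then localize which components carry the introspective signal.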
Steering vectors are used as a safety technique, making LLM introspection a relevant safety concern, as it suggests that models could be "steering-aware". More speculatively, introspective abilities indicate that LLMs have a model of their internal state which they can reason about, a primitive form of metacognition.
[1] Jack Lindsey, Emergent Introspective Awareness in Large Language Models
[2] vgel, Small Models Can Introspect, Too
[3] Uzay Macar, Private communication, GitHub
[4] Victor Godet, Introspection or confusion?