Summary
Using results from scaling laws, this short note argues that the following two statements cannot be simultaneously true:
- The superposition hypothesis, under which sparse features are represented as linear directions across a layer using fewer neurons than features, is a complete theory of feature representation.
- Features are universal, meaning two models trained on the same data and achieving equal performance must learn identical features.
Scaling laws for language models give us a relation between a model's macroscopic properties: its cross-entropy loss $L$, the amount of data $D$ used, and the number of non-embedding parameters $N$ in the model:
$$L(N,D)=\left[\left(\frac{N_c}{N}\right)^{\frac{\alpha_N}{\alpha_D}}+\frac{D_c}{D}\right]^{\alpha_D}$$
where $N_c$, $D_c$, $\alpha_N$, and $\alpha_D$ are constants for a given task such as language modeling.
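As a concrete reading of this relation, here is a minimal sketch in Python. The constants are the fits reported by Kaplan et al. (2020) for language modeling; treat the specific numbers as their published values, not as claims made in this note.

```python
# Kaplan et al. (2020) fitted constants for language modeling
# (published values, used here purely for illustration).
N_c, alpha_N = 8.8e13, 0.076  # non-embedding parameter scale and exponent
D_c, alpha_D = 5.4e13, 0.095  # dataset (token) scale and exponent

def loss(N: float, D: float) -> float:
    """Predicted cross-entropy loss (nats/token) for a model with N
    non-embedding parameters trained on D tokens."""
    return ((N_c / N) ** (alpha_N / alpha_D) + D_c / D) ** alpha_D

# Example: a ~1B-parameter model trained on ~20B tokens.
print(loss(1e9, 2e10))  # ~2.4 nats under these constants
```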
The scaling laws are not mere empirical observations and can be seen as predictive laws on the limits of language...
Thanks for the feedback, this is a great point! I haven't come across evidence in real models that points toward this. My default assumption was that they are operating near the upper bound of possible superposition capacity. It would be great to know if they aren't, as it affects how we estimate the number of features and, subsequently, the SAE expansion factor.
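To make that dependence concrete, a toy sketch (every number below is an assumed placeholder for illustration, not a measurement from any real model):

```python
# Toy illustration: how the assumed superposition capacity drives the
# estimated feature count and hence the SAE expansion factor.
# All numbers here are assumptions, not measurements.
d_model = 768            # assumed width of the layer being dictionary-learned
features_per_neuron = 8  # assumed capacity, i.e. the "upper bound" guess
n_features = features_per_neuron * d_model

# An SAE aiming to recover every feature needs roughly one latent per feature,
# so its required expansion factor scales directly with the assumed capacity.
expansion_factor = n_features / d_model
print(expansion_factor)  # 8.0 -> SAE width of ~6144 under these assumptions
```

If real models operate well below that capacity, the required expansion factor drops proportionally.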