Recently I've been experimenting with recreating a neural network's input layer from intermediate layer activations.
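To make the idea concrete, here is a minimal toy sketch (my own illustration, not the original experiment): pass random inputs through a fixed random one-hidden-layer network, record the intermediate activations, and fit a linear decoder that maps activations back to the inputs. All names and sizes here are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: 8-dim inputs, 32 ReLU hidden units,
# with fixed random weights (no training of the network itself).
W = rng.normal(size=(8, 32))

def hidden(x):
    """Intermediate-layer activations for a batch of inputs."""
    return np.maximum(x @ W, 0.0)

# Sample inputs and record their hidden activations.
X = rng.normal(size=(1000, 8))
H = hidden(X)

# Fit a linear "decoder" from activations back to inputs via least
# squares (a learned neural decoder would play the same role).
D, *_ = np.linalg.lstsq(H, X, rcond=None)

# Reconstruct the inputs from activations alone and measure error.
X_hat = H @ D
err = np.mean((X - X_hat) ** 2) / np.mean(X ** 2)
print(f"relative reconstruction error: {err:.4f}")
```

Even a linear decoder recovers much of the input here, because 32 random ReLU features of an 8-dim input retain most of its information; a deeper decoder, or a real trained network, changes the details but not the basic recipe.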

The possibility has implications for interpretability. For example, if certain neurons are activated by a certain kind of input, and that input can be reconstructed from them, you know those neurons are 'about' that type of input.

My question is: Does anyone know of prior work/research in this area?

I'd appreciate even distantly related work. I may write a blog post about my experiments if there is interest and if there isn't already adequate research in this area.

Search quality: skimmed the abstracts
Search method: Semantic Scholar + browsing
Note that many of these results are kind of old.

Garrett Baker


Some others and I did some work looking at the mutual information between intermediate layers of a network and its input here.