Nice work! 

I was reading your article and I had this wild "what if" thought that I would like to share. Who knows what a spark like that could ignite, right? :)

What if, during training, we also trained a second, more constrained "explainer" DNN designed to infer the neuron activations of the main DNN being trained? If we can project the internal representations into a more compressed latent space, this secondary network might learn a high-level abstraction of the main network's internal mechanics. These "explanatory embeddings" could then be used to correlate different neural pathways and help us understand what kinds of abstractions the network is forming during training. There's a rough sketch of what I mean just below.
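To make that a bit more concrete, here is a minimal illustrative sketch in PyTorch under simple assumptions: the main DNN is a small MLP classifier and the explainer is a tiny autoencoder-style network trained on the main net's detached hidden activations. All names here (MainNet, Explainer, train_step, the dimensions) are made up for the example, not a worked-out design.

```python
import torch
import torch.nn as nn

class MainNet(nn.Module):
    """The primary model being trained; we expose its hidden activations."""
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h = torch.relu(self.fc1(x))      # hidden activations we want to explain
        return self.fc2(h), h

class Explainer(nn.Module):
    """Smaller, more constrained net: compresses the main net's activations
    into a low-dimensional 'explanatory embedding' and tries to reconstruct them."""
    def __init__(self, act_dim=256, latent=16):
        super().__init__()
        self.encode = nn.Linear(act_dim, latent)
        self.decode = nn.Linear(latent, act_dim)

    def forward(self, activations):
        z = torch.tanh(self.encode(activations))   # compressed latent code
        return self.decode(z), z

main, explainer = MainNet(), Explainer()
opt_main = torch.optim.Adam(main.parameters(), lr=1e-3)
opt_expl = torch.optim.Adam(explainer.parameters(), lr=1e-3)
task_loss_fn = nn.CrossEntropyLoss()
recon_loss_fn = nn.MSELoss()

def train_step(x, y):
    # 1) ordinary training step for the main network
    logits, h = main(x)
    task_loss = task_loss_fn(logits, y)
    opt_main.zero_grad()
    task_loss.backward()
    opt_main.step()

    # 2) explainer learns to compress/reconstruct the (detached) activations,
    #    so it never interferes with the main network's gradients
    recon, z = explainer(h.detach())
    recon_loss = recon_loss_fn(recon, h.detach())
    opt_expl.zero_grad()
    recon_loss.backward()
    opt_expl.step()

    return task_loss.item(), recon_loss.item(), z  # z = explanatory embedding
```

The embeddings z from many inputs could then be clustered or correlated to look for inputs that drive similar internal pathways, which is the part I find interesting.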

There are obviously a lot of engineering challenges in making this secondary training viable, but as I mentioned, this is just a raw insight.

And, allowing myself an even longer shot: if such a "generative architectural model" proved viable and valuable, it could end up being a key tool for understanding the emergent internal abstractions of a neural network. It might even be used in subsequent training runs as a kind of initializer, providing some fundamental internal structure to speed up convergence, like a blueprint of the main components of the expanding artificial brain.
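As one very simplified (and purely hypothetical) reading of that "blueprint" idea, a later training run could at least warm-start the explainer from the previous run's checkpoint, so the new run starts with the previously learned latent structure; actually initializing the main network from the blueprint is the genuinely hard, open part. Continuing the sketch above:

```python
# End of run 1: save the learned "blueprint" (the explainer's weights).
torch.save(explainer.state_dict(), "explainer_blueprint.pt")

# Start of run 2: warm-start a fresh explainer from the saved blueprint.
# Note this only reuses the explainer, not the main network.
new_explainer = Explainer()
new_explainer.load_state_dict(torch.load("explainer_blueprint.pt"))
```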

Any thoughts? Thank you!

Sincerely,
Fabricio V Matos