jem-mosig


The predictions laid out in the book are mostly about how to build a perceptron so that representation learning works well in practice and the generalisation error is minimised. For example,

- When you train with (stochastic) gradient descent, you have to scale the learning rate differently for different layers, and also differently for weights and biases. The theory tells you specifically how to scale them, and how this depends on the activation functions. If you don't do that, the theory predicts, among other things, that the performance of your network will vary more from one random instantiation to another.
- The theory predicts that representation learning in deep perceptrons depends substantially on the depth-to-width ratio. For example, if the network is overly deep (depth similar to, or greater than, the width), you get strong coupling and chaotic behaviour. Also, your width must be large *but finite* for representation learning to work. An optimal ratio is also approximated.
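To make the first point concrete, here is a minimal sketch of what "scale the learning rate differently per layer and for weights vs. biases" could look like in code. The specific rule used below (weight learning rate shrinking with fan-in, bias learning rate left at the base value) is a hypothetical illustration of the idea, not the book's actual prescription, which depends on the activation function and parameterisation.

```python
# Sketch: per-layer, per-parameter-type learning rates.
# The scaling rule (weight_lr = base_lr / fan_in, bias_lr = base_lr)
# is an ILLUSTRATIVE assumption, not the book's exact formula.

def per_layer_learning_rates(layer_widths, base_lr=0.1):
    """Return one (weight_lr, bias_lr) pair per layer.

    layer_widths: e.g. [784, 512, 10]; layer l maps
    layer_widths[l] -> layer_widths[l + 1], so fan_in = layer_widths[l].
    """
    rates = []
    for fan_in in layer_widths[:-1]:
        weight_lr = base_lr / fan_in  # shrink weight updates with fan-in
        bias_lr = base_lr             # biases keep the base rate
        rates.append((weight_lr, bias_lr))
    return rates

rates = per_layer_learning_rates([784, 512, 10], base_lr=0.1)
```

In a framework like PyTorch, you would feed such pairs to the optimizer as separate parameter groups rather than using a single global learning rate.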

Many more concrete, testable, numeric results like this are derived. The idea is that this is just the beginning, and a lot more could potentially be derived. You can use the theory to express any observable (any analytic combination of pre-activations anywhere in the network) that you might be interested in, and study its statistics.
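As a toy example of "pick an observable and study its statistics": below I sample a single first-layer pre-activation over many random instantiations of the weights and look at its empirical mean and standard deviation. The input, width, and the 1/n weight variance are my own illustrative choices, not taken from the book.

```python
import random
import statistics

def preactivation(x, rng):
    """One first-layer pre-activation z = sum_j W_j x_j,
    with weights drawn i.i.d. as W_j ~ N(0, 1/len(x))."""
    n = len(x)
    return sum(rng.gauss(0.0, (1.0 / n) ** 0.5) * xj for xj in x)

rng = random.Random(0)
x = [1.0, -0.5, 2.0]

# The "observable" here is z itself; we study it over 5000 instantiations.
samples = [preactivation(x, rng) for _ in range(5000)]
mean = statistics.fmean(samples)   # should be close to 0
std = statistics.stdev(samples)    # should be close to sqrt(|x|^2 / n)
```

At initialisation the theory says such statistics are Gaussian at infinite width, with finite-width corrections that the book computes perturbatively; this sketch only shows the zeroth-order picture.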


To be clear: I don't have strong confidence that this works, but I think this is something worth exploring.


One more thing I probably should have added: I am only talking about distributional shift in the input data, which is important. But I think Eliezer is also talking about another kind of distributional shift, which comes from a change in ontology. I am confused about how to think about this. Intuitively, it is "the world hasn't changed, just how I look at it", whereas I discuss "the world has changed" (because the agent is doing things that did not occur during training).


I think 3blue1brown's videos give a good first introduction to neural nets (the "atomic" description):

Does this help?


I did not write down the list of quantities because you need to go through the math to understand most of them. One very central object is the neural tangent kernel, but there are also algorithm projectors, universality classes, etc., each of which requires a lengthy explanation that I decided was beyond the scope of this post.
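For readers who want at least a handle on the central object mentioned above: the empirical neural tangent kernel at inputs x and x' is the inner product of the network's parameter gradients at those inputs. Here is a self-contained sketch for a toy one-hidden-layer scalar network f(x) = Σᵢ vᵢ tanh(wᵢ x); the architecture and initialisation are my own illustrative choices, not the book's setup.

```python
import math
import random

def grads(w, v, x):
    """Gradient of f(x) = sum_i v_i * tanh(w_i * x) w.r.t. all parameters."""
    g = []
    for wi, vi in zip(w, v):
        t = math.tanh(wi * x)
        g.append(vi * x * (1.0 - t * t))  # df/dw_i  (d tanh(u)/du = 1 - tanh^2)
    for wi in w:
        g.append(math.tanh(wi * x))       # df/dv_i
    return g

def ntk(w, v, x1, x2):
    """Empirical NTK: inner product of parameter gradients at x1 and x2."""
    return sum(a * b for a, b in zip(grads(w, v, x1), grads(w, v, x2)))

rng = random.Random(0)
n = 100  # hidden width (illustrative)
w = [rng.gauss(0.0, 1.0) for _ in range(n)]
v = [rng.gauss(0.0, n ** -0.5) for _ in range(n)]

k_diag = ntk(w, v, 0.3, 0.3)  # diagonal entry; a sum of squares, so positive
```

At infinite width this kernel freezes at initialisation and training reduces to kernel regression; the book's finite-width corrections to it are what make representation learning possible.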


Hmm, you may be right, sorry. I somehow read the opaqueness problem as a sub-problem of lie detection. To do lie detection, we need to formulate mathematically what lying means, and for that we need a theoretical understanding of what's going on in a neural net in the first place, so that we have the right concepts to work with.

I think lie detection in general is very hard, although it might be tractable in specific cases. The general problem seems hard because I find it difficult to define lying mathematically. Thinking about it for five minutes, I hit several dead ends. The "best" one was this: if the agent (for lack of a better term) lies, it would not be surprised by a contrary outcome. That is, I think it would be a bad sign if the agent wasn't surprised to find me dead tomorrow, despite stating the contrary. And surprisal is something that we have an information-theoretical handle on.

However, even if we could design the agent such that we can feed it input that makes it actually "believe" it is tomorrow and I am dead (even though it is today and I am still alive), we would still need to distinguish surprisal about the fact that I'm dead from surprisal about the way the operator has formulated the question, or about any other thing. (A clever agent might expect the operator to ask this question and deliberately forget that one can ask it in this particular way, so it would be surprised to hear this formulation, etc.) The latter issue might become more tractable now that we better understand how and why representations form, since we could potentially distinguish surprisal about form from surprisal about content. I still see this as a probable dead end because of the "make it believe" part. If a solution exists, I expect it to be specific to a particular agent architecture.
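To pin down the information-theoretic handle I mean: surprisal of an event the agent assigns probability p is −log₂ p bits. A tiny sketch, with made-up probabilities purely for illustration (nothing here solves the "make it believe" or form-vs-content problems above):

```python
import math

def surprisal(p):
    """Surprisal in bits of an event the agent assigns probability p."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

# Hypothetical numbers: an honest agent that asserted "you will live"
# assigns low probability to the contrary outcome, so observing it
# carries high surprisal; a deceptive agent's low surprisal at the
# contrary outcome is the "bad sign" described above.
honest = surprisal(0.01)     # ~6.6 bits: genuinely surprised
deceptive = surprisal(0.5)   # 1 bit: privately treated it as a coin flip
```

The hard part is not computing this quantity but getting a trustworthy read on the internal p, which is exactly where the dead ends above come from.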

Hi jylin04. Fantastic post! It touches on many more aspects of interpretability than my post about the book. I also enjoyed your summary PDF!

I'd love to contribute to any theory work in this direction, if I can. Right now I'm stuck around p. 93 of the book. (I've read everything, but I'm now trying to re-derive the equations and am having trouble figuring out where a certain term goes. I am also building a Mathematica package that takes care of some of the more tedious parts of the calculations.) Maybe we could get in touch?