
The linked note is something I "noticed" while going through different versions of this result in the literature. I think that this sort of mathematical work on neural networks is worthwhile and worth doing to a high standard, but I have no reason to think that this particular work is of much consequence beyond filling in a gap in the literature. It's the kind of nonsense that someone who has done too much measure theory would think about.

Abstract. We describe a direct proof of yet another version of the result that a sequence of fully-connected neural networks converges to a Gaussian process in the infinite-width limit. The convergence in distribution that we establish is the weak convergence of probability measures on the non-separable, non-metrizable product space $\mathbb{R}^{\mathbb{R}^{n}}$, i.e. the space of functions from $\mathbb{R}^{n}$ to $\mathbb{R}$ with the topology whose convergent sequences correspond to pointwise convergence. The result itself is already implied by a stronger such theorem due to Boris Hanin, but the direct proof of our weaker result can replace the more technical parts of Hanin's proof, those needed to establish tightness, with a shorter and more abstract measure-theoretic argument.
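To make the statement concrete, here is a minimal numerical sketch (not taken from the note) of the finite-dimensional content of the result: for a one-hidden-layer network with i.i.d. standard Gaussian weights and biases and a $1/\sqrt{\text{width}}$ readout scaling, the joint law of the outputs at finitely many fixed inputs approaches a centred Gaussian whose covariance is the usual Neal-style kernel $K(x, x') = \mathbb{E}[\phi(w \cdot x + b)\,\phi(w \cdot x' + b)]$. The widths, the tanh nonlinearity, the input points, and the helper `sample_outputs` below are all choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_outputs(xs, width, n_samples, phi=np.tanh):
    """Sample scalar outputs of random one-hidden-layer networks at the inputs `xs`.

    Weights and biases are i.i.d. standard Gaussians and the readout is scaled by
    1/sqrt(width), so by the central limit theorem the joint law of the outputs at
    the fixed inputs `xs` tends to a centred Gaussian as `width` grows.
    """
    num_inputs, d = xs.shape
    outs = np.empty((n_samples, num_inputs))
    for s in range(n_samples):
        W = rng.standard_normal((d, width))   # input-to-hidden weights
        b = rng.standard_normal(width)        # hidden biases
        v = rng.standard_normal(width)        # hidden-to-output weights
        h = phi(xs @ W + b)                   # hidden activations, shape (num_inputs, width)
        outs[s] = h @ v / np.sqrt(width)      # scaled linear readout
    return outs

# Three fixed scalar inputs; the limit theorem is about the joint law at such finite sets.
xs = np.array([[0.0], [0.5], [1.0]])

for width in (10, 100, 1000):
    samples = sample_outputs(xs, width, n_samples=5000)
    # The empirical covariance of the outputs stabilises, as width grows, near the
    # limiting kernel K(x, x') = E[phi(w.x + b) * phi(w.x' + b)].
    print(width)
    print(np.cov(samples, rowvar=False).round(3))
```

The theorem in the note is stronger than this finite-dimensional picture: it is weak convergence of the laws of the networks viewed as random elements of the product space, which packages the convergence of all finite-dimensional marginals together with the tightness argument that the note handles abstractly.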
