[Link] An exact mapping between the Variational Renormalization Group and Deep Learning

by Gunnar_Zarncke, 8th Dec 2014, 1 min read

9 comments, sorted by top scoring
IlyaShpitser · 11y · 15 points

[ meta comment about deep NNs and ML: they are very impressive predictors, but please beware of hype; AI, and now machine learning, is culturally sort of hype-prone. I actually think statistics culture is superior to machine learning culture in this respect. ML and statistics are ultimately about the same topic: drawing conclusions from data intelligently. ]

V_V · 11y · 3 points

> This suggests that deep learning is an approach that could be made or is already conceptually general enough to learn everything there is to learn (assuming sufficient time and resources). Thus it could already be used as the base algorithm of a self-optimizing AGI.

The paper is interesting, but I don't think that the authors make this claim or that this claim is suggested by the paper.

Gunnar_Zarncke · 11y · 0 points

Agreed. This suggestion is made by me. I will clarify this in the post.

Punoxysm · 11y · 2 points

> could be made or is already conceptually general enough to learn everything there is to learn

Universality of neural networks is a known result (in the sense that a basic fully-connected net with an input layer, one hidden layer, and an output layer can represent any function given sufficiently many hidden nodes).
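
A minimal numerical sketch of that claim (the target function, layer width, and training loop below are arbitrary illustrative choices, not anything from the paper):

```python
# Sketch: a single hidden layer approximating a smooth 1-D function
# (here sin on [-pi, pi]) with plain full-batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)   # compact input set
y = np.sin(x)                                        # target function

n_hidden = 50
W1 = rng.normal(0.0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # linear output layer
    err = pred - y                    # gradient of 0.5 * squared error

    # Gradients of the mean squared error, then one gradient-descent step.
    dh = (err @ W2.T) * (1.0 - h**2)
    W2 -= lr * h.T @ err / len(x)
    b2 -= lr * err.mean(axis=0)
    W1 -= lr * x.T @ dh / len(x)
    b1 -= lr * dh.mean(axis=0)

print("max abs error:", np.abs(np.tanh(x @ W1 + b1) @ W2 + b2 - y).max())
```

With enough hidden units the fit on the interval can be made arbitrarily good, which is what the universal approximation result promises.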

skeptical_lurker · 11y · 2 points

Nitpick: Any continuous function on a compact set. Still, I think this should include most real-life problems.

Gunnar_Zarncke · 11y · 0 points

Universality of functions: Yes (inefficiently so). But the claim made in the paper goes deeper.

Punoxysm · 11y · 0 points

Can you explain? I don't know much about renormalization groups.

Gunnar_Zarncke · 11y · 0 points

> The idea behind RG is to find a new coarse-grained description of the spin system where one has “integrated out” short distance fluctuations.

Physics has lots of structure that is local. 'Averaging' over local structures can reveal higher-level structures. On rereading I realized that the critical choice remains in the way the RG is constructed. So the approach isn't as general as I initially imagined it to be.
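
A toy illustration of that kind of 'averaging over local structures' (a majority-rule block-spin step on a random lattice of ±1 spins; the lattice size and the majority rule are my own arbitrary choices, not the variational construction used in the paper):

```python
# Toy real-space RG (block-spin) step: replace each 2x2 block of +/-1 spins
# by the sign of its sum (majority rule, ties broken toward +1), which
# coarse-grains away the shortest-distance detail.
import numpy as np

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(8, 8))

def block_spin(s, b=2):
    n = s.shape[0] // b
    block_sums = s.reshape(n, b, n, b).sum(axis=(1, 3))
    return np.where(block_sums >= 0, 1, -1)

coarse = block_spin(spins)
print(spins.shape, "->", coarse.shape)   # (8, 8) -> (4, 4)
```

Iterating such a step gives descriptions of the system at successively longer length scales; the paper's point is that the layers of a deep network can play an analogous role.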

Luke_A_Somers · 11y · 2 points

This is looking back at existing AI work and noticing a connection. I don't know that the AI folks have much to learn from the renormalization group, unless they happen to be leaving fundamental symmetries around unexploited.


An exact mapping between the Variational Renormalization Group and Deep Learning by Pankaj Mehta, David J. Schwab

Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. We construct an exact mapping between the variational renormalization group, first introduced by Kadanoff, and deep learning architectures based on Restricted Boltzmann Machines (RBMs). We illustrate these ideas using the nearest-neighbor Ising Model in one and two dimensions. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data.
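
For readers unfamiliar with Restricted Boltzmann Machines, here is a minimal sketch of the energy function and one block-Gibbs sampling step (binary 0/1 units, random parameters, purely illustrative; it is not the specific construction from the paper):

```python
# Minimal RBM sketch: energy E(v, h) = -(a.v + b.h + v.W.h) for binary
# visible/hidden vectors, plus one block-Gibbs sampling step.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 3
W = rng.normal(0.0, 0.1, (n_vis, n_hid))   # visible-hidden couplings
a = np.zeros(n_vis)                         # visible biases
b = np.zeros(n_hid)                         # hidden biases

def energy(v, h):
    return -(a @ v + b @ h + v @ W @ h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    # Sample hidden units given visible ones, then new visibles given hiddens.
    h = (rng.random(n_hid) < sigmoid(b + v @ W)).astype(float)
    v_new = (rng.random(n_vis) < sigmoid(a + W @ h)).astype(float)
    return v_new, h

v0 = rng.integers(0, 2, n_vis).astype(float)
v1, h0 = gibbs_step(v0)
print("E(v0, h0) =", energy(v0, h0))
```

In the paper's mapping, the hidden layer of such a machine plays the role of the coarse-grained (block) spins of a variational RG step.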

To me this paper suggests that deep learning is an approach that could be made or is already conceptually general enough to learn everything there is to learn (assuming sufficient time and resources). Thus it could already be used as the base algorithm of a self-optimizing AGI.