In the following analysis, Sasha Kolpakov, Managing Editor of the Journal of Experimental Mathematics, and I propose a machine learning challenge to verify a strong form of the Manifold Hypothesis due to Yoshua Bengio.
Motivation:
In the Deep Learning book, Ian Goodfellow, Yoshua Bengio and Aaron Courville credit the unreasonable effectiveness of Deep Learning to the Manifold Hypothesis, as it implies that the curse of dimensionality may be avoided for most natural datasets [5]. If the intrinsic dimension of our data is much smaller than its ambient dimension, then sample-efficient PAC learning is possible. In practice, this happens via the latent space of a deep neural network that auto-encodes the input.
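As a minimal sketch of the intrinsic-versus-ambient-dimension distinction (an illustration of our own, not from the text): a linear autoencoder is equivalent to PCA, so for data lying on a low-dimensional linear subspace, the singular value spectrum recovers the intrinsic dimension directly. The dimensions and threshold below are arbitrary choices for the example.

```python
import numpy as np

# Data sampled from a 3-dimensional linear subspace embedded in a
# 100-dimensional ambient space (a toy stand-in for a "natural dataset").
rng = np.random.default_rng(0)
intrinsic_dim, ambient_dim, n_samples = 3, 100, 500

latent = rng.normal(size=(n_samples, intrinsic_dim))       # intrinsic coordinates
embedding = rng.normal(size=(intrinsic_dim, ambient_dim))  # linear embedding map
data = latent @ embedding                                  # points in R^100

# The optimal linear autoencoder projects onto the top principal components;
# the singular values show only `intrinsic_dim` directions carry variance.
singular_values = np.linalg.svd(data, compute_uv=False)
estimated_dim = int(np.sum(singular_values > 1e-8))
print(estimated_dim)  # 3
```

For curved manifolds a nonlinear autoencoder plays the analogous role, with the bottleneck width standing in for the intrinsic dimension.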
Yoshua Bengio...