Alexander Kolpakov
Comments
Revisiting the Manifold Hypothesis
Alexander Kolpakov · 2y · 10

I also believe my conjecture is true, though non-trivially so; at least, mathematically non-trivially. After all, everything looks trivial once the job is done.
Revisiting the Manifold Hypothesis
Alexander Kolpakov · 2y · 10

Here (https://stats.stackexchange.com/questions/142906/what-does-pac-learning-theory-mean) is an accessible explanation. In simple terms, it means you have a reasonable estimate of how much data you need to guarantee that a concept can be learned correctly with high probability.
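For concreteness, the textbook sample-complexity bound for a finite hypothesis class H in the realizable (consistent-learner) case reads (a standard statement, not taken from the linked thread):

```latex
m \;\ge\; \frac{1}{\varepsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```

That is, with m examples, any hypothesis consistent with the training data has true error at most \varepsilon with probability at least 1 - \delta.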
Revisiting the Manifold Hypothesis
Alexander Kolpakov · 2y · 10

The data we are looking at may be points situated (approximately) on the surface of a sphere (dimension 2) in 3-space. However, the drop in dimension could be much steeper: https://openreview.net/forum?id=XJk19XzGq2J
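A minimal numpy sketch of that example (the sample size and noise level are illustrative assumptions, not from the comment):

```python
import numpy as np

# Minimal sketch: points with ambient dimension 3 but intrinsic dimension 2.
rng = np.random.default_rng(0)

n = 1000
points = rng.normal(size=(n, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)  # project onto the unit sphere
points += 0.01 * rng.normal(size=(n, 3))                 # small off-manifold noise

# Each point lives in R^3, but up to noise it is determined by two
# coordinates (e.g. latitude and longitude), so the data is ~2-dimensional.
print(points.shape)  # (1000, 3)
```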
Revisiting the Manifold Hypothesis
Alexander Kolpakov · 2y · 10

Does any ML model that tells cats from dogs get definitions of either? I think the only input it gets is pairs of the form "picture: (dog/cat) label". Yet it does learn to tell them apart, at least to some degree. One would expect the same approach here. Otherwise you could ask right away for the sieve of Eratosthenes as a functional and inductive definition, in which case things get easy ...
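Since the comment alludes to it, here is a minimal Python sketch of the sieve of Eratosthenes (an illustration of the point, not code from the thread):

```python
def sieve(n: int) -> list[int]:
    """Sieve of Eratosthenes: return all primes up to n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [p for p, flag in enumerate(is_prime) if flag]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```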
Revisiting the Manifold Hypothesis
Alexander Kolpakov · 2y · 10

Yes and no. Yes, because inferring primes with high accuracy would make codebreaking much easier. No, because in RSA, for example, you have to deal with semiprimes, and that setup seems different, as per Sam Blake's research here: https://arxiv.org/abs/2308.12290
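For illustration, a toy sketch of the semiprime setup RSA relies on (the prime sizes here are far below real RSA parameters and purely illustrative):

```python
from sympy import isprime, randprime

# Toy illustration: an RSA-style semiprime n = p * q.
# Real RSA uses primes of ~1024 bits or more; these sizes are illustrative.
p = randprime(10**5, 10**6)
q = randprime(10**5, 10**6)
n = p * q

print(isprime(n))  # False: n is a semiprime, not a prime
print(f"{n} = {p} * {q}")
# Deciding whether n is prime is easy (primality testing is polynomial);
# recovering p and q from n (factoring) is the hard problem RSA relies on.
```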