I also believe my conjecture is true, but non-trivially so. At least, mathematically non-trivially. Then again, everything looks trivial once the job is done.

Here (https://stats.stackexchange.com/questions/142906/what-does-pac-learning-theory-mean) is an accessible explanation. In simple words, this means you have a reasonable estimate of how much data you need to guarantee that you can learn a concept correctly with high probability.
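
To make this concrete, here is a minimal sketch of the classic sample-complexity bound for a finite hypothesis class in the realizable case (the bound and the numbers are my illustration, not something taken from the linked answer):

```python
import math

# Classic PAC bound, finite hypothesis class H, realizable case:
# m >= (1/eps) * (ln|H| + ln(1/delta)) samples suffice so that, with
# probability >= 1 - delta, the learned hypothesis has error <= eps.
def pac_sample_bound(hypothesis_count: int, eps: float, delta: float) -> int:
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / eps)

# 2**20 candidate concepts, 1% error, 95% confidence -> 1686 samples.
print(pac_sample_bound(2**20, eps=0.01, delta=0.05))
```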

The data we are looking at may be points situated (approximately) on the surface of a sphere (dimension 2) in 3-space. In practice the dimension drop can be much steeper: https://openreview.net/forum?id=XJk19XzGq2J
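
Here is a minimal sketch of that sphere example, using the Two-NN intrinsic-dimension estimator of Facco et al. (2017) (my choice of estimator for illustration, not necessarily the method used in the linked paper):

```python
import numpy as np
from scipy.spatial import cKDTree

# Points sampled (approximately) on the unit sphere in R^3:
# ambient dimension 3, intrinsic dimension 2.
rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)

# Two-NN estimator: mu = r2/r1 is the ratio of distances to the two
# nearest neighbours; the dimension estimate is N / sum(ln mu).
tree = cKDTree(x)
dists, _ = tree.query(x, k=3)      # columns: self, 1st and 2nd neighbour
mu = dists[:, 2] / dists[:, 1]
print(len(x) / np.log(mu).sum())   # ~2.0, despite the 3-d ambient space
```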

Does any ML model that tells cats from dogs get definitions thereof? I think the only input it gets is "picture → (dog/cat) label" pairs, and it still learns to tell them apart, at least to some degree. One would expect the same approach here. Otherwise you could ask right away for the sieve of Eratosthenes as a functional and inductive definition, in which case things get easy ...
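
A minimal sketch of that setup, assuming the analogous "number → (prime/composite) label" framing, with the binary digits of n playing the role of the picture (the model and features here are purely illustrative):

```python
import numpy as np
from sympy import isprime
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# The model never sees a definition of primality, only labelled examples,
# exactly like "picture -> (dog/cat) label" in image classification.
BITS = 16
ns = np.arange(2, 2**BITS)
X = (ns[:, None] >> np.arange(BITS)) & 1     # binary digits as features
y = np.array([isprime(int(n)) for n in ns])  # labels from an oracle

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```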

Yes and no. Yes, because inferring primes with high accuracy would make codebreaking much easier. No, because in RSA, for example, you need to deal with semiprimes, and that setup seems different, as per Sam Blake's research here: https://arxiv.org/abs/2308.12290
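
For concreteness, a minimal sketch of why the RSA setting differs: the public modulus is a semiprime n = p·q, and the attacker's job is to factor n, not to decide whether some number is prime (the key size below is illustrative):

```python
from sympy import randprime

# RSA-style semiprime: the product of two large random primes.
# Recovering p and q from n is the hard problem; primality of n
# itself is not in question (n is composite by construction).
p = randprime(2**511, 2**512)
q = randprime(2**511, 2**512)
n = p * q
print(int(n).bit_length())   # ~1024-bit modulus
```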