There's something called the No Free Lunch theorem which says, approximately, that there's no truly general algorithm for learning: if an algorithm predicts some environment better than chance, there must exist some adversarial environment on which it will do at least that much worse than chance. (Yes, this is even true of Solomonoff induction.)
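The averaging argument behind this can be checked empirically: fix any deterministic predictor and average its accuracy over *all* binary environments of a given length, and the gains on environments it fits are exactly cancelled by losses on adversarial ones. A minimal sketch (the majority-vote predictor here is just an arbitrary stand-in for "some learning algorithm"):

```python
from itertools import product

def predict(prefix):
    # An arbitrary "learner": predict the majority bit seen so far (ties -> 0).
    return int(sum(prefix) * 2 > len(prefix))

n = 8
total_correct = 0
# Enumerate every possible binary environment of length n.
for env in product([0, 1], repeat=n):
    total_correct += sum(predict(env[:i]) == env[i] for i in range(n))

avg_accuracy = total_correct / (n * 2 ** n)
print(avg_accuracy)  # exactly 0.5 -- no better than chance, averaged over all environments
```

For any fixed prefix the predictor's output is fixed, while the next bit is 0 in exactly half of the completions and 1 in the other half, so the average accuracy is 0.5 no matter which predictor you substitute in.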
In the real world, this is almost completely irrelevant; empirically, general intelligence exists. But leaving anthropics aside for a moment, we ought to find this irrelevance surprising: a robust theory of learning first needs to answer the question of why, in our particular universe, it's possible to learn anything at all.
I suspect that Wentworth's Telephone Theorem, ...