I'm a final-year PhD student at the University of Amsterdam working on AI Safety and Alignment, specifically on the safety risks of Reinforcement Learning from Human Feedback (RLHF). Previously, I also worked on abstract multivariate information theory and equivariant deep learning. https://langleon.github.io/
Good idea. I've now added the following to the opening paragraphs of the section doing the comparisons:
Importantly, due to Theorem 4, this means that the Solomonoff prior and the a priori prior lead, up to a multiplicative constant, to the same predictions on sequences. The advantages of the priors that we analyze are thus not statements about their induced predictive distributions.
I answered in the parallel thread, which is probably getting down to the crux now. To add a few more points:
Okay, I think that in the previous comments I overstated the extent to which the difference in priors matters, and I've crossed out "practical".
Basically, I was right that the prior that gives 100% to ν₃ cannot update: it gives all its weight to ν₃ no matter how much data comes in. However, ν₃ itself can update with more data and shift between ν₁ and ν₂.
I can see that this perhaps feels very syntactic, but to my mind the two priors still feel different. One of them is saying "the world first samples a bit indicating whether it will continue as world 0 or world 1", and the other one is saying "I am uncertain about whether we live in world 0 or world 1".
Yes. There are lots of different settings one could consider, e.g.:
For all of these cases, one can compare different notions of complexity (plain K-complexity, prefix complexity, monotone complexity, where applicable) with algorithmic probability. My sense is that the correspondence is only exact for universal prefix machines and finite strings, but I haven't checked all settings.
It's also useful to emphasize why, even if the mixtures are the same, having different priors can make a practical difference. E.g., imagine that in the example above we had one prior giving 100% weight to ν₃, and another prior giving 50% weight to each of ν₁ and ν₂. They give the same mixture, but the first prior can't update, and the second prior can!
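To make this concrete, here is a small numerical sketch. Everything in it is invented for illustration: I take two hypothetical i.i.d. Bernoulli sources as stand-ins for the two hypotheses (call them nu1 and nu2) and their 50/50 mixture nu3. The prior putting 100% on nu3 never moves over hypotheses, while the 50/50 prior concentrates on one hypothesis, even though both priors make identical predictions about sequences.

```python
from math import prod

# Hypothetical i.i.d. sources standing in for the two hypotheses:
# nu1 emits 1 with probability 0.9, nu2 with probability 0.1.
def likelihood(theta, data):
    return prod(theta if b == 1 else 1 - theta for b in data)

def nu1(data): return likelihood(0.9, data)
def nu2(data): return likelihood(0.1, data)
def nu3(data):  # the mixture: nu3 = 1/2 nu1 + 1/2 nu2
    return 0.5 * nu1(data) + 0.5 * nu2(data)

data = [1, 1, 1, 1, 1]  # evidence strongly favoring nu1

# Prior A: 100% on the single hypothesis nu3 -- its posterior cannot move.
posterior_A = {"nu3": 1.0}

# Prior B: 50/50 on nu1 and nu2 -- its posterior shifts with the data.
w1, w2 = 0.5 * nu1(data), 0.5 * nu2(data)
posterior_B = {"nu1": w1 / (w1 + w2), "nu2": w2 / (w1 + w2)}

# Yet both priors assign the same probability to the observed sequence:
assert abs(nu3(data) - (0.5 * nu1(data) + 0.5 * nu2(data))) < 1e-12
print(posterior_A)  # {'nu3': 1.0} -- unchanged by the data
print(posterior_B)  # nearly all mass on nu1
```

So the two priors are indistinguishable at the level of predictions, but only the second one "learns" anything at the level of hypotheses.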
Well, their induced mixture distributions are the same up to a constant, but the priors on hypotheses are different. I'm not sure if you consider the difference "relevant", perhaps you only care about the induced mixture distribution?
To make a simple example: Assume there were only three Turing machines T₁, T₂, and T₃. Assume that T₃ first reads one bit from its input and then simulates T₁ or T₂ depending on its value. Let ν₁, ν₂, and ν₃ be the LSCSMs induced by T₁, T₂, and T₃. Notice that ν₃ is a mixture of ν₁ and ν₂: ν₃ = (1/2)ν₁ + (1/2)ν₂.
Let ξ be the mixture distribution given as ξ = (1/3)ν₁ + (1/3)ν₂ + (1/3)ν₃. Then clearly, ξ is also represented as ξ = (1/2)ν₁ + (1/2)ν₂. My viewpoint is that the prior distribution giving weight 1/3 to each of the three hypotheses is different from the one giving weight 1/2 to each of ν₁ and ν₂, even if their mixture distributions are exactly the same.
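The equality of the two mixtures can be checked directly. In the sketch below, nu1 and nu2 are arbitrary toy i.i.d. distributions (the particular Bernoulli parameters are made up; the identity holds for any choice), and nu3 is their 50/50 mixture:

```python
from itertools import product
from math import prod

# Toy stand-ins for the two base hypotheses (any two distributions work):
def bern(theta, seq):
    return prod(theta if b == 1 else 1 - theta for b in seq)

def nu1(seq): return bern(0.7, seq)
def nu2(seq): return bern(0.2, seq)
def nu3(seq): return 0.5 * nu1(seq) + 0.5 * nu2(seq)

# xi_a: weight 1/3 on each of nu1, nu2, nu3
# xi_b: weight 1/2 on each of nu1, nu2
def xi_a(seq): return (nu1(seq) + nu2(seq) + nu3(seq)) / 3
def xi_b(seq): return 0.5 * nu1(seq) + 0.5 * nu2(seq)

# The two mixtures agree on every binary sequence up to length 6:
for n in range(1, 7):
    for seq in product([0, 1], repeat=n):
        assert abs(xi_a(seq) - xi_b(seq)) < 1e-12
print("identical mixtures")
```

Algebraically this is just (1/3)(ν₁ + ν₂ + (1/2)ν₁ + (1/2)ν₂) = (1/2)ν₁ + (1/2)ν₂, but the numerical check makes vivid that no amount of data can distinguish the two priors through their predictions.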
And this is exactly the situation we're in with the true mixture distribution ξ from the post. Some of the LSCSMs in the mixture are induced by a separate universal monotone Turing machine, which means that such an LSCSM is itself a mixture of all LSCSMs. Any such mixtures among the LSCSMs allow us to redistribute the prior weight from that LSCSM to all the others without affecting the mixture in any way.
This is also related to what makes a prior based on Kolmogorov complexity ultimately so arbitrary: we could have chosen just about any prior and it would still essentially sum to the same mixture ξ. A posteriori, the Kolmogorov-complexity prior then has some mathematical advantages, as outlined in the post.
I'm confused. Isn't one of the standard justifications for the Solomonoff prior that you can get it without talking about K-complexity, just by assuming a uniform prior over programs of length n on a universal monotone Turing machine and letting n tend to infinity?
What you describe is not the Solomonoff prior on hypotheses, but the Solomonoff a priori distribution on sequences/histories! This is the a priori distribution from my post. It can then be written as a mixture of LSCSMs, with the weights given either by the Solomonoff prior (involving Kolmogorov complexity) or by the a priori prior in my work. Those priors are not the same.
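The uniform-prior construction can be illustrated with a toy machine. Everything below is made up: the "machine" is just a small prefix-free lookup table, not a real universal machine. The point is that a program p of length |p| ≤ n has 2^(n − |p|) length-n extensions, so under the uniform prior over length-n programs its total weight is exactly 2^(−|p|); what this pins down is the distribution on outputs/sequences, not a prior over hypotheses.

```python
from itertools import product

# A made-up prefix-free "machine": minimal programs -> outputs.
# (Purely illustrative; nothing here is a real universal machine.)
MACHINE = {"0": "a", "10": "b", "110": "a"}  # two programs output "a"

def run(program):
    # Monotone-style execution: act on the first matching prefix, ignore the rest.
    for p, out in MACHINE.items():
        if program.startswith(p):
            return out
    return None  # no minimal program is a prefix: no output

def uniform_weight(output, n):
    # Fraction of all 2^n length-n programs producing `output`.
    hits = sum(run("".join(bits)) == output for bits in product("01", repeat=n))
    return hits / 2**n

# For n >= 3, this equals the sum over minimal programs of 2^(-|p|):
# weight("a") = 2^-1 + 2^-3 = 0.625, weight("b") = 2^-2 = 0.25.
for n in [4, 8, 12]:
    print(n, uniform_weight("a", n), uniform_weight("b", n))
```

Note that the construction assigns total weight 7/8 < 1 here (programs starting with "111" produce nothing), which is why one ends up with a semimeasure rather than a measure.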
Your saying that you don't have this experience sounds bizarre to me. Here is an example of this behavior happening to me recently:
It then invented another DOI.
This is very common behavior in my experience.