Good point! My intuition was that the Bekenstein bound (https://en.wikipedia.org/wiki/Bekenstein_bound) limits the amount of information in a volume (or, more precisely, the information enclosed by an area), and therefore that the number of states in a finite volume is also finite.
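For a rough sense of the numbers, here is a minimal sketch (my own illustration, not part of the original exchange) that evaluates the bound $I \le 2\pi R E / (\hbar c \ln 2)$ from the linked Wikipedia page for a sphere of radius $R$ enclosing energy $E = mc^2$:

```python
import math

# Bekenstein bound in bits: I <= 2*pi*R*E / (hbar * c * ln 2).
# Illustrative sketch only; the constants below are standard SI values.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def bekenstein_bits(radius_m: float, mass_kg: float) -> float:
    energy = mass_kg * c ** 2                      # rest energy E = m c^2
    return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

# Example: a 1 kg sphere of radius 10 cm is bounded to a finite,
# if astronomically large, number of bits.
print(f"{bekenstein_bits(0.1, 1.0):.2e} bits")
```

However large the number, finiteness is the point: a bounded region with bounded energy can only occupy finitely many distinguishable states.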
I must add: since writing this comment, a man called George pointed out to me that, when modeling the universe as a computation, one must take care not to accidentally derive ontological claims from it.
So today I would have more of a 'whatever-works-works' attitude: UTMs and DFAs are both just models, and neither is likely to be ontologically true.
Wow, thank you for the kind and thorough reply! Obviously there is much more to this; I'll have a look at the report.
I first heard this idea from Joscha Bach, and it is my favorite explanation of free will. I had not heard it called a 'predictive-generative gap' before, though; that is very well formulated imo.
Simplicity Priors are Tautological
Any non-uniform prior inherently encodes a bias toward simplicity. This isn't an additional assumption we need to make - it falls directly out of the mathematics.
For any hypothesis $h$, the information content is $I(h) = -\log_2 P(h)$, which means probability and complexity have an exponential relationship:

$$P(h) = 2^{-I(h)}$$
This demonstrates that simpler hypotheses (those with lower information content) are automatically assigned higher probabilities. The exponential relationship creates a strong bias toward simplicity without requiring any special mechanisms.
The "simplicity prior" is essentially tautological - more probable things are simple by definition.
I would be interested in seeing those talks. Could you maybe share links to the recordings?
Very good work, thank you for sharing!
"Intuitively speaking, the connection between physics and computability arises because the coarse-grained dynamics of our Universe are believed to have computational capabilities equivalent to a universal Turing machine [19–22]."
I can see how this is a reasonable and useful assumption, but the universe seems to be finite in both space and time and therefore not a UTM. What convinced you otherwise?
Thank you! I'll have a look!
Simplified: the Solomonoff prior is the distribution you get when you take a uniform distribution over all strings and feed them to a universal Turing machine, collecting the outputs.
Since the outputs are also strings: what happens if we iterate this? What is the stationary distribution? Is there even one? The fixed points will be quines, programs that copy their own source code to the output. But how are they weighted? By their length? Presumably you can also have quine cycles, programs that generate each other in turn, in a manner reminiscent of metagenesis. Do these quine cycles capture all the probability mass, or does some diverge?
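To make the iteration concrete in a toy setting (my own construction, not the actual Solomonoff setup, which would need a universal machine and is uncomputable): replace the UTM with an arbitrary fixed map on length-N bit strings and push the uniform distribution through it repeatedly. In this finite analogue, all the mass ends up on the cycles of the map, the analogue of the quines and quine cycles above.

```python
from collections import defaultdict

N = 4

def f(s: str) -> str:
    # Arbitrary non-injective stand-in for the machine: square the integer
    # value of the string, add 1, and wrap around modulo 2**N.
    return format((int(s, 2) ** 2 + 1) % (2 ** N), f'0{N}b')

# Start from the uniform distribution over all length-N strings.
dist = {format(i, f'0{N}b'): 1 / 2 ** N for i in range(2 ** N)}

for _ in range(50):
    new = defaultdict(float)
    for s, p in dist.items():
        new[f(s)] += p        # pushforward of the current distribution through f
    dist = dict(new)

# Only strings lying on cycles of f keep any mass. Note the distribution can
# keep rotating around a cycle rather than converging, so strictly speaking
# only its Cesaro (time) average is stationary.
print({s: round(p, 3) for s, p in dist.items() if p > 1e-9})
```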
Very grateful for answers and literature suggestions.
"Many parts of the real world we care about just turn out to be the efficiently predictable."
I had a discussion about exactly these 'pockets of computational reducibility' today: whether they are the same as the vaguer 'natural abstractions', and whether there is some observation selection effect going on here.
Excellent! Great to have a cleanly formulated article to point people to!