Assigning prior probabilities in inverse proportion to the length of the model in a universal description language

What if, instead of assigning prior probabilities to the rules governing the universe in inverse proportion to the rules' length, we assigned equal prior probabilities to all rules, and then assigned a probability to each state of the world equal to the sum, over every universe that could produce that state, of the probability of that universe times the probability that it would produce that state (since many universes would have randomized bits in their description)? I think the likelihood of outputting a string of a hundred ones in a row would then be greater than that of outputting 0001010010100110100010000100100010100100110101101000000101101111110110111101001001100010001011110000.
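The mixture described above can be sketched in a toy model. This is not Solomonoff's actual construction: the two rules below, the equal prior of 1/2, and the output length are all illustrative assumptions, chosen only to show why the all-ones state comes out far more likely than any particular random-looking state.

```python
from fractions import Fraction

# Toy model: two equally likely "rules of the universe", each producing an
# N-bit state of the world.  Rule A deterministically outputs N ones;
# Rule B fills all N bits uniformly at random.
N = 100
PRIOR = Fraction(1, 2)  # equal prior probability for each rule (assumption)

def prob_of_state(bits):
    """P(state) = sum over rules of P(rule) * P(rule produces state)."""
    p = Fraction(0)
    if all(b == 1 for b in bits):
        p += PRIOR                       # Rule A only ever outputs all ones
    p += PRIOR * Fraction(1, 2) ** N     # Rule B gives every string 2^-N
    return p

all_ones = [1] * N
arbitrary = [0, 1] * (N // 2)  # any fixed string that is not all ones

# All-ones gets 1/2 + 2^-101; a particular random-looking string gets 2^-101.
```

Even with a flat prior over rules, the deterministic rule concentrates all its probability on one state, so that state dominates the mixture.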

We could then revisit our assumption that in the world of rules, all are equally likely regardless of length. After all, if there is a meta-rule world behind the rule world, each rule would not be equally likely as an output of the meta-rules, because simpler rules are produced by more meta-rules; their relationship is the same as that between states of the world and rules above.
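The same mixture then repeats at each level; in notation of my own choosing (where $s$ is a state of the world, $r$ a rule, and $m$ a meta-rule):

```latex
P(s) = \sum_{r} P(r)\, P(s \mid r), \qquad
P(r) = \sum_{m} P(m)\, P(r \mid m), \qquad \ldots
```

Each level's probabilities are a weighted sum over the level behind it, which is what lets the bias toward simplicity compound.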

This would reverberate down the meta-rule chain and make simpler states of the world even more likely.

However, this might not make sense. There would be no meta-meta-...-meta-rule world to rule them all; it would be turtles all the way down. It might not make sense to integrate over an infinity of rules when none is given the preferential weighting that would let an infinite series of decreasing numbers be constructed, nor to have effects reverberate down an infinite chain and still reach a bottom state of the world.
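The worry about weighting can be made precise (my gloss, not the original post's): a uniform prior over a countably infinite set of rules cannot be normalized, because

```latex
\text{if } P(r_i) = \varepsilon \text{ for every } i, \text{ then }
\sum_{i=1}^{\infty} P(r_i) =
\begin{cases}
\infty & \varepsilon > 0, \\
0 & \varepsilon = 0,
\end{cases}
```

so the total is never 1. A length-weighted prior such as $P(r_i) \propto 2^{-\lvert r_i \rvert}$, by contrast, is exactly the kind of decreasing series that does sum to a finite value.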

Welcome to Less Wrong! (2010-2011)

by orthonormal, 12th Aug 2010 (805 comments)

This post has too many comments to show them all at once! Newcomers, please proceed in an orderly fashion to the newest welcome thread.