My reasoning is this:

Consider the domain of bit streams - to avoid having to deal with infinity, let's take some large but finite length, say a trillion bits. Then there are 2^trillion possible bit streams. Now restrict our attention to just those that begin with a particular ordered pattern, say the text of Hamlet, and choose one of those at random. (We can run this experiment on a real computer by taking a copy of said text and appending enough random noise to bring the file size up to a trillion bits.) What can we say about the result?
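Here is a scaled-down sketch of that experiment in Python. The sizes, the choice of prefix, and the use of zlib as a crude stand-in for a compressibility test are my illustrative choices, not part of the original setup:

```python
# Scaled-down sketch of the experiment: a fixed prefix (standing in for
# Hamlet) followed by uniform random noise. Sizes are illustrative, not a
# literal trillion bits; zlib is a crude proxy for compressibility.
import os
import zlib

prefix = b"To be, or not to be, that is the question.\n" * 2_000
noise = os.urandom(1_000_000)   # the random tail
stream = prefix + noise

# The repetitive prefix compresses away almost entirely; the random tail
# does not, so the stream is incompressible beyond (roughly) the prefix.
print(len(zlib.compress(prefix, 9)), len(prefix))
print(len(zlib.compress(stream, 9)), len(stream))
```

On a typical run the compressed stream is barely smaller than the raw noise, which is the point: almost every stream with that prefix is random past the prefix.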

Well, almost all b...

That in turn means that in our domain of programs a trillion bits long, exponentially more programs contain the compact subroutine than the literal print statement.
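To make the counting explicit, here is a back-of-the-envelope version; the lengths are hypothetical stand-ins, and I'm simplifying by assuming any bit string extending a valid prefix counts as a program of the full length:

```python
# Counting programs of total length L that extend each prefix. The lengths
# below are hypothetical; the simplifying assumption is that every bit
# string extending a valid prefix is itself a valid program of length L.
L = 10**12        # total program length: a trillion bits
m = 8_000         # hypothetical compact generator, ~1 KB
n = 1_600_000     # hypothetical literal print statement, ~200 KB

# Extensions of the generator: 2**(L - m); of the literal: 2**(L - n).
# Their ratio is 2**(n - m) -- we print the exponent, not the number itself.
print(f"ratio of generator programs to literal programs: 2**{n - m}")
```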

Are you sure this is right? There are exponentially many different print statements. Do you have an argument for why they should have low combined weight?

Will_Newsome: That all makes sense, and you put it more clearly than I've seen before, but I dispute the implication that finding our local universe to be the result of a compact generator tells us very much about the large-scale structure of an ensemble universe. For example, imagine pockets of local universes that look all nice and neat from the inside yet are completely alien to aliens in a far-off universe pocket---"far off" being determined by the Turing languages for their respective universal priors, say. For a slightly more elegant variation on the idea, I made the same argument here [http://lesswrong.com/lw/6wy/why_no_uniform_weightings_for_ensemble_universes/4l7y]. Such an ensemble might be "uniform" and even devoid of any information content---see Standish's Theory of Nothing---yet could look very rich from the inside. Does your reasoning eliminate this possibility in a way that I'm not seeing? Edit: I was assuming you mean "ensemble" when you say "universe", but you might not actually have been implying this seemingly much stronger claim?

Why no uniform weightings for ensemble universes?

by Will_Newsome · 31st Jul 2011 · 35 comments



Every now and then I see a claim that if there were a uniform weighting of mathematical structures in a Tegmark-like 'verse---whatever that would mean, even if we ignore the decision theoretic aspects, which really can't be ignored, but whatever---then we should expect to find ourselves as Boltzmann mind-computations: thingies with just enough consciousness to be conscious of nonsensical chaos for a brief instant before dissolving back into nothingness. We don't seem to be experiencing nonsensical chaos, so the argument concludes that a uniform weighting is inadequate and an Occamian weighting over structures is necessary, leading to something like UDASSA, or to eventually giving up and sweeping the remaining confusion into a decision theoretic framework like UDT. (Bringing the dreaded "anthropics" into it is probably a red herring, as always; we can just talk directly about patterns and groups of structures, or correlated structures, given some weighting, and presume that human minds are structures or groups of structures much like any others given that weighting.)
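As a toy check of the chaos intuition (my illustration, not part of the original argument), draws from the uniform measure on long bit strings already look maximally disordered under any crude compressibility test:

```python
# Toy check: draws from the uniform measure on long byte strings are
# essentially incompressible, i.e. they "look like chaos". zlib is a crude
# stand-in for description length; the sizes are arbitrary choices.
import os
import zlib

n_bytes = 100_000
for _ in range(5):
    s = os.urandom(n_bytes)                    # one uniform draw
    print(len(zlib.compress(s, 9)) / n_bytes)  # ~1.0 every time
```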

I've seen people who seem very certain of the Boltzmann-inducing properties of uniform weightings, for various reasons that I am skeptical of, and others who seemed uncertain of this for reasons that sound at least superficially reasonable. Has anyone thought about this enough to give slightly more than an appeal to intuition? I wouldn't be surprised if everyone has left such 'probabilistic' cosmological reasoning for the richer soils of decision theoretically inspired speculation, and if everyone else never ventured into the realms of such madness in the first place.


(Bringing in something, anything, from the foundations of set theory, e.g. the set theoretic multiverse, might be one way to start, but e.g. "most natural numbers look pretty random, and we can use something like Goedel numbering for arbitrary mathematical structures" doesn't seem to say much to me by itself, considering that all of those numbers have rich local context that is, in their region, very predictable and non-random, if you get my metaphor. Or to stretch the metaphor even further: even if 62534772 doesn't "causally" follow 31256, they might still be correlated in the style of Dust Theory, and what meta-level tools are we going to use to talk about the randomness or "size" of those correlations, especially given that 294682462125 could refer to a mathematical structure of some underspecified "size" (e.g. a mathematically "simple" entire multiverse, not a "complex" human brain computation)? In general I don't see how to keep such metaphors from being twisted into meaninglessness or into assumptions that I don't follow, and I've never seen clear arguments that don't rely on either such metaphors or flat-out intuition.)
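For what the Goedel-numbering move looks like concretely, here is one standard encoding of finite structures as single naturals; the Cantor pairing function is my choice, purely for illustration:

```python
# Minimal sketch of the Goedel-numbering idea: encode finite structures
# (here, tuples of naturals) as single naturals via the Cantor pairing
# function. One standard encoding among many; nothing canonical about it.
def pair(a: int, b: int) -> int:
    """Cantor pairing: a bijection between N x N and N."""
    return (a + b) * (a + b + 1) // 2 + b

def encode(xs) -> int:
    """Fold a tuple into one natural number, right to left."""
    n = 0
    for x in reversed(xs):
        n = pair(x, n) + 1  # +1 reserves 0 for the empty tuple
    return n

print(encode((3, 1, 2, 5, 6)))
```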
