I think the intuition here is basically that of the everything-list's "white rabbit" problem. If you consider e.g. all programs at most 10^100 bits in length, there will be many more long than short programs that output a given mind. But I think the standard answer is that most of those long programs will just be short programs with irrelevant junk bits tacked on?
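A rough way to make that counting concrete (a sketch, assuming a machine that simply ignores whatever bits follow a halting program): if the shortest program that outputs the mind has length $k$, then for each length $n \ge k$ there are about $2^{n-k}$ length-$n$ programs consisting of that program plus $n-k$ junk bits, while a genuinely different program of length $k' > k$ contributes only about $2^{n-k'}$ padded copies. Summed over all lengths up to any cutoff $N$, the ratio is

$$\frac{\sum_{n \le N} 2^{\,n-k}}{\sum_{n \le N} 2^{\,n-k'}} \approx \frac{2^{\,N-k}}{2^{\,N-k'}} = 2^{\,k'-k},$$

so the junk-padded copies of the shortest program dominate by a factor exponential in the length difference, which is how a uniform count over programs up to a cutoff recovers something like a $2^{-K}$ weighting over the distinct underlying programs.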

Warning! Almost assuredly blithering nonsense: Hm, is this more informative if instead we consider programs between 10^100 and 10^120 bits in length? Should it matter all that much how long they are? If we can show that various reasonably large sets of programs (all programs of bit lengths a to b, a < b, for bands anywhere between 0 and infinity) converge on characteristic output distributions, then we can perhaps make some weak claims about "attractive" outputs for programs of arbitrary length. I speculated in my other comment reply to your comment that after maximally…
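One toy way to poke at that convergence question (purely illustrative; the two-bit instruction set below is an assumption of this sketch, not any standard machine model) is to enumerate every program in a couple of length bands, tally the outputs, and check whether the top of the normalized output distribution looks stable across bands:

```python
from collections import Counter
from itertools import product

# Toy interpreter: programs are bitstrings read two bits at a time.
#   00 -> append '0' to the output
#   01 -> append '1' to the output
#   10 -> double the output (a crude "loop"), capped at MAX_OUT symbols
#   11 -> halt
MAX_OUT = 32

def run(bits):
    out = ""
    for i in range(0, len(bits) - 1, 2):
        op = bits[i:i + 2]
        if op == "00":
            out += "0"
        elif op == "01":
            out += "1"
        elif op == "10":
            out = (out + out)[:MAX_OUT]
        else:  # "11"
            break
    return out

def output_distribution(min_len, max_len):
    """Run every program with min_len <= length <= max_len and return the
    normalized frequency of each distinct output."""
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for prog in product("01", repeat=n):
            counts[run("".join(prog))] += 1
    total = sum(counts.values())
    return {out: c / total for out, c in counts.items()}

if __name__ == "__main__":
    # Compare two length bands and print the most frequent outputs of each;
    # the question is whether the top of the distribution stabilizes.
    for lo, hi in [(2, 10), (12, 16)]:
        dist = output_distribution(lo, hi)
        print(f"lengths {lo}-{hi}:")
        for out, p in sorted(dist.items(), key=lambda kv: -kv[1])[:8]:
            print(f"  {out!r}: {p:.4f}")
```

With a machine this trivial the exercise mostly shows that short, simple outputs dominate every band; whether anything analogous holds for a genuinely universal machine, and how much it depends on the choice of language, is the real question.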

Will_Newsome · 9y

I basically don't understand such arguments as applied to real-world cosmology, i.e. computing programs and not discovering them. 'Cuz if we're talking about cosmology aren't we assuming that at some point some computation is going to occur? If so, there's a very short program that outputs a universal dovetailer that computes all programs of arbitrary length, that repeatedly outputs a universal dovetailer for all programs at most 10^5 bits in length, that.... and it's just not clear to me what generators win out in the end, whether short, short-biased ones or long, long-biased ones, how that depends on choice of language, or generally what the heck is going on.

Warning! Almost assuredly blithering nonsense: (Actually, in that scenario aren't there logical attractors for programs to output 0, 1, 10, 11, ..., which results in a universal distribution/generator constructed from the uniform generator, which then goes on to compute whatever universe we would have seen from an original universal distribution anyway? This self-organization looks suspiciously like getting information from nowhere, but those computations must cost negentropy if they're not reversible. If they are reversible then how? Reversible by what? Anyway that is information as seen from outside the system, which might not be meaningful---information from any point inside the system seems like it might be lost with each irreversible computation? Bleh, speculations.)

(ETA: Actually couldn't we just run some simulations of this argument, or translate it into terms of Hashlife, and see what we get? My hypothesis is that as we compute all programs of length x=0, x++ till infinity, the binary outputs of all computations, when sorted into identical groups, converge on a universal prior distribution, though for small values of x the convergence is swamped by language choice. I have no real reason to suspect this hypothesis is accurate or even meaningful.)

(ETA 2: Bleh, forgot about the need to renormalize outputs by K complexity…)
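To make the dovetailer picture concrete, here is a minimal scheduling sketch (the two-bit toy instruction set is again an illustrative assumption, not a universal machine): at stage s every program of length s joins the pool and every live program gets one more step, so each finite program eventually receives unboundedly many steps.

```python
from itertools import count, islice, product

# A minimal dovetailer sketch over the toy two-bit instruction set.
# The interesting part is the scheduling, not the machine.
MAX_OUT = 32

def step(state):
    """Advance one toy instruction; return (halted, new_state).
    state = (program bits, instruction pointer, output so far)."""
    bits, ip, out = state
    if ip + 1 >= len(bits):
        return True, (bits, ip, out)       # ran off the end of the program
    op = bits[ip:ip + 2]
    if op == "00":
        out += "0"
    elif op == "01":
        out += "1"
    elif op == "10":
        out = (out + out)[:MAX_OUT]        # crude doubling "loop"
    else:                                  # "11" -> halt
        return True, (bits, ip + 2, out)
    return False, (bits, ip + 2, out)

def dovetail():
    """Yield (program, output) pairs as programs halt, interleaving all of them:
    at stage s, add every program of length s, then give every live program
    one more step."""
    live = {}
    for stage in count(1):
        for tup in product("01", repeat=stage):
            prog = "".join(tup)
            live[prog] = (prog, 0, "")
        for prog in list(live):
            halted, state = step(live[prog])
            if halted:
                yield prog, state[2]
                del live[prog]
            else:
                live[prog] = state

if __name__ == "__main__":
    for prog, out in islice(dovetail(), 20):
        print(prog, "->", repr(out))
```

In this toy every program halts, so the interleaving is overkill; it only starts to matter for a machine with genuinely non-halting programs, which is exactly where the "which generators win out in the end" question bites.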

Why no uniform weightings for ensemble universes?

by Will_Newsome · 1 min read · 31st Jul 2011 · 35 comments



Every now and then I see a claim that if there were a uniform weighting of mathematical structures in a Tegmark-like 'verse---whatever that would mean even if we ignore the decision theoretic aspects which really can't be ignored but whatever---that would imply we should expect to find ourselves as Boltzmann mind-computations, or in other words thingies with just enough consciousness to be conscious of nonsensical chaos for a brief instant before dissolving back into nothingness. We don't seem to be experiencing nonsensical chaos, therefore the argument concludes that a uniform weighting is inadequate and an Occamian weighting over structures is necessary, leading to something like UDASSA or eventually giving up and sweeping the remaining confusion into a decision theoretic framework like UDT. (Bringing the dreaded "anthropics" into it is probably a red herring like always; we can just talk directly about patterns and groups of structures or correlated structures given some weighting, and presume human minds are structures or groups of structures much like other structures or groups of structures given that weighting.) 
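For what it's worth, the counting step behind that claim can be stated in one line (a sketch, assuming structures are identified with finite binary descriptions weighted uniformly at each length $n$): fewer than $2^{n-c}$ programs have length below $n-c$, so fewer than $2^{n-c}$ strings of length $n$ have Kolmogorov complexity below $n-c$, i.e.

$$\frac{\#\{x \in \{0,1\}^n : K(x) < n - c\}}{2^n} < 2^{-c}.$$

Under a uniform weighting, almost every structure at each size is therefore incompressible and "random-looking", which is the step people lean on to get from uniform weightings to chaotic Boltzmann-style observers; the Occamian alternative weights $x$ by roughly $2^{-K(x)}$ instead, concentrating almost all weight on the simplest structures.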

I've seen people who seem very certain of the Boltzmann-inducing properties of uniform weightings for various reasons that I am skeptical of, and others who seemed uncertain of this for reasons that sound at least superficially reasonable. Has anyone thought about this enough to give slightly more than just an intuitive appeal? I wouldn't be surprised if everyone has left such 'probabilistic' cosmological reasoning for the richer soils of decision theoretically inspired speculation, and if everyone else never ventured into the realms of such madness in the first place.

 

(Bringing in something, anything, from the foundations of set theory, e.g. the set theoretic multiverse, might be one way to start, but e.g. "most natural numbers look pretty random and we can use something like Goedel numbering for arbitrary mathematical structures" doesn't seem to say much to me by itself, considering that all of those numbers have rich local context that in their region is very predictable and non-random, if you get my metaphor. Or to stretch the metaphor even further, even if 62534772 doesn't "causally" follow 31256 they might still be correlated in the style of Dust Theory, and what meta-level tools are we going to use to talk about the randomness or "size" of those correlations, especially given that 294682462125 could refer to a mathematical structure of some underspecified "size" (e.g. a mathematically "simple" entire multiverse and not a "complex" human brain computation)? In general it seems like such metaphors can just be twisted into meaninglessness or into assumptions that I don't follow, and I've never seen clear arguments that don't rely on either such metaphors or just flat out intuition.)
