Every now and then I see a claim that a uniform weighting of mathematical structures in a Tegmark-like 'verse---whatever that would mean, even if we ignore the decision theoretic aspects, which really can't be ignored, but whatever---would imply we should expect to find ourselves as Boltzmann mind-computations, or in other words thingies with just enough consciousness to be conscious of nonsensical chaos for a brief instant before dissolving back into nothingness. We don't seem to be experiencing nonsensical chaos, so the argument concludes that a uniform weighting is inadequate and an Occamian weighting over structures is necessary, leading to something like UDASSA, or eventually to giving up and sweeping the remaining confusion into a decision theoretic framework like UDT. (Bringing the dreaded "anthropics" into it is probably a red herring, as always; we can just talk directly about patterns and groups of structures, or correlated structures, given some weighting, and presume human minds are structures or groups of structures much like any other structures or groups of structures under that weighting.)

I've seen people who seem very certain of the Boltzmann-inducing properties of uniform weightings, for various reasons that I am skeptical of, and others who seem uncertain of this for reasons that sound at least superficially reasonable. Has anyone thought about this enough to offer slightly more than an intuitive appeal? I wouldn't be surprised if everyone has left such 'probabilistic' cosmological reasoning for the richer soils of decision theoretically inspired speculation, and if everyone else never ventured into the realms of such madness in the first place.

(Bringing in something, anything, from the foundations of set theory, e.g. the set theoretic multiverse, might be one way to start, but e.g. "most natural numbers look pretty random, and we can use something like Goedel numbering for arbitrary mathematical structures" doesn't seem to say much to me by itself, considering that all of those numbers have rich *local* context that in their region is very predictable and non-random, if you get my metaphor. Or to stretch the metaphor even further, even if 62534772 doesn't "causally" follow 31256, the two might still be correlated in the style of Dust Theory, and what meta-level tools are we going to use to talk about the randomness or "size" of those correlations, especially given that 294682462125 could refer to a mathematical structure of some underspecified "size" (e.g. a mathematically "simple" entire multiverse rather than a "complex" human brain computation)? In general I don't see how such metaphors can avoid being twisted into meaninglessness or into assumptions that I don't follow, and I've never seen clear arguments that don't rely on either such metaphors or flat-out intuition.)

I think the intuition here is basically that of the everything-list's "white rabbit" problem. If you consider, e.g., all programs at most 10^100 bits in length, there will be many more long programs than short ones that output a given mind. But I think the standard answer is that most of those long programs are just short programs with irrelevant junk bits tacked on?
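To make the "junk bits" answer concrete, here's a toy sketch of my own (not anything from the thread): model programs as bit-strings and suppose, as a simplifying assumption, that a program's output is determined entirely by its shortest functional prefix, with everything after that prefix being junk. Then among all strings of a fixed length N, the fraction whose functional prefix is a given k-bit program is 2^(N-k) / 2^N = 2^-k, independent of N; minimal programs lose weight exponentially in their own length even though "most programs are long".

```python
# Toy counting argument (assumes output depends only on a program's
# shortest functional prefix; junk bits after the prefix are ignored).
from itertools import product

def count_extensions(prefix: str, total_length: int) -> int:
    """Count bit-strings of total_length that begin with prefix."""
    assert total_length >= len(prefix)
    return sum(
        1
        for bits in product("01", repeat=total_length)
        if "".join(bits).startswith(prefix)
    )

N = 12  # small N so brute-force enumeration stays cheap
for prefix in ["0", "0110", "01101001"]:
    count = count_extensions(prefix, N)
    # fraction of all length-N strings carrying this prefix is 2^-k
    print(len(prefix), count, count / 2**N)
```

Under this (admittedly crude) model the uniform weighting over length-N programs reproduces the Occamian 2^-k weighting over their minimal cores, which is the standard reply to the white-rabbit worry.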

Warning! Almost assuredly blithering nonsense: Hm, is this more informative if instead we consider programs between 10^100 and 10^120 bits in length? Should it matter all that much how long they are? If we can show convergence upon characteristic output distributions by various reasonably large sets of all programs of bit lengths a to b, a < b, between 0 and infinity, then we can perhaps make some weak claims about "attractive" outputs for programs of arbitrary length. I speculated in my other comment reply to your comment that after maximally...