1 min read · 9th Dec 2010 · 16 comments

Personal Blog
The basic multiverse formulation, saying "all universes exist", is insufficient; you also need some sort of mapping from universe-specifications to weights, with the weights summing to 1. And that mapping has to look at least a little bit like an Occamian prior.
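A minimal sketch of such an Occamian mapping, assuming a toy encoding in which each universe-specification is a bit string: weight each specification by 2 to the minus its length. For a prefix-free set of specifications, Kraft's inequality guarantees the weights sum to at most 1, so they can be renormalized into a probability measure.

```python
def occamian_weight(program_bits: str) -> float:
    """Weight of a universe-specification given as a bit string: 2^-length."""
    return 2.0 ** -len(program_bits)

# A complete prefix-free set of toy specifications (no spec is a
# prefix of another), so the weights sum to exactly 1.
specs = ["0", "10", "110", "111"]
total = sum(occamian_weight(s) for s in specs)
print(total)  # 1.0
```

This is the same shape of construction as the Solomonoff prior: shorter specifications dominate the measure, which is what makes the mapping "at least a little bit Occamian".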

Given the multiverse hypothesis (universes with different physical constants/laws), the number of universes with an infinitely large set of laws is much larger than the number of universes with finite sets of laws (though both are infinite).

The question is not which set is larger, which is in any case almost meaningless since both are infinite, but which set has larger probability measure.

I do think it's meaningful to talk about different sizes of infinity (for example, countable vs. uncountable), but probability measure is probably more relevant.

To expand on that point: what you refer to there as "different sizes of infinity" are different cardinalities of sets. As you note, what sort of infinities you have to use depends on what you are trying to measure; raw cardinalities are rarely the right notion of size, and here we want to think in a measure-theoretic context. But it's worth noting that for measuring other things, different systems of infinite numbers must be used; cardinalities and "infinities" should not be identified.

In some cases, there's no way to define a uniform distribution (e.g. over the integers), so you've got to do something else.

Huh, can you define an improper uniform distribution over the integers like you can occasionally for the real line? Or does that always lead to an improper posterior?
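To illustrate the non-uniformity point above: any constant weight per integer either sums to zero (if the weight is zero) or diverges (if it is positive), so a proper prior over the integers must decay. A geometric prior is one standard choice; this sketch checks that it genuinely normalizes.

```python
def geometric_prior(n: int, q: float = 0.5) -> float:
    """A proper prior over the non-negative integers: P(n) = (1-q) * q^n."""
    return (1 - q) * q ** n

# Partial sums converge to 1, unlike any "uniform" assignment.
total = sum(geometric_prior(n) for n in range(200))
print(total)  # ~1.0 (exactly 1 - 0.5**200)
```

An improper uniform "distribution" over the integers can still be used as a formal prior in some Bayesian calculations, but, as with the real line, whether the posterior comes out proper depends on the likelihood.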

There are a variety of different issues here. It seems that you are talking about some notion of multiverse somewhere between Tegmark II (different physical constants) and Tegmark IV (different laws which could be anything consistent).

Note that it is not in general easy to define what one means by the number of laws being finite or infinite. To use an analogy to a common axiomatic system: ZFC technically has infinitely many axioms, but the axioms are so regular that one might as well regard them for intuitive purposes as a finite set of rules (since we have a short set of rules for how to state the axioms). So what does it mean for a system to have infinitely many rules? Does it mean that there's no finitistic specification in some sense? That the total set of rules can't be enumerated by a Turing-computable process? If so, we don't a priori know that our universe has a finite number of rules.

Moreover, how does one handle, in this framework, constants that aren't Turing computable? Conceivably all the laws of our universe form a finite set except for the exact value of the fine-structure constant, and there is no Turing machine which, given input n, will output the nth digit of the fine-structure constant. Does our universe then have a finite or infinite number of rules?
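The ZFC point can be made concrete with a hypothetical toy schema: an axiom *schema* is a single finite rule that generates infinitely many axioms, one per formula plugged into it. The formula names (`P0`, `P1`, …) and the induction-like template here are illustrative inventions, not actual ZFC axioms.

```python
def schema_instance(phi: str) -> str:
    """One instance of a toy induction-like schema, parameterized by a formula phi."""
    return f"({phi}(0) and forall n. ({phi}(n) -> {phi}(n+1))) -> forall n. {phi}(n)"

# Infinitely many axioms in principle, but one short generating rule;
# here we print just the first three instances.
axioms = [schema_instance(f"P{i}") for i in range(3)]
for a in axioms:
    print(a)
```

This is why "finitely many rules" is slippery: the axiom *set* is infinite, yet the specification that enumerates it is a few lines long.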

Assume without loss of generality that each universe can be represented by a program in some Turing-complete language. Assume for the sake of argument that we are ignoring small programs and considering only large ones. Divide all programs into two categories:

1. Those that produce regular output. (A large program may do this if, for example, its execution gets stuck in a small loop.)

2. Those that produce pseudorandom output.

The ratio of the two categories (in the limit as size goes to infinity) depends on the language, but this doesn't matter, because the second category is unobservable from inside (a pseudorandom universe is unlikely to support life and certainly won't provide selection pressure for intelligence). Therefore we must observe our universe to be in the first category. This means the observed laws of physics must be simple (such as could have been generated by a small program), regardless of whether the "actual code" of the universe is small or large.
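The two categories above can be roughly operationalized, using compressibility as a stand-in for "regular output" (an assumption of this sketch, not part of the original argument): output stuck in a small loop compresses well, while pseudorandom output is nearly incompressible.

```python
import random
import zlib

def looks_regular(data: bytes, threshold: float = 0.5) -> bool:
    """Heuristic: call output 'regular' if zlib shrinks it substantially."""
    return len(zlib.compress(data)) / len(data) < threshold

regular = b"ab" * 5000                             # stuck-in-a-small-loop output
random.seed(0)
pseudo = bytes(random.randrange(256) for _ in range(10000))  # pseudorandom output

print(looks_regular(regular))  # True: highly compressible
print(looks_regular(pseudo))   # False: near-incompressible
```

The heuristic only separates the extremes, but that is all the argument needs: observers should find themselves in the compressible category.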

Well, there are some possible anthropic drivers that come immediately to mind. Complex does not necessarily imply smart; it might be that large numbers of physical laws produce chaotic effects that forbid or strongly disadvantage the emergence of intelligent agents. And if there's anything to the simulation hypothesis, then universes with simpler laws would be easier to simulate, which could again skew the set of possible universes towards simplicity.

What's most significant about this question is that we've seen more than enough evidence to conclude that if we're at all generic observers, then universes must be weighted pretty directly by simplicity of the underlying mathematical structure.

Otherwise, as you point out, we'd be likely to be in a very labyrinthine mathematical structure; but most of those should not have the property that progress in physics leads you to consecutively (mathematically) simpler laws with more explanatory power. Instead, the things that make fire burn and the things that make plants grow should turn out to have nothing in common, etc...

Could observers saliently like us exist in a universe where the things that make fire burn and the things that make plants grow have nothing in common?

I don't know how to make that question concrete, but it feels like there ought to be a way to do it. I suspect it depends on understanding the actual commonalities better than I do.

First, I think we need a clearer definition of "multiverse hypothesis".

My understanding of one definition of the term is that in each moment, new lines of existence are formed representing every possible quantum state.

That's a very large number of universes, but it is finite.

Secondly, there's a difference between the variety of laws being very large (ok, fine, infinite) and the "number of laws" being infinite. I don't think the latter makes sense.