My gut instinct on metacosmology is that if there were a simpler computation than our universe that produced intelligent life, we'd probably be there instead of here. I'm not sure that's valid anthropics, but it still surprises me to see the apparent assumption (in posts like this) of the opposite conclusion: that there are many universes simpler than ours capable of producing intelligence. (EDIT: that post doesn't actually make that assumption, and I don't have another example ready. Still turned out to be a fruitful question, though.)

(Yes, we know cellular automata can implement intelligence, but AFAIK we don't know that they can do it more simply than by implementing a Turing machine simulating our universe.)

Is there an argument I've missed?

5 Answers

I don't know if we live in the simplest universe giving rise to life for some philosophically and deeply "correct" notion of simplicity. However, I don't think this is needed for the argument in my post you are asking about. In fact while writing that post I was implicitly assuming that the attacker's universe is about as complex as our own, in order to make my argument harder.

First, I think that if we write down a particular universal prior (e.g. by choosing a universal Turing machine like a Python interpreter), then we probably won't be the simplest:

  • The actual simplest universes will be different across different programming languages. This is plausible because those universes are themselves simpler than implementations of universal Turing machines without preprocessing. But it's very hard to really know.
  • If you believe this, then it's unlikely that we have the simplest universe according to, say, the Python prior, or any other concrete prior we write down. There's at most one "right" prior according to which we are the simplest.
  • Even if that prior were philosophically distinguished, that doesn't help us unless we do that philosophy and pick out the right universal prior.
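A toy illustration of the first bullet (my own construction, not from the post): which of two objects counts as "simplest" can flip depending on the description language chosen, even when the languages differ only by a bounded translation cost. The two `desc_len_*` functions below are hypothetical stand-ins for two universal priors:

```python
import itertools

def desc_len_raw(s: str) -> int:
    # Language A: spell out the string character by character.
    return len(s)

def desc_len_rle(s: str) -> int:
    # Language B: run-length encoding; cheap for repetitive strings.
    return sum(1 + len(str(len(list(g)))) for _, g in itertools.groupby(s))

repetitive = "a" * 18     # "aaaaaaaaaaaaaaaaaa"
irregular = "abcdefghi"

# Language A ranks the short irregular string as simpler...
assert desc_len_raw(irregular) < desc_len_raw(repetitive)
# ...while language B ranks the repetitive one as simpler.
assert desc_len_rle(repetitive) < desc_len_rle(irregular)
```

The same ambiguity applies, at vastly greater scale, to asking which universe is "simplest" under a Python prior versus any other concrete prior.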

Even if we did live in the simplest universe according to the chosen universal prior, it seems like we'd probably get a simulation:

  • The anthropic update (including the inference from the language choice) is a huge advantage for simulators.
  • It doesn't change the story that much if we get a simulation from a universe with our physical laws vs other physical laws.
  • The awkwardness of reading out bits, and ensuring that evolved life has maximal control over that channel, probably puts you somewhere other than the absolute simplest universe.

Aside from the arguments I had in mind while writing the post, there is a more philosophical reason (that I've thought less about) to think that most early civilizations we care about aren't living in the simplest universe:

  • I expect that "most" early civilizations are in "dense" universes, at least if you try to weight them by their intrinsic moral worth.
  • I expect that it's simpler to create truly humongous simple universes with a lower density of life. Note that the complexity differences we are talking about here are very, very small.
  • Some of those universes will still allow consequentialists to control arbitrary output locations, despite starting from a very low density (e.g. faster than light travel would be very helpful).

That said, I do think that a lot of future influence comes from huge simple universes with easy travel, even if most early civilizations (by moral weight) aren't in such universes. And if you care about the moral weight of our civilization itself then I think it is plausibly dominated by the simulations, such that weighting by "future influence" is the only real weighting that's meaningful to apply to early civilizations.

Thanks for the reply, that makes sense.

Maybe our universe isn't the simplest but the most "productive", in the sense that mind patterns are amplified by splitting into many different quantum time-streams.

For an answer that follows a very different intuition, take a look at Does Cosmological Evolution Select for Technology? by Jeffrey Shainline. This is up there with aestivation and infinite ethics on the fun idea scale. He gives a nice summary on Lex Fridman's podcast from 2:13:23 to around 2:38:00. I highly recommend listening to the relevant clip; it's pretty great. The entire episode is really interesting, and also contains some other supporting context for Shainline's argument. Caveat: I haven't read the paper yet. The abstract is:

If the parameters defining the physics of our universe departed from their present values, the observed rich structure and complexity would not be supported. This article considers whether similar fine-tuning of parameters applies to technology. The anthropic principle is one means of explaining the observed values of the parameters. This principle constrains physical theories to allow for our existence, yet the principle does not apply to the existence of technology. Cosmological natural selection has been proposed as an alternative to anthropic reasoning. Within this framework, fine-tuning results from selection of universes capable of prolific reproduction. It was originally proposed that reproduction occurs through singularities resulting from supernovae, and subsequently argued that life may facilitate the production of the singularities that become offspring universes. Here I argue technology is necessary for production of singularities by living beings, and ask whether the physics of our universe has been selected to simultaneously enable stars, intelligent life, and technology capable of creating progeny. Specific technologies appear implausibly equipped to perform tasks necessary for production of singularities, potentially indicating fine-tuning through cosmological natural selection. These technologies include silicon electronics, superconductors, and the cryogenic infrastructure enabled by the thermodynamic properties of liquid helium. Numerical studies are proposed to determine regions of physical parameter space in which the constraints of stars, life, and technology are simultaneously satisfied. If this overlapping parameter range is small, we should be surprised that physics allows technology to exist alongside us. The tests do not call for new astrophysical or cosmological observations. Only computer simulations of well-understood condensed matter systems are required.

I guess I've always had a vague intuition along the lines that, if you built a Game of Life world of roughly the scale of our universe and started it in a random initial configuration, there would be many rulesets that are:

  1. Simpler than our laws of physics.
  2. Likely to produce self-preserving and self-replicating patterns after enough time.

Then, I'd expect intelligence to arise convergently as a useful strategy for the patterns to perpetuate / replicate themselves in the game of life's selection environment.
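For concreteness, here's a minimal sketch (my own toy code, not from the comment) of how such Life-like rulesets can be parameterized, using the standard B/S notation in which B3/S23 is Conway's rule and other (born, survive) sets give alternative rulesets. It assumes a small finite grid with wraparound rather than an infinite one:

```python
def step(grid, born=frozenset({3}), survive=frozenset({2, 3})):
    """One update of a Life-like cellular automaton (B/S rule) on a torus."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbors, wrapping around the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            rule = survive if grid[r][c] else born
            new[r][c] = 1 if n in rule else 0
    return new

# Sanity check: a "blinker" oscillates with period 2 under Conway's B3/S23.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
assert step(step(blinker)) == blinker
```

The random-initialization experiment described above would then just be `step` iterated on a random grid, with different `born`/`survive` sets standing in for the hypothetical simpler rulesets.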

I would also guess that a large enough game of life world would eventually give rise to intelligent civilization (especially after observing recent progress on designing ash-clearing machines in some game of life hobbyist forum that I can't find now; I'm not sure if that should be a real update, but I hadn't realized this was probably possible).

It's not at all clear to me whether the game of life rules are actually simpler than our physics. I agree it does seem that way at a casual glance, but it seems incredibly hard to say right now.

Doesn't intelligence require a low-entropy setting to be useful? If your surroundings are all random noise, then the no-free-lunch theorem applies.

Quintin Pope
My initial thought was that this universe would have low complexity. It has simple rules, and a simple initialization process. However, I suppose that, for a deterministic GoL rule set, the simple initialization process might not result in simple dynamics going forward. I think it depends on whether low-level noise in the exact cell patterns "washes out" for the higher level patterns. Maybe we need some sort of low entropy initialization or a non-deterministic rule set?
interstice
Entropy is less of a problem in GoL than in our universe because the ruleset isn't reversible, so you don't need a free-energy source to erase errors.
Conor Sullivan
Are there any problems with an irreversible ruleset?
interstice
Not necessarily, it would just be very different from our world. One potential problem is that it can be easier for an irreversible universe to slip into an inert 'dead' state, since information can be globally erased.
Conor Sullivan
There is also no possibility of a Penrose-style return to form due to extremely unlikely random fluctuations over extreme lengths of time.

See this comment and its links on what the long-term future of an infinite randomly-initialized GoL grid looks like. In brief: an infinite field of "ash" (random oscillating or fixed patterns), which would likely eventually (after an exponentially long time?) be invaded by self-replicators.

I conjecture that it will take longer for these patterns to appear in Life than in our universe, though. In our universe we got intelligent by bootstrapping off of simpler replicators, I'm not sure if Life is set up to make that possible/likely...

I'm not sure I agree with this. For instance, changing one's "velocity" in a controlled manner seems nearly impossible in practically all cellular automata for various reasons, partly because they lack Poincaré invariance. Could one have intelligent life without this?

Quintin Pope
I'm pretty sure you can have intelligent life arise in computational environments that lack any sort of notion of velocity. E.g., the computational environments of the brain and current DL systems seem able to support intelligence, but they don't have straightforward notions of velocity.
tailcalled
They are created by other intelligent minds, though. What I mean is, would it be adaptive for intelligence to evolve without velocity? I would analogize it to plants vs animals. Animals tend to be much more intelligent than plants, presumably because their ability to move around means that they have to deal with much more varied conditions, or because they can have much more complex influences on the world. These seem difficult to achieve without varying one's velocity. There's also stuff like social relations; inanimate organisms might help or hurt each other, but they probably have to do so in much simpler ways, since their positions relative to each other are fixed, while animals can more easily interact with others in more complex ways and have more varying relations.

Questions like this highlight how misguided the current state of anthropic reasoning is. 

When one spends enough time thinking about the anthropic principle, it would seem quite reasonable to raise this question. But take a step back and consider it as a physical/scientific statement: "The universe is likely in the simplest form that could support intelligent life." It is oddly specific. Why not say "the universe is likely the simplest that could support black holes", or hydrogen atoms, or very-large-scale integrated circuits? Each hypothesis results in vastly different predictions of what the universe is like. Why favor "intelligent life" above anything else?

People may provide different justifications for this preferential treatment. But it always boils down to this: we are intelligent life. And it is intuitively obvious that the physical existence of oneself is important for one's reasoning about the universe. But the Copernican science paradigm has no place for the first-person perspective. It requires one to "zoom out" and think from an impartial outsider's view. This conflict leads to awkward attempts to mix the outsider's impartiality and the first-person self-focus. It gives rise to teleological conclusions like the fine-tuned universe, which is often used as proof of God's existence. And, less jarringly but far more deceptively, to regarding oneself as equivalent to the result of an imaginary random sampling. It is perhaps unsurprising that anthropic problems often end up as paradoxes.

To resolve the paradoxes we have to at least be aware of which viewpoint we are taking in reasoning and make a conscious effort not to mix the first-person perspective with the impartial outsider's view. To lay all the debate to rest, we perhaps need to develop a framework incorporating the first-person perspective into the scientific paradigm.

15 comments

There are 3 generations of quarks and leptons, but apparently only one is needed to create the universe as we see it. All these top/bottom/charm/strange/muon/tau thingies seem to be there... for no reason related to intelligent life.

I was under the impression those other particles might be a consequence of a deeper mathematical structure?

Such that asking for a universe without the 'unnecessary' particles would be kind of like asking for one without 'unnecessary' chemical elements?

Well, it might be a consequence of something, but I don't know of any such math that says that if there is one generation of particles, then there ought to be 3 (or more?).

I'm still convinced that the weak force is completely unnecessary.

Do you really mean that you're convinced that it's completely unnecessary, or just not convinced that it is necessary?

There is a rather large gap between a consistent model that verifiably results in a habitable universe, and an argument that there might be such a model. So far we have the latter, and I don't see any argument for the former, either in my previous experience with such research or at your Wikipedia link.

While baryogenesis is definitely not well understood in our universe, there seem to be good arguments that it requires the weak force or at least something like it. Production and distribution of heavy elements (and even many light elements!) also seems to require weak interactions since both s-process and r-process are based on them. It is unclear whether neutron-poor heavy nuclei would be stable even in the absence of weak decay. And so on.

In my evaluation, there are still too many unanswered questions to be reasonably convinced that the weak force is unnecessary.

This is all probably irrelevant anyway. We may well be in a position similar to arguing about whether magnetism is "really" necessary from the point of view of a 19th-century physicist, when it obviously and simply follows from the existence of electric fields and relativity in the more advanced 20th-century theory. It seems to me that a relativistic theory that somehow only has electric fields and not magnetic ones would be much more complex.

It seems quite likely that weak force follows most simply from some underlying theory that we don't have yet.

"It seems quite likely that weak force follows most simply from some underlying theory that we don't have yet."

In fact I think we already have this: https://en.m.wikipedia.org/wiki/Electroweak_interaction (Disclaimer: I don't grok this at all, just going by the summary)

ETA: Based on the link in the top comment, the hypothetical 'weakless universe' is constructed by varying the 'electroweak breaking scale' parameter, not by eliminating the weak force on a fundamental level.

I'm sorry, I should have put a /s in my post. I was joking around. I don't have nearly the physics knowledge to actually have a strong opinion on the subject.

The link explains how the weak interaction is essential for star formation, we kind of need that.

My best guess is that there's a metaverse which consists of (at a minimum) every possible computation. While not technically provable or falsifiable, it does make predictions, which means that circumstantially we should be able to form an excellent guess as to whether or not it's true.

So far, the predictions check out. The hypothesis nicely explains the fine-tuned constants, QM, and the discrete nature of the apparent finest (Planck-region) levels of reality. And yes, it also predicts that we will, on average, be overwhelmingly likely to live in one of the simplest possible universes supporting intelligence (but almost certainly not the VERY simplest).

If this is the case, any actual fundamental mechanism of reality is irrelevant to the point of meaninglessness, as such a metaverse is completely described by a ...0001000... initial row in ECA rules 30 or 45, or a correspondingly simple Turing machine, Lambda Calculus expression, tag machine, Perl script, etc.
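As a concrete toy illustration (my own sketch) of how little machinery such a description needs, here is elementary cellular automaton rule 30 run from the ...0001000... initial row, using a wide finite row with wraparound as a stand-in for the infinite row:

```python
def rule30_step(row):
    # Rule 30's truth table reduces to: new cell = left XOR (center OR right).
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

width = 31
row = [0] * width
row[width // 2] = 1          # the ...0001000... initial condition
history = [row]
for _ in range(15):
    row = rule30_step(row)
    history.append(row)

for r in history:            # prints the familiar chaotic rule-30 triangle
    print("".join("#" if c else "." for c in r))
```

A few lines of code, yet the output is already computationally irreducible in Wolfram's sense, which is the spirit of the claim that a maximally simple program could underlie everything.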

(A post of mine approaching this argument from the tension between subjectivity and computation.)

I've wondered this as well. Particularly with regard to quantum mechanics -- it just seems so weird that our world is quantum mechanical if we "could have" existed in a classically deterministic or stochastic universe.

A QM universe is more stable: not only is the ultraviolet catastrophe avoided, but so is much of classical chaos.

Maybe a specific quantum universe is more likely to contain life than a specific deterministic universe, because there are many branches, so in some of them the lucky accidents happened? Not exactly this, but something like this: the complexity of the universe is only a part of the equation; we need to also consider how likely life is in the universe, how much life the universe contains, etc. And the quantum universe may be "worse" in some regards (having more complex rules), but "better" in others.

A classically stochastic universe could also have a lot of branches. An MWI universe doesn't just have random branches; it only branches under certain conditions, basically when local information propagates widely enough ("quantum darwinism"). I wonder if there is some significance to this method of randomizing that makes life more probable...

I have a hobby of trying to "invent universes" to use as toy models for various things. Sort of like Conway's Game of Life, except Game of Life lacks a bunch of properties that seem fairly core to characterizing the dynamics of our universe (Poincaré invariance and therefore also continuity, Liouville's theorem, conservation of energy and momentum, etc.). (It does have some important characteristics that match our universe, e.g. a fixed speed of causality (light).) Such universes generally have to be deterministic or stochastic, because making them quantum would be computationally infeasible. However, the properties seem very difficult to satisfy in interesting ways for deterministic or stochastic universes.