On Falsifying the Simulation Hypothesis (or Embracing its Predictions)

by Lorenzo Rex · 7 min read · 12th Apr 2021 · 17 comments


Disclaimer: This is my first post on this website; I tried to follow the proper etiquette, but please let me know if something is off.  :)

Briefly about me: former academic (PhD in theoretical physics, quantum black holes, string theory, information paradox) turned entrepreneur (currently building a company in the AI/Robotics space).

 

A widespread belief surrounding the Simulation Hypothesis (SH) is that being or not being in a simulation doesn't really have any implications for our lives. Equivalently, SH is often criticised as unscientific and unfalsifiable, since no definite universal testable predictions have (so far) been made. By a universal prediction I mean a prediction that all (or at least a very large fraction of) simulations must make.

In this post I would like to challenge this view by noticing that, in the space of all simulations, some families of simulations are more likely than others. Knowing even the rough behaviour of the probability distribution over the space of simulations then allows us to extract probabilistic predictions about our reality, bringing SH into the realm of falsifiable theories. Of course, there will be some assumptions to stomach along the way.

The whole line of reasoning of this post can be summarised in a few points:

1- We are equally likely to be in one of the many simulations.

2- The vast majority of simulations are simple.

3- Therefore, we are very likely to be in a simple simulation.

4- Therefore, we should not expect to observe X, Y, Z, ...

 

I will now expand on those points.

 

1- We are equally likely to be in one of the many simulations.

First of all, let's assume that we are in a simulation. Since we have no information that could favour a given simulation, we should treat our presence in any given simulation as equally likely among all the simulations. This "bland indifference principle" tells us that what matters is the multiplicity of a given reference class of simulations, that is, what percentage of all possible simulations belongs to that reference class. The definition of a reference class of a civilisation simulation is tricky and subjective, but for our purposes it is enough to fix a definition; the rest of the post then applies to that definition. For instance, we may say that a simulation in which WWII never started is part of our reference class, since we can conceive of being reasonably "close" to such an alternative reality. But a simulation in which humans have evolved tails may be considered outside our reference class. Again, the choice is pretty much arbitrary, even though I haven't fully explored what happens for "crazy" choices of the reference class.

 

2- The vast majority of simulations are simple.

This is pretty much the core assumption of the whole post. In particular, we arrive there if we assume that the likelihood that a given simulation is run is inversely correlated with the computational complexity of the simulation, across the space of all simulations ever run. We can call this the Simplicity Assumption (SA). The SA mainly follows from the instantaneous finiteness of the resources available to the simulators (all the combined entities that will ever run civilization simulations: governments, AIs, lonely developers, etc.). By instantaneous I mean that the simulators may have infinite resources in the long run, for instance due to an infinite universe, but they should not be able to harness infinite energy at any given time.

We observe this behaviour in many systems: a large number of small instances, a medium number of medium-sized instances, and a small number of large ones. For instance, the lifetime of UNIX processes has been found to scale roughly as 1/T, where T is the CPU age of the process. Similarly, many human-related artifacts have been found to follow Zipf's-law-like distributions.
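This "many small, few large" pattern can be illustrated with a quick sketch. Everything here is made up for illustration: sizes are drawn from a Zipf-like 1/k distribution, and we count how often a few representative sizes occur.

```python
import random

# Hypothetical illustration of the "many small, few large" pattern:
# draw instance sizes from a Zipf-like distribution, p(size = k) ~ 1/k,
# and count how often a few representative sizes occur.
random.seed(0)

sizes = list(range(1, 1001))            # possible sizes 1..1000
weights = [1.0 / k for k in sizes]      # Zipf-like 1/k weights
samples = random.choices(sizes, weights=weights, k=100_000)

counts = {k: 0 for k in (1, 10, 100)}
for s in samples:
    if s in counts:
        counts[s] += 1

# Size 1 should occur roughly 10x as often as size 10,
# and roughly 100x as often as size 100.
print(counts)
assert counts[1] > counts[10] > counts[100]
```

The exact counts depend on the seed, but the ordering (small sizes dominating) is robust.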

In the case of civilization simulations, there are multiple observations that point to the SA being valid:

-While the first ancestor simulation may be a monumental government-scale project, at some point the simulators will be so advanced that even a single developer will be able to run a huge number of simulations. At that point, any simulator will be able to decide between running a single bleeding-edge simulation or, for instance, a large number of simpler simulations. While it is reasonable to imagine the majority of simulators not being interested in running simple simulations, it's hard to imagine that ALL of them would not be interested (this is similar to the flawed solutions to Fermi's paradox claiming that ALL aliens refrain from action X). It is enough for a small number of simulators to make the second choice for simple simulations to quickly outnumber the number of times complex simulations have been run. The advantage for simple simulations will only become more dramatic as the simulators gain more computational power.

-If simulations are used for scientific research, the simulators will be interested in settling on the simplest possible simulation that is complex enough to feature all the elements of interest, and then running that simulation over and over.

-Simple simulations are the only simulations that can be run in nested simulations or on low powered devices.

An example partially illustrating this (no intelligent observer inside!) is Atari games. Take Asteroids. No doubt more complex and realistic space-shooting games exist nowadays. But the fact that Asteroids is so simple allowed it to be embedded as playable in other games (a nested game!) and used as a reinforcement learning benchmark. So if we simply count the number of times an Asteroids-like space-shooting game (this is our reference class) has been played, the original Asteroids is well placed to be the most played space-shooting game ever.

The exact scaling of the SA is unclear. One day we may be able to measure it, if we become advanced enough to run many ancestor simulations. In the following, let's suppose that the scaling is at least Zipf's-law-like, so that if simulation A takes n times more computation than B, then A is n times less likely than B in the space of all simulations.
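Under this Zipf's-law-like version of the SA, the relative probability of being in one of two simulations is just the inverse ratio of their costs. A minimal sketch, with arbitrary cost units:

```python
# Minimal sketch of the Simplicity Assumption with linear (Zipf-like)
# scaling: a simulation that costs n times more compute is n times
# less likely to be the one we are in. Cost units are arbitrary.

def relative_likelihood(cost_a: float, cost_b: float) -> float:
    """P(we are in A) / P(we are in B) under linear SA scaling."""
    return cost_b / cost_a

# A costs 1000 units, B costs 1 unit: A is 1000x less likely than B.
print(relative_likelihood(1000.0, 1.0))  # -> 0.001
assert relative_likelihood(1000.0, 1.0) == 0.001
```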

    

3- Therefore, we are very likely to be in a simple simulation.

This follows from 1+2.

 

4- Therefore, we should not expect to observe X, Y, Z, ...

We don’t know how the simulation is implemented, but in fact we only need a lower bound on how complexity scales in a simulation; we can then factor out our ignorance of the implementation details by finding how likely one simulation is relative to another. Let’s assume an incredible level of computational optimisation, namely that the simulators can simulate the whole universe, including the interactions of all entities, with O(N) complexity, where N is the number of fundamental entities (quantum fields, strings, etc.; it doesn’t matter what the real fundamental entity is). We also don’t really care about what approximation level is used, how granular the simulation is, whether time is dilated, or whether large parts of the universe are just an illusion, since the SA tells us that the most likely simulations are the ones with the highest level of approximation. So, taking the highest possible approximation level compatible with the experience of our reference class, the lower bound on the computational complexity is proportional to the time the simulation is run multiplied by the number of fundamental entities simulated. Since our universe is roughly homogeneous at large scales, N is also proportional to how large the simulated space is.

Now consider a civilization simulation A that simulates our solar system in detail and mocks the rest of the universe, and a simulation B that simulates the whole Milky Way in detail and mocks the rest. Simulating the Milky Way in detail is about 10^11 times harder, if we count the number of stars and black holes. According to the SA with linear scaling, being in simulation B is then about 10^11 times less likely than being in A. Some interesting predictions follow: we are very likely not going to achieve significant interstellar travel or invent von Neumann probes. We are not going to meet extraterrestrial civilizations, unless they are very close, in turn explaining Fermi's paradox.
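As a back-of-the-envelope check of the A-vs-B comparison, assuming cost scales linearly with the number of stars simulated in detail (the ~10^11 star count for the Milky Way is a rough standard figure, not a number from the post):

```python
# Rough numbers: detailed solar system = 1 star (the Sun);
# detailed Milky Way ~ 10^11 stars. Assumes cost ~ number of stars.
stars_solar_system = 1.0
stars_milky_way = 1e11

cost_ratio = stars_milky_way / stars_solar_system  # B is ~1e11x harder
likelihood_ratio = 1.0 / cost_ratio                # linear SA scaling

print(f"B is ~{cost_ratio:.0e}x harder, hence ~{likelihood_ratio:.0e}x as likely as A")
assert likelihood_ratio == 1e-11
```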

Similarly, given two simulations of the same patch of simulated space, long-living simulations are less likely than short-living ones. In particular, infinite-lifetime universes have measure zero.
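The lifetime claim can be illustrated numerically. Under a Zipf-like 1/T prior over lifetimes (an assumption in the spirit of the SA; both the weighting and the finite horizon below are illustrative), the probability mass beyond any lifetime t keeps shrinking as t grows:

```python
# Probability that a simulation lives longer than t, under a 1/T prior
# over lifetimes truncated at a finite horizon T_MAX. Both the prior
# and T_MAX are illustrative assumptions.
T_MAX = 10**6
weights = [1.0 / t for t in range(1, T_MAX + 1)]
total = sum(weights)

def prob_lifetime_exceeds(t: int) -> float:
    # weights[t:] covers lifetimes t+1 .. T_MAX
    return sum(weights[t:]) / total

p10, p1000, p100000 = (prob_lifetime_exceeds(t) for t in (10, 1000, 100_000))
print(p10, p1000, p100000)  # monotonically decreasing tail mass
assert p10 > p1000 > p100000
```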

More generally, this argument applies to any other feature that provides a large enough “optional” jump in complexity in our universe. Notice that the argument is significantly weakened if super-efficient ways of simulating a universe can exist (log(N) or better, depending on how sharp the SA distribution is).

In turn, if humanity were to achieve these feats, it would be a pretty strong indication that we don’t live in a simulation after all. Of course, SH can never be completely falsified, but this is similar to any physical theory with a tunable parameter. What we can do is make SH arbitrarily unlikely, for instance by achieving space colonization of larger and larger regions. In fact, one may point out that the achievements we have already made, such as the exploration of the solar system, are already a strong argument against SH. But this depends on the exact shape of the SA.
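One way to make "arbitrarily unlikely" concrete is a Bayesian update: a feat that is n times costlier to simulate is, under linear SA scaling, n times less likely to be observed if we are simulated, while being unsurprising in base reality. The prior and the cost factors below are purely illustrative assumptions:

```python
# Posterior P(SH | feat) after observing a feat that is cost_factor
# times more expensive to simulate (likelihood ratio 1/cost_factor
# under linear SA), assuming the feat is unsurprising in base reality.

def posterior_sh(prior_sh: float, cost_factor: float) -> float:
    odds = prior_sh / (1.0 - prior_sh)   # prior odds for SH
    odds *= 1.0 / cost_factor            # SA likelihood ratio
    return odds / (1.0 + odds)

for cost_factor in (1e3, 1e6, 1e11):
    print(cost_factor, posterior_sh(0.5, cost_factor))

# Larger feats drive the posterior toward zero without ever reaching it.
assert posterior_sh(0.5, 1e3) < 0.5
assert posterior_sh(0.5, 1e11) < posterior_sh(0.5, 1e3)
```

This matches the point in the text: SH is never fully falsified, but each costlier achievement pushes its posterior probability down.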

In this post I’ve tried to keep details and subtleties to a minimum. I’ve written a longer writeup for those who may be interested in digging deeper; see here: https://osf.io/ca8se/

Please let me know your comments; critiques of the assumptions of this post are very welcome.


17 comments

Simpler simulations are more likely. But so are simpler universes. (For the same reason?)

To me it seems like "simulation" can refer to two different things:

  • "passive simulation" where someone just sets up the rules and simulates the entire universe. This should be indistinguishable from a real universe that happened to have the same laws of physics.
  • "active simulation" where someone intervenes in the universe, and causes miracles. Where miracles refer to anything that does not happen according to the internal laws of physics. It could be something seemingly non-magical, like inserting yourself in the universe as another seemingly ordinary human.

I guess we should only be able to detect the active simulation, by observing the miracles. The problem is, the miracles can be quite rare, and by definition we cannot replicate them experimentally. If someone just magically entered this universe a thousand years ago, had some fun but didn't do anything impactful (beyond the standard butterfly effect), and plans to return a thousand years in the future... there is hardly an experiment we could set up to detect this. Heck, even if I told you "tomorrow at 12:00, somewhere on this planet a new adult human being will miraculously appear -- that's the avatar of the simulator", unless you happen to see this on camera, there is no way to prove it. The universe is complex enough that you can't calculate exactly what the future should look like, therefore you will not notice a small change. (Unless the changes are intentionally made in a way that causes a large change, e.g. if the simulator inserts themselves as a superhero who solves humanity's greatest problems.)

Not sure I get what you mean by simpler universes. According to SH, simulated universes greatly outnumber any real universes.

The bold claim is that we can actually extract experimental consequences even for passive simulations, if only probabilistically. Active simulations are indeed interesting because they would give us a way to prove that we are in a simulation, while the argument in the post can only disprove that we are in one.

A possible problem with active simulations is that they may be a very small percentage of all simulations, since they require someone actively interacting with the simulation. If this is true, we are very likely in a passive simulation.

An important consideration is whether you are trying to fool simulated creatures into believing the simulation is real by hiding glitches, or you are running an honest simulation and allow these glitches to be exploited. You should take this into account when considering how deeply you need to simulate matter to make the simulation plausible.

For example, up until the 1800s you could coarse-grain atoms and molecules and fool everyone about the composition of stuff. The advances in chemistry and physics, and the widespread adoption of inventions relying on atomic theory, made it progressively harder to fool the scientists among the simulated folks; so to get to the early 1900s, your simulation needs grounding in 19th-century physics, otherwise people in your simulation will be exposed to a lot of miracles.

In the 1900s it's quantum mechanics, the Standard Model, and solar system exploration (also relativity, but I don't know about the complexity of simulating GR). I think you could still fool early experimenters into seeing double-slit experiments, convincingly simulate the effects of atomic blasts using classical computers, and maybe even fake Moon landings.

But there are two near-future simulated events that will force you to buy more computational power. The first is solar system exploration. This is less of a concern because, in the worst-case scenario, it's just an increase in N proportional to the number of simulated particles; or maybe you can do it more efficiently by simulating only the visited surface -- so not a big deal.

The real trouble is universal quantum computers. These beasts are exponentially more powerful on some tasks (unless BPP=BQP, of course), and if they become ubiquitous, then to simulate the world reliably you have to use real quantum computers.

Some other things to look out for:

  • Is there a more powerful fundamental complexity class at a deeper-than-quantum level?
  • Is there evidence in nature of computational problems being solved too fast to be reproduced on quantum computers (e.g. does any process yield solutions to NP-hard problems in polynomial time)?
  • Is there a pressure against expanding computational power required to simulate the universe?

Quantum computing is a very good point. I thought about it, but I'm not sure we should consider it "optional". Perhaps, to simulate our reality with good fidelity, simulating the quantum is necessary rather than optional. So if the simulators are already simulating all the quantum interactions in our daily life, building quantum computers would not really increase the power consumption of the simulation.

Anthropic reasoning is hard.  It's especially hard when there's no outside position or evidence about the space of counterfactual possibilities (or really, any operational definition of "possible").  

I agree that we're equally likely to be in any simulation (or reality) that contains us. But I don't think that's as useful as you seem to think. We have no evidence about the number or variety of simulations that match our experience/memory. I also like the simplicity assumption -- Occam's razor continues to be useful. But I'm not sure how to apply it -- I very quickly run into the problem that "god is angry" is a much simpler explanation than a massive set of quantum interactions.

Is it simpler for someone to just simulate this experience I'm having, or to simulate a universe that happens to contain me? I really don't know. I don't find https://en.wikipedia.org/wiki/Boltzmann_brain to be that compelling as a random occurrence, but I have to admit that, as the result of an optimization/intentional process like a simulation, it's simpler than the explanation that there has actually existed or been simulated the full history of the specific things which I remember.

It is surely hard and tricky.

One of the assumptions of the original simulation hypothesis is that there are many simulations of our reality, and therefore we are in a simulation with probability close to 1. I'm starting with the assumption that SH is true and extrapolating from there.

Boltzmann Brains are incoherent random fluctuations, so I tend to believe they should not emerge in large numbers in an intentional process. But other kinds of solipsistic observers may indeed come to dominate. In that case, though, the predictions of SH+SA still stand, since simulating the Milky Way for a solo observer is still much harder than simulating only the solar system for a solo observer.

I think you're missing an underlying point about the Boltzmann Brain concept - simulating an observer's memory and perception is (probably) much easier than simulating the things that seem to cause the perceptions.

Once you open up the idea that universes and simulations are subject to probability, a self-contained instantaneous experiencer is strictly more probable than a universe which evolves the equivalent brain structure and fills it with experiences, or a simulation of the brain plus some particles or local activity which change it over time.

Regarding the first point, yes, that's likely true -- much easier. But if you want to simulate a coherent, long-lasting observation (so really a Brain in a Vat (BIV), not a Boltzmann Brain) you need to make sure that you are sending the right perceptions to the brain. How do you know exactly which perceptions to send if you don't compute the evolution of the system in the first place? You would end up with conflicting observations. It's not much different from how current single-player videogames are built: only one intelligent observer (the player) and an entire simulated world. As we know, running advanced videogames is very compute-intensive, and videogames simulating large worlds are far more compute-intensive than small-world ones. Right now developers use tricks and inconsistencies to work around this; for instance, they don't keep in memory the footprints that your character left ten hours of play ago in a distant part of the map.

What I'm saying is that there are no O(1) or O(log(N)) general ways of simulating even just the perceptions of the universe. Just reading the input of the larger system to simulate should take you O(N).

The probability you are speaking about is relative to quantum fluctuations or similar. If the content of the simulations is randomly generated, then surely Boltzmann Brains are by far more likely. But here I'm speaking about the probability distribution over intentionally generated ancestor simulations. This distribution may contain a very low number of Boltzmann Brains, if the simulators don't consider them interesting.

...assume that the likelihood of a given simulation to be run is inversely correlated with the computational complexity of the simulation, in the space of all the simulation ever run. We can call the latter the Simplicity Assumption (SA)...

Isn't it possible that "simplicity" (according to one or more definitions thereof) need not care about the amount of raw computation required [0] to run any patch of simulation, nor about the volume of space it simulates? E.g. Occam's Razor's measure of 'simplicity' (for AI) gives some function of the description length of a program running on a (universal) computer, so as to predict its own future percepts [1].

Now consider a civilization simulation A that is simulating in detail our solar system and mocking the rest of the universe and a simulation B which is simulating in detail the whole milky way and mocking the rest. Simulating in detail the milky way is about 10^11 times harder, if we count the number of stars and black holes. According to the SA with linear scaling, being in simulation B is about 10^11 times less likely than being in A.

This particular example was what threw me off. In particular, we can presume that programs with shorter descriptions might better (i.e. more plausibly) simulate a complex system, and are more likely to be found by a computer/AI that iterates over possible programs, starting with the simplest one (like in Solomonoff Induction IIUC). This finds the shortest program that nonetheless sufficiently describes some observation sequence, which would not necessarily favor encoding special cases (i.e. "mocking") for costly things to simulate generally. Instead, mocking (since it optimizes for computational cost) might map to a different thing in Solomonoff, having the tradeoff of making the description more complex than the shortest possible one. 

For example, to simulate a human being acting within a nontrivial universe [2], one might hold that there must exist some mathematical structure that describes the human in all the ways we care about, in which case the runtime of their cognitive algorithms, etc. might have to be quite costly [3]. It might be more algorithmically probable, then, for such a human to be mapped to an algorithm built out of simple priors (e.g. laws of physics) instead of high-level code describing what the human does in various edge cases.

This isn't by any means a refutation of your argument, but rather just a thought provoker concerning the main premise of what the Simplicity Assumption should mean [4]. I agree with you and others that "simplicity" should be an organizing principle (that conditions one's priors over the types of possible universes). However, your post didn't coincide with my implicit definition of "simplicity". 

[0] (and possibly the amount of computation it seems to require)

[1] While your post isn't about AI generated universes, predictions made by an AI might well generate viable simulations (which might then become part of the hypothesis space under consideration). 

[2] Another prior holds that we don't appear to be privileged observers within our own universe; in a similar vein, neither might one (rationally?) hold that solipsism is a valid ontology over observers, etc..

[3] Admittedly, the example of accurately simulating one or more human doesn't rule out the possibility that only the observations that people notice are the ones that are simulated (per your view), the rest being "mocked." On this topic, I can only defer to AI related discussions like this and here as to how one might begin to condition the probability space over types of (simulated) universes.  

[4] Though I don't personally know of a good argument in favor of the Speed Prior if we're talking about inductive inference leading to simulations.

My view is that Kolmogorov complexity is the right simplicity measure for probabilistically or brute-force generated universes, as you also mention. But for intentionally generated universes, the length and elegance of the program is not that relevant in determining how likely a simulation is to be run, while computational power and memory are hard constraints that the simulators must face.

For instance, while I would expect unnecessarily long programs to be unlikely to be run, if a long program L is 2x more efficient than a shorter program S, then I expect L to be more likely (many more simulators can afford L, it is cheaper to run in bulk, etc.).

I agree that simpler simulations are more probable. As a result, the cheapest and one-observer-centered simulations are the most numerous. But the cheapest simulations will have the highest probability of glitches. Thus the main observable property of living in a simulation is a higher probability of observing miracles.

I wrote about it here: "Simulation Typology and Termination Risks" and "Glitch in the Matrix: Urban Legend or Evidence of the Simulation?"

Thanks for sharing, I will cite in a future v2 of the paper. 

I don't agree that simple implies the highest probability of glitches, at least not always. Consider, for instance, the case of the same universe-simulating algorithm running on smaller portions of simulated space (same level of approximation). In that case, running the algorithm on larger spaces may lead to more rounding errors.

Glitches may appear if simulators use very simple world-modelling systems, like 2D surface modelling instead of 3D space modelling, or simple neural nets to generate realistic images like our GANs.

You mentioned at the end that as humanity does things that require a more complex simulation, such as exploring the rest of our solar system, the chances of us being in a simple simulation (and therefore a simulation) are reduced. How do you think this changes when you take into consideration the probabilities of us reaching those goals? A concrete example: our simulators might be running ancestor simulations to figure out "what is the probability of us discovering faster-than-light travel?" To answer this they create 10^6 simple simulations of Earth, with varying starting conditions. They then only need to simulate the solar system in significant resolution for those simulations that have explored it, which may be a slim slice of the overall set of simulations. In this example, a simulation that has explored the solar system cannot say "my simulation is computationally expensive to run, therefore there's a low chance of it being simulated", because the simulators can upgrade simple simulations to complex ones once they have reached sufficient complexity.

Have I got a point here? Many thanks for yours (or any onlookers) thoughts on the matter!

Apologies for the verbose phrasing of the question; "If you cannot explain it simply, you do not know it well enough", and I certainly don't know it well enough. :)

Those extended simulations are more complex than non-extended simulations. The Simplicity Assumption tells you that those extended simulations are less likely, and that the distribution is dominated by non-extended simulations (assuming they are considerably less complex).

To see this more clearly, take the point of view of the simulators, and for simplicity neglect all the simulations running at t=now. So, consider all the simulations ever run by the simulators so far that have finished. A simulation is considered finished when it is not run anymore. If a simulation of cost C1 is "extended" to 2*C1, then de facto we call it a C2 simulation. So, there are well-defined distributions of finished simulations: C1, C2 (including pure C2 and extended C1 sims), C3 (including pure C3, extended C2, very extended C1, and all the combinations), etc.
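This bookkeeping can be sketched in a few lines: classify each finished simulation by its total cost, so that an extended C1 lands in the same class as a pure C2. The cost values below are toy numbers:

```python
from collections import Counter

# Toy data: (initial_cost, extension_cost) for finished simulations.
# An extended C1 (1 + 1) lands in the same class as a pure C2.
finished = [(1, 0), (1, 0), (1, 1), (2, 0), (1, 2), (3, 0)]

classes = Counter(initial + ext for initial, ext in finished)
print(dict(classes))  # e.g. {1: 2, 2: 2, 3: 2}
assert classes[2] == 2  # one pure C2 plus one extended C1
```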

You can also include simulations running at t=now in the distribution, even though you cannot be sure how to classify them until they finish. Anyway, for large t the number of simulations running now will be small w.r.t. the number of simulations ever run.

Nitpick:  A simulation is never really finished, as it can be reactivated at any time. 

One caveat about the Simulation Hypothesis I haven't seen anyone commenting on: however advanced and capable of capturing/producing energy our simulators might be, there should be some finite energy spent on each simulation. Assuming there are many concurrent simulations, minimizing the energy spent on each simulation seems an obvious objective for our simulators. One way is by sharing as much simulation data across simulations as possible; another is by tweaking the simulated physics to reduce how much of the Universe needs to be simulated (e.g. supposing you are only interested in simulating Earth). Quantum MWI could be a hint of the former, and the Holographic Principle could be a hint of the latter.

I find it highly unlikely that we live in a simulation. Anyone who has implemented any kind of simulation has found out that they are hugely wasteful. It requires a huge amount of complexity to simulate even a tiny, low-complexity world. Therefore, all simulations will try to optimize as much as possible. However, we clearly don't live in a tiny, low-complexity, optimized world. Our everyday experiences could be implemented with a much, much lower-complexity world that doesn't have stuff like relativity and quantum gravity and dark energy and muons.

The basic premise that simulations are basically the same as reality, that there are many simulations but only one reality, and that statistically we therefore almost certainly live in a simulation, is not consistent with my experience working on simulations. Any simulation anyone builds in the real world will by necessity be vastly less complex than actual reality, and thus vastly less likely to contain complex beings.