
I haven't come across this particular argument before, so I hope I'm not just rehashing a well-known problem.

"The universe displays some very strong signs that it is a simulation.

As has been mentioned in some other answers, one way to efficiently achieve a high fidelity simulation is to design it in such a way that you only need to compute as much detail as is needed. If someone takes a cursory glance at something you should only compute its rough details and only when someone looks at it closely, with a microscope say, do you need to fill in the details.

This puts a big constraint on the kind of physics you can have in a simulation. You need this property: suppose some physical system starts in state x. The system evolves over time to a new state y which is now observed to accuracy ε. As the simulation only needs to display the system to accuracy ε, the implementor doesn't want to have to compute x to arbitrary precision. They'd like to only have to compute x to some limited degree of accuracy. In other words, demanding y to some limited degree of accuracy should only require computing x to a limited degree of accuracy.

Let's spell this out. Write y as a function of x, y = f(x). We want that for all ε > 0 there is a δ > 0 such that for all x' with |x' − x| < δ, |f(x') − f(x)| < ε. This is just a restatement in mathematical notation of what I said in English. But do you recognise it?

It's the standard textbook definition of a continuous function. We humans invented the notion of continuity because it was a ubiquitous property of functions in the physical world. But it's precisely the property you need to implement a simulation with demand-driven level of detail. All of our fundamental physics is based on equations that evolve continuously over time and so are optimised for demand-driven implementation.

One way of looking at this is that if y=f(x), then if you want to compute n digits of y you only need a finite number of digits of x. This has another amazing advantage: if you only ever display things to a given accuracy you only ever need to compute your real numbers to a finite accuracy. Nature could have chosen to use any number of arbitrarily complicated functions on the reals. But in fact we only find functions with the special property that they need only be computed to finite precision. This is precisely what a smart programmer would have implemented.
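The finite-precision point can be illustrated concretely. Here is a minimal Python sketch (my own example, not from the original argument) using the square root, a continuous function, and the standard `decimal` module: perturbing the input far beyond the precision we care about leaves the displayed output unchanged.

```python
from decimal import Decimal, getcontext

getcontext().prec = 30

x  = Decimal("2.000000000000000000")
x2 = Decimal("2.000000000000001")   # differs from x only around the 16th digit

y, y2 = x.sqrt(), x2.sqrt()

# Continuity at work: a tiny (delta) change in the input moves the
# output by a comparably tiny (epsilon) amount, so the leading digits
# of y are already pinned down by finitely many digits of x.
print(y)
print(y2)
assert abs(y - y2) < Decimal("1e-14")
```

So a simulator that only ever has to display √x to, say, 14 digits never needs more than a bounded number of digits of x.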

(This also helps motivate the use of real numbers. The basic operations on real numbers such as addition and multiplication are continuous and require only finite precision in their arguments to compute their values to finite precision. So real numbers give a really neat way to allow inhabitants to find ever more detail within a simulation without putting an undue burden on its implementation.)

But you can go one step further. As Gregory Benford says in Timescape: "nature seemed to like equations stated in covariant differential forms". Our fundamental physical quantities aren't just continuous, they're differentiable. Differentiability means that if y=f(x) then once you zoom in closely enough, y depends linearly on x. This means that one more digit of y requires precisely one more digit of x. In other words our hypothetical programmer has arranged things so that after some initial finite length segment they can know in advance exactly how much data they are going to need.

After all that, I don't see how we can know we're not in a simulation. Nature seems cleverly designed to make a demand-driven simulation of it as efficient as possible."


Life requires stability. The set of laws of physics capable of giving rise to intelligent life might be limited to those where tiny changes tend not to have big impacts.

On the other hand, small changes do cause large effects - that's what chaos theory is about. And you can't have life without some minimum complexity. You have lots of stability in empty space - but no life. Life consists of those forms that evolved to maintain and create stability.

Maybe the differentiable physics we observe is just an approximation of a lower-level non-differentiable physics, the same way Newtonian mechanics is an approximation of relativity.

If physics is differentiable, that's definitely evidence, by symmetry of is-evidence-for. But I have no idea how strong this evidence is because I don't know the distribution of the physical laws of base-level universes (which is a very confusing issue). Do "most" base-level universes have differentiable physics? We know that even continuous functions "usually" aren't differentiable, but I'm not sure whether that even matters, because I have no idea how it's "decided" which universes exist.

Also, maybe intelligence is less likely to arise in non-differentiable universes. But if so, it's probably just a difference of degree of probability, which would be negligible next to the other issues, which seem like they'd drive the probability to almost exactly 0 or 1.

> This puts a big constraint on the kind of physics you can have in a simulation. You need this property: suppose some physical system starts in state x. The system evolves over time to a new state y which is now observed to accuracy ε. As the simulation only needs to display the system to accuracy ε, the implementor doesn't want to have to compute x to arbitrary precision. They'd like to only have to compute x to some limited degree of accuracy. In other words, demanding y to some limited degree of accuracy should only require computing x to a limited degree of accuracy.

> Let's spell this out. Write y as a function of x, y = f(x). We want that for all ε > 0 there is a δ > 0 such that for all x' with |x' − x| < δ, |f(x') − f(x)| < ε. This is just a restatement in mathematical notation of what I said in English. But do you recognise it?

One problem is that the function f(x) is seldom known exactly. In physics, we usually have a differential equation that f is known to satisfy. Actually computing f is another problem entirely. Only in rare cases is the exact solution known. In general, these equations are solved numerically. For a system that evolves in time, you'll pick an increment. You take the initial data at t_0 and use it to approximate the solution at t_1, then use that to approximate the solution at t_2, and so on until you go as far out as you need. At each step you introduce an error and a big part of numerical analysis is figuring out what happens to this error when you take a large number of steps.
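The stepping scheme described above is, in its simplest form, the forward Euler method. A minimal sketch (the equation dy/dt = y, with exact solution e^t, is my own example): each step incurs a small local error, and the global error at the end shrinks roughly in proportion to the step size.

```python
import math

def euler(f, y0, t_end, n_steps):
    """Forward Euler: advance y by dt·f(y) for n_steps increments."""
    dt, y = t_end / n_steps, y0
    for _ in range(n_steps):
        y += dt * f(y)
    return y

# Solve dy/dt = y, y(0) = 1 out to t = 1; the exact answer is e.
exact = math.e
for n in (10, 100, 1000):
    approx = euler(lambda y: y, 1.0, 1.0, n)
    print(n, approx, abs(approx - exact))

# First-order method: shrinking the step by 10× shrinks the error ~10×.
assert abs(euler(lambda y: y, 1.0, 1.0, 1000) - exact) < \
       abs(euler(lambda y: y, 1.0, 1.0, 100) - exact)
```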

It's a feature of chaotic systems that this error grows exponentially. Even a floating point error in the last digit has the potential to rapidly grow and come to dominate the calculation. In the words of Edward Lorenz:

> Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
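A standard toy demonstration of this error growth (my own example, not part of the comment above) is the logistic map at r = 4: two starting points that agree to twelve decimal places become completely uncorrelated within a few dozen iterations.

```python
def diverge(a, b, n, r=4.0):
    """Iterate the chaotic logistic map x ← r·x·(1−x) from two nearby
    starting points and return the largest separation seen."""
    worst = 0.0
    for _ in range(n):
        a, b = r * a * (1 - a), r * b * (1 - b)
        worst = max(worst, abs(a - b))
    return worst

# A perturbation in the 12th decimal place roughly doubles each step,
# so within ~40 iterations it has grown to order one.
print(diverge(0.3, 0.3 + 1e-12, 60))
assert diverge(0.3, 0.3 + 1e-12, 60) > 0.5
```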

Hmm, then why is the universe so consistently arranged to be differentiable? That still requires an explanation.

It is not super clear whether "real numbers are ubiquitous" is a fair statement. We kind of know that nature is decidedly quantum, i.e. discrete, i.e. non-continuous. In fact we have a name for theories that are continuous-like: those tend to be called "classical" theories (even when the theory behind them is brand new). Restatement of the argument: the classical nature of the universe is evidence that it is a simulation. Counterargument: the universe is not classical, therefore we do not have any reason to assume it is a simulation.

It is also hazy on what it means by "details". We could have physics that behaves one way when observed only roughly but another way when observed in detail. It is unclear whether "If someone takes a cursory glance at something you should only compute its rough details and only when someone looks at it closely, with a microscope say, do you need to fill in the details." is satisfied in such a system. I guess the difference is that on rare occasions one would need to "fill out" rather than "fill in", that is, it would be possible for a micro-observed phenomenon to lie outside the range determined for a macro-observed phenomenon. However, we have direct examples of nature exhibiting this level of detail-dependent physics. If an electron is not observed in a double-slit experiment, it forms a path that favors the middle compared to when it is checked which slit is used. Therefore the starting assumption of the argument is relied on more heavily than it is known to hold, and in fact we know it doesn't hold to the degree required. Restatement of the claim: "Since we live in a universe where you only need to 'fill in' details, we have reason to believe it has an intentionally, especially light ontology". Counterargument: "We occasionally need to 'fill out' details, so an especially light ontology is a poor model of our universe, and therefore we don't have any special reason to make assumptions about its intentionality".

The main problem with the argument is not that it makes errors, but that its premises are false, or apply only to an extent far smaller than the attempted conclusion requires (there might be some basis to conclude that part of reality is a simulation, but a part-simulation, part-genuine reality is a hypothesis that seems implausible even as a definition, even before recourse to evidence (the standard way of saying that is that its prior is low?)).

Some physical systems are famously chaotic and unpredictable, like weather. And weather influences both everyday life and large-scale history a lot. People live or die, wars are won or lost, etc. because of good or bad weather. How does this fit into this theory?

If the universe were non-differentiable and non-continuous, I would consider that to be evidence for simulation. And in fact I've heard that argument: everything is discrete, like it's all bits at the bottom, which is evidence it's run on bits and so a simulation. But continuity and discreteness can't both be evidence for the same thing.

The Simulation Argument gives some insight into the nature of simulations of the universe:

> [The] maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost compared to simulating the cortical activity. We can therefore use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating a human mind. If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. [...] a posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times.
> Therefore, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. [...]

Emphasis mine.


I am probably saying something completely bogus here, but it may be worth a quick thought: our observation of everything outside the Earth is based on information acquired through a limited number and kind of scientific measurement instruments - telescopes, radio telescopes, Hubble and suchlike - so it would not be very difficult to tweak the simulation so that they don't actually need to build a full universe around us; they just feed a certain kind of data into our instruments. Same thing with particle physics etc. What else do we have? Some guys went to the Moon, and that was really the only large-scale, bare-eye, first-hand observation of things outside the Earth, but they could have built it just shortly before that...

Again, don't take me too seriously here, but isn't this how you build a typical modern videogame, of the Fallout 3 type? You just paint the Moon on the sky, and if there is a telescope in the game then you just put a larger picture of the Moon into it, and only if the players really insist on flying there do you actually build it fully detailed in 3D and release it as downloadable or purchasable extra content. In a videogame, stars are just dots painted on the sky not too far from the player, and this does not prevent game developers from feeding various data about them into the measurement instruments of scientists in the game, and they don't need to fully work them out until they release downloadable content featuring an interstellar spaceship.

Again, megasorries for publishing to this noble forum something that sounds like a teenager's ideas after smoking pot. I am just saying that in the actual immersive simulations we build for the enjoyment of human beings, this is how we do it. We don't actually work out a full universe; we just make the player believe we did, custom feeding in data as required.


> Differentiability means that if y=f(x) then once you zoom in closely enough, y depends linearly on x. This means that one more digit of y requires precisely one more digit of x.

One more digit of y might require any number of additional digits of x, depending on the slope.

And what about chaotic systems? The number of digits you require in the initial conditions to get one digit precision in the conditions at time t can grow linearly, and quite rapidly, with t.
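That digit loss is easy to see in a quick sketch (again using the r = 4 logistic map as a stand-in chaotic system, which is my own choice of example): an initial condition known to about twenty digits still pins down the state after a few dozen steps, but after a hundred steps the two trajectories are unrelated.

```python
from decimal import Decimal, getcontext

def iterate(x, n):
    """Run n steps of the chaotic logistic map x ← 4·x·(1−x)."""
    for _ in range(n):
        x = 4 * x * (1 - x)
    return x

# Work with 50 significant digits so rounding error stays negligible.
getcontext().prec = 50

x           = Decimal("0.3")
x_truncated = Decimal("0.30000000000000000001")  # agrees to ~20 digits

# After 30 steps the trajectories still agree closely...
print(iterate(x, 30), iterate(x_truncated, 30))
# ...but after 100 steps the initial 20 digits have all been "used up"
# and the two states bear no relation to each other.
print(iterate(x, 100), iterate(x_truncated, 100))
```

So each extra step of simulated time consumes, on average, a fixed number of digits of the initial condition, which is the linear growth described above.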

[This comment is no longer endorsed by its author]