[Cross-posted from Grand, Unified, Crazy. This builds on a lot of stuff I wrote from before I was cross-posting to Less Wrong, but should be mostly intelligible on a general intuition of what a "computational system" is.]

I’ve been working on a post on predictions which has rather gotten away from me in scope. This is the first of a couple of building-block posts which I expect to spin out so I have things to reference when I finally make it to the main point. This post fits neatly into my old (2014!) sequence on systems theory and should be considered a belated addition to that.

Systems can be deterministic or random. A system that is random is, of course… random. I’m glad the difficult half of this essay is out of the way! Kidding aside, the interesting part is that from the inside, a system that is deterministic also appears random. This claim is technically a bit stronger than I can really argue, but it guides the intuition better than the more formal version.

Because no proper subsystem can perfectly simulate its parent, every inside-the-system simulation must ultimately exclude information, either via the use of lossy abstractions or by choosing to simulate only a proper, open subsystem of the parent. In either case, the excluded information effectively appears in the simulation as randomness: fundamentally unpredictable additional input.
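To make that concrete, here is a minimal Python sketch (a toy example with arbitrary update rules, not any particular real system): the full system is completely deterministic, but an observer inside it can model only part of the state, so the excluded part shows up as unpredictable residuals.

```python
# A toy, fully deterministic system with two coupled state variables.
# An observer inside the system models only `x` and has no access to `h`,
# so the influence of `h` shows up as apparent noise in its predictions.

def step(x, h):
    """One deterministic update of the full system."""
    new_x = (3 * x + h) % 101   # observable part, partly driven by h
    new_h = (5 * h + 7) % 103   # hidden part the observer never sees
    return new_x, new_h

def observer_prediction(x):
    """The observer's lossy model: it can only use the observable state."""
    return (3 * x) % 101        # best guess without knowing h

x, h = 1, 1
residuals = []
for _ in range(10):
    predicted = observer_prediction(x)
    x, h = step(x, h)
    residuals.append((x - predicted) % 101)

# Nothing here is random, yet the residuals look like unpredictable
# additional input to anyone who cannot observe h.
print(residuals)
```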

This has some interesting implications if reality is a system and we’re inside it, as I believe to be the case. First it means that we cannot ever conclusively prove whether the universe is deterministic (a la Laplace’s Demon) or random. We can still make some strong probabilistic arguments, but a full proof becomes impossible.

Second, it means that we can safely assume the existence of “atomic randomness” in all of our models. If the system is random, then atomic randomness is in some sense “real” and we’re done. But if the system is deterministic, then we can pretend atomic randomness is real, because the information necessary to dispel that apparent randomness is provably unavailable to us. In some sense the distinction doesn’t even matter anymore; whether the information is provably unavailable or just doesn’t exist, our models look the same.
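As a rough illustration (a toy sketch; the particular sources are arbitrary choices, not claims about physics): a deterministic bit stream whose seed is hidden from us and a bit stream we treat as primitive get modeled identically by an observer who has neither the seed nor a deeper theory.

```python
# Two bit sources: one deterministic with a hidden seed, one drawn from
# the OS entropy pool and treated as "atomic randomness". An observer
# without the hidden state models both the same way: a fair bit stream.

import os
import random

def deterministic_bits(n, seed=123456789):
    """Deterministic: fully reproducible given the (hidden) seed."""
    rng = random.Random(seed)
    return [rng.getrandbits(1) for _ in range(n)]

def opaque_bits(n):
    """Treated as atomic randomness: no model beneath it is available."""
    return [b & 1 for b in os.urandom(n)]

# From the observer's point of view both get the same model: P(bit=1) ~ 0.5.
for name, bits in [("deterministic", deterministic_bits(1000)),
                   ("opaque", opaque_bits(1000))]:
    print(name, sum(bits) / len(bits))
```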

Comments
This has some interesting implications if reality is a system and we’re inside it, as I believe to be the case. First it means that we cannot ever conclusively prove whether the universe is deterministic (a la Laplace’s Demon) or random. We can still make some strong probabilistic arguments, but a full proof becomes impossible.

Intuitively, it seems like this must always be so: you can model a system, and you can find a model which matches all past predictions, and as time goes by it is always right (and precise and detailed) and accounts for all of the (observable) actions of the system so far... but the model could still turn out to be wrong. (It just probably won't.)

Second, it means that we can safely assume the existence of “atomic randomness” in all of our models. If the system is random, then atomic randomness is in some sense “real” and we’re done. But if the system is deterministic, then we can pretend atomic randomness is real, because the information necessary to dispel that apparent randomness is provably unavailable to us.

And yet this is surprising. Imagine you are in a room, and the lights go off. Then back on. While you might not be in a position to see the light switch(es), you can infer that someone switched them on and off. In order for predictions to always be bad, it seems like some unobserved part of the system must keep changing. (And for surprises which cannot be predicted in advance, the state space must be underexplored (relative to the observer), like the lights changing color.)

First it means that we cannot ever conclusively prove whether the universe is deterministic (a la Laplace’s Demon) or random. We can still make some strong probabilistic arguments, but a full proof becomes impossible.

0 and 1 are not reachable probabilities. We can't "conclusively prove" anything. Even with a mathematical proof, there is some chance that there is a really subtle flaw that you didn't notice, or someone is messing with your mind, or maths is inconsistent.

Is there any difference between a universe with random coins, and one that splits into two parallel worlds, one with heads, the other tails, whenever the coin is tossed? Mathematically, randomness is defined as a measure function over possibilities.
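For reference, the usual formalization is roughly the following (a sketch of the standard Kolmogorov setup, with $\Omega$ the set of possibilities and $\mathcal{F}$ the collection of events):

$$
P : \mathcal{F} \to [0, 1], \qquad P(\Omega) = 1, \qquad P\left(\bigcup_{i} A_i\right) = \sum_{i} P(A_i) \text{ for pairwise disjoint } A_i.
$$

Both the "random coins" universe and the "splitting worlds" universe can be equipped with the same measure, which is one way of cashing out the question above.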

TAG:

0 and 1 are not reachable probabilities.

They are not reachable epistemically. The universe could still be deterministic, meaning that everything that happens has probability 1.000. That would be ontological probability, a feature of reality itself, and not provable by armchair argument. The distinction between ontological and epistemic probability has many advantages.