*TL;DR: Nothing exists except random data strings and Solomonoff induction between them*.

There are papers that you dream you will write yourself one day, when you have plenty of time. Because the idea is so crazy, you don't expect that anyone else will dare even consider the topic. But then you find that someone has already written it—and done it better than you could ever hope to—using exactly the words and examples that were percolating in the background of your consciousness. It could be called the "bias of illusory capability", or, as my friend Lisa used to say: "I don't believe in lost opportunities".

Muller created a compelling—likely not true, but still exciting—theory of a world where nothing exists. Only two things are postulated: that there are strings of binary digits, called observer states, and that Solomonoff induction can be used to calculate the probability that one string will follow another. The magic of this paper is that all of the main properties of the observable universe are derived from these two assumptions alone, via around 80 pages of dense math. The author works at three academic institutions, the paper was supported by grants, and it cites Bostrom, Sandberg and S. Armstrong. According to the acknowledgements, Lee Smolin at least participated in the discussions leading to its publication. The ideas discussed within it have some similarities to the idea of Boltzmann simulations recently discussed by Armstrong.

He wrote:

*"1. To every sequence of binary strings x = (x1, . . . , xn), there is an associated “observer state” that can be interpreted as the first-person perspective of an observer that has previously experienced x1, . . . , xn−1 and experiences xn now.*

*2. From this first-person perspective, another experience y is going to come next, with probability (chance) of P(y|x; A), where A = (V, E, Λ) is a complete observer graph with Λ = x1, V = {0, 1}∗ and E = V × V.”*

And the conclusion:

*"• Instead of assuming the existence of a “physical world”, we have postulated that the fundamental laws act directly on the level of observers (or rather of their observations). *

*• In the simplest case, there is only one such law, determining the probability of future observations, given all of the observer’s current (and possibly past) observations. We have argued that this should correspond to some kind of “universal apriori probability”, i.e. some version of algorithmic probability.*

*• Since compressible sequences of observations are more likely, this will lead to a “stabilization” of (computable) regularities, and ultimately to the appearance of an external physical world that looks in many fundamental respects like our own. Due to properties of Solomonoff induction, we also obtain a notion of emergent objective reality for setups involving more than one observer. *

*• The architecture of the theory allows us to address a number of questions that are systematically hard to address within standard approaches. This leads to predictions in the context of brain emulation and cosmology and to phenomena like probabilistic zombies or subjective immortality.” *

I understand it as follows: for example, if you see a cup of coffee, the most probable—because algorithmically simplest—next observation is that you continue to see the cup of coffee. Its probability might be 0.99; the probability that the cup will start moving might be at most 0.0099, and the probability that it will be eaten by a green dragon might be 0.0000000001.
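The coffee-cup intuition can be sketched numerically. Below is a minimal toy version of a Solomonoff-style prior, assuming a deliberately crippled "universal machine" of my own invention that interprets a program as a bit pattern repeated forever (the real construction requires a universal Turing machine and is incomputable). Even this toy shows the key effect: continuations that extend a simple regularity receive higher probability than continuations that break it.

```python
from itertools import product

def outputs_prefix(program: str, x: str) -> bool:
    """Toy machine: a program is a bit pattern repeated forever.
    Returns True if its output starts with the string x."""
    reps = len(x) // len(program) + 1
    return (program * reps)[:len(x)] == x

def prior(x: str, max_len: int = 12) -> float:
    """Toy algorithmic prior: sum 2^-|p| over all programs p
    (up to max_len bits) whose output starts with x."""
    total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            p = "".join(bits)
            if outputs_prefix(p, x):
                total += 2.0 ** (-length)
    return total

def next_bit_probs(x: str) -> tuple[float, float]:
    """Probability of observing 0 or 1 next, given history x."""
    m0, m1 = prior(x + "0"), prior(x + "1")
    return m0 / (m0 + m1), m1 / (m0 + m1)

# After seeing "0101", continuing the pattern ("0") is more likely
# than breaking it ("1") -- the analogue of the coffee cup staying put.
p0, p1 = next_bit_probs("0101")
print(p0, p1)  # p0 > p1
```

All the names here (`prior`, `next_bit_probs`, the repeat-forever machine) are my illustration, not Muller's formalism; his paper works with a genuine universal machine and the full observer-graph construction.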

The theory obviously implies some form of the mathematical universe, Platonia, so there is no problem of how anything appeared from nothing. The theory is also a case of neutral monism. I would add that I expect the building blocks of pure observer states to be not 0 and 1, but qualia; however, I think that qualia themselves are just a type of mathematical object, similar to axioms.

You should not believe all this—the epistemic status here is pure mental play, with a credence below 0.001—but given the growing interest in Buddhism in this community, as well as the long-standing interest in dust theory, this may be of interest to LW.

Isn't this Wei Dai's UDASSA? Why doesn't the paper mention it? It has been discussed on LW many times, e.g. here.

Maybe UDASSA's visibility is just low? Even I, despite lurking on LW for 10 years, didn't recognise it immediately. As far as I know, there is no proper scientific article about Wei Dai's theory, and most of it is presented on a half-dead site where some interesting links, like "history of UDASSA", are dead. Also, UDASSA can't be found using Google Scholar.

However, Muller doesn't mention the dust theory either.

Yes, it's a pity that Wei Dai's work was not mentioned. However, it looks like Muller and Wei Dai were trying to answer different questions: Wei Dai wrote about decision theories, while Muller tries to derive the foundations of quantum mechanics and other physical laws from relations between observer-moments.

You're thinking of UDT, which is a different thing. Read about UDASSA; it's not a decision theory, it's exactly what is described in Muller's paper.

Does UDASSA include concepts like the "observer graph" and "graph machine" that Muller describes in the paper? Is Muller just filling out details that are inevitable once you have the core UDASSA concept?

I think these details are inevitable if you have UDASSA and want transition probabilities, but I don't want transition probabilities :-)