I believe that the execution of a certain computation is a necessary and sufficient condition for my conscious experience. Following Tegmark, by "execution" I don't refer to any notion of physical existence; I suspect that the mathematical possibility of my thoughts implies conscious experience. By observing the world and postulating my own representativeness, I conjecture the following measure on possible experiences: the probability of any particular experience falls off exponentially with the complexity required to specify the corresponding computation.
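To make the conjectured measure concrete, here is one natural formalization in terms of prefix Kolmogorov complexity (the notation is mine; the post states the measure only in prose, and any universal prefix machine gives the same measure up to a multiplicative constant):

```latex
% Sketch: measure on experiences, exponential in description complexity.
% U is a universal prefix machine; K(e) is the length of the shortest
% program that specifies the computation corresponding to experience e.
K(e) = \min\{\, |p| : U(p) \text{ specifies } e \,\}, \qquad
P(e) \propto 2^{-K(e)}.
```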

It is typical to use some complexity prior to select a universe, and then to appeal to some different notion to handle the remaining anthropic reasoning (to ask: how many beings have my experiences within this universe?). What I am suggesting is to instead apply a complexity prior to our experiences directly.

If I believe a brain embodying my thoughts exists in some simple universe, then my thoughts can be described precisely by first describing that universe and then pointing to the network of causal relationships which constitute my thoughts. If I have seen enough of the universe, then this will be the most concise description consistent with my experiences. If there are many "copies" of that brain within the universe, then it becomes that much easier to specify my thoughts. In fact, it is easy to check that you recover essentially intuitive anthropics in this way.
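The copy-counting claim can be checked with back-of-the-envelope arithmetic. Decomposing the description into a universe program plus a locator is my gloss on the argument above, not notation from the original:

```latex
% Describe an experience by (i) a program u for the universe and
% (ii) a locator l_i pointing at one embodiment of the thoughts.
% One copy contributes measure 2^{-(|u|+|l|)}. With N similar copies,
% summing over locators of comparable length,
P(e) \approx \sum_{i=1}^{N} 2^{-(|u| + |l_i|)}
     \approx N \cdot 2^{-(|u| + |l|)},
% so the measure scales linearly with the number of copies,
% recovering the intuitive "count the observers" answer.
```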

This prior has a significant impact on the status of simulations. In general, making two simulations of a brain puts twice as much probability on the associated experiences. However, we no longer maintain substrate independence (which I now consider a good thing, having discovered that my naive treatment of anthropics for simulations is wildly inconsistent). The significance of a particular simulation depends on how difficult it is to specify (within the simple universe containing that simulation) the causal relationships that represent its thoughts. So if we imagine the process of "splitting" a simulation running on a computer which is two atoms thick, we predict that (at least under certain circumstances) the number of copies doubles but the complexity of specifying each one increases to cancel the effect.
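The cancellation in the splitting thought experiment is the same arithmetic run in reverse (again using the universe-plus-locator decomposition, which is my gloss):

```latex
% Before splitting: one simulation with a locator of length l.
% After splitting the two-atom-thick computer: two copies, but each
% locator needs one extra bit to say which layer is meant, so
2 \cdot 2^{-(|u| + l + 1)} = 2^{-(|u| + l)},
% and the total measure on the experience is unchanged.
```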

This prior also gives precise answers to anthropic questions in cosmology. Even in an infinite universe, description complexity still answers questions such as "how much of you is there? Why aren't you a Boltzmann brain?" (of course this still supposes that a complexity prior is applicable to the universe).

This prior also, at least in principle, tells you how to handle anthropics across quantum worlds. Either it can account for the Born probabilities (possibly in conjunction with some additional physics, like stray probability mass wandering in from nearby decoherent worlds) or it can't. In that sense, this theory makes a testable "prediction." If it does correctly explain the Born probabilities, then I feel significantly more confident in my understanding of quantum mechanics and in this version of a mathematical multiverse. If it doesn't, then I tentatively reject this version of a mathematical multiverse (tentatively because there could certainly be more complicated things happening in quantum mechanics, and I don't yet know of any satisfactory explanation for the Born probabilities).

Edit: this idea is exactly the same as UDASSA as initially articulated by Wei Dai. I think it is a shame that these arguments aren't better known, since they very cleanly resolve some of my confusion about simulations and infinite cosmologies. My only contribution appears to be a slightly more concrete plan for calculating (or failing to calculate) the Born probabilities; I will report back later on how the computation goes.