Followup to: Anatomy of Multiversal Utility Functions: Tegmark Level IV

Outline: In the previous post, I discussed the properties of utility functions in the extremely general setting of the Tegmark level IV multiverse. In the current post, I am going to show how the discovery of a theory of physics allows the agent to perform a certain approximation in its decision theory. I'm doing this with an eye towards analyzing decision theory and utility calculus in universes governed by realistic physical theories (quantum mechanics, general relativity, eternal inflation...)

A Naive Approach

Previously, we used the following expression for the expected utility:

[1] $\mathrm{EU} = \int_X U(x)\, d\mu(x)$

where $X$ is the space of binary sequences, $\mu$ is the Solomonoff measure on $X$ and $U$ is the utility function.

Since the integral is over the entire "level IV multiverse" (the space of binary sequences), [1] makes no reference to a specific theory of physics. On the other hand, a realistic agent is usually expected to use its observations to form theories about the universe it inhabits, subsequently optimizing its action with respect to the theory.

Since this process crucially depends on observations, we need to make their role explicit. Because we assume the agent uses some version of UDT, we are not supposed to update on observations; instead, we evaluate the logical conditional expectation values

[2] $\mathrm{EU}(\pi) = \mathrm{E}_L\!\left[\int_X U(x)\, d\mu(x) \,\middle|\, A = \pi\right]$

Here $A$ is the agent, $\pi$ is a potential policy for the agent (a mapping from sensory inputs to actions) and $\mathrm{E}_L$ is the expectation value with respect to logical uncertainty.

Now suppose $A$ made observations $o$ leading it to postulate physical theory $T$. For the sake of simplicity, we suppose $A$ is only deciding its actions in the universes in which the observations $o$ were made1. Thus, we assume that the input space factors as $I = O \times I'$ and we're only interested in inputs in the set $\{o\} \times I'$. This simplification leads to replacing [2] by

[3] $\mathrm{EU}(\pi_o) = \mathrm{E}_L\!\left[\int_X U(x)\, d\mu(x) \,\middle|\, A|_{\{o\} \times I'} = \pi_o\right]$

where $\pi_o$ is a "partial" policy referring to the $o$-universes only.
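As a toy picture of this factoring (all names here are hypothetical, chosen only for illustration): a full policy maps pairs (observation, rest-of-input) to actions, and the partial policy $\pi_o$ is its restriction to inputs whose observation component is the fixed $o$.

```python
# Toy illustration of a partial policy: the input space factors as pairs
# (observation, rest), and pi_o is the restriction of a full policy to
# inputs whose first component equals the fixed observation o.

def restrict(policy, o):
    # pi_o acts only on the "rest" component, with o held fixed.
    return lambda rest: policy((o, rest))

# A hypothetical full policy over the whole input space.
full_policy = lambda inp: "act-A" if inp[0] == "obs-1" else "act-B"

pi_o = restrict(full_policy, "obs-1")
print(pi_o("anything"))  # -> act-A
```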

The discovery of $T$ allows $A$ to perform a certain approximation of [3]. A naive guess of the form of the approximation is

[4'] $\mathrm{EU}(\pi_o) \approx C + \mathrm{E}_L\!\left[\int_X U(x)\, d\mu_T(x) \,\middle|\, A|_{\{o\} \times I'} = \pi_o\right]$

Here, $C$ is a constant representing the contributions of the universes in which $T$ is not valid (whose logical-uncertainty correlation with $\pi_o$ we neglect) and $\mu_T$ is a measure on $X$ corresponding to $T$. Now, physical theories in the real world often specify time-evolution equations without saying anything about the initial conditions. Such theories are "incomplete" from the point of view of the current formalism. To complete such a theory, we need a measure on the space of initial conditions: a "cosmology". A simple example of a "complete" theory $T$: a cellular automaton with deterministic (or probabilistic) evolution rules and a measure on the space of initial conditions (e.g. set each cell to an independently random state).
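A minimal sketch of such a "complete" theory, using a finite elementary cellular automaton as a stand-in (the rule, tape size and utility function below are all illustrative choices, not part of the formalism): the evolution law plus the measure on initial conditions induce a measure over histories, against which a utility function can be integrated by Monte Carlo.

```python
import random

# Toy stand-in for a "complete" physical theory T: an elementary cellular
# automaton (rule 110) on a finite cyclic tape, together with a measure on
# initial conditions (each cell independently uniform).

RULE = 110
WIDTH, STEPS = 32, 64

def step(cells):
    # One synchronous update of the rule-110 automaton on a cyclic tape.
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def sample_history(rng):
    # Draw an initial condition from the "cosmology" (i.i.d. fair coins)
    # and run the deterministic evolution law.
    cells = [rng.randint(0, 1) for _ in range(WIDTH)]
    history = [cells]
    for _ in range(STEPS):
        cells = step(cells)
        history.append(cells)
    return history

def utility(history):
    # A hypothetical utility function on histories: average cell density.
    return sum(map(sum, history)) / (WIDTH * (STEPS + 1))

rng = random.Random(0)
# Monte Carlo estimate of the integral of U against the measure mu_T
# induced by the theory (evolution law + measure on initial conditions).
estimate = sum(utility(sample_history(rng)) for _ in range(200)) / 200
print(round(estimate, 3))
```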

However, [4'] is in fact not a valid approximation of [3]. This is because the use of $\mu_T$ fixes the ontology: $\mu_T$ treats binary sequences as encoding the universe in a way natural for $T$, whereas the dominant2 contributions to [3] come from binary sequences which encode the universe in a way natural for $U$.

Ontology Decoupling

Allow me a small digression to discuss desiderata for logical uncertainty. Consider an expression of the form $\mathrm{E}_L(a)$ where $a$ is a mathematical constant with some complex definition, e.g. $\pi$ or the Euler-Mascheroni constant $\gamma$. From the point of view of an agent with bounded computing resources, $a$ is a random variable rather than a constant (since its value is not precisely known). Now, in usual probability theory we are allowed to use identities such as $\mathrm{E}(2a) = 2\,\mathrm{E}(a)$. In the case of logical uncertainty, the identity is less obvious, since the operation of multiplying by 2 has non-vanishing computing cost. However, since this cost is very small, we expect to have the approximate identity $\mathrm{E}_L(2a) \approx 2\,\mathrm{E}_L(a)$.
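A crude toy model of this (not a serious theory of logical uncertainty, just an illustration): model the bounded reasoner's beliefs about $\pi$ as the distribution of partial sums of a series truncated at a random point, and check that multiplying by 2 commutes with the expectation.

```python
import random

# Toy model of logical uncertainty about a constant: a bounded reasoner
# estimates pi by running the Leibniz series for a random number of steps,
# so "pi" behaves like a random variable over its possible estimates.

def bounded_pi_estimate(steps):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(steps))

rng = random.Random(0)
samples = [bounded_pi_estimate(rng.randint(10, 1000)) for _ in range(500)]

e_a = sum(samples) / len(samples)                   # plays the role of E_L(a)
e_2a = sum(2 * a for a in samples) / len(samples)   # plays the role of E_L(2a)

# Multiplying by 2 costs almost nothing, so the identity holds (up to
# floating point) in this toy model; for costlier operations it would
# only hold approximately.
print(abs(e_2a - 2 * e_a) < 1e-9)  # -> True
```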

Consider a set $Q$ of programs computing functions $q: X \to X$, containing the identity program. Then, the properties of the Solomonoff measure give us the approximation

[5] $\mu \approx \sum_{q \in Q} 2^{-|q|}\, q_* \mu^Q$

Here $\mu^Q$ is the restriction of $\mu$ to hypotheses which don't decompose as applying some program in $Q$ to another hypothesis, $q_*$ denotes the pushforward of a measure by $q$, and $|q|$ is the length of the program $q$.
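The arithmetic behind this regrouping can be sketched in miniature (the prefix-free sets below are tiny hypothetical codes, not actual program enumerations): when a hypothesis program splits into a re-encoder prefix $q$ followed by a base program, the $2^{-\mathrm{length}}$ weights factor, which is the shape of [5].

```python
from itertools import product

# Toy check of the regrouping behind [5]: if a hypothesis program
# decomposes as a re-encoder q (from a prefix-free set Q) followed by a
# base program h, the 2^(-length) weights factor, so the total weight of
# the composites equals sum_q 2^(-|q|) times the total weight of the bases.

Q = ["0", "10", "11"]          # re-encoders, weights 2^-1, 2^-2, 2^-2
BASES = ["00", "01", "100"]    # an arbitrary finite sample of base programs

def w(p):
    # Solomonoff-style weight of a program: 2 to the minus its length.
    return 2.0 ** (-len(p))

lhs = sum(w(q + h) for q, h in product(Q, BASES))      # weight of composites
rhs = sum(w(q) for q in Q) * sum(w(h) for h in BASES)  # factored form
print(abs(lhs - rhs) < 1e-12)  # -> True: the weights factor term by term
```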

Applying [5] to [3] we get

$\mathrm{EU}(\pi_o) \approx \mathrm{E}_L\!\left[\sum_{q \in Q} 2^{-|q|} \int_X U^q(x)\, d\mu^Q(x) \,\middle|\, A|_{\{o\} \times I'} = \pi_o\right]$

Here $U^q$ is a shorthand notation for $U \circ q$. Now, according to the discussion above, if we choose $Q$ to be a set of sufficiently cheap programs3 we can make the further approximation

$\mathrm{EU}(\pi_o) \approx \sum_{q \in Q} 2^{-|q|}\, \mathrm{E}_L\!\left[\int_X U^q(x)\, d\mu^Q(x) \,\middle|\, A|_{\{o\} \times I'} = \pi_o\right]$

If we also assume $Q$ to be sufficiently large, it becomes plausible to use the approximation

[4] $\mathrm{EU}(\pi_o) \approx C + \sum_{q \in Q} 2^{-|q|}\, \mathrm{E}_L\!\left[\int_X U^q(x)\, d\mu_T(x) \,\middle|\, A|_{\{o\} \times I'} = \pi_o\right]$

The ontology problem disappears since $Q$ bridges between the ontologies of $T$ and $U$. For example, if $T$ describes the Game of Life and $U$ describes glider maximization in the Game of Life, but the two are defined using different encodings of Game of Life histories, then the term corresponding to the re-encoding $q$ between the two will be dominant2 in [4].
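The role of such a re-encoding $q$ can be sketched concretely (the two encodings and the utility below are hypothetical stand-ins, much simpler than actual Game of Life histories): $T$'s ontology describes a grid as a row-major bit tuple, $U$ is defined on sets of live-cell coordinates, and $q$ translates between them, so utility is evaluated via the composition $U \circ q$.

```python
# Toy illustration of a re-encoding program q bridging two ontologies:
# T encodes a cell grid row-major as a bit tuple, while U is defined on
# states encoded as sets of live-cell coordinates.

WIDTH = 4

def q(bits):
    # Re-encode T's ontology (row-major bits) into U's ontology
    # (frozenset of (row, col) coordinates of live cells).
    return frozenset((i // WIDTH, i % WIDTH) for i, b in enumerate(bits) if b)

def utility(live_cells):
    # A hypothetical utility on U's ontology: number of live cells.
    return len(live_cells)

# A state in T's encoding...
x = (0, 1, 1, 0,
     1, 0, 0, 1,
     0, 0, 0, 0,
     1, 1, 1, 1)

# ...is evaluated by U only after the re-encoding q is applied; this
# composition U . q is what the term for q in [4] computes.
print(utility(q(x)))  # -> 8
```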

Stay Tuned

The formalism developed in this post does not yet cover the entire content of a physical theory. Realistic physical theories not only describe the universe in terms of an arbitrary ontology but also explain how this ontology relates to the "classical" world we experience. In other words, a physical theory comes with an explanation of the embedding of the agent in the universe (a phenomenological bridge). This will be addressed in the next post, where I explain the Cartesian approximation: the approximate decoupling of the agent from the rest of the universe.

Subsequent posts will apply this formalism to quantum mechanics and eternal inflation to understand utility calculus in Tegmark levels III and II respectively.

1 As opposed to a fully fledged UDT agent which has to simultaneously consider its behavior in all universes.

2 By "dominant" I mean dominant in its dependence on the policy $\pi_o$ rather than absolutely.

3 They have to be cheap enough to take the entire sum out of the expectation value, rather than only the $2^{-|q|}$ factor in a single term. This condition depends on the amount of computing resources available to our agent, which is an implicit parameter of the logical-uncertainty expectation values.
