We prove the existence of a probability distribution over a theory $T$ with the property that for certain definable quantities $X$, the expectation of the value of $X$ under the distribution is accurate, i.e. it equals the actual expectation of $X$; and with the property that it assigns probability 1 to its behaving this way. This may be useful for self-verification, by allowing an agent to satisfy a reflective consistency property and at the same time believe that it, or similar agents, satisfies the same property. Thanks to Sam Eisenstat for listening to an earlier version of this proof and pointing out a significant gap in the argument. The proof presented here has not been vetted yet.
- Problem statement
Given a distribution $\mathbb{P}$ coherent over a theory $T$, and some real-valued function $X$ on completions of $T$, we can define the expectation $\mathbb{E}_\mathbb{P}[X]$ of $X$ according to $\mathbb{P}$. Then we can relax the probabilistic reflection principle by asking that for some class of functions $X$, we have $\mathbb{E}_\mathbb{P}[E(\ulcorner X \urcorner)] = \mathbb{E}_\mathbb{P}[X]$, where $E$ is a symbol in the language of $T$ meant to represent $\mathbb{E}_\mathbb{P}$. Note that this notion of expectation-reflection is weaker than probabilistic reflection, since our distribution is now permitted to, for example, assign a bunch of probability mass to over- and under-estimates of $\mathbb{E}_\mathbb{P}[X]$, as long as they balance out.
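As a toy numeric illustration of why this is weaker (all names and numbers here are invented for illustration): a distribution over two completions in which the reported value of the expectation symbol over-estimates in one world and under-estimates in the other, yet the two errors cancel in expectation.

```python
# Two completions ("worlds") of a theory, one variable X.
# In w1 the symbol E reports an over-estimate of X's expectation,
# in w2 an under-estimate; with equal weights these balance out,
# so the distribution is expectation-reflective even though it is
# not probabilistically reflective (E is wrong in every world).
P = {"w1": 0.5, "w2": 0.5}          # weights on completions
X = {"w1": 0.0, "w2": 1.0}          # value of X in each world
E_sym = {"w1": 0.7, "w2": 0.3}      # what each world claims E(X) is

def expect(P, f):
    """Expectation of a variable f according to distribution P."""
    return sum(P[w] * f[w] for w in P)

true_val = expect(P, X)        # actual expectation of X: 0.5
reported = expect(P, E_sym)    # expectation of the symbol's value: 0.5
assert abs(true_val - reported) < 1e-12      # reflection in expectation
assert all(E_sym[w] != true_val for w in P)  # but wrong in each world
```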
Christiano asked whether it is possible to have a distribution $\mathbb{P}$ that satisfies this reflection principle, and also assigns probability 1 to the statement that $\mathbb{P}$ satisfies this reflection principle. This was not possible for strong probabilistic reflection, but it turns out to be possible for expectation reflection, for some choice of the class of functions $X$.
- Sketch of the approach
(This is a high level description of what we are doing, so many concepts will be left vague until later.)
Christiano et al. applied Kakutani's theorem to the space of coherent probability distributions $\mathbb{P}$. Instead we will work in the space of expectations over some theory $T$, where an expectation over a theory is, roughly speaking, a function from the set of variables provably defined by that theory into the intervals proved to bound each variable. These are essentially interchangeable with coherent probability distributions over $T$. The point of doing this is to make the language simpler; for example, reflection statements will mention a single symbol representing an expectation, rather than a complicated formula defining the expectation in terms of probability.
We will again apply Kakutani's theorem, now requiring that some expectation $\mathbb{F}$ reflects $\mathbb{E}$ only when $\mathbb{F}$ expects $E$ to behave like $\mathbb{E}$, and when $\mathbb{F}$ assigns some significant probability to the statement that $E$ is reflective. This confidence in reflection must increase the closer $\mathbb{E}$ is to being reflective. Then a fixed point of this correspondence will be expectation-reflective, and will assign probability 1 to $E$ being expectation-reflective.
The form of our correspondence will make most of the conditions of Kakutani's theorem straightforward. The main challenge will be to show non-emptiness, i.e. that there is some expectation that reflects a given $\mathbb{E}$ and believes in reflection to some extent. In the case of probabilistic reflection, this does not go through at all: if we reflect a non-reflective probability distribution exactly, we must assign probability 0 to reflection.
However, in the case of expectations, we can mix different expectations together while maintaining expectation-reflection, by carefully balancing the mixture. The main idea will be to take a distribution that believes in some reflective expectation $\mathbb{E}^r$, take another distribution that believes in some pseudo-expectation $\mathbb{E}^c$, and mix them together. The resulting mixture will somewhat expect $E$ to be reflective, since the first distribution expects this, and by a good choice of counterbalancing $\mathbb{E}^c$, will expect $E$ to behave like the given $\mathbb{E}$.
Before carrying out this approach, we need some formal notions and facts about expectations, given in Sections 3 and 4. Also, in order to be careful about what we mean by an expectation, a pseudo-expectation, and a variable, we will in Section 5 develop a base theory over which our distributions will be defined. Then Section 6 will give the main theorem, following the above sketch. Section 7 discusses the meaning of these results and extensions to definable reflection.
- Basic definitions and facts about expectations
We will work with probability distributions (or, in a moment, expectations) that are coherent over some base theory $T$ in a language that can talk about rationals and functions, and that has a symbol $E$.
Random variables for theories and their bounds
These notions are due to Fallenstein.
We are interested in taking expectations of quantities expressed in the language of $T$. This amounts to viewing a probability distribution $\mathbb{P}$ coherent over $T$ as a measure on the Stone space $\mathcal{S}(T)$ of completions of $T$, and then asking for the expectation

$\int_{\mathcal{S}(T)} X \, d\mathbb{P}.$
A natural choice for the kind of random variable to look at is the values definable over $T$, i.e. formulas $X(x)$ such that $T \vdash \exists ! x\, X(x)$. Then any completion $W$ of $T$ will make statements of the form $\exists x\, (X(x) \wedge a \le x \le b)$ for various rationals $a, b$, in a way consistent with $X$ holding on a unique real, and perhaps we can extract a value for the random variable $X$.
However, we have to be a little careful. If this is all that $T$ proves about $X$, then there will be completions of $T$ which, for every rational $b$, contain the statement $\exists x\, (X(x) \wedge x > b)$. Then there is no real number reasonably corresponding to $X$. Even if this is not an issue, there are distributions which assign non-negligible probabilities to a sequence of completions of $T$ that put quickly growing values on $X$, such that the integral does not exist.
Therefore we also require that $T$ proves some concrete bounds on the real numbers that can satisfy $X$. Then we will be able to extract values for $X$ from completions of $T$ and define the expectation of $X$ according to $\mathbb{P}$.
[Definition of bounded variables for $T$.]
For any consistent theory $T$, the set $\mathrm{Vars}(T)$ is the set of formulas $X(x)$ such that $T$ proves $X$ is well-defined and within some particular bounds, i.e. for some rationals $a \le b$:

$T \vdash \exists ! x\, X(x) \quad \text{and} \quad T \vdash \forall x\, (X(x) \to a \le x \le b).$
Elements of $\mathrm{Vars}(T)$ are called $T$-variables.
[Definition of $T$-bounds on variables.]
For $X \in \mathrm{Vars}(T)$, let $[a_X, b_X]$ be the complete bound put on $X$ by $T$, taking into account all bounds on $X$ proved by $T$, i.e.

$[a_X, b_X] := \bigcap \big\{ [a, b] \ :\ T \vdash \forall x\, (X(x) \to a \le x \le b) \big\}.$
Note that the $T$-bound on a variable is a well-defined non-empty closed interval: it is the intersection of non-disjoint closed intervals all contained in some rational interval, by the definition of $\mathrm{Vars}(T)$ and the fact that $T$ is consistent.
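This intersection can be sketched numerically; assuming a hypothetical finite list of proved interval bounds, the complete bound is just the max of the left endpoints paired with the min of the right endpoints.

```python
from fractions import Fraction

# Sketch: the complete bound on a variable is the intersection of all
# closed rational intervals [a, b] the theory proves contain it.
# The intervals are pairwise non-disjoint (the theory is consistent),
# so the intersection is a non-empty closed interval.
def complete_bound(intervals):
    lo = max(a for a, b in intervals)
    hi = min(b for a, b in intervals)
    assert lo <= hi, "inconsistent bounds"
    return lo, hi

# A hypothetical set of proved bounds on one variable:
proved = [(Fraction(0), Fraction(10)),
          (Fraction(1), Fraction(7)),
          (Fraction(2), Fraction(9))]
assert complete_bound(proved) == (Fraction(2), Fraction(7))
```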
Expectations and pseudo-expectations over a theory
The definition of pseudo-expectations and the theorem in Section 4 are based on a comment of Eisenstat.
Now we define expectations over a theory, analogously to probability distributions. Here linearity will play the role of coherence.
[Sum of two $T$-variables.] For $X, Y \in \mathrm{Vars}(T)$, we write $X + Y$ for the sum of the two variables, i.e.:

$(X + Y)(z) :\equiv \exists x\, \exists y\, (X(x) \wedge Y(y) \wedge z = x + y).$
Then $X + Y \in \mathrm{Vars}(T)$ for reasonable $T$.
[Expectations over a theory $T$.] An expectation $\mathbb{E}$ over a theory $T$ is a function $\mathbb{E} : \mathrm{Vars}(T) \to \mathbb{R}$ such that for all $X, Y \in \mathrm{Vars}(T)$:
(In $T$-bounds) $\mathbb{E}[X] \in [a_X, b_X]$, i.e. $\mathbb{E}$ takes values in the bounds proved by $T$, and

(Linear) $\mathbb{E}[X + Y] = \mathbb{E}[X] + \mathbb{E}[Y]$.

The set of expectations over $T$ is denoted $\mathrm{Exp}(T)$.
In order to carry out the counterbalancing argument described above, we need some rather extreme affine combinations of expectations; so extreme that they will not even be proper expectations. So we define pseudo-expectations analogously to expectations, but with much looser bounds on their values.
[Pseudo-expectations over a theory $T$.] A pseudo-expectation $\mathbb{E}$ over a theory $T$ is a function $\mathbb{E} : \mathrm{Vars}(T) \to \mathbb{R}$ such that for all $X, Y \in \mathrm{Vars}(T)$:
(Loosely in $T$-bounds) If $X$ has $T$-bound $[a, b]$, we have that $\mathbb{E}[X] \in [a - 2^n(b - a),\, b + 2^n(b - a)]$, where $n$ is the index of $X$ in a fixed enumeration of $\mathrm{Vars}(T)$ (this is the loose bound on $X$), and

(Linear) $\mathbb{E}[X + Y] = \mathbb{E}[X] + \mathbb{E}[Y]$.

The set of pseudo-expectations over $T$ is denoted $\mathrm{PExp}(T)$.
For any theories $S \subseteq T$, we have that $\mathrm{Exp}(T) \subseteq \mathrm{Exp}(S)$ and $\mathrm{PExp}(T) \subseteq \mathrm{PExp}(S)$. We are implicitly restricting elements of $\mathrm{Exp}(T)$ and $\mathrm{PExp}(T)$ to $\mathrm{Vars}(S)$ in these comparisons, and will do so freely in what follows. We take the product topology on both $\mathrm{Exp}(T)$ and $\mathrm{PExp}(T)$.
- Isomorphism of expectations and probability distributions
To actually construct elements of $\mathrm{Exp}(T)$, we will use a natural relationship between probability distributions and expectations over a theory, proved formally below to be an isomorphism. On the one hand, we can get a probability distribution from an expectation by taking the expectations of indicator variables for the truth of sentences; on the other hand, we can get an expectation from a probability distribution by integrating a variable over the Stone space of our theory.
[The value of a $W$-variable.] For a complete consistent theory $W$, the value of some $W$-variable $X$ is $X(W) := \sup\{ q \in \mathbb{Q} : W \vdash \exists x\, (X(x) \wedge x \ge q) \}$. Since $W$ is complete and proves bounds on $X$, this value is well-defined, and $X(W) \in [a_X, b_X]$.
For any theory $T$, there is a canonical isomorphism between $\mathrm{Exp}(T)$ and the space $\Delta(T)$ of coherent probability distributions over $T$, given by:

$\mathrm{pr}(\mathbb{E})(\varphi) := \mathbb{E}[\mathbb{1}_\varphi],$
where $\mathbb{1}_\varphi$ is the 0-1 valued indicator variable for the sentence $\varphi$, i.e. $\mathbb{1}_\varphi(x) :\equiv (\varphi \wedge x = 1) \vee (\neg\varphi \wedge x = 0)$. The alleged inverse is given by:

$\mathrm{ex}(\mathbb{P})[X] := \int_{\mathcal{S}(T)} X(W)\, d\mathbb{P}(W).$
Proof. By the previous discussion, $\mathrm{pr}$ and $\mathrm{ex}$ are well-defined in the sense that they return functions of the correct type.
By definition of $\mathrm{Vars}(T)$, the integrals in the definition of $\mathrm{ex}(\mathbb{P})$ are defined and within $T$-bounds. For any $X, Y \in \mathrm{Vars}(T)$ and any completion $W$ of $T$, we have that $(X + Y)(W) = X(W) + Y(W)$, and
thus $\mathrm{ex}(\mathbb{P})[X + Y] = \mathrm{ex}(\mathbb{P})[X] + \mathrm{ex}(\mathbb{P})[Y]$ by linearity of the integral, so $\mathrm{ex}(\mathbb{P})$ is linear and hence is an expectation.
For any sentence $\varphi$, we have that $T \vdash \forall x\, (\mathbb{1}_\varphi(x) \to 0 \le x \le 1)$, so since $\mathbb{E}$ is in $T$-bounds, $\mathrm{pr}(\mathbb{E})(\varphi) \in [0, 1]$. Similarly, for any partition of truth into three sentences, $T$ proves the indicators of those sentences have values summing to 1; so $\mathbb{E}$ assigns values to their indicators summing to 1, using linearity a few times and the fact that $\mathbb{E}$ assigns the same value to variables $X, Y$ with $T \vdash X = Y$.
This last fact follows by considering the $T$-bound on the variable $Y - X$. Linearity gives that $\mathbb{E}[X] + \mathbb{E}[Y - X] = \mathbb{E}[Y]$, so $\mathbb{E}[Y] - \mathbb{E}[X] = \mathbb{E}[Y - X]$. If $T \vdash X = Y$ this gives the $T$-bound $[0, 0]$ on $Y - X$, so that $\mathbb{E}[Y - X] = 0$ and in general $\mathbb{E}[X] = \mathbb{E}[Y]$, as desired.
For any $\mathbb{P}$ and any sentence $\varphi$, we have

$\mathrm{pr}(\mathrm{ex}(\mathbb{P}))(\varphi) = \mathrm{ex}(\mathbb{P})[\mathbb{1}_\varphi] = \int_{\mathcal{S}(T)} \mathbb{1}_\varphi(W)\, d\mathbb{P}(W) = \mathbb{P}(\varphi),$
since any completion $W$ of $T$ with $\varphi \in W$ also has $\mathbb{1}_\varphi(W) = 1$, and any completion $W$ with $\neg\varphi \in W$ also has $\mathbb{1}_\varphi(W) = 0$.
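The two directions of the correspondence, a distribution from an expectation via indicators, and an expectation from a distribution via integration, can be sketched on a finite toy model (three "worlds" standing in for completions; all names and numbers are illustrative):

```python
# Finite sketch of the pr/ex correspondence: a "theory" with three
# complete extensions, a variable X with a value in each world, and a
# sentence phi true in some worlds. ex(P) integrates variables over
# worlds; pr(E) evaluates E on 0-1 indicator variables.
worlds = ["w1", "w2", "w3"]
P = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
X = {"w1": 1.0, "w2": 2.0, "w3": 4.0}

def ex(P):                      # expectation from a distribution
    return lambda var: sum(P[w] * var[w] for w in worlds)

def indicator(phi):             # 0-1 variable for a sentence
    return {w: 1.0 if phi(w) else 0.0 for w in worlds}

def pr(E):                      # distribution from an expectation
    return lambda phi: E(indicator(phi))

E = ex(P)
phi = lambda w: w != "w1"       # a sentence true in w2 and w3
assert abs(pr(E)(phi) - 0.8) < 1e-12   # pr(ex(P)) recovers P(phi)
assert abs(E(X) - 2.8) < 1e-12         # 0.2*1 + 0.3*2 + 0.5*4
```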
To see that $\mathrm{pr}$ is continuous, take a sub-basis open subset of $\Delta(T)$: the set of distributions assigning probability in $(p, q)$ to the sentence $\varphi$. The preimage of this set is the set of expectations $\mathbb{E}$ with $\mathbb{E}[\mathbb{1}_\varphi] \in (p, q)$, which is an open subset of $\mathrm{Exp}(T)$.
Take any $\mathbb{E} \in \mathrm{Exp}(T)$. We want to show that

$\mathrm{ex}(\mathrm{pr}(\mathbb{E}))[X] = \mathbb{E}[X]$
for all $X \in \mathrm{Vars}(T)$. In the following we will repeatedly apply linearity and the fact shown above that $\mathbb{E}$ respects provable equality of variables. Take such an $X$ and assume for clarity that the $T$-bound of $X$ is $[0, 1]$. Then for any $k$, we have that

$T \vdash \textstyle\sum_{i=0}^{k-1} \frac{i}{k}\, \mathbb{1}_{X \in I_i} \le X, \qquad \text{where } I_i := [\tfrac{i}{k}, \tfrac{i+1}{k}) \text{ for } i < k - 1, \text{ and } I_{k-1} := [\tfrac{k-1}{k}, 1].$
Note that the last interval in these sums is closed instead of half-open. Since $T$ proves that $X - \sum_i \frac{i}{k}\, \mathbb{1}_{X \in I_i}$ is non-negative,

$\mathbb{E}\big[ X - \textstyle\sum_i \frac{i}{k}\, \mathbb{1}_{X \in I_i} \big] \ge 0, \quad \text{i.e.} \quad \mathbb{E}[X] \ge \textstyle\sum_i \frac{i}{k}\, \mathbb{E}[\mathbb{1}_{X \in I_i}].$
By the arguments given earlier,

$\mathbb{E}[\mathbb{1}_{X \in I_i}] = \mathrm{pr}(\mathbb{E})(X \in I_i).$
Hence $\mathbb{E}[X] \ge \sum_i \frac{i}{k}\, \mathrm{pr}(\mathbb{E})(X \in I_i)$. As $k \to \infty$, the right side converges to the Lebesgue integral of $X$ with respect to $\mathrm{pr}(\mathbb{E})$. Combining this with a similar argument giving an upper bound on $\mathbb{E}[X]$, we have that

$\mathbb{E}[X] = \int_{\mathcal{S}(T)} X \, d\,\mathrm{pr}(\mathbb{E}) = \mathrm{ex}(\mathrm{pr}(\mathbb{E}))[X].$
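The slicing argument can be simulated numerically. Here a hypothetical variable takes values in $[0,1]$ across many equally weighted worlds, and its expectation is approached from below by the lower sums as the number of slices grows:

```python
import random

# Sketch of the slicing argument: approximate the expectation of a
# [0,1]-bounded variable by weighting the probabilities that it lands
# in each of k small intervals. The lower sum approaches the integral
# from below as k grows, with error at most 1/k.
random.seed(0)
worlds = [random.random() for _ in range(1000)]  # X's value per world
true_E = sum(worlds) / len(worlds)

def lower_sum(values, k):
    total = 0.0
    for i in range(k):
        lo, hi = i / k, (i + 1) / k
        # last interval closed instead of half-open, as in the proof
        p = sum(1 for v in values
                if lo <= v < hi or (i == k - 1 and v == 1.0)) / len(values)
        total += (i / k) * p
    return total

assert lower_sum(worlds, 10) <= true_E           # always a lower bound
assert true_E - lower_sum(worlds, 1000) < 0.002  # error below 1/k
```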
Finally, to see that $\mathrm{ex}$ is continuous, take a sub-basis open set in $\mathrm{Exp}(T)$: the set of expectations assigning a value in $(p, q)$ to the variable $X$. Let $\mathbb{P}$ be a probability distribution with $\mathrm{ex}(\mathbb{P})[X] \in (p, q)$. As in the previous part of the proof, we can cut the $T$-bound of $X$ into finitely many very small intervals. Then any probability distribution that assigns probabilities sufficiently close to those assigned by $\mathbb{P}$ to the indicators for $X$ being in those small intervals will have an expectation for $X$ that is also inside $(p, q)$. This works out to an open set around $\mathbb{P}$, so that the preimage of the sub-basis open set is a union of open sets.
- A base theory that accommodates reflection variables
So, we have a dictionary between distributions and expectations. This will let us build expectations by completing theories and taking expectations according to the resulting 0-1 valued distribution.
Some preparatory work remains, because in order to have the reflection principle $\mathbb{E}[E(\ulcorner X \urcorner)] = \mathbb{E}[X]$, we at least want $E(\ulcorner X \urcorner)$ to be a variable whenever $X$ is. Thus we will need a theory that bounds $E(\ulcorner X \urcorner)$ whenever it bounds $X$. However, in order to make extreme mixes of elements of $\mathrm{Exp}(T)$ possible to reflect into an expectation over $T$, we will need that all elements of $\mathrm{PExp}(T)$ are valid interpretations of $E$ for $T$.
Stratified definition of the base theory
We start with a theory, such as $\mathrm{ZFC}$, that is strong enough to talk about rational numbers and so on. We add to the language a symbol $E$ that will represent an expectation. We also add the sentence stating that $E$ is a partial function from formulas to real numbers, and that $E$ is linear at $X$ and $Y$ if it happens to be defined on $X$, $Y$, and $X + Y$. This gives the theory $T_0$.
Now define inductively the theories $T_{n+1}$:

$T_{n+1} := T_n \cup \big\{ \square_{T_n}\big( X \in \mathrm{Vars} \text{ with bound } [a, b] \big) \to E(\ulcorner X \urcorner) \text{ is defined and lies in the loose bound around } [a, b] \big\}.$
In English, this says that $T_{n+1}$ is $T_n$ along with the statement that whenever $T_n$ proves that some $X$ is well-defined and bounded in some interval $[a, b]$, then it is the case that $E$ is defined on $X$ and $E(\ulcorner X \urcorner)$ is inside the much looser bound around $[a, b]$. Intuitively, we are adding $X$ into the domain of $E$ whenever $X \in \mathrm{Vars}(T_n)$, but we are not restricting its value very much at all. The form of the loose bound on $E$ is an artifact of the metric we will later put on expectations.
Finally, we define the base theory we will use in the main argument as the limit of the $T_n$, that is: $T_\omega := \bigcup_n T_n$. Note that $T_\omega$ is at least (exactly?) as strong as $T_0$ with $\omega$-iterated consistency statements, since the loose bounds coincide with the true bounds when the true bound is a single point. Also note that it is important that $T_0$ is arithmetically sound, or else $T_\omega$ may believe in nonstandard proofs and hence put inconsistent bounds on $E$. I think this restriction could be avoided by making the statement in $T_{n+1}$ into a schema over specific standard naturals that might be proofs.
Soundness of $T_\omega$ over pseudo-expectations
We will be applying Kakutani's theorem to the space $\mathrm{Exp}(T_\omega)$, and making forays into $\mathrm{PExp}(T_\omega)$. So we want $T_\omega$ to at least be consistent, so that $\mathrm{Exp}(T_\omega)$ is nonempty, and furthermore we want to allow $E$ to be interpreted by anything in $\mathrm{PExp}(T_\omega)$.
Recall that a (pseudo)expectation over a theory $T$ is a function on $\mathrm{Vars}(T)$ that is linear, and such that given $X$ with $T$-bound $[a, b]$, we have that $\mathbb{E}[X] \in [a, b]$ (or, for pseudo-expectations, that $\mathbb{E}[X]$ is in the loose bound). As noted before, for any theories $S \subseteq T$, we have that $\mathrm{Exp}(T) \subseteq \mathrm{Exp}(S)$ and $\mathrm{PExp}(T) \subseteq \mathrm{PExp}(S)$, where we are restricting elements of $\mathrm{Exp}(T)$ and $\mathrm{PExp}(T)$ to $\mathrm{Vars}(S)$.
For any consistent theory $T$, $\mathrm{Exp}(T)$ is nonempty.
This follows from the isomorphism in the previous section: we take a completion $W$ of $T$, which (as a 0-1 valued measure) is a coherent probability distribution over $T$, and then take expectations according to $W$. That is, $\mathrm{ex}(W) \in \mathrm{Exp}(T)$.
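In miniature, taking expectations according to a completion just reads off each variable's value in that completion (the table of values below is a hypothetical stand-in for a completion):

```python
# Sketch: a completion W of T is a 0-1 valued coherent distribution,
# so integrating against it is integration against a point mass, and
# the resulting expectation reads off each variable's value in W.
W = {"X": 3.0, "Y": 4.0, "X+Y": 7.0}   # variable values in one completion

def ex_of_completion(W):
    return lambda name: W[name]         # integral against a point mass

E = ex_of_completion(W)
assert E("X+Y") == E("X") + E("Y")      # linearity holds trivially
```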
We assume that we have some standard model for the theory over which $T_0$ was constructed. For concreteness we take that theory to be $\mathrm{ZFC}$, and we take the standard model to be the cumulative hierarchy $V$.
$\mathrm{Exp}(T_\omega)$ is nonempty, and for all $\mathbb{E} \in \mathrm{PExp}(T_\omega)$, we have that $(V, \mathbb{E}) \models T_\omega$.
(To follow the proof, keep in mind the distinction between being a (pseudo)expectation over a theory, versus providing a model for a theory.)
Proof. The claim is true for $T_0$ in place of $T_\omega$, since $T_0$ is consistent and places no restrictions other than linearity on $E$.
Say the claim holds for $T_n$, so $\mathrm{Exp}(T_n)$ is non-empty. For any $\mathbb{E} \in \mathrm{PExp}(T_n)$, by hypothesis $(V, \mathbb{E}) \models T_n$. Also, by definition of $T_{n+1}$, $\mathbb{E}$ satisfies that whenever $T_n$ bounds $X$ in $[a, b]$, also $\mathbb{E}[X]$ lies in the corresponding loose bound. Hence $(V, \mathbb{E}) \models T_{n+1}$. Thus $T_{n+1}$ is consistent. Since $\mathrm{PExp}(T_{n+1}) \subseteq \mathrm{PExp}(T_n)$, this also shows that for all $\mathbb{E} \in \mathrm{PExp}(T_{n+1})$, we have $(V, \mathbb{E}) \models T_{n+1}$.
By induction the claim holds for all $T_n$, and hence $T_\omega$ is consistent and $\mathrm{Exp}(T_\omega)$ is nonempty. Since $\mathrm{PExp}(T_\omega) \subseteq \mathrm{PExp}(T_n)$ for all $n$, for any $\mathbb{E} \in \mathrm{PExp}(T_\omega)$ we have $(V, \mathbb{E}) \models T_n$ for every $n$, and hence $(V, \mathbb{E}) \models T_\omega$.
- Main theorem: reflection and assigning probability 1 to reflection
We have a theory $T_\omega$ that is consistent, so that $\mathrm{Exp}(T_\omega)$ is nonempty, and that is sound over all pseudo-expectations. We want an expectation that is reflective, and also believes that it is reflective. First we formalize this notion and show that there are reflective expectations.
Existence of reflective expectations
Define the sentence

$\mathrm{ref} :\equiv \forall X:\ E(\ulcorner X \urcorner)\!\downarrow\ \to\ E(\ulcorner E(\ulcorner X \urcorner) \urcorner)\!\downarrow\ \wedge\ E(\ulcorner E(\ulcorner X \urcorner) \urcorner) = E(\ulcorner X \urcorner).$
This says that whenever $E$ is defined on some variable, it expects $E$ to take some value on that variable, and it expects the correct value. In short, its expectations about its expectations are correct. Define $\mathrm{Refl}(T_\omega)$ to be the set of reflective expectations over $T_\omega$, i.e. those $\mathbb{E} \in \mathrm{Exp}(T_\omega)$ with $\mathbb{E}[E(\ulcorner X \urcorner)] = \mathbb{E}[X]$ for all $X \in \mathrm{Vars}(T_\omega)$.
Some observations: the spaces $\mathrm{Exp}(T)$ and $\mathrm{PExp}(T)$ are all compact, as they are closed subsets of the product of the loose bounds on variables, that product being a compact space. Both $\mathrm{Exp}(T)$ and $\mathrm{PExp}(T)$ are convex, as linearity and being in bounds are preserved by convex combinations. (For the same reason, $\mathrm{Refl}(T_\omega)$ is convex, and is in fact an affine subspace of $\mathrm{Exp}(T_\omega)$.)
Claim: $\mathrm{Refl}(T_\omega)$ is nonempty. Proof. We apply Kakutani's theorem to the correspondence on $\mathrm{Exp}(T_\omega)$ where $\mathbb{F}$ corresponds to $\mathbb{E}$ when $\mathbb{F}[E(\ulcorner X \urcorner)] = \mathbb{E}[X]$ for all $X \in \mathrm{Vars}(T_\omega)$. The set of $\mathbb{F}$ corresponding to a given $\mathbb{E}$ is compact and convex, and the graph is closed. For any $\mathbb{E}$ there is a corresponding $\mathbb{F}$: we take an expectation over the theory

$T_\mathbb{E} := T_\omega \cup \big\{\, q < E(\ulcorner X \urcorner) < p \ :\ X \in \mathrm{Vars}(T_\omega),\ q, p \in \mathbb{Q},\ q < \mathbb{E}[X] < p \,\big\},$
stating that $E$ behaves according to $\mathbb{E}$. This theory is consistent because $(V, \mathbb{E})$ provides a model. Any completion of it puts the value $\mathbb{E}[X]$ on $E(\ulcorner X \urcorner)$, so the resulting expectation corresponds to $\mathbb{E}$. Kakutani's theorem gives a fixed point of this correspondence, which is in $\mathrm{Refl}(T_\omega)$.
The correspondence $f$: exact reflection, and assigning high probability to reflection for distributions close to reflective
We can't simply take a correspondence that also requires $\mathbb{F}$ to assign probability 1 to $\mathrm{ref}$; in general there would not be any such expectation corresponding to a non-reflective $\mathbb{E}$. Instead we will soften this requirement, and only require that $\mathbb{F}$'s confidence in $\mathrm{ref}$ approach 1 as $\mathbb{E}$ approaches being reflective, in order for $\mathbb{F}$ to reflect $\mathbb{E}$.
Define a metric on $\mathrm{PExp}(T_\omega)$ by

$d(\mathbb{E}, \mathbb{F}) := \sum_{n} 2^{-n}\, \frac{\big| \mathbb{E}[X_n] - \mathbb{F}[X_n] \big|}{b_n - a_n},$

where $X_n$ enumerates $\mathrm{Vars}(T_\omega)$ and $[a_n, b_n]$ is the $T_\omega$-bound of $X_n$.
(If $a_n = b_n$ then the coordinate $X_n$ plays no role in the metric, by fiat.)
The factor of $2^{-n}$ ensures that the metric will converge, since the factor of $\frac{1}{b_n - a_n}$ corrects the projection of $\mathbb{E}$ in each coordinate to lie in an interval of bounded length.
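A finite-coordinate sketch of such a metric, assuming (as described) a weight of $2^{-n}$ per coordinate and normalization by the length of each variable's bound, with degenerate coordinates skipped by fiat:

```python
# Toy metric on expectations with three coordinates; the bounds and
# values are invented for illustration. An expectation is represented
# as a list of values, one per variable.
bounds = [(0.0, 1.0), (2.0, 6.0), (3.0, 3.0)]   # T-bounds of X_0, X_1, X_2

def dist(E, F):
    total = 0.0
    for n, (a, b) in enumerate(bounds):
        if a == b:
            continue          # degenerate coordinate plays no role
        total += 2.0 ** (-n) * abs(E[n] - F[n]) / (b - a)
    return total

E = [0.5, 4.0, 3.0]
F = [1.0, 2.0, 3.0]
# coordinate 0 contributes 1 * 0.5/1 = 0.5; coordinate 1 contributes
# 0.5 * 2/4 = 0.25; coordinate 2 is skipped.
assert abs(dist(E, F) - 0.75) < 1e-12
```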
We abbreviate $d(\mathbb{E}, \mathrm{Refl}(T_\omega))$ to mean the distance from $\mathbb{E}$ to the nearest element of the set $\mathrm{Refl}(T_\omega)$. Since $\mathrm{Refl}(T_\omega)$ is compact, this is well-defined and continuous on $\mathrm{PExp}(T_\omega)$.
For $\mathbb{E}, \mathbb{F} \in \mathrm{Exp}(T_\omega)$, we say that $\mathbb{F}$ reflects $\mathbb{E}$, and we write $\mathbb{F} \in f(\mathbb{E})$, precisely when:
$\mathbb{F}$ expects $E$ to behave just like $\mathbb{E}$, i.e. $\mathbb{F}[E(\ulcorner X \urcorner)] = \mathbb{E}[X]$ for all $X \in \mathrm{Vars}(T_\omega)$, and
$\mathbb{F}$ is somewhat confident that $E$ is reflective, specifically $\mathbb{F}[\mathbb{1}_{\mathrm{ref}}] \ge 1 - d(\mathbb{E}, \mathrm{Refl}(T_\omega))$.
Fixed points of the correspondence are reflective and believe they are reflective
Say $\mathbb{E} \in f(\mathbb{E})$. Then $\mathbb{E}[E(\ulcorner X \urcorner)] = \mathbb{E}[X]$ for all $X$, so $\mathbb{E} \in \mathrm{Refl}(T_\omega)$, by definition of $\mathrm{Refl}(T_\omega)$. In particular, $d(\mathbb{E}, \mathrm{Refl}(T_\omega)) = 0$, so that $\mathbb{E}[\mathbb{1}_{\mathrm{ref}}] = 1$, and $\mathrm{pr}(\mathbb{E})$ is the desired distribution.
Compact and convex images; closed graph
For a fixed $\mathbb{E}$, the conditions for $\mathbb{F} \in f(\mathbb{E})$ just pick out closed subintervals in some coordinates, so $f(\mathbb{E})$ is compact and convex.
Consider a sequence of expectations $\mathbb{E}_i$ converging to $\mathbb{E}$, and expectations $\mathbb{F}_i \in f(\mathbb{E}_i)$ converging to $\mathbb{F}$. For $X \in \mathrm{Vars}(T_\omega)$, since $\mathbb{F}_i[E(\ulcorner X \urcorner)] = \mathbb{E}_i[X]$, we have $\mathbb{F}[E(\ulcorner X \urcorner)] = \mathbb{E}[X]$. Also, since $\mathbb{F}_i[\mathbb{1}_{\mathrm{ref}}] \ge 1 - d(\mathbb{E}_i, \mathrm{Refl}(T_\omega))$, we have that the $\mathbb{F}_i[\mathbb{1}_{\mathrm{ref}}]$ converge to something at least $1 - d(\mathbb{E}, \mathrm{Refl}(T_\omega))$, so $\mathbb{F} \in f(\mathbb{E})$. Thus the graph of $f$ is closed.
Images of the correspondence are nonempty: interpolating reflective and pseudo-expectations
Finally, we need to show that for any $\mathbb{E} \in \mathrm{Exp}(T_\omega)$, there is some $\mathbb{F} \in \mathrm{Exp}(T_\omega)$ such that $\mathbb{F} \in f(\mathbb{E})$. (The case distinction is just for explanatory purposes.)
Case 1. $d(\mathbb{E}, \mathrm{Refl}(T_\omega)) \ge 1$.
Recall the theory $T_\mathbb{E}$ stating that $E$ behaves according to $\mathbb{E}$. By the theorem about $T_\omega$, $(V, \mathbb{E}) \models T_\omega$, and this model also satisfies the statements pinning $E$ to the values of $\mathbb{E}$. Thus that theory is consistent, so we can take some $\mathbb{F} \in \mathrm{Exp}(T_\mathbb{E})$. This $\mathbb{F}$ expects $E$ to behave like $\mathbb{E}$, and $\mathbb{F}[\mathbb{1}_{\mathrm{ref}}] \ge 0 \ge 1 - d(\mathbb{E}, \mathrm{Refl}(T_\omega))$.
Case 2. $d(\mathbb{E}, \mathrm{Refl}(T_\omega)) < 1$.
Pick some $\mathbb{E}^r \in \mathrm{Refl}(T_\omega)$ with $d(\mathbb{E}, \mathbb{E}^r) = d(\mathbb{E}, \mathrm{Refl}(T_\omega)) =: \mu$. As in the previous case, find some $\mathbb{F}^r$ that expects $E$ to behave like $\mathbb{E}^r$; since $\mathbb{E}^r$ is reflective, we can take $\mathbb{F}^r$ with $\mathbb{F}^r[\mathbb{1}_{\mathrm{ref}}] = 1$. We will define $\mathbb{F}$ with $\mathbb{F} \in f(\mathbb{E})$ by taking a convex combination of $\mathbb{F}^r$ with another expectation $\mathbb{F}^c$:

$\mathbb{F} := (1 - \mu)\, \mathbb{F}^r + \mu\, \mathbb{F}^c.$
By convexity, $\mathbb{F} \in \mathrm{Exp}(T_\omega)$, and since $\mathbb{F}^r[\mathbb{1}_{\mathrm{ref}}] = 1$, we will have $\mathbb{F}[\mathbb{1}_{\mathrm{ref}}] \ge 1 - \mu = 1 - d(\mathbb{E}, \mathrm{Refl}(T_\omega))$ as desired.
However, we also need $\mathbb{F}[E(\ulcorner X \urcorner)] = \mathbb{E}[X]$ for all $X$. That is, we need

$(1 - \mu)\, \mathbb{F}^r[E(\ulcorner X \urcorner)] + \mu\, \mathbb{F}^c[E(\ulcorner X \urcorner)] = \mathbb{E}[X],$

i.e.

$\mathbb{F}^c[E(\ulcorner X \urcorner)] = \frac{\mathbb{E}[X] - (1 - \mu)\, \mathbb{E}^r[X]}{\mu} =: \mathbb{E}^c[X],$

where $\mathbb{F}^c$ believes that $E$ behaves like $\mathbb{E}^c$. We take the last line to be the definition of $\mathbb{E}^c$.
In general, this function $\mathbb{E}^c$ is not in $\mathrm{Exp}(T_\omega)$. It may be that $\mu$ is very small, but for some large $n$, $|\mathbb{E}[X_n] - \mathbb{E}^r[X_n]|$ is large relative to $\mu$ while the weight $2^{-n}$ is small, so that $\mathbb{E}^c[X_n]$ is very large and actually outside of the bound $[a_n, b_n]$, and hence $\mathbb{E}^c$ is not an expectation. However, $\mathbb{E}^c$ is, in fact, a pseudo-expectation over $T_\omega$:
where the claim is that $\mathbb{E}^c[X_n]$ lies in the loose bound on $X_n$. Indeed, since $\mu = d(\mathbb{E}, \mathbb{E}^r) \ge 2^{-n}\, |\mathbb{E}[X_n] - \mathbb{E}^r[X_n]| / (b_n - a_n)$, we have

$\big| \mathbb{E}^c[X_n] - \mathbb{E}[X_n] \big| = \frac{1 - \mu}{\mu}\, \big| \mathbb{E}[X_n] - \mathbb{E}^r[X_n] \big| \le 2^n (b_n - a_n),$

and $\mathbb{E}[X_n] \in [a_n, b_n]$, so $\mathbb{E}^c[X_n] \in [a_n - 2^n(b_n - a_n),\, b_n + 2^n(b_n - a_n)]$.
Therefore $\mathbb{E}^c \in \mathrm{PExp}(T_\omega)$. By the theorem on $T_\omega$, $(V, \mathbb{E}^c) \models T_\omega$, so that the theory stating that $E$ behaves according to $\mathbb{E}^c$ is consistent, and we obtain $\mathbb{F}^c \in \mathrm{Exp}(T_\omega)$ that expects $E$ to behave like $\mathbb{E}^c$. Then $\mathbb{F} = (1 - \mu)\, \mathbb{F}^r + \mu\, \mathbb{F}^c$ is in $\mathrm{Exp}(T_\omega)$, expects $E$ to behave like $\mathbb{E}$, and has $\mathbb{F}[\mathbb{1}_{\mathrm{ref}}] \ge 1 - \mu$. That is, $\mathbb{F} \in f(\mathbb{E})$.
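The counterbalancing arithmetic can be checked numerically on a toy pair of coordinates (the numbers, and the value of the mixing weight, are invented for illustration):

```python
# Counterbalancing sketch: to build a mixture that expects E to behave
# like a given expectation, put weight (1 - mu) on beliefs pinned to a
# nearby reflective expectation E_r and weight mu on beliefs pinned to
# the counterbalancing pseudo-expectation E_c = (E - (1-mu)*E_r) / mu.
mu = 0.1                 # distance to the reflective set (illustrative)
E_target = [0.50, 4.00]  # the given expectation, on two variables
E_r = [0.48, 4.30]       # a nearby reflective expectation

E_c = [(e - (1 - mu) * er) / mu for e, er in zip(E_target, E_r)]

# The mixture recovers the target exactly, coordinate by coordinate:
mixed = [(1 - mu) * er + mu * ec for er, ec in zip(E_r, E_c)]
assert all(abs(m - e) < 1e-9 for m, e in zip(mixed, E_target))

# But E_c itself can escape the variable's proved bound: here
# E_c[1] = (4.0 - 0.9*4.3)/0.1 = 1.3, which would fall outside a
# hypothetical bound [2, 6], so E_c is only a pseudo-expectation.
assert abs(E_c[1] - 1.3) < 1e-9
```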
The conditions of Kakutani's theorem are satisfied, so there is a fixed point $\mathbb{E} \in f(\mathbb{E})$, and therefore we have an expectation that believes $E$ behaves like itself, and that assigns probability 1 to $E$ having this property.
Extension to belief in any generic facts about $E$
The above argument goes through in exactly the same way for any statement $\theta$ that is satisfied by all reflective expectations; we just have $\mathbb{F}$ also assign probability 1 to $\theta$, and modify $f$ by adding a condition for $\theta$ analogous to that for $\mathrm{ref}$. For example, we can have our reflective $\mathbb{E}$ assign probability 1 to a statement expressing that $E$'s values on indicator variables are coherent, which is analogous to an inner coherence principle.
I think that if the base theory is strong enough to prove the isomorphism between expectations and probability distributions, then this whole argument can be carried out with $E$ defined in terms of $P$, a symbol for a probability distribution, and so we get a probability distribution over the original language with the desired beliefs about itself as a probability distribution.
I think it should be possible to have a distribution that is expectation-reflective in the above sense and is also definable and reflective for its definition, using the methods from this post. But it doesn't seem as straightforward here. One strategy might be to turn the sentence in the definition of $T_{n+1}$, stating that $E$ is in the loose $T_n$-bounds on variables, into a schema, and to diagonalize at once against all the refuting finite behaviors. But the proof of soundness of $T_\omega$ over pseudo-expectations, diagonalizing also against refuting finite behaviors in conjunction with reflectiveness, seems to require a little more work (and may be false).
It would be nice to have a good theory of logical probability. The existence proof of an expectation-reflective distribution given here shows that expectation-reflection is a desideratum that might be achievable in a broader context (i.e. in conjunction with other desiderata).
I don't know what class of variables a $T_\omega$-reflective $\mathbb{E}$ is reflective for. Universes that use $E$ in a way that only looks at $E$'s opinions on variables in $\mathrm{Vars}(T_n)$ for some $n$, and that are defined and uniformly bounded whenever $E$ is interpreted by anything in $\mathrm{PExp}(T_\omega)$, will be reflected accurately. If the universe looks at all of $E$, and for instance does something crazy if $E$ is not in $\mathrm{Exp}(T_\omega)$, then $T_\omega$ may not be able to prove that the universe is a well-defined, bounded variable, and it may not be reflected accurately.