Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Previously, we discussed the construction of logical counterfactuals in the language of optimal predictors. These counterfactuals were found to be well-behaved when a certain non-degeneracy condition is met, a condition which can be understood as a bound on the agent's ability to predict itself. We also demonstrated that the desired game-theoretic behavior seems to require randomization (thermalizing instead of maximizing), and that this randomization has to be logical randomization in order to implement metathreat game theory via logical counterfactuals. Both considerations suggest that the agent has to pseudorandomize (randomize in the logical uncertainty sense) its own behavior. Here, we show how to implement this pseudorandomization and prove that it indeed guarantees the non-degeneracy condition.

Results

The proofs of the results are given in Appendix A.

Motivation

We start by describing the analogous construction in classical probability theory.

Fix . Denote . Suppose is a probability distribution on , i.e. . We want to construct a procedure that samples according to the probabilities .

Suppose we are given , a uniformly distributed -valued random variable. Then we can select by

It is easy to see that implements the desired sampling, i.e. .
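For concreteness, here is a minimal Python sketch of this inverse-transform construction (the identifiers `p`, `u` and `sample_index` are illustrative, not notation used elsewhere in this post):

```python
import random

def sample_index(p, u=None):
    """Sample index k with probability p[k], given a uniform draw u in [0, 1).

    Inverse-transform sampling: return the first index at which the running
    cumulative sum of p exceeds u.
    """
    if u is None:
        u = random.random()
    cumulative = 0.0
    for k, p_k in enumerate(p):
        cumulative += p_k
        if u < cumulative:
            return k
    return len(p) - 1  # guard against floating-point rounding when sum(p) == 1
```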


In order to translate the above into logical uncertainty, we need a source of pseudorandomness that can serve as the counterpart of . The following definition establishes the desiderata for this object.

Definition

Fix an error space and . Consider , a probability measure on , a word ensemble and . is said to be -distributed -pseudorandom within precision when for any -valued -bischeme , uniformly bounded family of functions and sequence s.t. is Lipschitz continuous with constant , there is s.t.

Note

Compare the above to the previously given definition of an "irreducible" estimation problem.


We now give a basic existence result for pseudorandom functions by establishing that a random function is pseudorandom.

Construction

Consider and . is defined to be the set of bounded functions s.t.

Proposition 1

Consider and . If there exists s.t. then is an error space. If there exists s.t. then is an ample error space.

Proposition 2

For any , .

Theorem 1

For any there is s.t. the following holds. Consider a probability measure on and a word ensemble. Let be generated by independently sampling for each . Suppose , and are s.t. and

Then is -distributed -pseudorandom within precision with probability 1.


Finally, we show pseudorandomization guarantees non-degeneracy.

Notation

Given measurable space and , will denote a Markov kernel with source and target . For each , will refer to the corresponding -valued random variable.

Theorem 2

Fix . Consider an error space, , a word ensemble, , and . Suppose that for any , and with probability 1 and that is uniform -pseudorandom within precision . Define by

Assume is a symmetric -orthogonal predictor for . Let be the lowest eigenvalue of . Then there is s.t.

Corollary

In the setting of Theorem 2, assume . Then there is a symmetric -optimal predictor for s.t. its lowest eigenvalue is at least .

Appendix A

Propositions 1 and 2 are obvious, and we omit the proofs.

Construction A

Given , let be the set of functions given by or for , .

Proof of Theorem 1

Consider any continuous function . For any and , we have

Denote the probability measure from which is sampled. That is, is the completion of . Denote .

By Hoeffding's inequality,
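For reference, the classical form of Hoeffding's inequality for i.i.d. random variables $X_1, \ldots, X_N$ taking values in $[0,1]$ (stated here generically, not in the notation of this proof) is

$$\Pr\left[\left|\frac{1}{N}\sum_{i=1}^{N} X_i - \mathbb{E}[X_1]\right| \ge t\right] \le 2\exp(-2Nt^2) \quad \text{for all } t > 0.$$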

For some we have . Denote . For any we have

For any , we know that

It follows that with -probability 1

This implies that for any -valued -bischeme

Consider a uniformly bounded family of functions and sequence s.t. is Lipschitz continuous with constant . By Corollary B (see Appendix B) there are , and s.t. , and .

Proposition A.1

Fix an error space and . Consider , a probability measure on , a word ensemble and . Assume is -distributed -pseudorandom within precision . Then, for any -valued -bischeme , uniformly bounded family of functions and sequence s.t. is Lipschitz continuous with constant , there is s.t.

Proof of Proposition A.1

The definition of pseudorandomness implies that for any and as above there is an -moderate function s.t. for any ,

This easily implies the desired result.

Proof of Theorem 2

Since it is possible to find a lowest eigenvector of a symmetric matrix in time polynomial in the number of significant digits, there is a -valued -bischeme s.t. and for . For any bounded -valued -bischeme with we have

Denote . Let be an -valued -bischeme s.t. . We have and in particular if is s.t. then .

For any , , denote , . Define by

is Lipschitz continuous with constant .

only depends on the first variables. As a function of these variables, its graph is a pyramid of height 1 whose base is the box . It follows that .
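For illustration, one concrete function with these properties (the names $a_i$, $\epsilon$ and $m$ here are illustrative, not the notation of the proof) is

$$g(x) = \max\left(0,\; 1 - \max_{1 \le i \le m} \frac{|x_i - a_i|}{\epsilon}\right),$$

which depends only on the first $m$ coordinates, equals $1$ at the apex $x_i = a_i$, vanishes outside the box $\prod_{i=1}^{m}[a_i - \epsilon, a_i + \epsilon]$, and therefore has integral at most $(2\epsilon)^m$.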

For any -valued -bischeme we can apply Proposition A.1 to get s.t.

We have . Assuming , we get s.t.

It is easy to see that for any and any , if then . Hence . We conclude

Since can be approximated by a -bischeme up to an error in , there is s.t.

Proposition A.2

Fix an error space . Consider a distributional estimation problem and a corresponding -orthogonal predictor . Suppose is a bounded -valued -bischeme s.t. (denote ) and . Then is an -optimal predictor for .

Proof of Proposition A.2

Consider a bounded -valued -bischeme with . We have

Proof of Corollary

Given a field , let be the space of symmetric matrices over . Given , denote the following function. Given , let be an orthonormal basis of eigenvectors for and be the corresponding eigenvalues. Then, . As is easy to see, the definition doesn't depend on the choice of .
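A minimal Python sketch of this spectral functional calculus (using numpy purely for illustration; the matrix and the scalar function below are toy stand-ins, not objects from the proof):

```python
import numpy as np

def apply_to_spectrum(f, A):
    """Apply a scalar function f to a symmetric matrix A via its eigendecomposition.

    If A = Q diag(lam) Q^T with Q orthonormal, return Q diag(f(lam)) Q^T.
    The result does not depend on the choice of orthonormal eigenbasis.
    """
    lam, Q = np.linalg.eigh(A)  # eigenvalues (ascending) and orthonormal eigenvectors
    return Q @ np.diag(f(lam)) @ Q.T

# Illustrative use: push every eigenvalue up to a floor delta, so the lowest
# eigenvalue of the result is at least delta.
A = np.array([[0.2, 0.3],
              [0.3, 0.1]])
delta = 0.05
A_floored = apply_to_spectrum(lambda lam: np.maximum(lam, delta), A)
```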

Denote . Let be a -valued -bischeme with whose lowest eigenvalue is at least and which satisfies . Using Theorem 2 and we conclude that

Applying Proposition A.2 we get the desired result.

Appendix B

Cartwright and Kucharski give a generalization of Jackson's inequality for an arbitrary compact connected Lie group. We only need the uniform-norm, rank-1 case for the standard torus, so we state this special case here.

Theorem B

Fix . Denote the standard n-dimensional torus. Then there is s.t. for any there is s.t. its Fourier transform satisfies and for any we have

Moreover, the are uniformly bounded.
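For orientation, the classical one-dimensional case of Jackson's inequality (which the Cartwright–Kucharski result generalizes) states that there is a universal constant $C$ such that for every $f : \mathbb{T} \to \mathbb{R}$ that is Lipschitz continuous with constant $M$ and every $N \ge 1$, there is a trigonometric polynomial $p_N$ of degree at most $N$ with

$$\sup_{x \in \mathbb{T}} |f(x) - p_N(x)| \le \frac{C M}{N}.$$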

Note B

The fact that the are uniformly bounded is not stated explicitly by Cartwright and Kucharski, but it is evident from their construction of .

Corollary B

For any there are s.t. the following holds. Consider and , Lipschitz continuous with constant . Then there are and s.t. , and .

Proof of Corollary B

Using reflections, can be extended to a function on with periodic boundary conditions which is still Lipschitz continuous with constant . This new function can be reinterpreted as a function Lipschitz continuous with constant . Applying Theorem B, where . Using the properties of , we have , where for some because the are uniformly bounded and can be bounded by . Since is real, , yielding the desired result.
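As a one-dimensional illustration of the reflection step (with illustrative names): given $f : [0,1] \to \mathbb{R}$ Lipschitz with constant $M$, define

$$F(x) = f(|x|), \qquad x \in [-1, 1].$$

Then $F(-1) = F(1)$, so $F$ extends to a $2$-periodic function on $\mathbb{R}$, and the extension is still Lipschitz with constant $M$; rescaling the period to $1$ reinterprets it as a function on the standard torus whose Lipschitz constant grows only by the rescaling factor.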

Comments

Intuitively, this is very similar to previous approaches to salvaging decision theory (e.g. see mine here, but the whole thing is basically the same as playing chicken with the universe, which just corresponds to using a very low temperature).

I am still not able (/ don't have the time) to closely follow these proofs or even the result statements. It looks to me like your goal is to formalize the basic intuitive arguments, and that the construction works by a similar diagonalization. If that's not the case, it may be worth calling out the differences explicitly.

Yeah, what I'm doing here is more or less a formalisation of the ideas in your writeup, with the added technical complication that the "math intuition model" is nondeterministic, so you need to use matrix counterfactuals. In order to get UDTish instead of CDTish behavior, I am going to make the agent select some sort of "logical policy" instead of an action (i.e. something that reduces to a metathreat in a game-theoretic setting).