# Two Questions about Solomonoff Induction

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

In this post, I ask two questions about Solomonoff Induction. I am not sure whether these questions are open. If you know the answer to either of them, please let me know. I think the answers may be very relevant to my current work on Asymptotic Logical Uncertainty.

Consider a universal Turing machine $M$ which takes in an infinite stream of random bits, and outputs a finite or infinite string of bits. For any bit string $x$, let $M(x)$ denote the probability that $x$ is a prefix of the output of $M$.

Let $E$ be a probabilistic environment which outputs an infinite bit string, and let $x_n$ be the first $n$ bits output by $E$.

If $E$ assigns outputs according to any computable probability distribution, then with probability 1, $\frac{M(x_n1)}{M(x_n)}$ quickly converges to the actual probability that $E$ outputs a 1 at location $n+1$ given that its first $n$ bits were $x_n$.

In particular, if $E$ outputs an infinite stream of i.i.d. bits which are each 1 with some rational probability $p$, then $\frac{M(x_n1)}{M(x_n)}$ will converge to $p$ with probability 1.
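Solomonoff's $M$ is uncomputable, so it cannot be run directly, but the flavor of this convergence claim can be illustrated with a small computable stand-in: a uniform Bayesian mixture over Bernoulli models on a finite grid, whose posterior predictive converges to the true bias. The function name `mixture_predictor` and the grid construction below are my own illustrative choices, not anything from the post:

```python
import math
import random

def mixture_predictor(bits, grid_size=99):
    """Posterior predictive P(next bit = 1) under a uniform mixture of
    Bernoulli(p) models on a finite grid -- a computable stand-in for
    the universal mixture M, not Solomonoff induction itself."""
    ones = sum(bits)
    zeros = len(bits) - ones
    # interior grid points, so log(p) and log(1 - p) stay finite
    ps = [(k + 1) / (grid_size + 1) for k in range(grid_size)]
    log_liks = [ones * math.log(p) + zeros * math.log(1 - p) for p in ps]
    m = max(log_liks)  # subtract the max before exponentiating to avoid underflow
    weights = [math.exp(l - m) for l in log_liks]
    return sum(w * p for w, p in zip(weights, ps)) / sum(weights)

random.seed(0)
true_p = 1 / 3
bits = [1 if random.random() < true_p else 0 for _ in range(5000)]
print(mixture_predictor(bits))  # close to 1/3
```

Because the true bias here is rational, some grid point is a good model of the environment and the posterior concentrates on it; the question in this post is what happens when no computable model matches exactly.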

However, I am curious what happens if $E$ is not computable. In the simplest case, $E$ outputs an infinite stream of i.i.d. bits which are each 1 with some uncomputable real probability $p$. In this case, will $\frac{M(x_n1)}{M(x_n)}$ converge to $p$ with probability 1?

To make things slightly more complicated, assume that the odd-indexed bits of $E$ come from a fair coin, but the even-indexed bits come from an adversary who chooses the bits in an uncomputable way. (This adversary can see all the bits output by $E$ so far, but not future bits.) In this case, will $\frac{M(x_{2n}1)}{M(x_{2n})}$ converge to $1/2$?
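To make the setup concrete, here is a sketch of the environment only (it says nothing about the open question of how $M$ predicts it). The post's adversary may be uncomputable; the majority-rule adversary below is my own computable stand-in, chosen only because it depends on the history in the way the setup requires:

```python
import random

def adversary(history):
    # Hypothetical stand-in adversary: outputs the majority bit seen so far.
    # Any rule that depends only on past output bits fits the setup.
    return 1 if 2 * sum(history) >= len(history) else 0

random.seed(1)
stream = []
for i in range(10000):
    if i % 2 == 0:
        # 0-based even indices = 1-based odd positions: fair coin bits
        stream.append(random.randint(0, 1))
    else:
        stream.append(adversary(stream))  # adversary fills even positions

coin_bits = stream[0::2]
print(sum(coin_bits) / len(coin_bits))  # empirical frequency, near 1/2
```

The coin positions have frequency $1/2$ by construction; the question is whether $M$'s conditional probability at those positions also converges to $1/2$ despite the uncomputable interleaved bits.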