Reflective oracles accurately answer questions about what arbitrary halting probabilistic oracle machines output. It is possible to make a variant of a reflective oracle that accurately answers questions about what sufficiently short-running Turing machines with access to the same oracle output.
These oracles are explicitly computable, fairly powerful in a computational-complexity sense, and can probably be used to make a reflective version of AIXItl (though that construction won't happen in this particular post).
The theorems in this post are not exhaustive at all, just the stuff I was able to figure out in ~2 hours, so there is almost certainly low-hanging fruit in figuring out how these interact with the standard arsenal of theorems in computational complexity theory.
#Motivation
Let's say we want to make an oracle that is reflective over all TMs that run in polynomial time. How do we do that?
The obvious approach is to let a query consist of a tuple of a Turing machine T, a number n, a bitstring x, and a probability p. Build the tree of all possible execution paths of the Turing machine that are poly(n) steps long (for some fixed polynomial), and compute the probability of every possible output (including a probability for timing out). The oracle must return 0 if the true probability of outputting x is less than p, 1 if the true probability is greater than p, and may randomize otherwise.
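To make that decision rule concrete, here's a minimal sketch in Python. The names `oracle_answer` and `true_prob` are mine, and `true_prob` stands in for the output probability computed from the execution tree, which of course isn't something you can hand the oracle directly; this just illustrates the rule.

```python
import random

def oracle_answer(true_prob: float, p: float) -> int:
    """Decision rule for a query (T, n, x, p), given the true probability
    that T, run for poly(n) steps, outputs x."""
    if true_prob > p:
        return 1
    if true_prob < p:
        return 0
    # Exactly at the threshold: the oracle is allowed to randomize.
    return random.randint(0, 1)
```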
This obvious approach doesn't work, because recursive oracle calls make it possible to find out whether any halting Turing machine, no matter how long-running, outputs x. Let A be some inputless Turing machine that runs for a very large number of steps, like the BB(6) machine. Let Em(A) be the Turing machine that takes a number n as input, simulates A for n steps, and, if A halts within n steps, outputs what A would. If A doesn't halt within n steps, this algorithm queries the oracle about whether Em(A)(2^n) outputs x, and if so, outputs x.
It is possible to emulate n steps of a Turing machine in O(n log n) time, to write down 2^n in O(n) time, and to write down x in constant time, so Em(A) runs in poly(n) time. Therefore the oracle should be accurate about what this Turing machine outputs for any given n. However, if you look at what this Turing machine is doing, it is giving itself an exponentially longer time bound on each oracle call, so it will eventually be able to fully simulate whatever Turing machine it wants.
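Here's a sketch of the Em(A) construction, assuming a hypothetical step-by-step emulator `simulate` and an `oracle` with the query interface from above; the only point is that each level of recursion hands itself an exponentially larger budget.

```python
def Em_A(n, x, A, simulate, oracle):
    """Sketch of Em(A): run A for n steps; if it halts, return its output.
    Otherwise, ask the oracle whether Em(A) on input 2^n outputs x,
    handing the recursive call an exponentially larger budget."""
    halted, output = simulate(A, n)   # assumed O(n log n) step-by-step emulator
    if halted:
        return output
    # Query tuple (T, n, x, p) from above; p = 1/2 suffices since Em(A) is deterministic.
    if oracle(Em_A, 2 ** n, x, 0.5) == 1:
        return x
    return None
```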
The obvious patch is to fix some monotonically increasing resource function f for the oracle (linear, polynomial, exponential...), and to require that whenever a machine queried with parameter n_0 calls the oracle again with parameter n_1, we have n_1 ≤ n_0. You can't have a chain where the oracles keep recursively calling themselves and giving themselves more computational resources, so all chains of recursive oracle calls stay bounded above by f(n).
So, the complexity class F^{FRO} is the set of all problems solvable in f(n) time, with access to an f(n)-reflective oracle.
A C-reflective oracle (where C is some constant) is an oracle that accurately answers any query about a probabilistic Turing machine that halts with probability 1 within C steps, where that machine itself has access to a C-reflective oracle.
Pretty much, once n is fed into the starting algorithm, every algorithm it queries the oracle about must have runtime and oracle-query length bounded by f(n), and so must all the algorithms that one queries, and so on. This prevents an unboundedly long chain of oracle calls from simulating arbitrary Turing machines: either you get a finite set of Turing machines all calling each other, in which case an equilibrium exists (there are only finitely many oracle queries of length f(n)), or the condition of bounded runtime and oracle-query length is violated, and the oracle may return anything at that point.
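One way to picture the restriction (purely illustrative; `f` and the per-link runtime and query-length figures are assumed to be given) is as a validity check on a chain of recursive oracle calls: the oracle is only obligated to be accurate when every link stays inside the budget set by the original n.

```python
def chain_is_valid(chain, f, n):
    """chain: list of (runtime, query_length) pairs, one per machine in a chain
    of recursive oracle calls that was started with parameter n. The oracle only
    has to answer accurately if every link stays within the f(n) budget."""
    budget = f(n)
    return all(runtime <= budget and query_length <= budget
               for runtime, query_length in chain)
```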
#Immediate Theorems
Theorem 1: There is some constant c such that, for any function f, FSPACE ⊆ cF^{cFRO}. (That is, anything solvable in f(n) space is solvable in c·f(n) time with access to a c·f(n)-reflective oracle.)
Proof Sketch: We'll have a sequence of oracle machines, where each one simulates a finite number of steps and then passes the intermediate state to the next oracle machine via the oracle query. The overall effect is something like how, if an algorithm can be carried out on a sheet of paper, it can be split up among a long chain of people, where each person does a bit of work on the paper and passes it to the next person.
Proof: Let A_k be the Turing machine A, advanced by k time steps. Consider the Turing machine TM(A_k) that emulates A_k for c further steps and then queries the oracle about TM(A_{k+c}); the chain starts with TM(A_0). Because A is in FSPACE, all the intermediate states A_k that are recursively queried have length bounded above by O(f(n)), and adding the code to simulate the state ahead by c steps is an O(1) increase in length. Furthermore, emulating c steps of a Turing machine can be done in O(c log c) time, which is a constant.
Therefore, since it takes constant time to emulate the Turing machine forward and at most cf(n) time to write down the query to the oracle, none of the oracle queries or Turing machines run over their time bound, and once an answer is found it gets passed back along the chain, by the condition that the oracle must be accurate about any algorithm whose runtime stays under the time bound.
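Here's a sketch of one link of that chain, under the assumptions above. The helpers `advance` (step a machine configuration forward once) and `oracle` are hypothetical, and the query shape is simplified; the point is just that each link does a constant amount of emulation and hands off an O(f(n))-bit configuration.

```python
C_STEPS = 4  # any fixed constant works; it only affects the constant c in the theorem

def chain_link(config, target_output, advance, oracle):
    """One link in the chain: advance the space-bounded machine's configuration
    by a constant number of steps, then hand the new configuration (still
    O(f(n)) bits, since the machine is space-bounded) to the next link."""
    for _ in range(C_STEPS):
        config = advance(config)  # one step of the emulated machine
        if config.halted:
            return config.output == target_output
    # Ask the oracle whether the next link, started from this configuration,
    # eventually reports a match (deterministic, so a 1/2 threshold suffices).
    return oracle(chain_link, config, target_output, 0.5) == 1
```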
Theorem 2: There is some constant c such that, for any function f that is computable in O(f(n)) time, NF ⊆ cF^{cFRO}. (NF is nondeterministic f(n) time.)
Any nondeterministic Turing machine A that implements a function in NF has a corresponding randomized Turing machine A′ that generates a random string of length O(f(n)) and uses it to decide which nondeterministic path to go down. Now consider the Turing machine that makes (and returns the answer to) the single oracle query (A′, 1, 2^{-cf(n)}). Pretty much, this asks whether any random string leads to A′ accepting: if some path accepts, A′ accepts with probability at least 2^{-O(f(n))} > 2^{-cf(n)} (for c large enough), and if no path accepts, the acceptance probability is 0, so the oracle's answer exactly matches whether A accepts or rejects. Writing down A′ takes constant time, writing down the threshold 2^{-cf(n)} takes O(f(n)) time, and computing f(n) in the first place takes O(f(n)) time by assumption.
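A sketch of that reduction, with a hypothetical helper `run_on_path` that runs A deterministically along a fixed sequence of nondeterministic choices, and the same simplified `oracle` interface as before:

```python
import random

def A_prime(inp, f, n, run_on_path):
    """Randomized stand-in for the nondeterministic machine A: guess the
    nondeterministic choices as a random bit string of length f(n), then run A
    deterministically along that guessed path (run_on_path is an assumed helper)."""
    guess = [random.randint(0, 1) for _ in range(f(n))]
    return "1" if run_on_path(inp, guess) else "0"

def decide(inp, f, n, c, oracle):
    """The single oracle query: does A_prime output "1" with probability greater
    than 2^(-c*f(n))?  A single accepting path already contributes 2^(-f(n)),
    so for c > 1 the oracle's answer exactly tracks whether A accepts inp."""
    return oracle(A_prime, inp, "1", 2.0 ** (-c * f(n))) == 1
```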
#Takeaways
This is super-preliminary; there are certainly more computational-complexity results to be found by pushing in this direction. The ability to recursively call algorithms with access to the same oracle is powerful, encompassing both space and nondeterminism, but is still a step down from true omniscience. It's pretty likely that a variant of AIXItl can be constructed with this tool.