There's a problem that has occurred to me that I haven't seen discussed anywhere: I don't think people actually want to assign zero probability to all hypotheses which are not Turing computable. Consider the following hypothetical: we come up with a theory of everything that seems to explain all the laws of physics, but there's a single open parameter (say the fine structure constant). We compute a large number of digits of this constant, and someone notices that when it is expressed in base 2, the nth digit seems to be 1 iff the nth Turing machine halts on the blank tape, for some fairly natural ordering of all Turing machines. If we confirm this for a large number of digits (not necessarily consecutive digits; obviously some of the 0s won't be confirmable), shouldn't we consider the hypothesis that the digits really are given by this simple but non-computable function? But if our priors assign zero probability to all non-computable hypotheses, then this hypothesis must always be stuck with zero probability.
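To make the verification asymmetry in that parenthetical concrete, here is a minimal, purely illustrative Python sketch. The names `check_digit_against_halting` and `halts_within` are hypothetical, and the enumeration of machines is a toy stand-in, not a real one: confirming a measured 1 only requires patience, while a measured 0 can never be definitively confirmed by a finite amount of simulation.

```python
def check_digit_against_halting(n, measured_digit, halts_within, budget):
    """Compare the measured nth binary digit of the constant against the
    behaviour of the (hypothetical) nth Turing machine.

    halts_within(n, budget) is an assumed helper: True if machine n is
    observed to halt within `budget` simulation steps, False otherwise.
    """
    halted = halts_within(n, budget)
    if measured_digit == 1:
        # A 1 is confirmable in principle: just wait long enough for machine n to halt.
        return "confirmed" if halted else "not yet confirmed"
    # A 0 is never fully confirmable: machine n might still halt after the
    # budget runs out, so the best we can ever say is "not yet contradicted".
    return "contradicted" if halted else "not yet contradicted"


# Toy stand-in enumeration: pretend "machine n" halts (after n steps) iff n is even.
toy_halts_within = lambda n, budget: n % 2 == 0 and n <= budget

print(check_digit_against_halting(4, 1, toy_halts_within, budget=100))  # confirmed
print(check_digit_against_halting(7, 0, toy_halts_within, budget=100))  # not yet contradicted
```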

If the universe is finite, we could approximate this function with one that instead says "halts within K steps", where K is some large number, but intuitively this seems like a more complicated hypothesis than the original one.

I'm not sure what a reasonable prior that handles this sort of thing looks like. We don't want an uncountable set of hypotheses. It might make sense to use something like the set of hypotheses describable in Peano arithmetic, or something along those lines.

 


Congratulations, you have stumbled on a problem we were researching pretty actively a month or so ago. See this post of mine and the links from there, especially the discussion between Eliezer and Wei in an old mailing list thread.

The current status of the problem is that the Solomonoff prior is not sufficient for playing all games, and in fact any prior (computable or not) can be beaten by a suitably chosen "epistemically lucky" ordinary human. Here's a MathOverflow thread with a proof. Be warned, it may be rather hard to understand.

My current view on this is that you can shift to the language of higher-order logic, with the order of the logic given by ordinals that are provable in some proof system. Not sure you can do any better than that.

Does that help? IIRC there's a level in the computability hierarchy for every countable ordinal number, and no way of having a probability distribution that assigns nonzero mass to every countable ordinal.
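(A quick sketch of why that last claim holds, for anyone who wants it; this is just the standard countable-support argument, nothing specific to ordinals:)

```latex
A_n := \{\alpha : P(\alpha) \ge \tfrac{1}{n}\} \text{ has at most } n \text{ elements, so }
\{\alpha : P(\alpha) > 0\} = \bigcup_{n \ge 1} A_n \text{ is countable.}
```

Since there are uncountably many countable ordinals, any single prior must give zero mass to all but countably many levels of the hierarchy.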

Do you know any counterpart to the Solomonoff distribution for universes based on higher-order logic?

I'd very much like you to write more on such topics, because you seem to have really good math intuition; we could use that.

Do you know any counterpart to the Solomonoff distribution for universes based on higher-order logic?

That I didn't invent? No.

Earlier you said that Solomonoff induction is enough to beat any computable human, even if the universe is uncomputable. What made you change your mind?

Has he changed his mind? What I see here is an indication that you can extend Solomonoff induction to have uncomputable possibilities in your prior, but how does that imply that a computable human can do better than Solomonoff induction?

I think the best patch to fix this issue is to take a Turing-complete language such as the lambda calculus, and augment it by adding an operator that tests limiting expressions for convergence. A convergence tester is sufficient to build a halting oracle, and I believe but have not verified that a halting oracle is sufficient to build a convergence tester. Limits, however, are easier to translate into other languages that a prior might be defined over.
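The first half of that claim (convergence tester implies halting oracle) has a very short witness. Here is a hedged Python sketch of it, where `converges` stands in for the hypothetical oracle and `steps_to_halt` for a step-limited simulator; neither is a real library call:

```python
def halts(steps_to_halt, converges):
    """Reduce halting to convergence testing.

    steps_to_halt(n): run the machine for n steps; return the step at which
    it halted, or None if it has not halted within n steps.
    converges(f): the hypothetical oracle; True iff f(0), f(1), ... converges.
    """
    def sequence(n):
        s = steps_to_halt(n)
        # If the machine halts at step s, the tail of the sequence is the
        # constant s (convergent); if it never halts, the sequence is n
        # itself (divergent).
        return s if s is not None else n

    return converges(sequence)
```

The other direction (building a convergence tester out of a halting oracle) is the part the comment says it hasn't verified, so no sketch is offered for it here.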

There are several old posts by Wei Dai on this, though I'm having trouble finding them right now, except for http://lesswrong.com/lw/2id/metaphilosophical_mysteries/.

I always thought the right criterion was definability rather than computability. For example, the hypothetical value you gave for the fine structure constant is definable, but not computable.

I don't see any reason it would have to be computable unless it was actually being computed, which would mean that it's emergent behavior and not actually part of the laws of physics.

I also don't see much of a problem with an uncountable hypothesis space, so long as you have some sort of probability density. For example, you could have an indeterministic universe that lasts forever. Once you fix everything except the random numbers, that still leaves you with uncountably many universes, but you could still use and update a prior with little difficulty.
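A toy version of that last point (my illustration, not the commenter's): identify the leftover randomness with an infinite bit string, i.e. a point of [0, 1) under the uniform density. Conditioning on a finite observed prefix just restricts the posterior to a dyadic subinterval, so updating is easy even though the hypothesis space is uncountable.

```python
def posterior_interval(observed_bits):
    """Uniform prior over infinite bit strings, viewed as reals in [0, 1).

    Conditioning on a finite observed prefix restricts the posterior to a
    dyadic subinterval; the density stays uniform on that interval.
    """
    low = 0.0
    width = 1.0
    for bit in observed_bits:
        width /= 2
        if bit == 1:
            low += width
    return low, low + width  # posterior support; density is 1/width on it

# After observing the prefix 1,0,1 the posterior is uniform on [0.625, 0.75).
print(posterior_interval([1, 0, 1]))
```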

If the universe is finite, we could approximate this function with one that instead says "halts within K steps", where K is some large number, but intuitively this seems like a more complicated hypothesis than the original one.

To me it sounds much simpler than the hypothesis that the universe is uncomputable!

When we do enough testing, K could get arbitrarily large and therefore arbitrarily complex. So unless you, like Solomonoff induction, essentially assign infinite complexity to anything uncomputable, there exists a K such that "halts within K steps" is more complex than "never halts".
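To spell out the counting behind that (my gloss, under the usual assumption that descriptions are binary strings): there are fewer than 2^m hypotheses with descriptions shorter than m bits, so for typical values of K just writing K down costs about log2 K bits, while the cutoff-free hypothesis has one fixed description whose length never grows. Writing C for description length in whatever language the prior is built over:

```latex
C(K) \;\approx\; \log_2 K \quad\text{for typical (incompressible) } K,
\qquad\text{while}\qquad
C(\text{cutoff-free hypothesis}) \;=\; O(1).
```

So once K is large and incompressible, the cutoff hypothesis really is the more complex one, provided the description language is allowed to name the uncomputable predicate at all, which is exactly the point in dispute.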

At this stage, an uncomputable universe is an extraordinary hypothesis, one that would require extraordinary evidence to support. A finite string of numbers seems like poor evidence for uncomputability. It seems much easier to believe that what we are seeing is a computable approximation. So I am inclined to side with Solomonoff induction here.

The uncomputable is always infinite. Our observations are always finite. I don't see how making such a leap could be justified.

So if a simple setup you can do at home with some liquid nitrogen, a laser and a kumquat appeared to allow direct hypercomputation (it could instantaneously determine whether a particular Turing machine halted), and you were able to test this against all sorts of extraordinary examples, you would never come to the conclusion that it was a hypercomputation engine, instead building ever-more-complex computable models of what it was doing?


I would never conclude that, because the proposition that I'm in a simulation and the DLotM are messing with me is about as plausible on the evidence.

A hotline to a computable halt-finding machine seems much more plausible than something uncomputable, yes. We have no idea how something uncomputable could possibly work. You should not give weight to the uncomputable while computable approximations exist.

So how many cycles of

  • hypothesize a computable halt-tester
  • figure out from that an example where it makes the wrong call
  • test it, find that the experiment makes the correct call
  • hypothesize a new computable halt-tester that makes the correct call on this example

would you have to go through before concluding the universe was not computable?

I have to confess that I don't really see what's so special about the bottom of the computability hierarchy that you would be so certain that's where the universe is.

I'm not talking about "this stage", I'm talking about what would happen in principle, after we make enough observations.

The uncomputable is always infinite. Our observations are always finite. I don't see how making such a leap could be justified.

Are you saying that you assign literally 0 probability to an uncomputable universe? How could you possibly be so sure? Remember that there are infinitely many possible computable universes, so (since their probabilities have to sum to at most 1) there must be computable universes with probability less than epsilon, for every epsilon > 0.