Question

The halting problem and Rice's theorem (no non-trivial semantic property of programs is decidable) seem to be problems of self-reference: some self-referential properties of programs cannot be computed. And this seems to hold even for more powerful models of computation (hypercomputers have their own versions of the halting problem).
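The self-referential flavor of the halting problem can be made concrete with a short diagonalization sketch. This is only an illustration in Python, not a formal proof; `paradox_behavior` and the toy deciders are my own names. A program's halting behavior is modeled as a boolean, and the "paradox" program simply does the opposite of whatever a claimed decider predicts about it:

```python
# Diagonalization sketch: no decider can be right about a program
# built to contradict it. Halting behavior is modeled as a boolean:
# True = halts, False = loops forever.

def paradox_behavior(decider):
    """What the 'paradox' program does, given a claimed halting decider.

    paradox asks the decider "do I halt when run on myself?" and then
    does the opposite: it halts iff the decider predicted it would loop.
    """
    prediction = decider("paradox", "paradox")
    return not prediction

# Whatever a candidate decider answers about paradox, it is wrong.
for claim in (True, False):
    # A decider that always answers `claim` about any (program, input) pair.
    decider = lambda prog, arg, c=claim: c
    predicted = decider("paradox", "paradox")
    actual = paradox_behavior(decider)
    assert actual != predicted  # the prediction is always falsified
```

The same diagonal move works against any fixed decider, not just the constant ones shown here, which is why the argument recurs in every model of computation powerful enough to run a decider on its own description.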

Are there any limits on computability that don't arise from self-reference?

 

 

Context

(Situating my world model and where the question sits within it.)


I have an intuition that the following are all different things:

  1. A function (relation, map, transformation, morphism, etc. Basically, whatever thingy you are interested in within your abstract world [sets, (various kinds of "sets with additional structure"), categories, etc.])
  2. An "algorithm" that "computes" that thingy
    1. I'm not defining "compute" here.
      1. There's an intuitive notion, and I'm worried that trying to explicate it before I fully understand it would lose some nuance.
    2. I imagine that an "algorithm" is a (finite?) sequence of steps (the steps may be functions, operations, etc.: (primitive?) thingies)
  3. A physical system that implements the algorithm (a "computer")

I'm trying to build up intuition for how they relate to each other.

 

I think a "model of computation" is an abstraction of a "general computer" within a given "laws of physics"?

(There may be multiple "models of computation" for a particular laws of physics, but I expect each laws of physics would have a class of "most powerful models of computation" within that laws of physics.)

 

A "laws of physics" is something like a "fundamental ontology of a (logically?) possible universe".


I feel like the uncomputability of problems about first-order logic, or about real numbers, holds only within particular models of computation/laws of physics.

If you could exploit countable infinities, I think you would be able to compute stuff about first-order logic. (Maybe Gödel's incompleteness theorems say otherwise, but I don't understand the incompleteness theorems [yet, hopefully], and I don't understand how "what can be proven" relates to "what can be computed". I think for some models of computation the two may be indistinguishable, but I'm not sure if this is necessarily true.)

If you could exploit uncountable infinities, I'd expect you'd be able to compute stuff about real numbers.


I'm trying to think about not "what is uncomputable given the laws of physics of our universe", but "what is uncomputable given the laws of physics of _any_ universe".

Stuff that is __necessarily uncomputable__:

  1. There does not exist any "laws of physics" for which a "computer" can be built to solve it. 
  2. There does not exist a "model of computation" for which a general algorithm to solve it exists.


I have an intuition that every model of computation will still have its own self-referential problems: it won't be able to solve its own halting problem or decide non-trivial properties of algorithms for that model of computation.


And there are "large" objects: collections that are not themselves members of any other collection.

(The "universe" of all sets is not itself a set.)

I have an intuition that properties of large objects may be necessarily uncomputable.


I'm trying to build intuition about a few questions:

  1. On a fundamental level, what is "computation"?
    1. On a fundamental level, what is "information"?
  2. How are mathematics and "computation" related?
    1. How is computation and "knowledge"/"semantic content" related?
    2. Does a given computational state uniquely characterise semantic content?
  3. How are computation and physics related?
    1. How are information and physics related?
    2. How reducible is computation to physics?
      1. In general?
      2. Within our universe?
    3. Is a given computational "state" uniquely characterised by a given physical "state"?
  4. How do the "limits of computability" depend on the "laws of physics" of a given universe?
    1. What are the limits of computability within our universe?


2 Answers

For the first two questions, I recommend Sipser's Introduction to the Theory of Computation. It goes through the mathematical basis underlying computation and is the relevant introductory textbook for those questions. To answer your questions about computation and physics, I would suggest studying computational complexity. That topic is usually covered in the opening chapters of an algorithms textbook. Cormen, Leiserson, Rivest, and Stein's Introduction to Algorithms is the standard book for that, but I'd actually recommend Skiena's Algorithm Design Manual as a more readable introduction to the topic. For more on the relation between physics and computation, I'd also recommend looking into information theory, starting with Claude Shannon's original paper, which is remarkably readable and still relevant, even 74 years later.