Kutta

Yes. To expand a bit: in fact, the straightforward way to show that second-order arithmetic isn't complete in the first sense is via the Gödel sentence G.

G says via an encoding that G is not provable in second-order arithmetic. Since the only model (up to isomorphism) is the model with the standard natural numbers, an internal statement which talks about encoded proofs is interpreted in the semantics as a statement which talks about actual proof objects of second-order arithmetic. This is in contrast to first-order arithmetic where we can interpret an internal statement about encoded proofs as ranging over nonstandard numbers as well, and such numbers do not encode actual proof objects.

Therefore, when we interpret second-order G, we always get the semantic statement "G is not provable in second-order arithmetic". From this and the soundness of the proof system, it follows that G is not provable. Hence, G holds in all models (recall that there is just one model up to isomorphism), but is not provable.
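
Schematically, the argument is (writing Z2 for second-order arithmetic and N for its unique model; this is just the above in compressed form):

    \[
    \begin{aligned}
    &1.\ \vdash_{Z_2} G \leftrightarrow \neg\mathrm{Prov}(G)
       &&\text{diagonal lemma}\\
    &2.\ \mathbb{N} \models G \iff G \text{ is not provable}
       &&\text{1 + categoricity: internal } \mathrm{Prov} \text{ is actual provability}\\
    &3.\ \vdash G \text{ would give } \mathbb{N} \models G \text{ by soundness, contradicting 2}\\
    &4.\ \text{so } G \text{ is not provable, and hence by 2, } \mathbb{N} \models G
    \end{aligned}
    \]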

Answer by Kutta

It is important to distinguish the sense of "complete" in Gödel's completeness theorem from the sense of "complete" in the incompleteness theorems, because they are not the same.

The first one presupposes two things: a) a syntax for a logical language, which also includes a syntax for proofs, and b) a notion of model. Then, a combination of a) and b) is complete if every sentence which holds in all models is also syntactically provable.

The second one can, for our purposes, be summarized as follows: a syntax for a logical language is complete if for every sentence φ, either φ or its negation is provable. This sense of completeness does not mention models or semantics!
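
Written out formally (my notation), the two senses are:

    \[
    \text{(1) semantic completeness:}\quad
      \forall \varphi.\ \bigl(\forall \mathcal{M}.\ \mathcal{M} \models \varphi\bigr) \implies\ \vdash \varphi
    \]
    \[
    \text{(2) negation completeness:}\quad
      \forall \varphi.\ \vdash \varphi \ \text{ or } \ \vdash \neg\varphi
    \]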

The main point in Gödel's first incompleteness theorem is the ability to internally represent the syntax and proof theory of the language. In the case of first-order PA, all syntactic objects appear as certain finitely deep and finitely branching trees, and the job is to use arithmetic operations and properties to encode various finitary trees as just unary branching trees, i.e. the natural numbers. The syntax of second-order arithmetic does not differ in a relevant way; it is also just finitary trees, and since second-order arithmetic is a strict extension of first-order arithmetic, analogous internal encoding works in it as well.
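
To make the encoding idea concrete, here is a minimal sketch in Haskell (my own illustration; actual arithmetizations inside PA use devices like Gödel's β-function rather than this exact scheme). The Cantor pairing bijection packs two naturals into one, which suffices to encode any finitely branching tree as a single natural number:

    -- Cantor pairing: a bijection between pairs of naturals and naturals.
    pair :: Integer -> Integer -> Integer
    pair x y = (x + y) * (x + y + 1) `div` 2 + y

    unpair :: Integer -> (Integer, Integer)
    unpair z = (w - y, y)
      where
        w = (isqrt (8 * z + 1) - 1) `div` 2
        y = z - w * (w + 1) `div` 2
        -- Going through Double is fine for a sketch; a serious version
        -- would use an exact integer square root.
        isqrt = floor . sqrt . fromInteger

    -- Binary trees stand in for arbitrary finitary syntax trees.
    data Tree = Leaf | Node Tree Tree

    encode :: Tree -> Integer
    encode Leaf       = 0
    encode (Node l r) = 1 + pair (encode l) (encode r)

    decode :: Integer -> Tree
    decode 0 = Leaf
    decode n = let (l, r) = unpair (n - 1) in Node (decode l) (decode r)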

That second-order logic "doesn't have a computable deduction system" sounds false to me. It certainly does not have the completeness property (in the first sense) with respect to the standard notion of model, precisely because of Gödel's incompleteness + ruling out nonstandard semantic natural numbers. But second-order logic still has finitary, recursively enumerable syntax and proofs, with decidable proof validation.

Kutta

I'd like to ask a not-too-closely related question, if you don't mind.

A Curry-Howard analogue of Gödel's Second Incompleteness Theorem is the statement "no total language can embed its own interpreter"; see the classic post by Conor McBride. But Conor also says (and it's quite obviously true) that total languages can still embed their own coinductive interpreters, for example one that returns in the partiality monad.

So, my question is: what is the logical interpretation of a coinductive self-interpreter? I feel I'm not well-versed enough in mathematical logic for this. I imagine it would have a type like "forall (A : 'Type) -> 'Term A -> Partiality (Interp A)", where 'Type and 'Term denote types and terms in the object language and "Interp" returns a type in the meta-language. But what does "Partiality" mean logically, and is it in any way related to the Löbstacle?
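
(For concreteness, here is a minimal sketch of such a coinductive interpreter, written in Haskell as a stand-in for a total language, and with an untyped object language to keep it short; all names are mine, not Conor's:

    -- The partiality (delay) monad: a value, or one more step of delay.
    data Delay a = Now a | Later (Delay a)

    instance Functor Delay where
      fmap f (Now a)   = Now (f a)
      fmap f (Later d) = Later (fmap f d)

    instance Applicative Delay where
      pure = Now
      Now f   <*> d = fmap f d
      Later f <*> d = Later (f <*> d)

    instance Monad Delay where
      Now a   >>= k = k a
      Later d >>= k = Later (d >>= k)

    -- Untyped lambda terms with de Bruijn indices, and closures as values.
    data Term = Var Int | Lam Term | App Term Term
    data Val  = Closure [Val] Term

    -- The recursive call under App is guarded by Later, so in a total
    -- language this definition would pass the productivity checker even
    -- though evaluation may "diverge" as an infinite stream of Laters.
    eval :: [Val] -> Term -> Delay Val
    eval env (Var i)   = Now (env !! i)
    eval env (Lam t)   = Now (Closure env t)
    eval env (App f a) = do
      Closure env' body <- eval env f
      v <- eval env a
      Later (eval (v : env') body)

The point of the sketch is that eval never claims to produce a Val outright, only a Delay Val, which is why it doesn't collide with the no-total-self-interpreter result.)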

Kutta

You're right, I mixed it up. Edited the comment.

Kutta

Suppose "mathemathics would never prove a contradiction". We can write it out as ¬Provable(⊥). This is logically equivalent to Provable(⊥) → ⊥, and it also implies Provable(Provable(⊥) → ⊥) by the rules of provability. But Löb's theorem expresses ∀ A (Provable(Provable(A) → A) → Provable(A)), which we can instantiate to Provable(Provable(⊥) → ⊥)→ Provable(⊥), and now we can apply modus ponens and our assumption to get a Provable(⊥).

Kutta

To clarify my point, I meant that Solomonoff induction can justify caring less about some agents (and I'm largely aware of the scheme you described), but simultaneously rejecting Solomonoff and caring less about agents running on more complex physics is not justified.

Kutta

The unsimulated life is not worth living.

-- Socrates.

Kutta

You conflate two very different things here, as I see it.

First, there are the preferences for simpler physical laws or simpler mathematical constructions. I don't doubt that they are real amongst humans; after all, there is an evolutionary advantage to using simpler models, ceteris paribus, since they are easier to memorize and easier to reason about. Such evolved preferences probably contribute to a mathematician's sense of elegance.

Second, there are preferences about the concrete evolutionarily relevant environment and the relevant agents in it. Naturally, this includes our fellow humans. Note here that we might also care about animals, uploads, AIs or aliens because of our evolved preferences and intuitions regarding humans. Of course, we don't care about aliens because of a direct evolutionary reason. Rather, we simply execute the adaptations that underlie our intuitions. For instance, we might disprefer animal suffering because it is similar enough to human suffering.

This second level has very little to do with the complexity of the underlying physics. Monkeys have no conception of cellular automata; you could run them on cellular automata of vastly differing complexity and they wouldn't care. They care about the kind of simplicity that is relevant to their day-to-day environment. Humans also care about this kind of simplicity; it's just that they can generalize this preference to more abstract domains.

(On a somewhat unrelated note, you mentioned bacteria. I think that point is a red herring; you can build agents under an assumption about the underlying physics, but that doesn't mean that the agent itself necessarily has any conception of that physics, or even that the agent is consequentialist in any sense.)

So, what I'm trying to get at: you might prefer simple physics and you might care about people, but it makes little sense to care less about people because they run on uglier physics. People are not physics; they are really high-level constructs, and a vast range of different universes could contain (more or less identical) instances of people whom I care about, or even simulations of those people.

If I assume Solomonoff induction, then it is in a way reasonable to care less about people running on convoluted physics, because then I would have to assign less "measure" to them. But you rejected this kind of reasoning in your post, and I can't exactly come to grips with the "physics racism" that seems to logically follow from that.

Kutta

Donated $500.
