Gap in understanding of Logical Pinpointing


Second-order logic does have a computable definition of provability, it's just that the logic is incomplete under that definition. It may be disconcerting to work with an incomplete logic, since you know there are universal truths you can't prove even *before* you get to a particular theory and its consequences.

In general, using SOL to pinpoint the truths of natural numbers doesn't seem like an appealing idea:

- Actually interesting mathematical content about N is provable in FOL PA, and when it isn't, the natural place mathematicians go is set theory.
- The second-order induction axiom can be formalized in first-order set theory. At least set theory is explicit about the fact that the notion of "all subsets" is somewhat vague; SOL seems to sweep that under the carpet.
- By the time you proved categoricity of SOL PA, you've used natural numbers with full induction in your naive meta-theory, and are inextricably dependent on that pre-formal naive understanding of what natural numbers are.
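To make the second point concrete, here is one way (a sketch, using standard notation not taken from the thread) to write the second-order induction axiom as a single first-order formula in the language of set theory:

```latex
% Second-order induction over the naturals, rendered as one
% first-order formula of set theory ("property" becomes "set"):
\forall S \left( S \subseteq \mathbb{N} \;\land\; 0 \in S \;\land\;
  \forall n \, (n \in S \rightarrow n+1 \in S)
  \;\rightarrow\; S = \mathbb{N} \right)
```

The quantifier ∀S here ranges over whatever sets the ambient model of set theory happens to contain, which is exactly where the vagueness of "all subsets" resurfaces.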

Oh I see. The way I would say this is:

While the axiom schema of induction effectively quantifies over properties definable in PA, the second-order version quantifies over ALL properties, including those you can't even define in PA.
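Spelled out (a sketch in standard notation), the two versions look like this:

```latex
% First-order PA: one induction instance per arithmetic formula
% \varphi(x) -- countably many axioms in total:
\bigl( \varphi(0) \land \forall n \, (\varphi(n) \rightarrow \varphi(n+1)) \bigr)
  \rightarrow \forall n \, \varphi(n)

% Second-order PA: a single axiom whose quantifier ranges over
% ALL properties P, definable in PA or not:
\forall P \, \bigl( P(0) \land \forall n \, (P(n) \rightarrow P(n+1))
  \rightarrow \forall n \, P(n) \bigr)
```

Since there are only countably many arithmetic formulas but uncountably many subsets of N, the schema necessarily misses almost all of the properties the second-order axiom covers.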

It's true that SOL is incomplete, but I actually think this is a feature. Completeness gets you Compactness, and Compactness is just plain weird. It's really useful for proving stuff, but I'm pretty sure it's not true of the logic we use day-to-day.

but I'm pretty sure it's not true of the logic we use day-to-day.

I'm curious - can you give an example?

Are you thinking of e.g. the set {"there are a finite number of spoons and at least n spoons" for all n}? For any finite subset of those sentences you can imagine a possible world that satisfies them, but there isn't a possible world that satisfies them all.

That would do. Also: {"Jim has non-zero height and Jim is less than 1/n metres tall" for all n}. It generally screws up talk about numbers in this way.
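A sketch of the Jim example in symbols (writing h for Jim's height in metres, a name introduced here for illustration):

```latex
% One sentence per n >= 1:
\varphi_n \;:\; h > 0 \;\land\; h < \tfrac{1}{n}

% Any finite subset \{\varphi_1, \dots, \varphi_k\} is satisfiable:
% take h = 1/(k+1). The whole set has no model in the standard
% reals, yet Compactness still forces a model -- one in which h is
% an infinitesimal, i.e. a nonstandard number.
```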

Right. In order to satisfy all of those statements about Jim simultaneously, you need a nonstandard model of number theory. And generally we want our statements to be interpreted according to the standard model of number theory (which sort of makes numbers part of the "logic" rather than the thing the logic is operating on).

I was assuming we've specified the behaviour of the numbers sufficiently in the background ;) But the difficulty of doing that is part and parcel of the FOL weirdness: you don't get the Löwenheim-Skolem theorem in SOL! I was going to say that you need Compactness to prove it, but now that I think about it I don't know that you *need* it, although it's usually used in the proof of the upwards component.

Just use the axiom schema of induction instead of the second order axiom of induction and you will be able to produce theorems though.

But you won't be able to produce all true statements in SOL PA, that is, PA with the standard model, because of the incompleteness theorems. To be able to prove a larger subset of such statements, you would have to add new axioms to PA. If adding an axiom T to PA does not prevent the standard model from being a model of PA+T, that is, if it does not prove any statements that require the existence of nonstandard numbers, then PA+T is ω-consistent.

So, why can't we just keep adding axioms T to PA, check whether PA+T is ω-consistent, and have a more powerful theory? Because we can't in general determine whether a theory is ω-consistent.

Perhaps a probabilistic approach would be more effective. Anyone want to come up with a theory of logical uncertainty for us?

ω-inconsistency isn't exactly the same thing as being false in the standard model. Being ω-inconsistent requires that the theory prove all the statements P(n) for standard natural numbers n, and also prove that there is an n for which P(n) fails. Therefore a theory could be ω-consistent because it fails to prove P(n), even though P(n) is true in the standard model. So even if we could check ω-consistency, we could take PA, add an axiom T, and end up with an ω-consistent theory which nonetheless is not true in the standard model.

By the way, there are some papers on models for adding random (true) axioms to PA. "Are Random Axioms Useful?" involves some fairly specific cases, but shows that in those situations, random axioms generally aren't likely to tell you anything you wanted to know.
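For reference, the standard definition being appealed to here, stated for a theory T that interprets the language of arithmetic:

```latex
% T is omega-inconsistent iff, for some formula P(x),
T \vdash P(\bar{n}) \text{ for every numeral } \bar{n},
\qquad \text{and} \qquad
T \vdash \exists x \, \neg P(x)

% T is omega-consistent iff it is not omega-inconsistent. This is
% strictly weaker than arithmetic soundness ("every theorem of T is
% true in the standard model"), which is the gap being pointed out.
```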

Therefore a theory could be ω-consistent because it fails to prove P(n), even though P(n) is true in the standard model.

I thought for ω-consistency to even be defined for a theory it must interpret the language of arithmetic?

Perhaps LessWrong is a place where I can say "Your question is wrong" without causing unintended offense. (And none is intended.)

Yes, for ω-consistency to even be defined for a theory it must interpret the language of arithmetic. This is a necessary precondition for the statement you quoted, and does not contradict it.

Work in PA, and take a family of statements P(n) where each P(n) is true but independent of PA, and not overly simple statements themselves; say, P(n) is "the function ε_n in the fast-growing hierarchy is a total function". (The important thing here is that the statement is at least Π2: true pure existence statements are always provable, and if the statements were universal there would be a different ω-consistency problem. The exact statement isn't so important, but note that these statements are true, but not provable in PA.)

Now consider the statement T="there is an n such that P(n) is false". PA+T has no standard model (because T is false), but PA+T doesn't prove any of the P(n), let alone all of them, so there's no ω-consistency problem.
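Following that argument (a sketch in symbols, asserting nothing beyond what the comment claims):

```latex
% Each P(n) is true in the standard model \mathbb{N} but independent
% of PA. Let
T \;:\; \exists n \, \neg P(n)

% Then T is false in \mathbb{N}, so PA + T has no standard model.
% But PA + T proves no individual P(\bar{n}), so it never proves
% P(\bar{n}) for every numeral alongside \exists n \, \neg P(n),
% and hence is omega-consistent.
```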

Thanks, can you recommend a textbook for this stuff? I've mostly been learning off Wikipedia.

I can't find a textbook on logic in the lesswrong textbook list.

I'm a fan of Enderton's "A Mathematical Introduction to Logic". It's short and very precisely written, which can make it a little difficult to learn from on its own, but together with a general familiarity with the subject and using Wikipedia for additional examples and elaboration, it should be perfect.

I had originally found myself very confused by how the second-order axiom of induction restricted PA to a single model, leading to this discussion where I thought doing so would violate the incompleteness theorem.

What I misunderstood is that while the axiom schema of induction effectively quantifies over properties definable in PA, the SOL version quantifies over ALL properties, including those you can't even define in PA.

This seems like a really subtle point that wasn't obvious to me from the article, knowing FOL but not SOL. I actually realized my mistake when I saw that Eliezer was representing properties as sets in the visual diagram.

Anyway, I thought others might benefit by learning from my mistake. Also, I'd like to point out that this definition of the natural numbers yields no procedure that lets you produce theorems of PA, as second-order logic has no computable complete definition of provability. Just use the axiom schema of induction instead of the second-order axiom of induction and you will be able to produce theorems, though.