First order arithmetic is incomplete. Except that it's also complete. Second order arithmetic is more expressive - except when it's not - and is also incomplete and also complete, except when it means something different. Oh, and full second order logic might not really be a logic at all. But then, first order logic has no idea what the reals and natural numbers are, especially when it tries to talk about them.

That was about the state of my confusion, and I set out to try and clear it up. Here I'll try and share an understanding of what is really going on with first and second order logic and why they differ so radically. It will be deliberately informal, so I won't be distinguishing between functions, predicates and subsets, and will be using little notation. It'll be exactly what I wish someone had told me before I started looking into the whole field.

## Meaningful Models

An old man starts talking to you about addition, subtraction and multiplication, and how they interact. You assume he was talking about the integers; turns out he means the rational numbers. The integers and the rationals are both models of addition, subtraction and multiplication, in that they obey all the properties that the old man set out. But notice that though he had the rationals in mind, he didn't mention them at all; he just listed the properties, and the rational numbers turned out, very non-coincidentally, to obey them.

These models are generally taken to give meaning to the abstract symbols in the axioms - to give semantics to the syntax. In this view, "for all x,y xy=yx" is a series of elegant squiggles, but once we have the model of the integers (or the rationals) in mind, we realise that this means that multiplication is commutative.

Similarly, models can define the "truth" of sentences. Consider the following sentences:

(1) 2 has a multiplicative inverse.

(2) There exists a number that squares to -1.

(3) 2 is not equal to zero.

(4) If a+b=0, then a^{2}=b^{2}.

(5) No number is equal to zero.

Are these true? You and the old man would disagree on (1), with him saying yes and you saying no - your models have enabled you to attach truth-values to the statement. You would both claim (2) is false, but there are other models - such as the complex numbers - where it is true. You would both claim (3) is true, but there are other models - such as the field with two elements - where it is false. So truth is model dependent.

The last two are interesting, because it turns out that (4) is true in every model, and (5) is false in every model. Statements like (4) and "not (5)" that are true in every model are called valid. Since they are independent of the choice of models, these statements are, in a certain sense, true from pure syntax. Both these statements can also be deduced purely from the axioms. It would be nice if all valid statements could also be deduced. But only first order logic allows this.
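To see why (4) is valid, here's the one-line algebra - a sketch assuming only the ordinary ring properties the old man would have listed:

```latex
a + b = 0 \;\Rightarrow\; a = -b \;\Rightarrow\; a^{2} = (-b)^{2} = b^{2}
```

And "not (5)" is valid because every model must interpret the symbol zero as *some* element, which is then equal to itself.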

It would also be nice if you could agree on the model you're using. Maybe the old man could add "2 (and every non-zero number) has a multiplicative inverse", giving the field axioms, and ruling your integers right out. But there are still many fields - the rationals, the reals, the complex numbers, and many in between. But maybe with a few more axioms, you could really narrow things down, and treat the axioms and the model interchangeably. But only second order logic allows *this*.

# First order fun

First order theories are those where you can quantify over the basic objects in your theory, and phrase statements like "all Greeks enjoy dancing" and "there exists a blind millionaire". This distinguishes them from second order theories where you can quantify over higher order objects (predicates and functions), phrasing statements like "all nationalities are equal" or "there exists a dominant social class" - in first order logic with humans as basic objects, nationality and social class are predicates, and you can't quantify over them. You don't appreciate how limiting first order logic can be until you've worked with it a while; nevertheless, it's a good logic to start with and possesses certain key properties not shared by higher order logics. Let's start with the most famous result, the incompleteness theorem.

## Gödel's incompleteness theorem

Gödel's (first) incompleteness theorem is a theorem about an arithmetic, but also (implicitly) about a model. The implicit model is the natural numbers: any arithmetic that can model them suffers from the incompleteness theorem. But it is not really about any model beyond that requirement: it's an intrinsic limitation of the system.

Let's assume we have the usual (first order) Peano axioms for ordering, addition and multiplication. We also need an infinity of axioms to define induction, but that isn't as bad as it seems: given a specific sentence, it's easy to check whether or not it's an axiom, in a fast and efficient way. To nobody's surprise, the natural numbers are a model of first order Peano arithmetic.

And inside this model, we can construct the Gödel sentence G, which is equivalent to "there is no proof of G". By 'proof', we mean a natural number n that is the Gödel number encoding a proof of G. Peano arithmetic cannot prove G without being inconsistent: a proof of G would yield a number encoding that proof, making G false, so the system would prove something false about the naturals. But "G is unprovable" is precisely what G is saying, so we can actually see that G is true. And hence "not G" cannot be provable either, if our arithmetic is consistent. This is the incompleteness theorem: neither G nor "not G" is provable, so the proof system is "incomplete".
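Schematically - this is my notation, glossing over the details of the construction - G is built so that Peano arithmetic proves the equivalence:

```latex
G \;\leftrightarrow\; \neg\, \exists n \; \mathrm{Proof}(n, \ulcorner G \urcorner)
```

where Proof(n, ⌜G⌝) is an arithmetic formula asserting that the number n encodes a proof of the sentence with Gödel number ⌜G⌝.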

## Gödel's completeness theorem

Enough with *in*completeness; what about Gödel's completeness theorem? Unlike the previous theorem, this is a statement about the axiomatic system and *all* of its models. It simply says that if a sentence is valid (true in every model) for a first order theory, then it can be proved from the axioms. This provides a bridge between the semantic concept of "true" (true in every model) and the syntactic concept of provable (can be proved by these formal manipulations). It also implies that we can enumerate all the sentences that are valid in a first order system, simply by enumerating all the proofs.
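The enumeration idea can be made concrete with a toy proof system - here Hofstadter's MIU system, which is my stand-in for a real proof calculus, not anything specific to the completeness theorem. Breadth-first enumeration of all derivations gives a procedure that is guaranteed to find a derivation whenever one exists, which is exactly the sense in which the provable (and hence, by completeness, the valid) sentences can be enumerated:

```python
from collections import deque

def successors(s):
    """All strings derivable from s in one step of the MIU rules."""
    out = set()
    if s.endswith("I"):             # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):           # Rule 2: Mx -> Mxx
        out.add(s + s[1:])
    for i in range(len(s) - 2):     # Rule 3: replace III with U
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):     # Rule 4: delete UU
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

def derivable(target, max_strings=10000):
    """Enumerate all derivations from the axiom 'MI', breadth-first.
    Capped so the sketch always terminates; a true semi-decision
    procedure would simply run forever on underivable strings."""
    seen = {"MI"}
    queue = deque(["MI"])
    while queue and len(seen) < max_strings:
        s = queue.popleft()
        if s == target:
            return True
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

print(derivable("MIUIU"))  # MI -> MIU -> MIUIU, so prints True
```

The catch, of course, is that this only ever confirms theoremhood: if the target is *not* derivable, the uncapped search never halts, which is why enumerability falls short of decidability.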

Where does this leave the Gödel sentence G? We've seen we can't prove it from the axioms, hence it cannot be true in all models. Therefore there must exist a model N' of first order Peano arithmetic in which G is false. What does that mean? G claims that "there does not exist a number n with (certain properties)", so if G is false, such an n does exist. Now we know (because we've constructed it that way) that if that n were a natural number, then those (certain properties) mean that it must encode a proof of G. Since there is no proof of G, n cannot be a natural number, but must be an extra, a non-standard number, from beyond our usual universe. This also means that those (certain properties) do not capture what we thought they did: they only mean "encodes a proof of G" for the standard natural numbers.

This seems somewhat troubling, that Peano arithmetic would admit two distinct models and fail to say what we thought it said; but it gets worse.

## The Löwenheim–Skolem theorem

The Löwenheim–Skolem theorem says that if any countable first order theory (such as Peano arithmetic) has an infinite model, then it has a model for every size ("cardinality") of infinity. Therefore first order Peano arithmetic has to have many, many models; at a minimum, it has to have a model of the same size as the reals.

This also means that no matter how much first order information we add to Peano arithmetic, we cannot restrict it to only being about the natural numbers; the models and the axioms can never be interchangeable. But it gets still worse.

Uncountable models of Peano arithmetic are bad enough, but it turns out that Peano arithmetic also has many other countable models - models of the same size as the natural numbers, but weirdly different. 'Weird' is an apt summary of these models, in which neither addition nor multiplication can be computed.

So first order Peano arithmetic is not really about the natural numbers at all - but about the natural numbers and all these strange countable and uncountable models that we can't really describe. Maybe second order theories will do better?

# Second order scariness

In second order theories, we can finally do what we've been itching to do: apply the existential and universal quantifiers to predicates, functions, sets of numbers and objects of that ilk. We can triumphantly toss away the infinitely many axioms needed to define induction in first order Peano arithmetic, and replace them with a simple:

- "Every (non-empty) set of numbers has a least element."

This, as every schoolboy should know, is enough to uniquely define the natural numbers. Or is it?
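In symbols - one way of writing it, with the quantifier over sets S being precisely what makes it second order:

```latex
\forall S \subseteq \mathbb{N} \;\bigl( S \neq \emptyset \;\rightarrow\; \exists n \in S \;\; \forall m \in S \;\; n \leq m \bigr)
```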

## The importance of semantics

That sentence remains a series of squiggles until we've decided what those squiggles actually *mean*. This takes on an extra importance in second order logic that it didn't have in first order. When we say "every set of numbers", what do we mean? In terms of meaning and models, what models are we going to be considering?

The first idea is the obvious one: "every set of numbers" means, duh, "*every* set of numbers". When we specify a model, we'll give the 'universe of discourse', the basic objects (maybe the natural numbers or the reals), and the quantifier 'every' will range over all possible subsets of this universe (every subset of the natural or real numbers). This is called the full or standard semantics for second order arithmetic, and the models are called full models.

But note what we have done here: we have brought in extra information to clarify what we are talking about. In order to define full semantics, we've had to bring set theory into the mix, to define all these subsets. This caused Quine to accuse second order logic of not being a logic at all, but a branch of set theory.

Also, with all these Russell-style paradoxes flying around, we might be a bit wary of jumping immediately into the 'every set' semantics. Maybe instead of defining a model by just giving the 'universe of discourse', we would want to also define the subsets we are interested in, listing explicitly all the sets we are going to allow the quantifiers to range over.

But this could get ridiculous - do we really want a model which includes the natural numbers, but only allows us to quantify over the sets {1, 7, 13908}, {0} and the empty set? We can define, for instance, what an even number is (a number that is equal to two times something), and so why can't we get the set of even numbers into our model?

## Henkin semantics

We really want our 'universe of quantifiable sets' to include any set we can define. It turns out this is something we can get from inside second-order logic, by using the comprehension axioms. They roughly say that "any set/predicate we can define is in the universe of sets/predicates we can quantify over". There are infinitely many such comprehension axioms, covering each definition.
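Schematically, for each formula φ(n) in which the set variable X does not occur free, there is one comprehension axiom - this is my rendering of the standard schema:

```latex
\exists X \;\forall n \;\bigl( n \in X \;\leftrightarrow\; \varphi(n) \bigr)
```

That is: whatever φ defines, there is a set X in the quantifiable universe collecting exactly the things satisfying it.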

Then Henkin semantics is second-order logic, with all the comprehension axioms, and no further restrictions on the possible models. These 'Henkin models' will have both a defined universe of discourse (the list of basic objects in the model we can quantify over) and a defined 'universe of sets/predicates' (a list of sets of basic objects that we can quantify over), with the comprehension axioms making sure they are compatible. Though they are called 'Henkin semantics', they could really be called 'Henkin syntaxes', since we aren't giving any extra restrictions on the models apart from the internal axioms.

It should be noted that a full model (where the 'universe of sets' includes all possible subsets) automatically obeys the comprehension axioms, since it can quantify over every set. So every full model is a Henkin model, and it might seem that Henkin semantics are a simple extension of full semantics, and that they have a greater 'expressive power'. Few things could be further from the truth.

## First or second order?

If the old man from before had claimed "every number is even", and, when challenged with "3" had responded "3 is not a number", you might be justified in questioning his grasp of the meaning of 'every' and 'number'. Similarly, if he had said "every (non-empty) set of *integers* has a least element," and when challenged with "the negative integers" had responded "that collection is not a set", you would also question his use of 'every'.

Similarly, since Henkin semantics allows us to restrict the meaning of "every set", depending on the model under consideration, statements such as "every set is blah" are much weaker in Henkin semantics than in full semantics. For instance, take the axioms for an ordered field, and add:

- "Every (non-empty) bounded set has a supremum"

As every schoolgirl should know, this is enough to model the real numbers... in full semantics. But in Henkin semantics, 'every bounded set' can mean 'every definable bounded set' and we can take the 'definable reals' as a model: the supremum of a definable set is definable. And this does not include all the real numbers; for a start, the definable reals are countable, so there are far fewer of them than there are reals.
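For concreteness, one standard way of writing the supremum axiom - "every non-empty bounded set S has a least upper bound s" - with the set quantifier doing the second order work:

```latex
\forall S \;\Bigl( \bigl( S \neq \emptyset \;\wedge\; \exists b \;\forall x \in S \;\; x \leq b \bigr) \;\rightarrow\; \exists s \;\bigl( \forall x \in S \;\; x \leq s \;\wedge\; \forall b \,( \forall x \in S \;\; x \leq b \;\rightarrow\; s \leq b ) \bigr) \Bigr)
```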

This may seem a feature, rather than a bug. What are these strange, 'non-definable reals' that clutter up the place; why would we want them anyway? But the definable reals are just one Henkin model of these axioms; there are other perfectly valid models, including the reals themselves. So these axioms have not allowed us, in Henkin semantics, to pick out one particular model.

This seems familiar: the first order Peano axioms also failed to specify the natural numbers. The familiarity is not an illusion: Henkin semantics is actually a first order theory (a 'many sorted' one, where some classes of objects have different properties). Hence the completeness theorem still applies: any result true in every Henkin model can be proved from the basic axioms. But this is not much help if we have many models, and unfortunately the Löwenheim–Skolem theorem also still applies: if we have one infinite model, we have many, many others. So not only do we have the countable 'definable reals' and the reals themselves as models, but also larger 'hyperreals' and 'superreals' with many more elements to them.

## Skolem's paradox

In fact, Henkin semantics can behave much worse than standard first order logic, as it can express more. Express more - but ultimately, not mean more. For instance, in second order language, we can express the sentence "there exists an uncountable set". We could start by defining an infinite set as one with a one-to-one correspondence with a strict subset of itself, *à la* Dedekind. We could define an uncountably infinite set as one that has a subset that is also infinite, but that doesn't have a one-to-one correspondence with it (the subset is of lower cardinality). There are other, probably better, ways of phrasing the same concept, but that will do for here.
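In symbols, the first definition (usually called Dedekind-infiniteness) can be sketched as follows, with the quantifier over functions f making it second order:

```latex
\mathrm{Infinite}(A) \;:\Leftrightarrow\; \exists f \;\exists B \subsetneq A \;\bigl( f \text{ is a one-to-one correspondence between } A \text{ and } B \bigr)
```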

Then basic second order logic with Henkin semantics and the additional axiom "there exists an uncountable set" certainly has a model: the reals, for instance. Then by the Löwenheim–Skolem theorem, it must have a countable model.

Wait a moment there. A logic that asserts the existence of an uncountable set... has a countable model? This was Skolem's paradox, and one of his arguments against first order logic. The explanation for the paradox involves those one-to-one correspondences mentioned above. An uncountable set is an infinite set without any one-to-one functions to any countable set. But in a Henkin model 'any one-to-one function' means 'any one-to-one function on the list of allowable functions in this model'. So the 'uncountable set' in the countable model is, in fact, countable: it has one-to-one functions to other countable sets. But all these functions are banned from the Henkin model, so the model cannot see, internally, that the set is actually countable.

So we can *express* a lot of statements in Henkin semantics - "every bounded set has a supremum", "there exists an uncountable set" - but these don't actually *mean* what we thought they did.

## Full second order semantics

Having accepted the accusations of sneaking in set theory, and the disturbing fact that we had to bring in meaning and semantics (by excluding a lot of potential models), rather than relying on the syntax... what can we do with full second order semantics?

Well, for a start, finally nail down the natural numbers and the reals. With second order Peano arithmetic, including the second order induction axiom "every (non-empty) set of numbers has a least element", we know that we have only one (full) model, up to isomorphism: the natural numbers. Similarly, if we have the axioms for an ordered field and toss in "every bounded set has a supremum", then the reals are the only full model that stands up.

This immediately implies that full second order semantics are not complete, unlike Henkin semantics and first order theories. We can see this from the incompleteness result (though don't confuse incompleteness with non-completeness). Take second order Peano arithmetic. This has a Gödel statement G which is true but unprovable. But there is only one model of second order Peano arithmetic! So G is both unprovable and true in every model for the theory.

It may seem surprising that completeness fails for full semantics: after all, it is true in Henkin semantics, and every full model is also a Henkin model, so how can this happen? It derives from the restriction of possible models: completeness means that every sentence that is true in every Henkin model, must be provable. That does not mean that every sentence that is true in every full model, must also be provable. The G sentence is indeed false in some models - but only in Henkin models that are not full models.

The lack of completeness means that the truths of full second order logic cannot be enumerated - it has no complete proof procedure. This causes some to reject full second order logic on these grounds. Others argue that completeness is not the important factor, but rather decidability: listing all the provable statements might be light entertainment, but what we really want is an algorithm that can prove (or disprove) any given sentence. But Church and Turing showed that no such algorithm exists, in either first or second order logic: hence neither system can claim to be superior in this respect.

## Higher-order logic within full second order logic

Higher order logic is the next step up - quantifying over predicates of predicates, functions of functions. This would seem to make everything more complicated. However, there is a result due to Hintikka that any sentence in full higher order logic can be shown to be equivalent (in an effective manner) to a sentence in full second order logic, using many-sorting. So there is, in a certain sense, no need to go beyond, and the important debate is between first order and full second order logic.

## Conclusion

So, which logic is superior? It depends to some extent on what we need it for. Anything provable in first order logic can be proved in second order logic, so if we have a choice of proofs, picking the first order one is the better option. First order logic has more pleasing internal properties, such as the completeness theorem, and one can preserve this in second order via Henkin semantics without losing the ability to formally express certain properties. Finally, one needs to make use of set theory and semantics to define full second order logic, while first order logic (and Henkin semantics) get away with pure syntax.

On the other hand, first order logic is completely incapable of controlling its infinite models, as they multiply, uncountable and generally incomprehensible. If, rather than looking at the logic internally, we have a particular model in mind, we have to use second order logic for that. If we'd prefer not to use infinitely many axioms to express a simple idea, second-order logic is for us. And if we really want to properly express ideas like "every (non-empty) set has a least element", "every analytic function is uniquely defined by its power series" - and not just express them, but have them mean what we want them to mean - then full second order logic is essential.

**EDIT**: an addendum addresses the problem of using set theory (a first order theory) to define second order logic.