**Followup to**: Gödel's Completeness and Incompleteness Theorems

"So the question you asked me last time was, 'Why does anyone bother with first-order logic at all, if second-order logic is so much more powerful?'"

Right. If first-order logic can't talk about finiteness, or distinguish the size of the integers from the size of the reals, why even bother?

"The first thing to realize is that first-order theories can still have a *lot *of power. First-order arithmetic does narrow down the possible models by a lot, even if it doesn't narrow them down to a *single *model. You can prove things like the existence of an infinite number of primes, because *every *model of the first-order axioms has an infinite number of primes. First-order arithmetic is never going to prove anything that's *wrong *about the standard numbers. Anything that's true in *all *models of first-order arithmetic will also be true in the *particular *model we call the standard numbers."

Even so, if first-order theory is strictly weaker, why bother? Unless second-order logic is just as incomplete relative to third-order logic, which is weaker than fourth-order logic, which is weaker than omega-order logic -

"No, surprisingly enough - there's tricks for making second-order logic encode any proposition in third-order logic and so on. If there's a collection of third-order axioms that characterizes a model, there's a collection of second-order axioms that characterizes the same model. Once you make the jump to second-order logic, you're *done *- so far as anyone knows (so far as I know) there's *nothing *more powerful than second-order logic in terms of *which models it can characterize*."

Then if there's one spoon which can eat anything, why not just use the spoon?

"Well... this gets into complex issues. There are mathematicians who don't believe there *is *a spoon when it comes to second-order logic."

Like there are mathematicians who don't believe in infinity?

"Kind of. Look, suppose you *couldn't* use second-order logic - you belonged to a species that doesn't have second-order logic, or anything like it. Your species doesn't have any native mental intuition you could use to construct the notion of 'all properties'. And then suppose that, after somebody used first-order set theory to prove that first-order arithmetic had many possible models, you stood around shouting that you believed in only *one *model, what you called the *standard *model, but you couldn't explain what made this model different from any other model -"

Well... a lot of times, even in math, we make statements that genuinely mean something, but take a while to figure out how to define. I think somebody who talked about 'the numbers' would mean something even before second-order logic was invented.

"But here the hypothesis is that you belong to a species that *can't* invent second-order logic, or think in second-order logic, or anything like it."

Then I suppose you want me to draw the conclusion that this hypothetical alien is just standing there shouting about standardness, but its words don't mean anything because they have no way to pin down one model as opposed to another one. And I expect this species is also magically forbidden from talking about all possible subsets of a set?

"Yeah. They can't talk about the largest powerset, just like they can't talk about the smallest model of Peano arithmetic."

Then you could arguably deny that shouting about the 'standard' numbers would mean anything, to the members of this particular species. You might as well shout about the 'fleem' numbers, I guess.

"Right. Even if all the members of this species *did *have a built-in sense that there was a special model of first-order arithmetic that was fleemer than any other model, if that fleem-ness wasn't bound to anything else, it would be meaningless. Just a floating word. Or if all you could do was define fleemness as floobness and floobness as fleemness, you'd have a loop of floating words; and that might give you the impression that each particular word had a meaning, but the loop as a whole wouldn't be connected to reality. That's why it doesn't help to say that the standard model of numbers is the smallest among all possible models of Peano arithmetic, if you can't define 'smallest possible' any more than you can define 'connected chain' or 'finite number of predecessors'."

But second-order logic *does* seem to have consequences first-order logic doesn't. Like, what about all that Gödelian stuff? Doesn't second-order arithmetic semantically imply... its own Gödel statement? Because the unique model of second-order arithmetic doesn't contain any number encoding a proof of a contradiction from second-order arithmetic? Wait, now I'm confused.

"No, that's correct. It's not paradoxical, because there's no effective way of finding all the *semantic *implications of a collection of second-order axioms. There's no analogue of Godel's Completeness Theorem for second-order logic - no *syntactic* system for deriving *all *the semantic consequences. Second-order logic is *sound*, in the sense that anything syntactically provable from a set of premises, is true in any model obeying those premises. But second-order logic isn't *complete*; there are semantic consequences you can't derive. If you take second-order logic at face value, there's no effectively computable way of deriving all the consequences of what you say you 'believe'... which is a major reason some mathematicians are suspicious of second-order logic. What does it mean to believe something whose consequences you can't derive?"

But second-order logic clearly has *some* experiential consequences first-order logic doesn't. Suppose I build a Turing machine that looks for proofs of a contradiction from first-order Peano arithmetic. If PA's consistency isn't provable in PA, then by the Completeness Theorem there must exist nonstandard models of PA where this machine halts after finding a proof of a contradiction. So first-order logic doesn't tell me that this machine runs forever - maybe it has nonstandard halting times, i.e., it runs at all standard N, but halts at -2* somewhere along a disconnected chain. Only second-order logic tells me there's no proof of PA's inconsistency *and therefore* this machine runs forever. Only second-order logic tells me I should *expect to see this machine keep running*, and *not expect* - note falsifiability - that the machine ever halts.
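The machine in this thought experiment is nothing exotic: it just enumerates candidate proofs and checks each one. Here is a minimal sketch, in which `encodes_pa_contradiction` is a hypothetical stand-in for a real proof checker (a genuine checker is a finite, mechanical program, just a long one); since the stub rejects everything, the loop simply runs until its step budget is exhausted:

```python
from itertools import count, product

ALPHABET = "01"  # candidate proofs encoded as binary strings, for concreteness

def encodes_pa_contradiction(candidate: str) -> bool:
    """Hypothetical stand-in: a real checker would decode `candidate` and
    mechanically verify whether it is a valid PA derivation of '0 = 1'.
    This stub rejects every candidate."""
    return False

def search_for_inconsistency(max_steps: int):
    """Enumerate all candidate proofs in length order; halt iff one of
    them checks out as a PA proof of a contradiction."""
    steps = 0
    for length in count(1):                      # 1, 2, 3, ...
        for symbols in product(ALPHABET, repeat=length):
            if encodes_pa_contradiction("".join(symbols)):
                return "halted: found a proof of a contradiction"
            steps += 1
            if steps >= max_steps:
                return None  # budget exhausted; machine still running
```

Whether the real machine halts is exactly the question of whether some finite string passes a finite mechanical check - which is why 'this machine never halts' is the kind of claim the dialogue treats as an experiential consequence.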

"Sure, you just used a second-order theory to derive a consequence you didn't get from a first-order theory. But that's not the same as saying that you can *only *get that consequence using second-order logic. Maybe another first-order theory would give you the same prediction."

Like what?

"Well, for one thing, first-order *set theory* can prove that first-order *arithmetic *has a model. Zermelo-Fraenkel set theory can prove the existence of a set such that all the first-order Peano axioms are true about that set. It follows within ZF that sound reasoning on first-order arithmetic will never prove a contradiction. And since ZF can prove that the syntax of classical logic is sound -"

What does it mean for *set theory* to prove that *logic* is sound!?

"ZF can quote formulas as structured, and encode models as sets, and then represent a finite ZF-formula which says whether or not a set of quoted formulas is true about a model. ZF can then prove that no step of classical logic goes from premises that are true inside a set-model, to premises that are false inside a set-model. In other words, ZF can represent the idea 'formula X is semantically true in model Y' and 'these syntactic rules never produce semantically false statements from semantically true statements'."

Wait, then why can't ZF prove *itself *consistent? If ZF believes in all the axioms of ZF, and it believes that logic never produces false statements from true statements -

"Ah, but ZF can't prove that there exists any set which is a model of ZF, so it can't prove the ZF-axioms are consistent. ZF *shouldn't* be able to prove that some set is a model of ZF, because that's not true in all models. Many models of ZF don't contain any *individual *set well-populated enough for that one set to be a model of ZF all by itself."

I'm kind of surprised in a Gödelian sense that ZF *can* contain sets as large as the universe of ZF. Doesn't any given set have to be smaller than the whole universe?

"Inside *any particular model* of ZF, every set *within *that model is smaller than that model. But not all models of ZF are the same size; in fact, models of ZF of *every *size exist, by the Lowenheim-Skolem theorem. So you can have *some *models of ZF - some universes in which all the elements collectively obey the ZF-relations - containing individual sets which are larger than *other *entire models of ZF. A set that large is called a *Grothendieck universe* and assuming it exists is equivalent to assuming the existence of 'strongly inaccessible cardinals', sizes so large that you provably can't prove inside set theory that anything that large exists."

Whoa.

(Pause.)

But...

"But?"

I agree you've shown that *one* experiential consequence of second-order arithmetic - namely that a machine looking for proofs of inconsistency from PA, won't be seen to halt - can be derived from first-order set theory. Can you get *all* the consequences of second-order arithmetic in some particular first-order theory?

"You can't get all the *semantic *consequences of second-order logic, taken at face value, in *any *theory or *any *computable reasoning. What about the halting problem? Taken at face value, it's a semantic consequence of second-order logic that any given Turing machine either halts or doesn't halt -"

Personally I find it rather intuitive to imagine that a Turing machine either halts or doesn't halt. I mean, if I'm walking up to a machine and watching it run, telling me that its halting or not-halting 'isn't entailed by my axioms' strikes me as not describing any actual experience I can have with the machine. Either I'll see it halt eventually, or I'll see it keep running forever.

"My point is that the statements we *can *derive from the syntax of current second-order logic is limited by that syntax. And by the halting problem, we shouldn't ever expect a computable syntax that gives us *all *the semantic consequences. There's no possible theory you can *actually use* to get a correct advance prediction about whether an arbitrary Turing machine halts."

Okay. I agree that no computable reasoning, on second-order logic or anything else, should be able to solve the halting problem. Unless time travel is possible, but even then, you shouldn't be able to solve the expanded halting problem for machines that use time travel.

"Right, so the *syntax *of second-order logic can't prove everything. And in fact, it turns out that, in terms of what you can *prove syntactically* using the standard syntax, second-order logic is identical to a many-sorted first-order logic."

Huh?

"Suppose you have a first-order logic - one that doesn't claim to be able to quantify over all possible predicates - which does allow the universe to contain two different sorts of things. Say, the logic uses lower-case letters for all type-1 objects and upper-case letters for all type-2 objects. Like, '∀x: x = x' is a statement over all type-1 objects, and '∀Y: Y = Y' is a statement over all type-2 objects. But aside from that, you use the same syntax and proof rules as before."

Okay...

"Now add an element-relation x∈Y, saying that x is an element of Y, and add some first-order axioms for making the type-2 objects behave like collections of type-1 objects, including axioms for making sure that most describable type-2 collections exist - i.e., the collection X containing just x is guaranteed to exist, and so on. What you can *prove syntactically* in this theory will be identical to what you can prove using the standard syntax of second-order logic - even though the theory doesn't claim that *all possible* collections of type-1s are type-2s, and the theory will have models where many 'possible' collections are missing from the type-2s."

Wait, now you're saying that second-order logic is no more powerful than first-order logic?

"I'm saying that the supposed power of second-order logic derives from *interpreting* it a particular way, and taking on faith that when you quantify over *all properties*, you're actually talking about all properties."

But then second-order arithmetic is no more powerful than first-order arithmetic in terms of what it can actually *prove*?

"2nd-order arithmetic is *way *more powerful than first-order arithmetic. But that's because first-order set theory is more powerful than arithmetic, and adding the syntax of second-order logic corresponds to adding axioms with set-theoretic properties. In terms of which consequences can be *syntactically *proven, second-order arithmetic is more powerful than first-order arithmetic, but *weaker* than first-order set theory. First-order set theory can prove the existence of a model of second-order arithmetic - ZF can prove there's a collection of numbers and sets of numbers which models a many-sorted logic with syntax corresponding to second-order logic - and so ZF can prove second-order arithmetic consistent."

But first-order logic, including first-order set theory, can't even *talk about* the standard numbers!

"Right, but first-order set theory can *syntactically prove *more statements about 'numbers' than second-order arithmetic can prove. And when you combine that with the *semantic *implications of second-order arithmetic not being computable, and with any second-order logic being syntactically identical to a many-sorted first-order logic, and first-order logic having neat properties like the Completeness Theorem... well, you can see why some mathematicians would want to give up entirely on this whole second-order business."

But if you deny second-order logic you *can't even say the word 'finite'*. You would have to believe the word 'finite' was a *meaningless noise*.

"You'd define finiteness relative to whatever first-order model you were working in. Like, a set might be 'finite' only on account of the model not containing any one-to-one mapping from that set onto a smaller subset of itself -"

But that set wouldn't *actually* be finite. There wouldn't actually be, like, only a billion objects in there. It's just that all the mappings which could *prove* the set was infinite would be mysteriously missing.
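The definition being gestured at here is Dedekind's: a set is infinite iff some one-to-one map sends it onto a proper subset of itself. For a genuinely finite set, a brute-force search over all self-maps confirms that no such witness exists - the nonstandard situation is one where the witnessing maps are 'missing' from the model. A sketch, with the function name invented for the example:

```python
from itertools import product

def is_dedekind_infinite(s):
    """True iff some one-to-one map sends s onto a PROPER subset of
    itself. For genuinely finite s this is always False: an injective
    self-map of a finite set must hit every element."""
    elems = sorted(s)
    for values in product(elems, repeat=len(elems)):  # every map s -> s
        image = set(values)
        if len(image) == len(elems) and image != set(elems):
            return True   # injective, yet onto a proper subset
    return False
```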

"According to some *other *model, maybe. But since there is no one true model -"

How is this not crazy talk along the lines of 'there is no one true reality'? Are you saying there's no *really* smallest set of numbers closed under succession, without all the extra infinite chains? Doesn't talking about how these theories have multiple possible models, imply that those possible models are *logical thingies* and one of them actually *does* contain the largest powerset and the smallest integers?

"The mathematicians who deny second-order logic would see that reasoning as invoking an implicit background universe of set theory. Everything you're saying makes sense *relative *to some *particular model* of set theory, which would contain possible models of Peano arithmetic as sets, and could look over those sets and pick out the smallest *in that model*. Similarly, that set theory could look over a proposed model for a many-sorted logic, and say whether there were any subsets *within *the larger universe which were missing from the many-sorted model. Basically, your brain is insisting that it lives inside some *particular *model of set theory. And then, from that standpoint, you could look over some *other *set theory and see how it was missing subsets that *your* theory had."

Argh! No, damn it, I live in the set theory that *really does* have all the subsets, with no mysteriously missing subsets or mysterious extra numbers, or denumerable collections of all possible reals that could like totally map onto the integers if the mapping that did it hadn't gone missing in the Australian outback -

"But *everybody *says that."

Okay...

"Yeah?"

Screw set theory. I live in the *physical universe* where when you run a Turing machine, and keep watching forever *in the physical universe*, you never *experience a time* where that Turing machine outputs a proof of the inconsistency of Peano Arithmetic. Furthermore, I live in a universe where space is *actually* composed of a real field and space is *actually* infinitely divisible and contains *all* the points between A and B, rather than space only containing a denumerable number of points whose existence can be proven from the first-order axiomatization of the real numbers. So to talk about *physics* - forget about mathematics - I've got to use second-order logic.

"Ah. You know, that particular response is not one I have seen in the previous literature."

Yes, well, I'm not a pure mathematician. When I ask whether I want an Artificial Intelligence to think in second-order logic or first-order logic, I wonder how that affects what the AI does in *the actual physical universe*. Here in the *actual physical universe* where times are followed by successor times, I *strongly suspect* that we should only expect to experience *standard* times, and not experience any nonstandard times. I think time is *connected*, and global connectedness is a property I can only talk about using second-order logic. I think that every *particular* time is finite, and yet time has no upper bound - that there are all finite times, but only finite times - and that's a property I can only characterize using second-order logic.

"But if you can't ever tell the difference between standard and nonstandard times? I mean, *local *properties of time can be described using first-order logic, and you can't directly *see *global properties like 'connectedness' -"

But I *can *tell the difference. There are only nonstandard times where a proof-checking machine, running forever, outputs a proof of inconsistency from the Peano axioms. So I don't expect to experience seeing a machine do that, since I expect to experience only standard times.

"Set theory can also prove PA consistent. If you use set theory to define time, you similarly won't expect to see a time where PA is proven inconsistent - those nonstandard integers don't exist in any model of ZF."

Why should I anticipate that my physical universe is restricted to having only the nonstandard times allowed by a *more* powerful set theory, instead of nonstandard times allowed by first-order arithmetic? If I then talk about a nonstandard time where a proof-enumerating machine proves ZF inconsistent, will you say that only nonstandard times allowed by some still more powerful theory can exist? I think it's clear that the way you're deciding which experimental outcomes you'll have to excuse, is by secretly assuming that *only standard times* exist regardless of which theory is required to narrow it down.

"Ah... hm. Doesn't physics say this universe is going to run out of negentropy before you can do an infinite amount of computation? Maybe there's only a bounded amount of time, like it stops before googolplex or something. That can be characterized by first-order theories."

We don't know that for certain, and I wouldn't want to build an AI that just *assumed* lifespans had to be finite, in case it was wrong. Besides, should I use a different *logic *than if I'd found myself in Conway's Game of Life, or something else really infinite? Wouldn't the same sort of creatures evolve in that universe, having the same sort of math debates?

"Perhaps no universe like that *can *exist; perhaps only finite universes can exist, because only finite universes can be uniquely characterized by first-order logic."

You just used the word 'finite'! Furthermore, taken at face value, our own universe *doesn't* look like it has a finite collection of entities related by first-order logical rules. Space and time both look like infinite collections of points - continuous collections, which is a second-order concept - and then to characterize the *size* of that infinity we'd need second-order logic. I mean, by the Löwenheim-Skolem theorem, there aren't just *denumerable* models of first-order axiomatizations of the reals, there are also *unimaginably large cardinal infinities* which obey the same premises, and that's a possibility straight out of H. P. Lovecraft. Especially if there are any *things* hiding in the *invisible cracks of space*.

"How could *you *tell if there were inaccessible cardinal quantities of points hidden inside a straight line? And anything that *locally *looks continuous each time you try to split it at a describable point, can be axiomatized by a first-order schema for continuity."

That brings up another point: Who'd *really* believe that the reason Peano arithmetic works on physical times, is because that whole infinite axiom *schema* of induction, containing for every Φ a *separate* rule saying...

(Φ(0) ∧ (∀x: Φ(x) → Φ(Sx))) → (∀n: Φ(n))

...was used to specify our universe? How improbable would it be for an *infinitely long* list of rules to be true, if there wasn't a unified reason for all of them? It seems much more likely that the *real* reason first-order PA works to describe time, is that all *properties* which are true at a starting time and true of the successor of any time where they're true, are true of all later times; and this generalization over *properties* makes induction hold for *first-order formulas* as a special case. If my native thought process is first-order logic, I wouldn't see the connection between each individual formula in the axiom schema - it would take separate evidence to convince me of each one - they would feel like independent mathematical facts. But after doing *scientific* induction over the fact that many *properties* true at zero, with succession preserving truth, seem to be true everywhere - after generalizing the simple, compact *second-order* theory of numbers and times - *then* you could invent an infinite first-order theory to approximate it.
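The contrast can be written down compactly in a proof assistant. A sketch in Lean 4 syntax - the axiom names are invented, and only two of the schema's infinitely many instances are shown:

```lean
-- Second-order induction: ONE axiom, quantifying over all predicates P.
axiom ind₂ : ∀ (P : Nat → Prop),
  P 0 → (∀ n, P n → P (n + 1)) → ∀ n, P n

-- First-order induction: a SCHEMA, a separate axiom for each formula Φ.
-- Two of the infinitely many instances:
axiom ind_add_zero :
  (0 + 0 = 0 ∧ (∀ n : Nat, n + 0 = n → (n + 1) + 0 = n + 1)) →
  ∀ n : Nat, n + 0 = n

axiom ind_zero_le :
  (0 ≤ 0 ∧ (∀ n : Nat, 0 ≤ n → 0 ≤ n + 1)) →
  ∀ n : Nat, 0 ≤ n
```

A first-order reasoner sees `ind_add_zero` and `ind_zero_le` as unrelated facts; `ind₂` is the single generalization that yields each of them by instantiating P.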

"Maybe that just says you need to adjust whatever theory of scientific induction you're using, so that it can more easily induct infinite axiom schemas."

But why the heck would you *need* to induct infinite axiom schemas in the first place, if Reality *natively* ran on first-order logic? Isn't it far more likely that the way we ended up with these infinite lists of axioms was that Reality was manufactured - forgive the anthropomorphism - that Reality was manufactured using an underlying schema in which time is a *connected* series of events, and space is a *continuous* field, and these are properties which happen to require second-order logic for humans to describe? I mean, if you picked out first-order theories at random, what's the chance we'd end up inside an infinitely long axiom schema that just *happened* to look like the projection of a compact second-order theory? Aren't we ignoring a sort of *hint*?

"A hint to what?"

Well, I'm not that sure myself, at this depth of philosophy. But I would currently say that finding ourselves in a physical universe where times have successor times, and space looks continuous, seems like a possible *hint* that the Tegmark Level IV multiverse - or the way Reality was manufactured, or whatever - might look more like *causal universes characterizable by compact second-order theories* than *causal universes characterizable by first-order theories*.

"But since any second-order theory can just as easily be *interpreted *as a many-sorted first-order theory with quantifiers that can range over either elements or sets of elements, how could using second-order syntax actually *improve *an Artificial Intelligence's ability to handle a reality like that?"

Good question. One obvious answer is that the AI would be able to induct what *you* would call an infinite axiom schema, as a single axiom - a simple, finite hypothesis.

"There's all *sorts *of obvious hacks to scientific induction of first-order axioms which would let you assign nonzero probability to computable infinite sequences of axioms -"

Sure. So beyond that... I would currently guess that the basic assumption behind 'behaving as if' second-order logic is true, says that the AI should act as if only the 'actually smallest' numbers will ever appear in physics, relative to some 'true' mathematical universe that it thinks it lives in, but knows it can't fully characterize. Which is roughly what I'd say human mathematicians do when they take second-order logic at face value; they assume that there's some *true* mathematical universe in the background, and that second-order logic lets them talk about it.

"And what behaviorally, experimentally distinguishes the hypothesis, 'I live in the true ultimate math with fully populated powersets' from the hypothesis, 'There's some random model of first-order set-theory axioms I happen to be living in'?"

Well... one behavioral consequence is suspecting that your time obeys an infinitely long list of first-order axioms with induction schemas for *every formula*. And then moreover believing that you'll never experience a time when a proof-checking machine outputs a proof that Zermelo-Fraenkel set theory is inconsistent - even though there's *provably* some models with times like that, which fit the axiom schema you just inducted. That sounds like secretly believing that there's a background 'true' set of numbers that you think characterizes physical time, and that it's the *smallest* such set. Another suspicious behavior is that as soon as you suspect Zermelo-Fraenkel set theory is consistent, you suddenly expect not to experience any *physical* time which ZF proves isn't a standard number. You don't think you're in the nonstandard time of some weaker theory like Peano arithmetic. You think you're in the minimal time expressible by *any and all theories*, so as soon as ZF can prove some number doesn't exist in the minimal set, you think that 'real time' lacks such a number. All of these sound like behaviors you'd carry out if you thought there was a single 'true' mathematical universe that provided the best model for describing all *physical* phenomena, like time and space, which you encounter - and believing that this 'true' backdrop used the *largest* powersets and *smallest* numbers.

"How *exactly *do you formalize all that reasoning, there? I mean, how would you *actually *make an AI reason that way?"

Er... I'm still working on that part.

"That makes your theory a bit hard to criticize, don't you think? Personally, I wouldn't be surprised if any such *formalized* reasoning looked just like believing that you live inside a first-order set theory."

I suppose I wouldn't be *too* surprised either - it's hard to argue with the results on many-sorted logics. But if comprehending the physical universe is best done by assuming that real phenomena are characterized by a 'true' mathematics containing *the* powersets and *the* natural numbers - and thus expecting that no mathematical model we can formulate will ever contain any larger powersets or smaller numbers than those of the 'true' backdrop to physics - then I'd call that a moral victory for second-order logic. In first-order logic we aren't even supposed to be able to talk about such things.

"Really? To me that sounds like believing you live inside a model of first-order set theory, and believing that all models of any theories *you *can invent must *also *be sets in the larger model. You can prove the Completeness Theorem inside ZF plus the Axiom of Choice, so ZFC already proves that all consistent theories have models which are sets, although of course it can't prove that ZFC itself is such a theory. So - anthropomorphically speaking - no model of ZFC *expects *to encounter a theory that has fewer numbers or larger powersets than itself. No model of ZFC expects to encounter any quoted-model, any set that a quoted theory entails, which contains larger powersets than the ones in its own Powerset Axiom. A first-order set theory can even make the leap from the finite statement, 'P is true of all my subsets of X', to believing, 'P is true of all my subsets of X that can be described by this denumerable collection of formulas' - it can encompass the jump from a single axiom over 'all my subsets', to a quoted axiom *schema *over formulas. I'd sum all that up by saying, 'second-order logic is how first-order set theory feels from the inside'."

Maybe. Even in the event that neither human nor superintelligent cognition will ever be able to 'properly talk about' unbounded finite times, global connectedness, particular infinite cardinalities, or true spatial continuity, it doesn't follow that Reality is similarly limited in which physics it can privilege.

Part of the sequence *Highly Advanced Epistemology 101 for Beginners*

Previous post: "Gödel's Completeness and Incompleteness Theorems"

I kept expecting someone to object that "this Turing machine never halts" doesn't count as a prediction, since you can never have observed it to run forever.

Then the statement "this Turing machine halts for every input" doesn't count as a prediction either, because you can never have observed it for every input, even if the machine is just "halt". And the statement "this Turing machine eventually halts" is borderline, because its negation doesn't count as a prediction. What does this give us?

There is a lot of pseudo-arguing going on here. I'm a mathematician who specializes in logic and foundations, and I think I can settle most of the controversy with the following summary:

1) Second-order logic is a perfectly acceptable way to formalize mathematics-in-general, stronger than Peano Arithmetic but somewhat weaker than ZFC (comparable in proof-theoretic strength to "Type Theory" or "Mac Lane Set Theory" or "Bounded Zermelo Set Theory", which means sufficient for almost all mainstream mathematical research, though inadequate for results which depend on Fraenkel's Replacement Axiom).

2) First-order logic has better syntactic properties than second-order logic, in particular it satisfies Gödel's Completeness Theorem, which guarantees that the syntax is adequate for the semantics. Second-order logic has better semantic properties than first-order logic, in particular avoiding nonstandard models of the integers and of set-theoretical universes V_k for various ordinals k.

3) The critics of second-order logic say that they don't understand the concept of "finite integer" and "all subsets" and that the people using second-order logic are m... (read more)

(Since the average LW reader may not know whether to trust that this commenter is a mathematician specializing in logic and foundations, I remark that the above summary sounds much like the papers I've read on second-order logic. Though 'pseudo-arguing' is an odd way to describe the ancient expositional tradition of dialogue.)

One of the participants in this dialogue seems too concerned with pinning down models uniquely and also seems too convinced he knows what model he's in. Suppose we live in a simulation which is being run by superbeings who have access to oracles that can tell them when Turing machines are attempting to find contradictions in PA. Whenever they detect that something in the simulation is attempting to find contradictions in PA, that part of the simulation mysteriously stops working after the billionth or trillionth step or something. Then running such Turing machines can't tell us whether we live in a universe where PA is consistent or not.

I also wish both participants in the dialogue would take ultrafinitism more seriously. It is not as wacky as it sounds, and it seems like a good idea to be conservative about such things when designing AI.

Edit: Here is an ultrafinitist fable that might be useful or at least amusing, from the link:

How long do you expect to stay an ultrafinitist?

Until I'm destroyed, of course!

... but since Qiaochu asked that we take ultrafinitism seriously, I'll give a serious answer: something else will probably replace ultrafinitism as my preferred (maximum a posteriori) view of math and the world within 20 years or so. That is, I expect to determine that the question of whether ultrafinitism is true is not quite the right question to be asking, and have a better question by then, with a different best guess at the answer... just because similar changes of perspective have happened to me several times already in my life.

Is that because 2005 is as far from the present time as you dare to go?

I think the point of the fable is that Yesenin-Volpin was counting to each number in his head before declaring whether it was 'real' or not, so if you asked him whether 2^50 was 'real' he'd just be quiet for a really really long time.

As I understand it, this is precisely the kind of statement that ultrafinitists do not believe.

I confess that I lost track of the reasoning about which-order-logic-can-do-what-and-why somewhere in the last post or so. I'm also not clear how and why it is important in understanding this "Highly Advanced Epistemology 101 for Beginners". I'm wondering whether (or how) the ability to "properly talk about unbounded finite times, global connectedness, particular infinite cardinalities, or true spatial continuity" is essential or even important for (F)AI research.

This and previous articles in this series emphasize attaching meaning to sequences of symbols via discussion and gesturing toward models. That strategy doesn't seem compatible with your article regarding sheep and pebbles. Isn't there a way to connect sequences of symbols to sheep (and food and similar real-world consequences) directly via a discipline of "symbolcraft"?

I thought pebblecraft was an excellent description of how finitists and formalists think about confusing concepts like uncountability or actual infinity: Writing down "... is uncountably infinite ..." is part of our pebblecraft, and we have some confidence that when we get to the bottom line of the formal argument, we'll be able to interpret the results "in the usual way". The fact that someone (perhaps an alien) might watch our pebblecraft and construe us (in medias res) to be referring to (abstract, mathematical) things other than the (abstract, mathematical) things that we usually construe ourselves to be referring to doesn't stop the pebblecraft magic from working.

Feeling our way into a new formal system is part of our (messy, informal) pebblecraft. Sometimes people propose formal systems starting with their intended semantics (roughly, model theory). Sometimes people propose formal systems starting with introduction and elimination rules (roughly, proof theory). If the latter, people sometimes look for a semantics to go along with the syntax (and vice versa, of course).

For example, lambda calculus started with rules for performing beta reduction. In talking about lambda calculus, people refer to it as "functions from functions to functions". Mathematicians knew that there was no nontrivial set S with a bijection to the set of all functions from S to S. So something else must be going on. Dana Scott invented domain theory partially to solve this problem. Domains have additional structure, such that some domains D do have bijections with the set of all structure-preserving maps from D to D. http://en.wikipedia.org/wiki/Domain_theory

Similarly, modal logics started with just proof theory, and then Saul Kripke invented a semantics for them. http://en.wikipedia.org/wiki/Kripke_semantics

There's always a syntactical model which you can ... (read more)
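A quick illustration of the set-theoretic puzzle, using Python closures as stand-ins for untyped lambda terms (a sketch, not Scott's construction): every value is a function from functions to functions, so self-application, which is impossible for any nontrivial set S mapped onto all functions from S to S, is unremarkable here.

```python
# Untyped lambda terms as Python closures: everything is a function
# from functions to functions, so a term may be applied to itself.
identity = lambda f: f
self_apply = lambda f: f(f)

assert self_apply(identity) is identity

# Curry's fixed-point combinator (eta-expanded for Python's strict
# evaluation) builds recursion out of nothing but application -- the
# kind of construction that pushed Scott beyond plain sets.
Y = lambda g: (lambda f: g(lambda x: f(f)(x)))(lambda f: g(lambda x: f(f)(x)))
fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120
```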

Suggestion: if your goal is AI, maybe abandon logic and set theory in favor of type theory. It seems better suited to computers anyway thanks to computational trinitarianism. The problem of pinning down models uniquely seems to be solved by something called the univalence axiom, which I currently don't understand at all.

Is this true because of the absent C, true in the sense of the (larger) model, or just false?

Zah? The linked answer says,

It goes on to say that this need not hold when we interpret "model of ZFC" in the sense of the (outer) model.

This discussion seems to equate the question of what language we want an AGI to most often reason with, and what language we think our universe, or (Tegmark?-)reality in general, is likely 'written' in. But there may be practical virtues of one language over the other that do not translate into distinct reasons to believe that reality is written in the more practical language; so the two questions should be kept separate to some extent. 'Think like reality' is a heuristic for helping alleviate dissonance between our beliefs, and better aligning our beliefs... (read more)

What I gather is that the real question here isn't so much "First or second order logic?", but "Is it sensible to try to convey/instill the idea of 'largest powerset' to/in an AI, and if so how?" "Largest powerset" seems both meaningful and simple (for the purpose of applying Occam's razor), but we don't know how to capture its meaning using syntactic proof rules (or rather, we know that we can't). Maybe we can capture the meaning some other way, for example in the form of a "math intuition module" that can arrive at "probabilistic" conclusions about mathematical structures that are defined in part by "largest powerset"?

What I got out of this post is that second-order logic is a lie.

I'm still worried about the word "model". You talk about models of second-order logic, but what is a model of second-order logic? Classically speaking, it's a set, and you do talk about ZF proving the existence of models of SOL. But if we need to use set theory to reason about the semantic properties of SOL, then are we not working within a first-order set theory? And hence we're vulnerable to unexpected "models" of that set theory affecting the theorems we prove about SOL within it. It seems like you're treating "model" a... (read more)

My reply got a bit long:

http://lesswrong.com/r/discussion/lw/g81/re_secondorder_logic_the_controversy/

Was there supposed to be a word after "structured" and before the comma?

Actually, in NBG you get explicitness of assumptions and of first-order logic, and at the same time the axiom of induction is a single axiom.

Actually, if you care about cardinality, you need a well-specified set theory more than just axioms for the reals. The second-order theory has a unique model, yes, but it relies on the notion of "all" subsets, so it just smuggles in some set theory without specifying it. As I understand it, this was the motivation for Henkin semantics.

And if you look for a set theory (explicit or implicit) for reals as used in physics, I am not e... (read more)

For what it's worth, mathematicians care about first-order logic because it satisfies nicer theorems than second-order logic, in the same way that mathematicians care about Lebesgue integration because it satisfies nicer theorems than Henstock-Kurzweil integration. The statements that mathematicians talk about in many contexts are first-order anyway, so we might as well figure out what there is to... (read more)

Eliezer - why is it that you seem so adamant that our physical reality is better modeled by the standard natural numbers rather than some supernatural model? If we try to see whether a physical implementation of a Turing machine provides an inconsistency in PA, then of course it won't, because it will run for fewer than Ackermann(20) iterations, and all number theories say that it won't halt by then. If we instead lived in something that looked like the game of life, and we did expect it to last infinitely long, why exactly would we expect time steps to be in... (read more)

Too many contradictions, I think: it should read "deny that shouting about the standard numbers would mean anything".

(Removed wrong edit here)

When I learned about Löwenheim-Skolem, I just thought "lol first-order logic sux". I didn't question whether distinguishing among infinite cardinalities was a meaningful thing for humans to do. Eliezer is clearly a higher-order student of logic.

As for the complexity of logical theories (for purposes of deciding how much to believe in them), it should be proportional to the size of the minimal Turing machine that checks whether something is an axiom or not. (Of course, in the case of a finite list, approximating this by the total length of the axioms is reasonable, because a Turing machine that does "check if input is equal to the following set:" followed by the set adds only a constant to the size of the set -- but that approximation breaks down badly for infinite axiom schemas.)
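A sketch of the point in Python, using a toy formula syntax invented here purely for illustration: the recognizer for a finite axiom list hard-codes the list, so it grows with the axioms, while a recognizer for the infinite first-order induction schema stays small, since it only pattern-matches the template.

```python
import re

# A finite axiom list: the recognizer hard-codes the list, so its
# size grows with the total length of the axioms.
FINITE_AXIOMS = {"0 != S(x)", "S(x) = S(y) -> x = y"}

def is_axiom_finite(s: str) -> bool:
    return s in FINITE_AXIOMS

# An infinite schema: first-order induction has one instance per
# formula phi, yet the recognizer stays short -- it only matches the
# template.  (Toy syntax; a real checker would parse formulas.)
INDUCTION = re.compile(
    r"^\((.+)\(0\) & Ax\.\((.+)\(x\) -> (.+)\(S\(x\)\)\)\) -> Ax\.(.+)\(x\)$"
)

def is_induction_instance(s: str) -> bool:
    m = INDUCTION.match(s)
    return bool(m) and len(set(m.groups())) == 1  # same phi throughout

assert is_axiom_finite("0 != S(x)")
assert is_induction_instance("(P(0) & Ax.(P(x) -> P(S(x)))) -> Ax.P(x)")
assert not is_induction_instance("(P(0) & Ax.(Q(x) -> Q(S(x)))) -> Ax.P(x)")
```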

hey that was really interesting about whether or not "all properties" in second order logic falls victim to the same weirdness as the idea of a natural number does in first order logic. I never thought of that before.

oh yeah do you have a reference for "there's tricks for making second-order logic encode any proposition in third-order logic and so on."? I would like to study that too, even though you've already given me a lot of things to read about already!

That might sound weird, but do we have any evidence that our time follows the standard numbers (or a continuous version of them) only? Is it even possible to get such evidence? Maybe our Turing machine (looking for contradictions in PA) stops at -2* - "we" cannot see it, as "we" are on the standard numbers only, experiencing only effects of previous standard numbers.

"Doesn't physics say this universe is going to run out of negentropy before you can do an infinite amount of computation?" Actually, there is a proposal that could create a computer that runs forever.

"If there's a collection of third-order axioms that characterizes a model, there's a collection of second-order axioms that characterizes the same model. Once you make the jump to second-order logic, you're done - so far as anyone knows (so far as I know) there's nothing more powerful than second-order logic in terms of which models it can characterize."

You can clearly state the Continuum Hypothesis in higher-order logic, while a second-order formulation seems elusive. Are you sure about this?

Second-order logic may be able to express all expressible statements of third-order logic, fourth-order logic, and so on up to any finite Nth order, but perhaps there exist ∞th-order statements that second-order logic cannot express. Admittedly, such statements could never be fully decidable and would thus be, at best, semidecidable or co-semidecidable. This may not be complete, but science works the same way.