Proofs, Implications, and Models

Followup to: Causal Reference

From a math professor's blog:

One thing I discussed with my students here at HCSSiM yesterday is the question of what is a proof.

They’re smart kids, but completely new to proofs, and they often have questions about whether what they’ve written down constitutes a proof. Here’s what I said to them.

A proof is a social construct – it is what we need it to be in order to be convinced something is true. If you write something down and you want it to count as a proof, the only real issue is whether you’re completely convincing.

This is not quite the definition I would give of what constitutes "proof" in mathematics - perhaps because I am so used to isolating arguments that are convincing, but ought not to be.

Or here again, from "An Introduction to Proof Theory" by Samuel R. Buss:

There are two distinct viewpoints of what a mathematical proof is. The first view is that proofs are social conventions by which mathematicians convince one another of the truth of theorems. That is to say, a proof is expressed in natural language plus possibly symbols and figures, and is sufficient to convince an expert of the correctness of a theorem. Examples of social proofs include the kinds of proofs that are presented in conversations or published in articles. Of course, it is impossible to precisely define what constitutes a valid proof in this social sense; and, the standards for valid proofs may vary with the audience and over time. The second view of proofs is more narrow in scope: in this view, a proof consists of a string of symbols which satisfy some precisely stated set of rules and which prove a theorem, which itself must also be expressed as a string of symbols. According to this view, mathematics can be regarded as a 'game' played with strings of symbols according to some precisely defined rules. Proofs of the latter kind are called "formal" proofs to distinguish them from "social" proofs.

In modern mathematics there is a much better answer that could be given to a student who asks, "What exactly is a proof?", which does not match either of the above ideas. So:

Meditation: What distinguishes a correct mathematical proof from an incorrect mathematical proof - what does it mean for a mathematical proof to be good? And why, in the real world, would anyone ever be interested in a mathematical proof of this type, or obeying whatever goodness-rule you just set down? How could you use your notion of 'proof' to improve the real-world efficacy of an Artificial Intelligence?

...
...
...

Consider the following syllogism:

  1. All kittens are little;
  2. Everything little is innocent;
  3. Therefore all kittens are innocent.

Here's four mathematical universes, aka "models", in which the objects collectively obey or disobey these three rules:

Model A: little innocent kitten, little innocent kitten, big evil dog, little innocent dog
Model B: little innocent kitten, big evil kitten, big evil dog, little innocent dog
Model C: little innocent kitten, little evil kitten, big innocent dog, little innocent dog
Model D: little innocent kitten, big innocent kitten, big evil dog, little evil dog

There are some models where not all kittens are little, like models B and D. And there are models where not everything little is innocent, like models C and D. But there are no models where all kittens are little, and everything little is innocent, and yet there exists a guilty kitten. Try as you will, you won't be able to imagine a model like that. Any model containing a guilty kitten has at least one kitten that isn't little, or at least one little entity that isn't innocent - no way around it.

Thus, the jump from 1 & 2, to 3, is truth-preserving: in any universe where premises (1) and (2) are true to start with, the conclusion (3) is true of the same universe at the end.

Which is what makes the following implication valid, or, as people would usually just say, "true":

"If all kittens are little and everything little is innocent, then all kittens are innocent."
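This truth-preservation claim can be checked by brute force. The following sketch (my own illustration, not part of the original argument) enumerates every possible model over a small domain of objects and confirms that no model satisfies both premises while containing a guilty kitten:

```python
from itertools import product

# Each "object" is a triple of booleans: (is_kitten, is_little, is_innocent).
# A "model" assigns these attributes to every object in a small domain.
# We enumerate all 8^3 = 512 models over a 3-object domain and look for a
# counterexample: a model where both premises hold but some kitten is guilty.

def all_models(domain_size=3):
    attrs = list(product([False, True], repeat=3))  # (kitten, little, innocent)
    return product(attrs, repeat=domain_size)

def premises_hold(model):
    all_kittens_little = all(little for (kitten, little, _) in model if kitten)
    all_little_innocent = all(innocent for (_, little, innocent) in model if little)
    return all_kittens_little and all_little_innocent

def conclusion_holds(model):
    return all(innocent for (kitten, _, innocent) in model if kitten)

counterexamples = [m for m in all_models()
                   if premises_hold(m) and not conclusion_holds(m)]
print(len(counterexamples))  # 0: the implication holds in every model
```

Try as you will to construct a counterexample model, the count stays at zero; that exhaustive absence is what the validity of the implication means.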

The advanced mainstream view of logic and mathematics (i.e., the mainstream view among professional scholars of logic as such, not necessarily among all mathematicians in general) is that when we talk about math, we are talking about which conclusions follow from which premises. The "truth" of a mathematical theorem - or to not overload the word 'true' meaning comparison-to-causal-reality, the validity of a mathematical theorem - has nothing to do with the physical truth or falsity of the conclusion in our world, and everything to do with the inevitability of the implication. From the standpoint of validity, it doesn't matter a fig whether or not all kittens are innocent in our own universe, the connected causal fabric within which we are embedded. What matters is whether or not you can prove the implication, starting from the premises; whether or not, if all kittens were little and all little things were innocent, it would follow inevitably that all kittens were innocent.


To paraphrase Mayor Daley, logic is not there to create truth, logic is there to preserve truth. Let's illustrate this point by assuming the following equation:

x = y = 1

...which is true in at least some cases.  E.g. 'x' could be the number of thumbs on my right hand, and 'y' the number of thumbs on my left hand.

Now, starting from the above, we do a little algebra:

1.  x = y = 1                     (starting premise)
2.  x² = xy                       (multiply both sides by x)
3.  x² - y² = xy - y²             (subtract y² from both sides)
4.  (x + y)(x - y) = y(x - y)     (factor)
5.  x + y = y                     (cancel)
6.  2 = 1                         (substitute 1 for x and 1 for y)

 

We have reached the conclusion that in every case where x and y are equal to 1, 2 is equal to 1. This does not seem like it should follow inevitably.

You could try to find the flaw just by staring at the lines... maybe you'd suspect that the error was between line 3 and line 4, following the heuristic of first mistrusting what looks like the most complicated step... but another way of doing it would be to try evaluating each line to see what it says concretely - for example, multiplying out x² = xy in line 2 to get (1²) = (1 * 1), or 1 = 1. Let's try doing this for each step, and then afterward mark whether each equation looks true or false:

1.  x = y = 1                     →  1 = 1    (true)
2.  x² = xy                       →  1 = 1    (true)
3.  x² - y² = xy - y²             →  0 = 0    (true)
4.  (x + y)(x - y) = y(x - y)     →  0 = 0    (true)
5.  x + y = y                     →  2 = 1    (false)
6.  2 = 1                         →  2 = 1    (false)

 

Logic is there to preserve truth, not to create truth. Whenever we take a logically valid step, we can't guarantee that the premise is true to start with, but if the premise is true the conclusion should always be true. Since we went from a true equation to a false equation between step 4 and step 5, we must've done something that is in general invalid.

In particular, we divided both sides of the equation by (x - y).

Which is invalid, i.e. not universally truth-preserving, because (x - y) might be equal to 0.

And if you divide both sides by 0, you can get a false statement from a true statement. 3 * 0 = 2 * 0 is a true equation, but 3 = 2 is a false equation, so it is not allowable in general to cancel any factor if the factor might equal zero.
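The "evaluate each line and find where a true equation first turns false" procedure is mechanical enough to automate. A minimal sketch (the step labels and lambdas are my own transcription of the derivation above):

```python
# Evaluate both sides of each step at x = y = 1, and record which
# equations come out true. The first false line marks the invalid step.
steps = [
    ("x = y",                     lambda x, y: (x, y)),
    ("x^2 = xy",                  lambda x, y: (x**2, x * y)),
    ("x^2 - y^2 = xy - y^2",      lambda x, y: (x**2 - y**2, x * y - y**2)),
    ("(x + y)(x - y) = y(x - y)", lambda x, y: ((x + y) * (x - y), y * (x - y))),
    ("x + y = y",                 lambda x, y: (x + y, y)),
    ("2 = 1",                     lambda x, y: (2, 1)),
]
x = y = 1
verdicts = []
for text, sides in steps:
    lhs, rhs = sides(x, y)
    verdicts.append(lhs == rhs)
    print(f"{text}:  {lhs} = {rhs}  ({'true' if lhs == rhs else 'false'})")
print("first false step:", verdicts.index(False) + 1)  # step 5: the cancellation
```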

On the other hand, adding 1 to both sides of an equation is always truth-preserving. We can't guarantee as a matter of logic that x = y to start with - for example, x might be my number of ears and y might be my number of fingers.  But if x = y then x + 1 = y + 1, always. Logic is not there to create truth; logic is there to preserve truth. If a scale starts out balanced, then adding the same weight to both sides will result in a scale that is still balanced:

I will remark, in some horror and exasperation with the modern educational system, that I do not recall any math-book of my youth ever once explaining that the reason why you are always allowed to add 1 to both sides of an equation is that it is a kind of step which always produces true equations from true equations.

What is a valid proof in algebra? It's a proof where, in each step, we do something that is universally allowed, something which can only produce true equations from true equations, and so the proof gradually transforms the starting equation into a final equation which must be true if the starting equation was true. Each step should also - this is part of what makes proofs useful in reasoning - be locally verifiable as allowed, by looking at only a small number of previous points, not the entire past history of the proof. If in some previous step I believed x² - y = 2, I only need to look at that single step to get the conclusion x² = 2 + y, because I am always allowed to add y to both sides of the equation; because I am always allowed to add any quantity to both sides of the equation; because if the two sides of an equation are in balance to start with, adding the same quantity to both sides of the balance will preserve that balance. I can know the inevitability of this implication without considering all the surrounding circumstances; it's a step which is locally guaranteed to be valid. (Note the similarity - and the differences - to how we can compute a causal entity knowing only its immediate parents, and no other ancestors.)
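As a toy illustration of local verifiability, here is a hypothetical checker for the single rule "add the same expression to both sides." Everything here - the polynomial representation, the function names - is invented for this sketch; the point is only that a step is validated by looking at the immediately preceding line and nothing else:

```python
# Expressions are dicts mapping (x_exponent, y_exponent) -> coefficient;
# an equation is a (lhs, rhs) pair. The one allowed rule: add the same
# expression to both sides. A step is checked against ONLY the previous
# line, never the whole proof history.

def add(e1, e2):
    out = dict(e1)
    for term, coeff in e2.items():
        out[term] = out.get(term, 0) + coeff
    return {t: c for t, c in out.items() if c != 0}  # drop zero terms

def step_is_valid(prev_eq, next_eq, added):
    prev_lhs, prev_rhs = prev_eq
    next_lhs, next_rhs = next_eq
    return next_lhs == add(prev_lhs, added) and next_rhs == add(prev_rhs, added)

X2, Y, ONE = (2, 0), (0, 1), (0, 0)
# Believed:  x^2 - y = 2
prev_eq = ({X2: 1, Y: -1}, {ONE: 2})
# Claimed:   x^2 = 2 + y   (justification: add y to both sides)
next_eq = ({X2: 1}, {ONE: 2, Y: 1})
print(step_is_valid(prev_eq, next_eq, {Y: 1}))  # True
```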


You may have read - I've certainly read - some philosophy which endeavors to score points for counter-intuitive cynicism by saying that all mathematics is a mere game of tokens; that we start with a meaningless string of symbols like:

∀x: (K(x) → L(x)) ∧ (L(x) → I(x))

...and we follow some symbol-manipulation rules like "If you have the string 'A ∧ (A → B)' you are allowed to go to the string 'B'", and so finally end up with the string:

∀x: (K(x) → I(x))

...and this activity of string-manipulation is all there is to what mathematicians call "theorem-proving" - all there is to the glorious human endeavor of mathematics.

This, like a lot of other cynicism out there, is needlessly deflationary.

There's a family of techniques in machine learning known as "Monte Carlo methods" or "Monte Carlo simulation", one of which says, roughly, "To find the probability of a proposition Q given a set of premises P, simulate random models that obey P, and then count how often Q is true." Stanislaw Ulam invented the idea after trying for a while to calculate the probability that a random Canfield solitaire layout would be solvable, and finally realizing that he could get better information by trying it a hundred times and counting the number of successful plays. This was during the era when computers were first becoming available, and the thought occurred to Ulam that the same technique might work on a current neutron diffusion problem as well.

Similarly, to answer a question like, "What is the probability that a random Canfield solitaire is solvable, given that the top card in the deck is a king?" you might imagine simulating many 52-card layouts, throwing away all the layouts where the top card in the deck was not a king, using a computer algorithm to solve the remaining layouts, and counting what percentage of those were solvable. (It would be more efficient, in this case, to start by directly placing a king on top and then randomly distributing the other 51 cards; but this is not always efficient in Monte Carlo simulations when the condition to be fulfilled is more complex.)
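The rejection-sampling pattern described here is easy to sketch in code. A full solitaire solver would run long, so the example below (my own toy substitute) applies the identical pattern to a question with a known exact answer - the probability that two dice sum to at least 10, given that the first die is even, which is 4/18 = 2/9:

```python
import random

# Rejection sampling: simulate random "layouts" (here, dice rolls), throw
# away every sample that violates the given condition, and count how often
# the query holds among the samples that remain.
random.seed(0)

kept = hits = 0
while kept < 100_000:
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    if d1 % 2 != 0:          # condition not met: discard this sample
        continue
    kept += 1
    hits += (d1 + d2 >= 10)  # does the query hold?
print(hits / kept)           # close to 2/9 ~ 0.222
```

As the text notes, one could instead sample the conditioned variable directly (force the first die even), which is more efficient but not always possible for complex conditions.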

Okay, now for a harder problem. Suppose you've wandered through the world a bit, and you've observed the following:

(1) So far I've seen 20 objects which have been kittens, and on the 6 occasions I've paid a penny to observe the size of something that's a kitten, all 6 kitten-objects have been little.

(2) So far I've seen 50 objects which have been little, and on the 30 occasions where I've paid a penny to observe the morality of something little, all 30 little objects have been innocent.

(3) This object happens to be a kitten. I want to know whether it's innocent, but I don't want to pay a cent to find out directly. (E.g., if it's an innocent kitten, I can buy it for a cent, sell it for two cents, and make a one-cent profit. But if I pay a penny to observe directly whether the kitten is innocent, I won't be able to make a profit, since gathering evidence is costly.)

Your previous experiences have led you to suspect the general rule "All kittens are little" and also the rule "All little things are innocent", even though you've never before directly checked whether a kitten is innocent.

Furthermore...

You've never heard of logic, and you have no idea how to play that 'game of symbols' with K(x), I(x), and L(x) that we were talking about earlier.

But that's all right. The problem is still solvable by Monte Carlo methods!

First we'll generate a large set of random universes. Then, for each universe, we'll check whether that universe obeys all the rules we currently suspect or believe to be true, like "All kittens are little" and "All little things are innocent" and "The force of gravity goes as the square of the distance between two objects and the product of their masses".  If a universe passes this test, we'll check to see whether the inquiry of interest, "Is the kitten in front of me innocent?", also happens to be true in that universe.

We shall repeat this test a large number of times, and at the end we shall have an approximate estimate of the probability that the kitten in front of you is innocent.

On this algorithm, you perform inference by visualizing many possible universes, throwing out universes that disagree with generalizations you already believe in, and then checking what's true (probable) in the universes that remain. This algorithm doesn't tell you the state of the real physical world with certainty. Rather, it gives you a measure of probability - i.e., the probability that the kitten is innocent - given everything else you already believe to be true.
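A minimal sketch of this algorithm, with the universe size and all attribute probabilities chosen arbitrarily for illustration:

```python
import random

# Inference by simulated universes: generate random universes of objects
# with (kitten, little, innocent) attributes, keep only universes obeying
# the suspected rules, and count how often "the kitten in front of me is
# innocent" holds among the survivors.
random.seed(1)

def random_universe(n_objects=4):
    return [{"kitten": random.random() < 0.5,
             "little": random.random() < 0.5,
             "innocent": random.random() < 0.5} for _ in range(n_objects)]

def obeys_rules(u):
    return (all(o["little"] for o in u if o["kitten"]) and      # kittens are little
            all(o["innocent"] for o in u if o["little"]))       # little things are innocent

kept = hits = 0
while kept < 5_000:
    u = random_universe()
    kittens = [o for o in u if o["kitten"]]
    if not obeys_rules(u) or not kittens:
        continue  # universe contradicts our beliefs, or has no kitten to check
    kept += 1
    hits += random.choice(kittens)["innocent"]  # "the kitten in front of me"
print(hits / kept)  # exactly 1.0: the conclusion holds in every surviving universe
```

That the estimate comes out at exactly 1 is the Monte Carlo shadow of the logical fact: in every universe obeying both generalizations, every kitten is innocent.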

And if, instead of visualizing many imaginable universes, you checked all possible logical models - which would take something beyond magic, because that would include models containing uncountably large numbers of objects - and the inquiry-of-interest was true in every model matching your previous beliefs, you would have found that the conclusion followed inevitably if the generalizations you already believed were true.

This might take a whole lot of reasoning, but at least you wouldn't have to pay a cent to observe the kitten's innocence directly.

But it would also save you some computation if you could play that game of symbols we talked about earlier - a game which does not create truth, but preserves truth. In this game, the steps can be locally pronounced valid by a mere 'syntactic' check that doesn't require us to visualize all possible models. Instead, if the mere syntax of the proof checks out, we know that the conclusion is always true in a model whenever the premises are true in that model.

And that's a mathematical proof:  A conclusion which is true in any model where the axioms are true, which we know because we went through a series of transformations-of-belief, each step being licensed by some rule which guarantees that such steps never generate a false statement from a true statement.
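Here is a deliberately crude version of that "game of symbols": a single string-manipulation rule, applied with no reference whatsoever to what kittens or littleness mean (the rule and its encoding are my own toy choices):

```python
# One purely syntactic rule: from the strings 'A -> B' and 'B -> C',
# you are allowed to write down the string 'A -> C'. The function never
# interprets the symbols; it only matches and splices strings.

def chain_rule(s1, s2):
    a, b1 = s1.split(" -> ")
    b2, c = s2.split(" -> ")
    if b1 != b2:
        raise ValueError("rule does not apply to these strings")
    return f"{a} -> {c}"

premise1 = "K(x) -> L(x)"   # all kittens are little
premise2 = "L(x) -> I(x)"   # everything little is innocent
theorem = chain_rule(premise1, premise2)
print(theorem)              # K(x) -> I(x)
```

The deflationary philosopher is right that this is "just" string manipulation; the reply is that the strings were chosen so that every manipulation is guaranteed truth-preserving across all models.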

The way we would say it in standard mathematical logic is as follows:

A collection of axioms X semantically implies a theorem Y, if Y is true in all models where X are true. We write this as X ⊨ Y.

A collection of axioms X syntactically implies a theorem Y within a system S, if we can get to Y from X using transformation steps allowed within S. We write this as X ⊢ Y.

The point of the system S known as "classical logic" is that its syntactic transformations preserve semantic implication, so that any syntactically allowed proof is semantically valid:

If X ⊢ Y, then X ⊨ Y.
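For propositional logic, the soundness of an individual rule can even be verified exhaustively, since there are only finitely many truth assignments. A sketch checking that chaining implications (from A → B and B → C, conclude A → C) never takes true premises to a false conclusion:

```python
from itertools import product

# Brute-force soundness check: over every truth assignment to A, B, C,
# whenever both premises 'A -> B' and 'B -> C' come out true, so does
# the conclusion 'A -> C'. This is the semantic side of X |= Y.

implies = lambda p, q: (not p) or q

sound = all(
    implies(a, c)
    for a, b, c in product([False, True], repeat=3)
    if implies(a, b) and implies(b, c)
)
print(sound)  # True: the rule never makes a false statement from true ones
```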

If you make this idea be about proof steps in algebra doing things that always preserve the balance of a previously balanced scale, I see no reason why this idea couldn't be presented in eighth grade or earlier.

I can attest by spot-checking for small N that even most mathematicians have not been exposed to this idea. It's the standard concept in mathematical logic, but for some odd reason, the knowledge seems constrained to the study of "mathematical logic" as a separate field, which not all mathematicians are interested in (many just want to do Diophantine analysis or whatever).


So far as real life is concerned, mathematical logic only tells us the implications of what we already believe or suspect, but this is a computational problem of supreme difficulty and importance. After the first thousand times we observe that objects in Earth gravity accelerate downward at 9.8 m/s², we can suspect that this will be true on the next occasion - which is a matter of probabilistic induction, not valid logic. But then to go from that suspicion, plus the observation that a building is 45 meters tall, to a specific prediction of how long it takes the object to hit the ground, is a matter of logic - what will happen if everything else we already believe is actually true. It requires computation to make this conclusion transparent. We are not 'logically omniscient' - the technical term for the impossible dreamlike ability of knowing all the implications of everything you believe.
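The falling-object deduction itself is a one-line computation: assuming constant acceleration from rest, h = ½gt² gives t = √(2h/g).

```python
import math

# From believed premises (g = 9.8 m/s^2, h = 45 m, constant acceleration
# from rest) to a specific prediction: solve h = g * t**2 / 2 for t.
g, h = 9.8, 45.0
t = math.sqrt(2 * h / g)
print(round(t, 2))  # 3.03 seconds
```

The logic is trivial here; in general, unwinding the implications of everything we believe is anything but.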

The great virtue of logic in argument is not that you can prove things by logic that are absolutely certain. Since logical implications are valid in every possible world, "observing" them never tells us anything about which possible world we live in. Logic can't tell you that you won't suddenly float up into the atmosphere. (What if we're in the Matrix, and the Matrix Lords decide to do that to you on a whim as soon as you finish reading this sentence?) Logic can only tell you that, if that does happen, you were wrong in your extremely strong suspicion about gravitation being always and everywhere true in our universe.

The great virtue of valid logic in argument, rather, is that logical argument exposes premises, so that anyone who disagrees with your conclusion has to (a) point out a premise they disagree with or (b) point out an invalid step in reasoning which is strongly liable to generate false statements from true statements.

For example: Nick Bostrom put forth the Simulation Argument, which is that you must disagree with either statement (1) or (2) or else agree with statement (3):

(1) Earth-originating intelligent life will, in the future, acquire vastly greater computing resources.

(2) Some of these computing resources will be used to run many simulations of ancient Earth, aka "ancestor simulations".

(3) We are almost certainly living in a computer simulation.

...but unfortunately it appears that not only do most respondents decline to say why they disbelieve in (3), most are unable to understand the distinction between the Simulation Hypothesis (that we are living in a computer simulation) and Nick Bostrom's actual Simulation Argument (that you must either disagree with (1) or (2) or agree with (3)). They just treat Nick Bostrom as having claimed that we're all living in the Matrix. Really. Look at the media coverage.

I would seriously generalize that the mainstream media only understands the "and" connective, not the "or" or "implies" connective. I.e., it is impossible for the media to report on a discovery that one of two things must be true, or a discovery that if X is true then Y must be true (when it's not known that X is true). Also, the media only understands the "not" prefix when applied to atomic facts; it should go without saying that "not (A and B)" cannot be reported-on.

Robin Hanson sometimes complains that when he tries to argue that conclusion X follows from reasonable-sounding premises Y, his colleagues disagree with X while refusing to say which premise Y they think is false, or which step of the reasoning seems like an invalid implication. Such behavior is not only annoying, but logically rude, because someone else went out of their way and put in extra effort to make it as easy as possible for you to explain why you disagreed, and you couldn't be bothered to pick one item off a multiple-choice menu.

The inspiration of logic for argument is to lay out a modular debate, one which conveniently breaks into smaller pieces that can be examined with smaller conversations. At least when it comes to trying to have a real conversation with a respected partner - I wouldn't necessarily advise a teenager to try it on their oblivious parents - that is the great inspiration we can take from the study of mathematical logic: An argument is a succession of statements each allegedly following with high probability from previous statements or shared background knowledge. Rather than, say, snowing someone under with as much fury and as many demands for applause as you can fit into sixty seconds.

Michael Vassar is fond of claiming that most people don't have the concept of an argument, and that it's pointless to try and teach them anything else until you can convey an intuitive sense for what it means to argue. I think that's what he's talking about.


Meditation: It has been claimed that logic and mathematics is the study of which conclusions follow from which premises. But when we say that 2 + 2 = 4, are we really just assuming that? It seems like 2 + 2 = 4 was true well before anyone was around to assume it, that two apples equaled two apples before there was anyone to count them, and that we couldn't make it 5 just by assuming differently.


Mainstream status.

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Logical Pinpointing"

Previous post: "Causal Reference"

209 comments

A proof is a social construct – it is what we need it to be in order to be convinced something is true. If you write something down and you want it to count as a proof, the only real issue is whether you’re completely convincing.

Some philosophers of mathematics (often those who call themselves "naturalists") argue that what we ought to be paying attention to in mathematics is what mathematicians actually do. If you look at it like this, you notice that very few mathematicians actually produce fully valid proofs in first-order logic, even when they're proving a new theorem (although there are salutary attempts to change this, such as proving lots of existing theorems using Coq).

The best encapsulation that I've heard of what mathematicians actually do when they offer proofs is something like:

You don't have to provide a fully rigorous proof, you just have to convince me that you could do it if you had to.

It seems pretty clear to me that this still indicates that mathematicians are really interested in the existence of the formal proof, but use a bunch of heuristics to figure out whether it exists and what it is - rather than mathematical practice showing us that the notion of proof is really this more social notion, and that therefore the formal version is somehow off base.

Another way of looking at it, would be to imagine what would happen if we started writing proofs in formal logic. Quite quickly, people would notice patterns of inferences that appeared again and again, and save space and time by encoding the pattern in a single statement which anyone could use, not actually an additional axiom since it follows from the others, but working like one. If you do this a lot quite soon you would be almost entirely using short-cuts like this, and begin to notice higher-level patterns in the short-cuts you are using. Keep extending this short-cut-generating process until mathematical papers are actually of a remotely readable length, and you get something that looks quite a lot like modern mathematics.

I'm not claiming this is what actually happened, what actually happened is a few hundred years ago mathematics really was as wishy-washy as you say it is, and people gradually realised what they should actually be doing, then skipped the stage of writing 10000-page proofs in formal logic because mathematicians aren't stupid and could see where the chain ended from there.

I can attest by spot-checking for small N that even most mathematicians have not been exposed to this idea. It's the standard concept in mathematical logic, but for some odd reason, the knowledge seems constrained to the study of "mathematical logic" as a separate field, which not all mathematicians are interested in (many just want to do Diophantine analysis or whatever).

I'm surprised by this claim. Most mathematicians have at least some understanding of mathematical logic. What you may be encountering are people who simply haven't had to think about these issues in a while. But your use of Diophantine analysis, a subbranch of number theory, as your other example is a bit strange, because number theory and algebra have become quite adept in the last few years at using model theory to show the existence of proofs even when one can't point to the proof in question. The classical example is the Ax-Grothendieck theorem; Terry Tao discusses this and some related issues here. Similarly, Mochizuki's attempted proof of the ABC conjecture (as I very roughly understand it) requires delicate work with models.

I'm not going to say they haven't been exposed to it, but I think quite few mathematicians have ever developed a basic appreciation and working understanding of the distinction between syntactic and semantic proofs.

Model theory is, very rarely, successfully applied to solve a well-known problem outside logic, but you would have to sample many random mathematicians before you could find one that could tell you exactly how, even if you restricted to only asking mathematical logicians.

I'd like to add that in the overwhelming majority of academic research in mathematical logic, the syntax-semantics distinction is not at all important, and syntax is suppressed as much as possible as an inconvenient thing to deal with. This is true even in model theory. Now, it is often needed to discuss formulas and theories, but a syntactical proof need not ever be considered. First-order logic is dominant, and the completeness theorem (together with soundness) shows that syntactic implication is equivalent to semantic implication.

If I had to summarize what modern research in mathematical logic is like, I'd say that it's about increasingly elaborate notions of complexity (of problems or theorems or something else), and proving that certain things have certain degrees of complexity, or that the degrees of complexity themselves are laid out in a certain way.

There are however a healthy number of logicians in computer science academia who care a lot more about syntax, including proofs. These could be called mathematical logicians, but the two cultures are quite different.

(I am a math PhD student specializing in logic.)

Mainstream status:

This is meant to present a completely standard view of semantic implication, syntactic implication, and the link between them, as understood in modern mathematical logic. All departures from the standard academic view are errors and should be flagged accordingly.

Although this view is standard among the professionals whose job it is to care, it is surprisingly poorly known outside that circle. Trying to make a function call to these concepts inside your math professor's head is likely to fail unless they have knowledge of "mathematical logic" or "model theory".

Beyond classical logic lie the exciting frontiers of weird logics such as intuitionistic logic, which doesn't have the theorem ¬¬P → P. These stranger syntaxes can imply entirely different views of semantics, such as a syntactic derivation of Y from X meaning, "If you hand me an example of X, I can construct an example of Y."

I can't actually recall where I've seen someone else say that e.g. "An algebraic proof is a series of steps that you can tell are locally licensed because they maintain balanced weights", but it seems like an obvious direct specialization of "syntactic implication should preserve semantic implication" (which is definitely standard). Similarly, I haven't seen the illustration of "Where does this first go from a true equation to a false equation?" used as a way of teaching the underlying concept, but that's because I've never seen the difference between semantic and syntactic implication taught at all outside of one rare subfield of mathematics. (AAAAAAAAAAAAHHHH!)

The idea that logic can't tell you anything with certainty about the physical universe, or that logic is only as sure as its premises, is very widely understood among Traditional Rationalists:

... as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

--Albert Einstein

I can't actually recall where I've seen someone else say that e.g. "An algebraic proof is a series of steps that you can tell are locally licensed because they maintain balanced weights"

The metaphor of a scale is at least a common teaching tool for algebra: see 1, 2, 3.

I was taught algebra with a scale in the sixth grade. We had little weights that said "X" on them and learned that you could add or take away "x" from both sides.

Yeah, we were taught in basically the exact same way -- moving around different colored weights on plastic print-outs of balances. I'll also note that this was a public (non-magnet) school -- a reasonably good public school in the suburbs, to be sure, but not what I would think of as an especially advanced primary education.

I join lots of other commenters as being genuinely surprised that the content of this post is understood so little, even by mathematicians, as it all seemed pretty common sense to me. Indeed, my instinctive response to the first meditation was almost exactly what Eliezer went on to say, but I kept trying to think of something else for a while because it seemed too obvious.

GOOD. Especially this one: http://www.howtolearn.com/2011/02/demystifying-algebra

But I don't recall ever getting that in my classes. Also, the illustration of "first step going from true equation to false equation" I think is also important to have in there somewhere.

I love this idea, so I've taken it to the next level: http://sphotos-a.xx.fbcdn.net/hphotos-ash4/405144_10151268024944598_356596037_n.jpg

Hanger, paper clips, dental floss, tupperware, pencil, ruler, and lamp. If we're trying to be concrete about this, no need to do it only part way.

My data point: as an undergraduate mathematician at Oxford, the mathematical logic course was one of the most popular, and certainly covered most of this material. I guess I haven't talked a huge number of mathematicians about logic, but I'd be pretty shocked if they didn't know the difference between syntax and semantics. YMMV.

Another data point: in Cambridge the first course in logic done by mathematics undergraduates is in third year. It covers completeness and soundness of propositional and predicate logic and is quite popular. But in third year people are already so specialised that probably way less than half of us take it.

It terrifies me that I seem to be unique in having had pretty much all of this covered in my high school's standard math curriculum (not even an advanced or optional class). Eliezer's method of "find the point where it becomes untrue" wasn't standard, but I think (p ~0.5) that my teacher went over it when I wrote a proof of 2=1 on the board. I knew he was a cool math teacher who made a point of tutoring flagging students, but I hadn't realized he was this exceptional.

Judging just from your description, he's probably more than two standard deviations of abnormal.

Your curriculum sounds at least a good deal above average, but the core problem is that most "curricula" are effectively nothing more than a list of things that teachers should make sure to mention, along with a separate, disjoint, often not correlated list of things that will be "tested" in an exam.

I expect many curricula would contain a good deal of the good parts of traditional rationality and mathematics, but there are many steps between a list on one sheet of paper that each teacher must read once a year and actual non-password understanding becoming commonplace among students.

I still have a copy of my Secondary 4 (US 10th / high school sophomore) curriculum somewhere, which my math teacher gave me secretly back then despite the threat of severe reprimand (our teachers were not even allowed to disclose the actual curriculum - that's how bad things often are). We both verified back then that not even half of what's supposed to be covered according to this piece of paper ever actually gets taught in most classes. That teacher really was that exceptional, but he only had so much time, split across several hundred students.

"An algebraic proof is a series of steps that you can tell are locally licensed because they maintain balanced weights" seems like an obvious direct specialization of "syntactic implication should preserve semantic implication".
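The slogan can be made concrete with a brute-force check. A minimal sketch (the function names here are mine, not from the post): enumerate every valuation of two propositional variables and confirm that modus ponens never leads from true premises to a false conclusion, i.e. that this syntactic rule preserves semantic truth:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

def modus_ponens_is_sound() -> bool:
    """Check every valuation: whenever P and P -> Q both hold, Q must hold too."""
    for p, q in product([False, True], repeat=2):
        premises_true = p and implies(p, q)
        if premises_true and not q:
            return False  # found a valuation where the rule breaks truth
    return True

print(modus_ponens_is_sound())  # prints: True
```

For full propositional logic, soundness is proved once and for all by induction on derivations rather than by enumeration, but the finite case shows what "syntax preserving semantics" cashes out to.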

Eliezer, I very much like your answer to the question of what makes a given proof valid, but I think your explanation of what proofs are is severely lacking. To quote Andrej Bauer in his answer to the question "Is rigour just a ritual that most mathematicians wish to get rid of if they could?", treating proofs as syntactic objects "puts logic where analysis used to be when mathematicians thought of functions as symbolic expressions, probably sometime before the 19th century." If the only important thing about the steps of a proof is that they preserve balanced weights (or semantic implication), then it shouldn't be important which steps you take, only that you take valid steps. Consequently, it should be either nonsensical or irrelevant to ask if two proofs are the same; the only important mathematical property of a proof would be its validity (under this view). But this isn't the case. People care about whether or not proofs are new and original, which methods they use, and whether or not they help give a person intuition. Furthermore, it is common to prove a theorem and then refer to a constant constructed in that proof. (If the only important property of a proof were validity, this would be an inadmissible practice.) Finally, as we have only recently discovered, it is fruitful to interpret proofs of equality as paths in topological spaces! If a proof were just a series of steps which preserve semantic implication, we should be wary of models which only permit continuous preservation of semantic implication, but this interpretation is precisely the one which leads to homotopy type theory.

So while I like your answer to the question "What exactly is the difference between a valid proof and an invalid one?", I think proof-relevant mathematics has a much better answer to the question you claimed to answer, "What exactly is a proof?"

(By the way, I highly recommend Andrej Bauer's answer to anyone interested in eloquent prose about why proof-relevant mathematics and homotopy type theory are interesting.)

Similarly, I haven't seen the illustration of "Where does this first go from a true equation to a false equation?" used as a way of teaching the underlying concept,

My middle school algebra books discussed this on a very basic level. The problem in the book would show the work of a fictional child who was trying to solve a math problem and failing, and the real student would be told to identify where the first one went wrong and how the problem should have been solved instead.

I never saw it done with anything more complicated than algebra, though.

I'm not sure whether or not this is common, either.

In Hungary this (model theory and co.) is part of the standard curriculum for Mathematics BSc. Or at least was in my time.

How mainstream is the Monte Carlo methods stuff, specifically the kitten-observing material?

to not overload the word 'true' meaning comparison-to-causal-reality, the validity of a mathematical theorem

Ok, deductively true things should be labeled valid to avoid confusion with inductively true things. That's an excellent resolution of the overloading of the word "true" in popular vernacular.

But sometimes you aren't clear in earlier writings about whether you are referring to true or valid. For example, one message of Zero and One are not Probabilities is a sophisticated restatement of the problem of induction. Still, I'm honestly uncertain whether you intended a broader point. I mostly agree with what I understand, but there is some uncertainty about what precise rents are paid by these specific beliefs.

Great post!

Trying to make a function call to these concepts inside your math professor's head is likely to fail unless they have knowledge of "mathematical logic" or "model theory".

Hrmm... I don't think that's right at all, at least for professors who teach proof-based classes. Doing proofs, in any area, really does feel like going from solid ground to new ground, only taking steps that are guaranteed to keep you from falling into the lava. My models of my math professors understand how an argument connects its premises to its conclusions - but I haven't gotten into philosophical discussions with them. My damned philosophy professors, on the other hand, are the logically-rudest people I've ever spoken to.

BTW,

Robin Hanson sometimes complains that when he tries to argue that conclusion X follows from reasonable-sounding premises Y, his colleagues disagree with X while declining to say which premise Y they think is false, or pointing out which step of the reasoning seems like an invalid implication.

I parsed this as "...his colleagues disagree with X while declining ... or while pointing out which step of the reasoning seems like an invalid implication", which is the opposite of what you meant.

Tried an edit.

Math professors certainly understand instinctively what connects premises to conclusions, or they couldn't do algebra. It's trying to talk about the process explicitly where the non-modern-logicians start saying things like "proofs are absolutely convincing, hence social constructs".

A math professor who failed to get a solid course in formal logic sounds unlikely... (If it's not what you are saying, it's not clear to me what that is.)

I go to a Big Ten university where the graduate-level sequence in formal logic hasn't been taught in six years.

Update: It's being taught again in the fall semester!

I never got a course in formal logic, and could probably have been close to being a math professor by now.

Yes, well, the problem is that many courses on "logic" don't teach model theory, at least from what I've heard. Try it and see. I could've just been asking the wrong mathematicians - e.g. a young mathematician visiting CFAR knew about model theory, but that itself seems a bit selected. But the modern courses could be better than the old courses! It's been known to happen, and I'd sure be happy to hear it.

(I'm pretty sure mathbabe has been through a course on formal logic and so has Samuel R. Buss...)

When I went to Rutgers, the course on proof theory and model theory was taught by the philosophy department. (And it satisfied my humanities requirement for graduation!)

At Yale, the situation is similar. I took a course on Gödel's incompleteness theorem and earned a humanities credit from it. The course was taught by the philosophy department and also included a segment on the equivalence of various notions of computability. Coolest humanities class ever!

I shudder to think of what politics were involved to classify it as such, though.

Probably it was that a Phil professor wanted to teach the class, and no one cared to argue. It's not things like which classes are taught that are the big political fights, to my knowledge; the fights are more often about who gets the right to teach a topic of their choosing, and who doesn't.

Have you read the book of Marker? I love that thing to pieces.

Robin Hanson sometimes complains that when he tries to argue that conclusion X follows from reasonable-sounding premises Y, his colleagues disagree with X while declining to say which premise Y they think is false, or pointing out which step of the reasoning seems like an invalid implication.

I parsed this as "...his colleagues disagree with X while declining ... or while pointing out which step of the reasoning seems like an invalid implication", which is the opposite of what you meant.

I think the correct syntax here is "...his colleagues disagree with X while declining to say which premise..., or to point out which step...", which links the two infinitives "to say" and "to point out" to the verb "declining". ETA: The current edit works too.

My damned philosophy professors, on the other hand, are the logically-rudest people I've ever spoken to.

Really? I find people who have done just first year philosophy worse. Maybe I was lucky and had the two philosophy professors who philosophize well.

Ok, fair enough. My philosophy professors were the logically-rudest adults I've spoken to. Actually, that's not even true. Rather, my philosophy professors were the people I most hoped would have less than the standard rudeness, but did not at all.

An anecdote: spring quarter last year, I tried to convince my philosophy professor that logic preserves certainty, but that we could (probably) never be absolutely certain that we had witnessed a correct derivation. He dodged, and instead sermonized about the history of logic. At one point I mentioned GEB, and he said, I quote, "Hofstadter is something of a one trick pony". Here, "one trick" refers to "self-reference". I was too flabbergasted to respond politely.

He dodged, and instead sermonized about the history of logic

Or tried to tell you something you didn't get.

Or tried to tell you something you didn't get.

From the description TsviBT gives it is far more likely that the professor was stubbornly sermonizing against a strawman.

If TsviBT failed to get something, it is quite likely that from TsviBT's perspective the professor was waffling pointlessly, and that TsviBT's account would reflect that. We cannot appeal to TsviBT's subjective perspective as proving its own objective validity, can we?

If TsviBT failed to get something, it is quite likely that from TsviBT's perspective the professor was waffling pointlessly, and that TsviBT's account would reflect that. We cannot appeal to TsviBT's subjective perspective as proving its own objective validity, can we?

I can look at the specific claim TsviBT says he made and evaluate whether it is a true claim or a false claim. It happens to be a true claim.

Assuming you accept the above claim, then before questioning whether TsviBT failed to comprehend you must first question whether what TsviBT says he said is what he actually said. It seems unlikely that he is lying about what he said, and also not especially likely that he forgot what point he was making; it is something etched firmly in his mind. It is more likely that the professor did not pay sufficient attention to comprehend than it is that Tsvi did not say what he says he said. The former occurs far more frequently than I would prefer.

Edit:

It happens to be a true claim.

Is it? I think it's a bit misleading. Logic would preserve certainty if there were any certainty. But there probably isn't. Maybe the prof was trying to make that point.

Assuming you accept the above claim then before questioning whether TsviBT failed to comprehend you must first question whether what TsviBT says he said is what he actually said.

No, that isn't the issue. TsviBT only offered a subjective reaction to the professor's words, not the words themselves. We cannot judge from that whether the professor was rudely missing his brilliant point, or making an even more brilliant riposte, the subtleties of which passed TsviBT by.

I disagree regarding the accuracy of the claim as stated (you seem to be making the mistake a professor may carelessly make by considering a different, more trivial point). I also disagree that the alleged "brilliant riposte" could be 'brilliant' as more than an effective social move, given that it moved to a different point (as a riposte to the claim and not just an appropriate subject change) rather than acknowledging the rather simple technical correction.

You are giving the professor a benefit of the doubt that cannot exist unless TsviBT's claim of what he personally said is outright false.

We don't know that the riposte moved to a different point, because we weren't there and do not have the prof's words.

We don't know that the riposte moved to a different point, because we weren't there and do not have the prof's words.

It did one of: moving to a different point, agreeing with TsviBT, or being outright incorrect. (Again following your assumption that it was, in fact, a riposte.) Moving to a different point is the most likely (and most generous) assumption.

(I have expressed my point as much as I ought and most likely then some. It would be best for me to stop.)

TsviBT's point was not outright correct, which leads to further options such as expounding on a subtle error.

I was too flabbergasted to respond politely.

I hope that means you refrained from responding at all. You can't fix broken high-status people!

Wait. Oh bother. I try to do that all the time. But at least I tend to direct my efforts towards influencing the social environment, so that the incentives for securing further status are changed and, on the margin, the behavior of the high-status people (including, at times, logical rudeness) is somewhat altered. "Persuasion" of a kind.

Name three ways of you performing said persuasion.

Name three ways of you performing said persuasion.

No. Not at this time. (I would prefer to be not believed than to give examples of this right now.)

Actually it was more along the lines of "give me practical examples so I can extrapolate the rule better than from an abstract summary", but, sure, suit yourself.

To be fair to the professor, conflating deductive and inductive reasoning is a basic error that's easy to pattern match.

To be fair to the professor, conflating deductive and inductive reasoning is a basic error that's easy to pattern match.

To be fair to TsviBT, pattern matching to the nearest possible stupid thing is perhaps the most annoying logically rude tactic there is (whether employed deliberately or instinctively according to social drives).

An anecdote: spring quarter last year, I tried to convince my philosophy professor that logic preserves certainty, but that we could (probably) never be absolutely certain that we had witnessed a correct derivation.

This does seem correct as stated. I wonder if he deliberately avoided the point to save face regarding a previous error or, as Tim suggested, pattern matched to something different.

Rather, my philosophy professors were the people I most hoped would have less than the standard rudeness, but did not at all.

Now that is far less surprising, and on average I can agree there (although I personally had exceptions!). It was the absolute scale that I perhaps questioned.

To be fair, Hofstadter is basically a one-trick pony, in that he published one academically-relevant book and then more or less jumped straight to Professor Emeritus in all but name, publishing very little and interacting with academia even less.

Just wishing I had read GEB sooner. Reading it now and it seems to be getting ruined by politics.

Politics? I don't understand how.

Also, above comment should in no way be taken as criticism of GEB. It's great. It's just that that's pretty much all he has to his credit.

I mean people want to tear chunks out of it for status reasons... ...I think.

FWIW, the undergraduate Philosophy department at Southern Connecticut State University has at least 2 logic courses, and the "Philosophical logic" special topics course covers everything here and then some in a lot of detail. The first half of the course goes through syntax and semantics for propositional logic separately, as well as the relevant theorems that show how to link them up.