Followup to: Causal Reference
From a math professor's blog:
One thing I discussed with my students here at HCSSiM yesterday is the question of what is a proof.
They’re smart kids, but completely new to proofs, and they often have questions about whether what they’ve written down constitutes a proof. Here’s what I said to them.
A proof is a social construct – it is what we need it to be in order to be convinced something is true. If you write something down and you want it to count as a proof, the only real issue is whether you’re completely convincing.
This is not quite the definition I would give of what constitutes "proof" in mathematics - perhaps because I am so used to isolating arguments that are convincing, but ought not to be.
Or here again, from "An Introduction to Proof Theory" by Samuel R. Buss:
There are two distinct viewpoints of what a mathematical proof is. The first view is that proofs are social conventions by which mathematicians convince one another of the truth of theorems. That is to say, a proof is expressed in natural language plus possibly symbols and figures, and is sufficient to convince an expert of the correctness of a theorem. Examples of social proofs include the kinds of proofs that are presented in conversations or published in articles. Of course, it is impossible to precisely define what constitutes a valid proof in this social sense; and, the standards for valid proofs may vary with the audience and over time. The second view of proofs is more narrow in scope: in this view, a proof consists of a string of symbols which satisfy some precisely stated set of rules and which prove a theorem, which itself must also be expressed as a string of symbols. According to this view, mathematics can be regarded as a 'game' played with strings of symbols according to some precisely defined rules. Proofs of the latter kind are called "formal" proofs to distinguish them from "social" proofs.
In modern mathematics there is a much better answer that could be given to a student who asks, "What exactly is a proof?", which does not match either of the above ideas. So:
Meditation: What distinguishes a correct mathematical proof from an incorrect mathematical proof - what does it mean for a mathematical proof to be good? And why, in the real world, would anyone ever be interested in a mathematical proof of this type, or obeying whatever goodness-rule you just set down? How could you use your notion of 'proof' to improve the real-world efficacy of an Artificial Intelligence?
...
...
...
Consider the following syllogism:
- All kittens are little;
- Everything little is innocent;
- Therefore all kittens are innocent.
Here are four mathematical universes, aka "models", in which the objects collectively obey or disobey these three rules:
There are some models where not all kittens are little, like models B and D. And there are models where not everything little is innocent, like models C and D. But there are no models where all kittens are little, and everything little is innocent, and yet there exists a guilty kitten. Try as you will, you won't be able to imagine a model like that. Any model containing a guilty kitten has at least one kitten that isn't little, or at least one little entity that isn't innocent - no way around it.
Thus, the jump from 1 & 2, to 3, is truth-preserving: in any universe where premises (1) and (2) are true to start with, the conclusion (3) is true of the same universe at the end.
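If we restrict attention to a small finite universe, this "no counterexample" claim can be checked exhaustively by brute force. The following sketch is purely illustrative (the predicate encoding is my own choice, not anything from the post): it enumerates every possible kind of object and every small model built from those kinds, and looks for a model satisfying both premises that still contains a guilty kitten.

```python
from itertools import product

# Each kind of object is a triple of booleans: (is_kitten, is_little, is_innocent).
# There are 2^3 = 8 possible kinds.
object_kinds = list(product([False, True], repeat=3))

def premises_hold(model):
    """All kittens are little, and everything little is innocent."""
    return all((not k or l) and (not l or i) for (k, l, i) in model)

def has_guilty_kitten(model):
    """Does the model contain a kitten that is not innocent?"""
    return any(k and not i for (k, l, i) in model)

# Enumerate every model containing one or two objects drawn from the 8 kinds.
counterexample_found = False
for size in (1, 2):
    for model in product(object_kinds, repeat=size):
        if premises_hold(model) and has_guilty_kitten(model):
            counterexample_found = True

print(counterexample_found)  # False: no such model exists
```

The search always comes up empty, as the argument in the text says it must: a guilty kitten is itself an object that violates one of the two premises.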
Which is what makes the following implication valid, or, as people would usually just say, "true":
"If all kittens are little and everything little is innocent, then all kittens are innocent."
The advanced mainstream view of logic and mathematics (i.e., the mainstream view among professional scholars of logic as such, not necessarily among all mathematicians in general) is that when we talk about math, we are talking about which conclusions follow from which premises. The "truth" of a mathematical theorem - or to not overload the word 'true' meaning comparison-to-causal-reality, the validity of a mathematical theorem - has nothing to do with the physical truth or falsity of the conclusion in our world, and everything to do with the inevitability of the implication. From the standpoint of validity, it doesn't matter a fig whether or not all kittens are innocent in our own universe, the connected causal fabric within which we are embedded. What matters is whether or not you can prove the implication, starting from the premises; whether or not, if all kittens were little and all little things were innocent, it would follow inevitably that all kittens were innocent.
To paraphrase Mayor Daley, logic is not there to create truth, logic is there to preserve truth. Let's illustrate this point by assuming the following equation:
x = y = 1
...which is true in at least some cases. E.g. 'x' could be the number of thumbs on my right hand, and 'y' the number of thumbs on my left hand.
Now, starting from the above, we do a little algebra:
| Step | Equation | Justification |
|------|----------|---------------|
| 1 | x = y = 1 | starting premise |
| 2 | x^{2} = xy | multiply both sides by x |
| 3 | x^{2} - y^{2} = xy - y^{2} | subtract y^{2} from both sides |
| 4 | (x + y)(x - y) = y(x - y) | factor |
| 5 | x + y = y | cancel |
| 6 | 2 = 1 | substitute 1 for x and 1 for y |
We have reached the conclusion that in every case where x and y are equal to 1, 2 is equal to 1. This does not seem like it should follow inevitably.
You could try to find the flaw just by staring at the lines... maybe you'd suspect that the error was between line 3 and line 4, following the heuristic of first mistrusting what looks like the most complicated step... but another way of doing it would be to try evaluating each line to see what it said concretely, for example, multiplying out x^{2} = xy in line 2 to get (1^{2}) = (1 * 1) or 1 = 1. Let's try doing this for each step, and then afterward mark whether each equation looks true or false:
| Step | Equation | Evaluated at x = y = 1 | True or false? |
|------|----------|------------------------|----------------|
| 1 | x = y = 1 | 1 = 1 | true |
| 2 | x^{2} = xy | 1 = 1 | true |
| 3 | x^{2} - y^{2} = xy - y^{2} | 0 = 0 | true |
| 4 | (x + y)(x - y) = y(x - y) | 0 = 0 | true |
| 5 | x + y = y | 2 = 1 | false |
| 6 | 2 = 1 | 2 = 1 | false |
Logic is there to preserve truth, not to create truth. Whenever we take a logically valid step, we can't guarantee that the premise is true to start with, but if the premise is true the conclusion should always be true. Since we went from a true equation to a false equation between step 4 and step 5, we must've done something that is in general invalid.
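The evaluation procedure above is mechanical enough to hand to a computer. A quick sketch, assuming the concrete case x = y = 1 (the step labels are just strings for display):

```python
# Evaluate each line of the 'proof' at x = y = 1 and report which yield
# true equations.
x = y = 1

steps = [
    ("x = y = 1",                   x == y == 1),
    ("x^2 = x*y",                   x**2 == x * y),
    ("x^2 - y^2 = x*y - y^2",       x**2 - y**2 == x * y - y**2),
    ("(x+y)(x-y) = y(x-y)",         (x + y) * (x - y) == y * (x - y)),
    ("x + y = y",                   x + y == y),
    ("2 = 1",                       2 == 1),
]

for label, holds in steps:
    print(f"{label}: {holds}")
# The first false line is "x + y = y" -- exactly where we divided by (x - y).
```

Running this shows the first four equations evaluating as true and the last two as false, pinpointing the invalid cancellation between steps 4 and 5.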
In particular, we divided both sides of the equation by (x - y).
Which is invalid, i.e. not universally truth-preserving, because (x - y) might be equal to 0.
And if you divide both sides by 0, you can get a false statement from a true statement. 3 * 0 = 2 * 0 is a true equation, but 3 = 2 is a false equation, so it is not allowable in general to cancel any factor if the factor might equal zero.
On the other hand, adding 1 to both sides of an equation is always truth-preserving. We can't guarantee as a matter of logic that x = y to start with - for example, x might be my number of ears and y might be my number of fingers. But if x = y then x + 1 = y + 1, always. Logic is not there to create truth; logic is there to preserve truth. If a scale starts out balanced, then adding the same weight to both sides will result in a scale that is still balanced:
I will remark, in some horror and exasperation with the modern educational system, that I do not recall any math-book of my youth ever once explaining that the reason why you are always allowed to add 1 to both sides of an equation is that it is a kind of step which always produces true equations from true equations.
What is a valid proof in algebra? It's a proof where, in each step, we do something that is universally allowed, something which can only produce true equations from true equations, and so the proof gradually transforms the starting equation into a final equation which must be true if the starting equation was true. Each step should also - this is part of what makes proofs useful in reasoning - be locally verifiable as allowed, by looking at only a small number of previous points, not the entire past history of the proof. If in some previous step I believed x^{2} - y = 2, I only need to look at that single step to get the conclusion x^{2} = 2 + y, because I am always allowed to add y to both sides of the equation; because I am always allowed to add any quantity to both sides of the equation; because if the two sides of an equation are in balance to start with, adding the same quantity to both sides of the balance will preserve that balance. I can know the inevitability of this implication without considering all the surrounding circumstances; it's a step which is locally guaranteed to be valid. (Note the similarity - and the differences - to how we can compute a causal entity knowing only its immediate parents, and no other ancestors.)
You may have read - I've certainly read - some philosophy which endeavors to score points for counter-intuitive cynicism by saying that all mathematics is a mere game of tokens; that we start with a meaningless string of symbols like:
...and we follow some symbol-manipulation rules like "If you have the string 'A ∧ (A → B)' you are allowed to go to the string 'B'", and so finally end up with the string:
...and this activity of string-manipulation is all there is to what mathematicians call "theorem-proving" - all there is to the glorious human endeavor of mathematics.
This, like a lot of other cynicism out there, is needlessly deflationary.
There's a family of techniques in machine learning known as "Monte Carlo methods" or "Monte Carlo simulation", one of which says, roughly, "To find the probability of a proposition Q given a set of premises P, simulate random models that obey P, and then count how often Q is true." Stanislaw Ulam invented the idea after trying for a while to calculate the probability that a random Canfield solitaire layout would be solvable, and finally realizing that he could get better information by trying it a hundred times and counting the number of successful plays. This was during the era when computers were first becoming available, and the thought occurred to Ulam that the same technique might work on a current neutron diffusion problem as well.
Similarly, to answer a question like, "What is the probability that a random Canfield solitaire is solvable, given that the top card in the deck is a king?" you might imagine simulating many 52-card layouts, throwing away all the layouts where the top card in the deck was not a king, using a computer algorithm to solve the remaining layouts, and counting what percentage of those were solvable. (It would be more efficient, in this case, to start by directly placing a king on top and then randomly distributing the other 51 cards; but this is not always efficient in Monte Carlo simulations when the condition to be fulfilled is more complex.)
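Since a full Canfield solver is well out of scope here, the rejection-sampling idea can be illustrated with a simpler conditional query on the same 52-card deck; everything below is a stand-in for the solitaire example, not a real solver. We estimate P(the second card is an ace | the top card is a king), whose exact value is 4/51, by generating random layouts and throwing away every layout whose top card is not a king:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Ranks 1..13 (13 = king, 1 = ace), four suits each.
deck = [(rank, suit) for rank in range(1, 14) for suit in range(4)]

hits = trials = 0
while trials < 20_000:
    random.shuffle(deck)
    if deck[0][0] != 13:         # condition: top card must be a king...
        continue                  # ...otherwise throw the layout away
    trials += 1
    hits += deck[1][0] == 1       # query: is the second card an ace?

print(round(hits / trials, 3))    # close to 4/51 ~ 0.078
```

Note how wasteful the rejection step is: roughly twelve out of every thirteen shuffles get discarded, which is why directly placing the king on top first is more efficient when the condition is simple.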
Okay, now for a harder problem. Suppose you've wandered through the world a bit, and you've observed the following:
(1) So far I've seen 20 objects which have been kittens, and on the 6 occasions I've paid a penny to observe the size of something that's a kitten, all 6 kitten-objects have been little.
(2) So far I've seen 50 objects which have been little, and on the 30 occasions where I've paid a penny to observe the morality of something little, all 30 little objects have been innocent.
(3) This object happens to be a kitten. I want to know whether it's innocent, but I don't want to pay a cent to find out directly. (E.g., if it's an innocent kitten, I can buy it for a cent, sell it for two cents, and make a one-cent profit. But if I pay a penny to observe directly whether the kitten is innocent, I won't be able to make a profit, since gathering evidence is costly.)
Your previous experiences have led you to suspect the general rule "All kittens are little" and also the rule "All little things are innocent", even though you've never before directly checked whether a kitten is innocent.
Furthermore...
You've never heard of logic, and you have no idea how to play that 'game of symbols' with K(x), I(x), and L(x) that we were talking about earlier.
But that's all right. The problem is still solvable by Monte Carlo methods!
First we'll generate a large set of random universes. Then, for each universe, we'll check whether that universe obeys all the rules we currently suspect or believe to be true, like "All kittens are little" and "All little things are innocent" and "The force of gravity goes as the square of the distance between two objects and the product of their masses". If a universe passes this test, we'll check to see whether the inquiry of interest, "Is the kitten in front of me innocent?", also happens to be true in that universe.
We shall repeat this test a large number of times, and at the end we shall have an approximate estimate of the probability that the kitten in front of you is innocent.
On this algorithm, you perform inference by visualizing many possible universes, throwing out universes that disagree with generalizations you already believe in, and then checking what's true (probable) in the universes that remain. This algorithm doesn't tell you the state of the real physical world with certainty. Rather, it gives you a measure of probability - i.e., the probability that the kitten is innocent - given everything else you already believe to be true.
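A toy version of this algorithm can be sketched directly. The attribute names, universe size, and coin-flip probabilities below are all arbitrary choices for illustration, not part of the argument:

```python
import random

random.seed(1)

def random_universe(n_objects=5):
    """Generate a random universe; the object of interest (index 0) is a kitten."""
    objs = [{"kitten": random.random() < 0.5,
             "little": random.random() < 0.5,
             "innocent": random.random() < 0.5}
            for _ in range(n_objects)]
    objs[0]["kitten"] = True
    return objs

def obeys_generalizations(u):
    """All kittens are little, and all little things are innocent."""
    return (all(o["little"] for o in u if o["kitten"]) and
            all(o["innocent"] for o in u if o["little"]))

# Generate many universes, keep only those obeying the suspected rules,
# then check how often the kitten in front of us is innocent.
kept = [u for _ in range(20_000) if obeys_generalizations(u := random_universe())]
p = sum(u[0]["innocent"] for u in kept) / len(kept)
print(p)  # 1.0 -- in every surviving universe the kitten is innocent
```

The estimate comes out at exactly 1.0, because in any universe passing the filter the kitten is little (first rule) and therefore innocent (second rule): the Monte Carlo procedure has rediscovered a logical implication.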
And if, instead of visualizing many imaginable universes, you checked all possible logical models - which would take something beyond magic, because that would include models containing uncountably large numbers of objects - and the inquiry-of-interest was true in every model matching your previous beliefs, you would have found that the conclusion followed inevitably if the generalizations you already believed were true.
This might take a whole lot of reasoning, but at least you wouldn't have to pay a cent to observe the kitten's innocence directly.
But it would also save you some computation if you could play that game of symbols we talked about earlier - a game which does not create truth, but preserves truth. In this game, the steps can be locally pronounced valid by a mere 'syntactic' check that doesn't require us to visualize all possible models. Instead, if the mere syntax of the proof checks out, we know that the conclusion is always true in a model whenever the premises are true in that model.
And that's a mathematical proof: A conclusion which is true in any model where the axioms are true, which we know because we went through a series of transformations-of-belief, each step being licensed by some rule which guarantees that such steps never generate a false statement from a true statement.
The way we would say it in standard mathematical logic is as follows:
A collection of axioms X semantically implies a theorem Y, if Y is true in all models where X are true. We write this as X ⊨ Y.
A collection of axioms X syntactically implies a theorem Y within a system S, if we can get to Y from X using transformation steps allowed within S. We write this as X ⊢ Y.
The point of the system S known as "classical logic" is that its syntactic transformations preserve semantic implication, so that any syntactically allowed proof is semantically valid:
If X ⊢ Y, then X ⊨ Y.
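For propositional logic, semantic implication can be checked by brute force over all truth assignments, which makes the soundness of a syntactic rule concrete. A minimal sketch for modus ponens (premises A and A -> B, conclusion B):

```python
from itertools import product

# Check that B is true in every model (truth assignment) where both
# A and (A -> B) are true. (A -> B) is encoded as (not A) or B.
semantic = all(
    B
    for A, B in product([False, True], repeat=2)
    if A and ((not A) or B)
)
print(semantic)  # True: the premises semantically imply the conclusion,
                 # so licensing the modus ponens step is sound.
```

This is exactly the X ⊨ Y side of the story; the point of a proof system S is that its rules pass this kind of check once and for all, so that later we never have to enumerate models again.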
If you frame this idea in terms of proof steps in algebra doing things that always preserve the balance of a previously balanced scale, I see no reason why it couldn't be presented in eighth grade or earlier.
I can attest by spot-checking for small N that even most mathematicians have not been exposed to this idea. It's the standard concept in mathematical logic, but for some odd reason, the knowledge seems constrained to the study of "mathematical logic" as a separate field, which not all mathematicians are interested in (many just want to do Diophantine analysis or whatever).
So far as real life is concerned, mathematical logic only tells us the implications of what we already believe or suspect, but this is a computational problem of supreme difficulty and importance. After the first thousand times we observe that objects in Earth gravity accelerate downward at 9.8 m/s^{2}, we can suspect that this will be true on the next occasion - which is a matter of probabilistic induction, not valid logic. But then to go from that suspicion, plus the observation that a building is 45 meters tall, to a specific prediction of how long it takes a dropped object to hit the ground, is a matter of logic - what will happen if everything else we already believe is actually true. It requires computation to make this conclusion transparent. We are not 'logically omniscient' - the technical term for the impossible dreamlike ability of knowing all the implications of everything you believe.
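For the falling-object example, the deductive (computational) part is just solving h = (1/2) g t^2 for t, given the suspected value of g and the observed height; a sketch:

```python
import math

g = 9.8     # suspected generalization: downward acceleration, m/s^2
h = 45.0    # observed premise: building height, m

# From h = (1/2) * g * t^2 (ignoring air resistance), t = sqrt(2h / g).
t = math.sqrt(2 * h / g)
print(round(t, 2))  # about 3.03 seconds
```

The prediction is only as trustworthy as the premises: logic guarantees the implication, induction supplied the 9.8.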
The great virtue of logic in argument is not that you can prove things by logic that are absolutely certain. Since logical implications are valid in every possible world, "observing" them never tells us anything about which possible world we live in. Logic can't tell you that you won't suddenly float up into the atmosphere. (What if we're in the Matrix, and the Matrix Lords decide to do that to you on a whim as soon as you finish reading this sentence?) Logic can only tell you that, if that does happen, you were wrong in your extremely strong suspicion about gravitation being always and everywhere true in our universe.
The great virtue of valid logic in argument, rather, is that logical argument exposes premises, so that anyone who disagrees with your conclusion has to (a) point out a premise they disagree with or (b) point out an invalid step in reasoning which is strongly liable to generate false statements from true statements.
For example: Nick Bostrom put forth the Simulation Argument, which is that you must disagree with either statement (1) or (2) or else agree with statement (3):
(1) Earth-originating intelligent life will, in the future, acquire vastly greater computing resources.
(2) Some of these computing resources will be used to run many simulations of ancient Earth, aka "ancestor simulations".
(3) We are almost certainly living in a computer simulation.
...but unfortunately it appears that not only do most respondents decline to say why they disbelieve in (3), most are unable to understand the distinction between the Simulation Hypothesis - that we are living in a computer simulation - and Nick Bostrom's actual Simulation Argument, which says only that "You must either disagree with (1) or (2) or agree with (3)". They just treat Nick Bostrom as having claimed that we're all living in the Matrix. Really. Look at the media coverage.
I would seriously generalize that the mainstream media only understands the "and" connective, not the "or" or "implies" connective. I.e., it is impossible for the media to report on a discovery that one of two things must be true, or a discovery that if X is true then Y must be true (when it's not known that X is true). Also, the media only understands the "not" prefix when applied to atomic facts; it should go without saying that "not (A and B)" cannot be reported-on.
Robin Hanson sometimes complains that when he tries to argue that conclusion X follows from reasonable-sounding premises Y, his colleagues disagree with X while refusing to say which premise Y they think is false, or which step of the reasoning seems like an invalid implication. Such behavior is not only annoying, but logically rude, because someone else went out of their way and put in extra effort to make it as easy as possible for you to explain why you disagreed, and you couldn't be bothered to pick one item off a multiple-choice menu.
The inspiration of logic for argument is to lay out a modular debate, one which conveniently breaks into smaller pieces that can be examined with smaller conversations. At least when it comes to trying to have a real conversation with a respected partner - I wouldn't necessarily advise a teenager to try it on their oblivious parents - that is the great inspiration we can take from the study of mathematical logic: An argument is a succession of statements each allegedly following with high probability from previous statements or shared background knowledge. Rather than, say, snowing someone under with as much fury and as many demands for applause as you can fit into sixty seconds.
Michael Vassar is fond of claiming that most people don't have the concept of an argument, and that it's pointless to try and teach them anything else until you can convey an intuitive sense for what it means to argue. I think that's what he's talking about.
Meditation: It has been claimed that logic and mathematics is the study of which conclusions follow from which premises. But when we say that 2 + 2 = 4, are we really just assuming that? It seems like 2 + 2 = 4 was true well before anyone was around to assume it, that two apples equaled two apples before there was anyone to count them, and that we couldn't make it 5 just by assuming differently.
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "Logical Pinpointing"
Previous post: "Causal Reference"
Some philosophers of mathematics (often those who call themselves "naturalists") argue that what we ought to be paying attention to in mathematics is what mathematicians actually do. If you look at it like this, you notice that very few mathematicians actually produce fully valid proofs in first-order logic, even when they're proving a new theorem (although there are salutary attempts to change this, such as proving lots of existing theorems using Coq).
The best encapsulation that I've heard of what mathematicians actually do when they offer proofs is something like:
It seems pretty clear to me that this still indicates that mathematicians are really interested in the existence of the formal proof, but that they use a bunch of heuristics to figure out whether it exists and what it is - rather than mathematical practice showing us that the notion of proof is really this more social notion, with the formal version therefore somehow off base.
Mainstream status:
This is meant to present a completely standard view of semantic implication, syntactic implication, and the link between them, as understood in modern mathematical logic. All departures from the standard academic view are errors and should be flagged accordingly.
Although this view is standard among the professionals whose job it is to care, it is surprisingly poorly known outside that circle. Trying to make a function call to these concepts inside your math professor's head is likely to fail unless they have knowledge of "mathematical logic" or "model theory".
Beyond classical logic lie the exciting frontiers of weird logics such as intuitionistic logic, which doesn't have the theorem ¬¬P → P. These stranger syntaxes can imply entirely different views of semantics, such as a syntactic derivation of Y from X meaning, "If you hand me an example of X, I can construct an example of Y."
I can't actually recall where I've seen someone else say that e.g. "An algebraic proof is a series of steps that you can tell are locally licensed because they maintain balanced weights", but it seems like an obvious direct specialization of "syntactic..."
The metaphor of a scale is at least a common teaching tool for algebra: see 1, 2, 3.
I was taught algebra with a scale in the sixth grade. We had little weights that said "X" on them and learned that you could add or take away "x" from both sides.
Eliezer, I very much like your answer to the question of what makes a given proof valid, but I think your explanation of what proofs are is severely lacking. To quote Andrej Bauer in his answer to the question "Is rigour just a ritual that most mathematicians wish to get rid of if they could?", treating proofs as syntactic objects "puts logic where analysis used to be when mathematicians thought of functions as symbolic expressions, probably sometime before the 19th century." If the only important thing about the steps of a proof is that they preserve balanced weights (or semantic implication), then it shouldn't be important which steps you take, only that you take valid steps. Consequently, it should be either nonsensical or irrelevant to ask if two proofs are the same; the only important property of a proof, mathematically, is validity (under this view). But this isn't the case. People care about whether...
My data point: as an undergraduate mathematician at Oxford, the mathematical logic course was one of the most popular, and certainly covered most of this material. I guess I haven't talked to a huge number of mathematicians about logic, but I'd be pretty shocked if they didn't know the difference between syntax and semantics. YMMV.
I go to a Big Ten university where the graduate-level sequence in formal logic hasn't been taught in six years.
When I went to Rutgers, the course on proof theory and model theory was taught by the philosophy department. (And it satisfied my humanities requirement for graduation!)
I'm surprised by this claim. Most mathematicians have at least some understanding of mathematical logic. What you may be encountering are people who simply haven't had to think about these issues in a while. But your use of Diophantine analysis, a subbranch of number theory, as your other example is a bit strange because number theory and algebra have become quite adept in the last few years at using model theory to show the existence of proofs even when one can't point to the proof in question. The classical example is the Ax-Grothendieck theorem, Terry Tao discusses this and some related issues here. Similarly, Mochizuki's attempted proof of the ABC conjecture (as I very roughly understand it) requires delicate work with models.
I happen to be studying model theory at the moment. For anyone curious, when Eliezer says 'If X ⊢ Y, then X ⊨ Y' (that is, if Y is provable from the axioms X, then Y is true in every model of X), this is known as soundness. The converse is completeness, or more specifically semantic completeness, which says that if a statement is true in every model of a theory (in other words, in every possible world where that theory is true), then there is a finite proof of the statement. In symbols this is 'If X ⊨ Y, then X ⊢ Y'. Note that this notion of 'completeness' is not the one used in Gödel's incompleteness theorems.
This is not correct. In first-order logic, there are no counterexamples to "If X ⊨ Y then X ⊢ Y" - this is (equivalent to) Gödel's completeness theorem. Gödel's incompleteness theorem says that every consistent first-order theory with an effectively enumerable set of axioms, and which includes arithmetic, contains some propositions which have neither a proof nor a disproof. One typically goes on to say that the undecidable proposition that the proof constructs is true in the "usual model" of the natural numbers, but that is verging on philosophy. At any rate, the truth of the proposition is asserted for only one particular model of the axioms of arithmetic, not all possible models. By the completeness theorem, an undecidable proposition must be true in some models and false in others.
First-order logic is complete. There are no counterexamples. Since the Gödel sentence of a theory is not provable from the axioms of the theory, it follows (from completeness) that the Gödel sentence must be false in some "non-standard" models of the axioms.
You can read the first incompleteness theorem as saying that there's no way to come up with a finitely expressible set of axioms that can prove all the truths of arithmetic. This is not in conflict with the completeness theorem. The completeness theorem tells us that if there were some finite (or recursively enumerable) set of axioms of arithmetic such that the inference to any truth of arithmetic from those axioms was semantically valid, then all truths of arithmetic can be proven from that set of axioms. The incompleteness theorem tells us that the antecedent of this conditional is false; it doesn't deny the truth of the conditional itself.
Given several other assumptions, some empirical, some about anthropic reasoning.
"I will remark, in some horror and exasperation with the modern educational system, that I do not recall any math-book of my youth ever once explaining that the reason why you are always allowed to add 1 to both sides of an equation is that it is a kind of step which always produces true equations from true equations."
I can now say that my K-12 education was, at least in this one way, better than yours. I must have been 14 at the time, and the realization that you can do that hit me like a ton of bricks, followed closely by another ton of bricks -- choosing what to add is not governed by the laws of math -- you really can add anything, but not everything is equally useful.
E.g., "solve for x: x + 1 = 5".
You can choose to add -1 to both sides, getting "x + 1 + (-1) = 5 + (-1)", simplify both sides and get "x = 4" and yell "yay"; but you can also choose to add, say, 37, and get (after simplification) "x + 38 = 42", which is still true, just not useful. My immediate question after that was "how do you know what to choose" and, long story short, 15 years later I published a math paper... :)
The motivation for the extremely unsatisfying idea that proofs are social is that no language—not even the formal languages of math and logic—is self-interpreting. In order to understand a syllogism about kittens, I have to understand the language you use to express it. You could try to explain to me the rules of your language, but you would have to do that in some language, which I would also have to understand. Unless you assume some a priori linguistic agreement, explaining how to interpret your syllogism requires explaining how to interpret the language of your explanation, and explaining how to interpret the language of that explanation, and so on ad infinitum. This is essentially the point Lewis Carroll was making when he said that "A" and "if A then B" were not enough to conclude B. You also need "if 'A' and 'if A then B' then B", and so on. You and I may understand that as implied already. But it is not logically necessary that we do so, if we don't already understand the English language the same way.
This is, I think, the central point that Ludwig Wittgenstein makes in The Philosophical Investigations and Remarks on the Foundations of Mathematics...
From my perspective, when I've explained why a single AI alone in space would benefit instrumentally from checking proofs for syntactic legality, I've explained the point of proofs. Communication is an orthogonal issue, having nothing to do with the structure of mathematics.
I thought of a better way of putting what I was trying to say. Communication may be orthogonal to the point of your question, but representation is not. An AI needs to use an internal language to represent the world or the structure of mathematics—this is the crux of Wittgenstein's famous "private language argument"—whether or not it ever attempts to communicate. You can't evaluate "syntactic legality" except within a particular language, whose correspondence to the world is not given as a matter of logic (although it may be more or less useful pragmatically).
Before I read the rest of the post, I'd like to note that those pictures are adorable. I sincerely hope we'll be seeing more animal cartoons like that in the future.
I can testify that in differential geometry, almost nobody knows this idea (I certainly didn't back in that part of my career). But also nobody really thought of proofs as meaningless symbol manipulations; the models in geometry are so intuitively vivid that it feels as if you are actually exploring a real platonic realm.
(that's why I found logic so hard: you had to follow the proofs, and intuition wasn't reliable)
That's not what I thought it means. My understanding was that it's something like: "all parties should be heard", and it's more of a legal thing...
http://en.wikipedia.org/wiki/Audi_alteram_partem
This is false as a description of how mathematics is done in practice. If I am confronted with a purported theorem and a messy, complicated "proof", there are ways I can refute it without either challenging a premise or a specific step …
Your account of "proof" is not actually an alternative to the "proofs are social constructs" description, since these are addressing two different aspects of proof. You have focused on the standard mathematical model of proofs, but there is a separate sociological account of how professional mathematicians prove things.
Here is an example of the latter from Thurston's "On Proof and Progress in Mathematics."
Would the average 9-year-old be able to understand such a math book? (For that matter, would the average teacher of 9-year-olds?)
http://www.wired.com/geekdad/2012/06/dragonbox/all/ Well, DragonBox suggests that a five-year-old can get at least the basic idea, so I'd deeply hope a 9-year-old could grasp it more explicitly.
I had a similar experience. When I took Algebra I, I understood that you could add, subtract, multiply, or divide both sides of an equation by the same (non-zero) value and it would still be true. I remember at one point I was working on solving …
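That "do the same thing to both sides" insight can be mechanized. A small sketch (the pair representation of a linear expression a·x + b and the function names are my own choices): each solving step applies one operation to both sides at once.

```python
# Represent the linear expression a*x + b as the pair (a, b).
def sub(e1, e2):
    """Subtract one expression from another, term by term."""
    return (e1[0] - e2[0], e1[1] - e2[1])

def scale(e, k):
    """Multiply an expression by a constant k."""
    return (e[0] * k, e[1] * k)

def solve_linear(lhs, rhs):
    """Solve lhs = rhs by applying the same operation to both sides."""
    # Subtract rhs's x-term from both sides: all x's move to the left.
    step = (rhs[0], 0)
    lhs, rhs = sub(lhs, step), sub(rhs, step)
    # Subtract lhs's constant term from both sides.
    step = (0, lhs[1])
    lhs, rhs = sub(lhs, step), sub(rhs, step)
    # Divide both sides by the coefficient of x.
    k = 1 / lhs[0]
    lhs, rhs = scale(lhs, k), scale(rhs, k)
    return rhs[1]  # lhs is now (1, 0), i.e. just x

# 3x + 2 = 2x + 7, with x's on both sides of the equals sign:
print(solve_linear((3, 2), (2, 7)))  # 5.0
```

Every transformation preserves the truth of the equation, which is exactly why the final line is still a fact about the original one.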
Further to that, there is an advantage for the maker of an argument: they have to make explicit the assumptions they may not have realised they were making. Speaking of implicit assumptions...
Note that this doesn't …
Meditation:
It has been claimed that logic and mathematics is the study of which conclusions follow from which premises. But when we say that 2 + 2 = 4, are we really just assuming that? It seems like 2 + 2 = 4 was true well before anyone was around to assume it, that two apples equalled two apples before there was anyone to count them, and that we couldn't make it 5 just by assuming differently.
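On the meditation's question: within a formal system, 2 + 2 = 4 is not an extra assumption but a consequence of the definitions of the numerals and of addition. One way to see this concretely (my own illustration, in Lean):

```lean
-- 2 + 2 = 4 follows from the definitions of the numerals and of +,
-- not from an additional assumption: both sides compute to the same
-- value, so reflexivity closes the goal.
example : 2 + 2 = 4 := rfl
```

Which leaves open the meditation's deeper question of why those definitions describe apples so well.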
That Monte Carlo Method sounds a lot like dreaming.
This is something I can observe every day in my life. Probably I am less lucky than other guys, and in my surroundings there are not that many "respected partners" who are able to listen and not rudely, immediately disagree. So yes, it seems like most people don't have the concept of a proper argument. But then, I also did not have this concept for a long time …
I want to add that this idea justifies material implication ("if 2×2 = 4, then the sky is blue") and other counter-intuitive properties of formal logic, like "you can prove anything if you assume a contradiction/false statement".
The usual way to show the latter goes like this …
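The comment is cut off, but the standard demonstration of "a contradiction proves anything" (ex falso quodlibet) can be sketched in Lean:

```lean
-- Ex falso quodlibet: from A and ¬A together, any proposition B
-- follows; `absurd` packages the derivation of False and its
-- elimination into B.
example (A B : Prop) (hA : A) (hnA : ¬A) : B := absurd hA hnA
```

This is counter-intuitive only until you see the theorem as a promise about premise-conclusion paths: a path starting from a false premise promises nothing about the world.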
I would suggest that the most likely reason for logical rudeness - not taking the multiple-choice - is that most arguments beyond a certain level of sophistication have more unstated premises than they have stated premises.
And I suspect it's not easy to identify unstated premises. Not just the ones you don't want to say, belief-in-belief sort of things, but ones you as an arguer simply aren't sufficiently skilled to describe.
As an example:
Truth-preserving logic is part of a family of axiom systems called modal logics. In my humble opinion, truth-preserving logic isn't the most interesting; I think the most interesting is proof- or justification-preserving logic, and coming in at a close second, the break-even-betting-on-probabilistic-outcomes logic.
Classical logic is the common name for truth-preserving logic, intuitionistic logic for justification-preserving, and Bayesian reasoning for the last one.
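One concrete point where the two views of logic diverge: double-negation elimination is valid classically but not derivable in pure intuitionistic (justification-preserving) logic. A Lean sketch (note the explicit appeal to the Classical axioms):

```lean
-- Valid classically, but not in pure intuitionistic logic:
-- eliminating a double negation requires Classical.byContradiction,
-- i.e. an axiom beyond the justification-preserving core.
example (A : Prop) (h : ¬¬A) : A := Classical.byContradiction h
```

Intuitionistically, ¬¬A records only that assuming ¬A is untenable, which is weaker than exhibiting a justification for A itself.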
Great post, Eliezer. I've seen all of this before, but never all in one place, and not explained so clearly.
I was going to write a long story about how I became interested in math, but it would take too much time, and I doubt many people would care. So, I'll abbreviate it: Number Munchers → "How can I solve an algebra problem in which there are "x"s on both sides of the equals sign?" → An obsession with pi (I had over 500 decimal places memorized at one point.) → Learning some easier calculus for fun in 9th grade → An obsession with p…
My [uninformed] interpretation of mathematics is that it is an abstraction of concepts which do exist in this world, which we have observed like we might observe gravity. We then go on to infer things about these abstract concepts using proofs.
So we would observe numbers in many places in nature, from which we would make a model of numbers (which would be an abstract model of all the things which we have observed following the rules of numbers), and from our model of numbers we could infer properties of numbers (much like we can infer things about a falling …
Most of this is old, but upvoted for the last couple paragraphs.