
In the latest Rationality Quotes thread, CronoDAS quoted Paul Graham:

It would not be a bad definition of math to call it the study of terms that have precise meanings.

Sort of. I started writing this as a reply to that comment, but it grew into a post.
We've all heard the story of epicycles: before Copernicus came along, the movement of the stars and planets was explained by the idea that they were attached to rotating epicycles, some of which were embedded within other, larger rotating epicycles (I'm simplifying the terminology a little here).
As we now know, the theory of epicycles was completely wrong. The stars and planets were not at the distances from Earth posited by the theory, or of the size presumed by it, nor were they moving about on some giant clockwork structure of rings.
In the theory of epicycles the terms had precise mathematical meanings. The problem was that what the terms were meant to represent in reality was wrong. The theory consisted of applied mathematical statements, and in any such statement the terms don't just have their mathematical meaning -- what the equations say about them -- they also have an 'external' meaning concerning what they're supposed to represent in or about reality.
Let's consider these two types of meaning. The mathematical, or 'internal', meaning of a statement like '1 + 1 = 2' is very precise. '1 + 1' is defined as '2', so '1 + 1 = 2' is pretty much the pre-eminent fact or truth. This is why mathematical truth is usually given such an exalted place. So far so good with saying that mathematics is the study of terms with precise meanings.
But what if '1 + 1 = 2' happens to be used to describe something in reality? Each of the terms will then take on a second meaning -- concerning what it is meant to be representing in reality. This meaning lies outside the mathematical theory, and there is no guarantee that it is accurate.
The problem with saying that mathematics is the study of terms with precise meanings is that it's all too easy to take this as trivially true, because the terms obviously have a precise mathematical sense. It's easy to overlook the other type of meaning -- to think there is just the meaning of a term, and just the question of how precise that meaning is. This is why we get people saying "numbers don't lie".
'Precise' is a synonym for "accurate" and "exact", and it is characterized by "perfect conformity to fact or truth" (according to WordNet). So when someone says that mathematics is the study of terms with precise meanings, we have a tendency to take it as meaning the study of things that are accurate and true. The problem is that mathematical precision clearly does not guarantee the precision -- the accuracy or truth -- of applied mathematical statements, which need to conform with reality.
There are quite subtle ways of falling into this trap of confusing the two meanings. A believer in epicycles would likely have thought the theory must be correct because it gave mathematically correct answers. And it actually did: epicycles really did precisely calculate the positions of the stars and planets (not absolutely perfectly, but in principle the theory could have been adjusted to give arbitrarily precise results). If the mathematics was right, how could it be wrong?
But what the theory was actually calculating was not the movement of galactic clockwork machinery with stars and planets embedded within it, but the movement of points of light (corresponding to the real stars and planets) across the sky. The positions were right, but they had it conceptualised all wrong.
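(With modern hindsight -- this is my gloss, not anything Ptolemy had available -- the reason the theory could always be adjusted to fit is that a stack of epicycles is a sum of uniformly rotating circles, i.e. a truncated Fourier series in the complex plane. A minimal sketch, with an ellipse standing in for an observed path:)

```python
import cmath
import math

# A stack of epicycles = a sum of uniformly rotating circles = a truncated
# Fourier series in the complex plane.  The "observed" path here is an
# ellipse, standing in for a planet's apparent track across the sky.
N = 1024
ts = [2 * math.pi * n / N for n in range(N)]
orbit = [complex(math.cos(t), 0.6 * math.sin(t)) for t in ts]

def epicycle(k):
    """Radius-and-phase of the circle that rotates k times per revolution."""
    return sum(z * cmath.exp(-1j * k * t) for z, t in zip(orbit, ts)) / N

def model(max_k):
    """Path traced using only circles with frequencies -max_k .. +max_k."""
    circles = {k: epicycle(k) for k in range(-max_k, max_k + 1)}
    return [sum(c * cmath.exp(1j * k * t) for k, c in circles.items())
            for t in ts]

# Two circles (k = +1 and k = -1) already reproduce the ellipse exactly.
err = max(abs(z - m) for z, m in zip(orbit, model(1)))
```

A more complicated apparent path would simply need more circles: the numbers get as precise as you like, with no commitment whatsoever about what the circles are.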
This raises the question of whether it really matters if the conceptualisation is wrong, as long as the numbers are right. Isn't instrumental correctness all that really matters? We might think so, but it is not true. How would Pluto's existence have been predicted under an epicycles conceptualisation? How would we have thought about space travel under such a conceptualisation?
The moral is, when we're looking at mathematical statements, numbers are representations, and representations can lie.

If you're interested in knowing more about epicycles and how that theory was overthrown by the Copernican one, Thomas Kuhn's quite readable The Copernican Revolution is an excellent resource.


80 comments

"As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."

-- Albert Einstein

As far as I can see, that's just an acknowledgement that we can't know anything for certain -- so we can't be certain of any 'laws', and any claim of certainty is invalid. I was arguing that any applied maths term has two types of meanings -- one 'internal to' the equations and an 'external' ontological one, concerning what it represents -- and that a precise 'internal' meaning does not imply a precise 'external' meaning, even though 'precision' is often only thought of in terms of the first type of meaning. I don't see how that relates in any way to the question of absolute certainty. Is there some relationship I'm missing here?
The quote is getting at a distinction similar to yours. It's from the essay Geometry and Experience, published as one chapter in Sidelights on Relativity (pdf here). A different quote from the same essay goes:

You seem to be trying to arrive -- somewhat independently of the way it is typically done in science and mathematics -- at distinct concepts of accurate, precise, and predictive. At least how I'm using them, these terms can each describe a theory.

  • Precise - well defined and repeatable, with little or no error.

  • Accurate - descriptive of reality, as far as we know.

  • Predictive - we can extend this theory to describe new results accurately.

A theory can be any of these things, on different domains, to different degrees. A mathematical theory is precise*, but that need not make it accurate or predictive. It's of course a dangerous mistake to conflate these properties, or even to extend them beyond the domains on which they apply, when using a theory. Which is why these form part of the foundation of scientific method.

*There's a caveat here, but that would be an entirely different discussion.

And this is not just an abstract issue. The fact that predictiveness has almost nothing to do with accuracy, in the sense of correspondence, is one of the outstanding problems with physicalism.

I have not read Kuhn's work, but I have read some Ptolemy, and if I recall correctly he is pretty careful not to claim that the circles in his astronomy are present in some mechanical sense. (Copernicus, on the other hand, literally claims that the planets are moved by giant transparent spheres centered around the sun!)

In his discussion of his hypothesis that the planets' motions are simple, Ptolemy emphasizes that what seems simple to us may be complex to the gods, and vice versa. (This seems to me to be very similar to the distinction between concepts ... (read more)

From what I've heard and read, Ptolemy was a believer in the "shut up and calculate" interpretation of astronomical mechanics. If the equations make accurate predictions, the rest doesn't matter, right? Bohr took a similar attitude toward quantum mechanics when Einstein complained about it not making any sense: the "meaning" or "underlying reality" simply isn't important - the only thing that matters is whether or not the equations work.
Eliezer Yudkowsky (15y):
Considering that, in the end, the Earth does go around the Sun, there are some fascinating lessons to be derived from all this. In particular - yes, the Gods may have a different notion of simplicity, as 'twere, but unless you can exhibit that alternative notion of simplicity, it seems we should still penalize hypotheses that sure look complicated.
Would it have been better for Ptolemy to forego the epicycles and suggest that the planets describe simple circles around the earth? Not only would that have been less accurate, but it would have obscured the coincidences that enabled later astronomers like Copernicus to take a god's eye view and notice that a heliocentric framework was a much simpler interpretation of the data. My point is that complexity was not the problem. If Ptolemy had tried on purpose to make his model less complex, it would likely have come at the expense of accuracy. The problem was that Ptolemy had too much common sense, and was not willing to let the math dictate his physics rather than the other way around.
I offer this link not as any sort of pedantic correction, but simply as a resource for those interested in learning exactly what modern physics has to say about this question. (Not difficult; highly recommended.) (An ulterior motive for posting this is that I always have a terrible time tracking down that particular post.)
And the Sun does go around the Earth. Of course, most of the observation that led to people thinking that the Sun goes around the Earth in the first place was based on the Earth's rotation on its axis, so that's a whole different issue.

Please see this previous comment of mine.

The point here is that "1+1=2" should not be taken as a statement about physical reality, unless and until we have agreed (explicitly!) on a specific model of the world -- that is, a specific physical interpretation of those mathematical terms. If that model later turns out not to correspond to reality, that's what we say; we don't say that the mathematics was incorrect.

Thus, examples of things not to say:

  • "Relativity disproves Euclidean geometry."

  • "Quantum mechanics disproves classica

... (read more)
Re the last quote: I didn't expect Eliezer to say something like that. Has he actually ever seen a finite set?
Perhaps he meant "seen" in the sense of "visualized." What happens when we try to introspect on our visualization of some mathematical terms? Well I can't visualize an infinite set, but neither can I imagine a finite set, nor the number 5 for that matter. I can imagine five dots, or five apples, but not 5. In terms of my visualization, "5" seems to be an unfinished utterance. My mind wants to know, "5 what?" before it will visualize anything, or else it just puts up 5 black circles or whatever.
I interpreted that to mean that Eliezer doubts that a model that requires infinite sets will correspond to reality, not that the mathematics are incorrect. The figurative use of the word "atheist" makes the statement ambiguous, but his use of the phrase "actually seen" indicates that his concern is with modeling reality, not the math per se.
That was my (charitable) interpretation too, until, to my dismay, Eliezer confirmed (at a meetup) that he had "leanings" in the direction of constructivism/intuitionism -- apparently not quite aware of the discredited status of such views in mathematics. And indeed, when I asked Eliezer where he thinks the standard proof of infinite sets goes wrong, he pointed to the law of the excluded middle. His idol E.T. Jaynes may be to blame, who in PTLS explicitly allied himself with Kronecker, Brouwer, and Poincaré as opposed to Cantor, Hilbert, and Bourbaki -- once again apparently not understanding the settled status of that debate on the side of Cantor et al. One is inclined to suspect this is where Eliezer picked such attitudes up.
Can you elaborate on constructivism, intuitionism, and their discrediting? And what that has to do with the law of the excluded middle? I thought constructivism and intuitionism were epistemological theories, and it isn't immediately obvious how they apply to mathematics. Does a constructivist mathematician not believe in proof by contradiction? Also, I don't know what you mean by "the standard proof of infinite sets".
I think komponisto is a little confused about the discredited status of intuitionism, and you're a little confused about math vs epistemology. Here's a short sweet introduction to intuitionist math and when it's useful, much in the spirit of Eliezer's intuitive explanation of Bayes. Scroll down for the connection between intuitionism and infinitesimals - that's the most exciting bit. PS: that whole blog is pretty awesome - I got turned on to it by the post "Seemingly impossible functional programs" which demonstrates e.g. how the problem of determining equality of two black-box functions from reals in [0, 1] to booleans turns out to be computationally decidable in finite time (complete with comparison algorithm in Haskell).
Not at all. Precious few are the mathematicians who take the views of Kronecker or Brouwer seriously today. I mean, sure, some historically knowledgeable mathematicians will gladly engage in bull sessions about the traditional "three views" in the philosophy of mathematics (Platonism, intuitionism, and formalism), during which they treat them as if on par with each other. But then they get up the next day and write papers that depend on the Axiom of Choice without batting an eye.
The philosophical parts of intuitionism are mostly useless, but it contains useful mathematical parts like Martin-Löf type theory used in e.g. the Coq proof assistant. Not sure if this is relevant to Eliezer's "leanings" which started the discussion, but still.
Right, but in this context I wouldn't label such "mathematical parts" as part of intuitionism per se. What I'm talking about here is a certain school of thought that holds that mainstream (infinitary, nonconstructive) mathematics is in some important sense erroneous. This is a belief that Eliezer has been hitherto unwilling to disclaim -- for no reason that I can fathom other than a sense of warm glow around E.T. Jaynes. (Needless to say, Eliezer is welcome to set the record straight on this any time he wishes...)
Eliezer Yudkowsky (15y):
I do not understand what the word "erroneous" is supposed to mean in this context. For the sake of argument, I will go ahead and ask what sort of nonconstructive entities you think an AI needs to reason about, in order to function properly.
Some senses of "erroneous" that might be involved here include (this list is not necessarily intended to be exhaustive):

  • Mathematically incorrect -- i.e. the proofs contain actual logical inconsistencies. This was argued by some early skeptics (such as Kronecker) but is basically indefensible ever since the formulation of axiomatic set theory and results such as Gödel's on the consistency of the Axiom of Choice. Such a person would have to actually believe the ZF axioms are inconsistent, and I am aware of no plausible argument for this.

  • Making claims that are epistemologically indefensible, even if possibly true. E.g., maybe there does exist a well-ordering of the reals, but mere mortals are in no position to assert that such a thing exists. Again, axiomatic formalization should have meant the end of this as a plausible stance.

  • Irrelevant or uninteresting as an area of research because of a "lack of correspondence" with "reality" or "the physical world". In order to be consistent, a person subscribing to this view would have to repudiate the whole of pure mathematics as an enterprise. If, as is more common, the person is selectively criticizing certain parts of mathematics, then they are almost certainly suffering from map-territory confusion. Mathematics is not physics; the map is not the territory. It is not ordained or programmed into the universe that positive integers must refer specifically to numbers of elementary particles, or some such, any more than the symbolic conventions of your atlas are programmed into the Earth. Hence one cannot make a leap e.g. from the existence of a finite number of elementary particles to the theoretical adequacy of finitely many numbers. To do so would be to prematurely circumscribe the nature of mathematical models of the physical world. Any criticism of a particular area of mathematics as "unconnected to reality" necessarily has to be made from the standpoint of a particular model of reality. But part (perhaps a lar
Why do you think that the axiomatic formulation of ZFC "should have meant an end" to the stance that ZFC makes claims that are epistemologically indefensible? Just because I can formalize a statement does not make that statement true, even if it is consistent. Many people (including me and apparently Eliezer, though I would guess that my views are different from his) do not think that the axioms of ZFC are self-evident truths. In general, I find the argument for Platonism/the validity of ZFC based on common acceptance to be problematic because I just don't think that most people think about these issues seriously. It is a consensus of convenience and inertia. Also, many mathematicians are not Platonists at all but rather formalists -- and constructivism is closer to formalism than Platonism is.
Regarding your three bullet points above:

1. It's rude to start refuting an idea before you've finished defining it.

2. One of these things is not like the others. There's nothing wrong with giving us a history of constructive thinking, and providing us with reasons why outdated versions of the theory were found wanting. It's good style to use parallel construction to build rhetorical momentum. It is terribly dishonest to do both at the same time -- it creates the impression that the subjective reasons you give for dismissing point 3 have weight equal to the objective reasons history has given for dismissing points 1 and 2.

3. Your talk in point 3 about "map-territory confusion" is very strange. Mathematics is all in your head. It's all map, no territory. You seem to be claiming that constructivists are outside of the mathematical mainstream because they want to bend theory in the direction of a preferred outcome. You then claim that this is outside of the bounds of acceptable mathematical thinking. So what's wrong with reasoning like this: "Nobody really likes all of the consequences of the Axiom of Choice, but most people seem willing to put up with its bad behavior because some of the abstractions it enables -- like the Real Numbers -- are just so damn useful. I wonder how many of the useful properties of the Real Numbers I could capture by building up from (a possibly weakened version of) ZF set theory and a weakened version of the Axiom of Choice?"
I'm sorry, but I don't think there was anything remotely "rude" or "terribly dishonest" about my previous comment. If you think I am mistaken about anything I said, just explain why. Criticizing my rhetorical style and accusing me of violating social norms is not something I find helpful. Quite frankly, I also find criticisms of the form "you sound more confident than you should be" rather annoying. E.g: That's because for me, the reasons I gave in point 3 do indeed have similar weight to the reasons I gave in points 1 and 2. If you disagree, by all means say so. But to rise up in indignation over the very listing of my reasons -- is that really necessary? Would you seriously have preferred that I just list the bullet points without explaining what I thought? Nothing at all, except for the false claim that nobody likes the consequences of the Axiom of Choice. (Some people do like them, and why shouldn't they?) The target of my critique -- and I thought I made this clear in my response to cousin_it -- is the critique of mainstream mathematical reasoning, not the research program of exploring different axiomatic set theories. The latter could easily be done by someone fully on board with the soundness of traditional mathematics. Just as it is unnecessary to doubt the correctness of Euclid's arguments in order to be interested in non-Euclidean geometry.
Until very recently, I held a similar attitude. I think it's common to be annoyed by this sort of criticism... it's distracting and rarely relevant. That said, it seems to me that the above "rarely" isn't rare enough. If you're inadvertently violating a social norm, wouldn't you like to know? If you already know, what does it matter to have it pointed out to you? Just ignore the redundant information. I think this principle extends to a lot of speculative or subjective criticism. The potential value of just one accurate critique taken to heart seems quite high. Does such criticism have a positive expected value? That depends on the overall cost of the associated inaccurate or redundant statements (i.e., the vast majority of them). It seems this cost can be made to approach zero by just not taking them personally and ignoring them when they're misguided, so long as they're sufficiently disentangled from "object-level" statements.
Aaaand this makes me curious. Eliezer, for the sake of argument, do you really think we'd do good by prohibiting the AI from using reductio ad absurdum?
Eliezer Yudkowsky (15y):
Nope. I do believe in classical first-order logic, I'm just skeptical about infinite sets. I'd like to hear k's answer, though.
Perhaps this would make a good subject for my inaugural top-level post. I'll try to write one up in the near future.
Eliezer Yudkowsky (15y):
Okay. I have several sources of skepticism about infinite sets. One has to do with my never having observed a large cardinal.

One has to do with the inability of first-order logic to discriminate different sizes of infinite set (any countably infinite set of first-order statements that has an infinite model has a countably infinite model - i.e. a first-order theory of e.g. the real numbers has countable models as well as the canonical uncountable model) and that higher-order logic proves exactly what a many-sorted first-order logic proves, no more and no less.

One has to do with the breakdown of many finite operations, such as size comparison, in a way that e.g. prevents me from comparing two "infinite" collections of observers to determine anthropic probabilities.

The chief argument against my skepticism has to do with the apparent physical existence of minimal closures and continuous quantities, two things that cannot be defined in first-order logic but that would, apparently, if you take higher-order logic at face value, suffice respectively to specify the existence of a unique infinite collection of natural numbers and a unique infinite collection of points on a line.

Another point against my skepticism is that first-order set theory proper and not just first-order Peano Arithmetic is useful to prove e.g. the totalness of the Goodstein function, but while a convenient proof uses infinite ordinals, it's not clear that you couldn't build an AI that got by just as well on computable functions without having to think about infinite sets.

My position can be summed up as follows: I suspect that an AI does not have to reason about large infinities, or possibly any infinities at all, in order to deal with reality.
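(To make the Goodstein example concrete -- a sketch under my own framing, not Eliezer's: each step rewrites the current value in hereditary base-b notation, replaces b with b+1, and subtracts 1. The function below terminates for every input, though first-order Peano Arithmetic famously cannot prove this, which is why the convenient proof reaches for infinite ordinals.)

```python
def bump_base(n, b):
    """Rewrite n in hereditary base-b notation, then replace b with b+1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n:
        digit = n % b
        if digit:
            # exponents are themselves rewritten hereditarily
            result += digit * (b + 1) ** bump_base(power, b)
        n //= b
        power += 1
    return result

def goodstein_length(m):
    """Steps until the Goodstein sequence starting at m reaches 0."""
    base, steps = 2, 0
    while m:
        m = bump_base(m, base) - 1
        base += 1
        steps += 1
    return steps

# Small cases terminate quickly: lengths 1, 3, 5 for m = 1, 2, 3.
# (Do NOT try m = 4: it terminates too, but only after roughly
# 3 * 2**402653211 steps.)
```

The interesting point for this thread is that the code itself only ever manipulates finite integers; the infinite ordinals show up in the proof that the while-loop halts, not in the computation.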
"One has to do with the breakdown of many finite operations, such as size comparison, in a way that e.g. prevents me from comparing two "infinite" collections of observers to determine anthropic probabilities."

Born probabilities seem to fit your bill perfectly. :-)
Eliezer Yudkowsky (15y):
Don't think I haven't noticed that. (In fact I believe I wrote about it.)
I reject infinity as anything more than "a number that is big enough for its smallness to be negligible for the purpose at hand." My reason for rejecting infinity in its usual sense is very simple: it doesn't communicate anything. Here you said (about communication) "When you each understand what is in the other's mind, you are done." In order to communicate, there has to be something in your mind in the first place, but don't we all agree infinity can't ever be in your mind? If so, how can it be communicated?

Edit to clarify: I worded that poorly. What I mean to ask is, Don't we all agree that we cannot imagine infinity (other than imagine something like, say, a video that seems to never end, or a line that is way longer than you'd ever seem to need)? If you can imagine it, please just tell me how you do it! Also, "reject" is too strong a word; I merely await a coherent definition of "infinity" that differs from mine.
Yes but it doesn't matter. The moon can't literally be in your mind either. Since your mind is in your brain, then if the moon were in your mind it would be in your brain, and I don't even know what would happen first: your brain would be crushed against your skull (which would in turn explode), and the weight of the moon would crush you flat (and also destroy whatever continent you were on and then very possibly the whole world). But you can still think about the moon without it literally having to be in your mind. Same with infinity.
I can visualize the moon. If I say the word "moon," and you get a picture of the moon in your mind - or some such thing - then I feel like we're on the same page. But I can't visualize "infinity," or when I do it turns out as above. If I say the word "infinity" and you visualize (or taste, or whatever) something similar, I feel like we've communicated, but then you would agree with my first line in the above post. Since you don't agree, when I say "infinity," you must get some very different representation in your mind. Does it do the concept any more justice that my representations? If so, please tell me how to experience it.
We refer to things with signs. The signs don't have to be visual representations. We can think about things by employing the signs which refer to them. What makes the sign for (say) countable infinity refer to it is the way that the sign is used in a mathematical theory (countable infinity being a mathematical concept). Learn the math, and you will learn the concept. Compare to this: you probably cannot visualize the number 845,264,087,843,113. You can of course visualize the sign I just now wrote for it, but you cannot visualize the number itself (by, for example, visualizing a large bowl with exactly that number of pebbles in it). What you can do is visualize a bowl with a vast number of pebbles in it, while thinking the thought, "this imagined bowl has precisely 845,264,087,843,113 pebbles in it." Here you would be relying entirely on the sign to make your mental picture into a picture of exactly that number of pebbles. In fact you could dispense with the picture entirely and keep only the sign, and you would successfully be thinking about that number, purely by employing its sign in your thoughts. Note that you can do operations on that sign, such as subtracting another number by manipulating the two signs via the method you learned as a child. So you have mastered (some of) the relevant math, so the sign, as you employ it, really does refer to that number.
Well I agree that I can think just with verbal signs, so long as the verbal sentences or symbolic statements mean something to me (could potentially pay rent*) or the symbols are eventually converted into some other representation that means something to me. I can think with the infinity symbol, which doesn't mean anything to me (unless it means what I first said above: in short, "way big enough"), and then later convert the result back into symbols that do mean something to me. So I'm fine with using infinity in math, as long as it's just a formalism (a symbol) like that. But here is one reason why I want to object to the "realist" interpretation of infinity via this argument that it's just a formalism and has no physical or experiential interpretation, besides "way big enough": The Christian god, for example, is supposed to be infinite this and infinite that. This isn't intended - AFAIK - as a formalism nor as an approximation ("way powerful enough"), but as an actual statement. Once you realize this really isn't communicating anything, theological noncognitivism is a snap: the entity in question is shown to be a mere symbol, if anything. (Or, to be completely fair, God could just be a really powerful, really smart dude.) I know there are other major problems with theology, but this approach seems cleanest. *ETA: This needs an example. Say I have a verbal belief or get trusted verbal data, like a close friend says in a serious and urgent voice, "(You'd better) duck!" The sentence means something to me directly: it means I'll be better off taking a certain action. That pays rent because I don't get hit in the head by a snowball or something. To make it into thinking in words (just transforming sentences around using my knowledge of English grammar), my friend might have been a prankster and told me something of the form, "If not A, then not B. If C, then B. If A, then you'd better duck. By the way, C." Then I'd have to do the semantic transforms to derive the co
To know reality we employ physics. Physics employs calculus. Calculus employs limits. Limits employ infinite sequences. Does that pay enough rent?
I did say I'm fine with using infinity in math as a formalism, and also that statements using it could be reconverted (using mathematical operations) into ones that do pay rent. It's just that the symbol infinity doesn't immediately mean anything to me (except my original definition). But I am interested in the separate idea that limits employ infinite sequences. It of course depends on the definition of limit. The epsilon-delta definition in my high school textbook didn't use infinite sequences, except in the sense of "you could go on giving me epsilons and I could go on giving you deltas." That definition of infinity (if we'll call it that) directly means something to me: "this process of back and forth is not going to end." There is also the infinitesimal approach of nonstandard analysis, but see my reply to ata for that.
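(That "you give me epsilons, I give you deltas" picture can be written down as an actual game. A hedged sketch, using the limit of x^2 at x = 2 as a stand-in example; the delta rule below is one standard choice, not the only one:)

```python
import random

# A sketch of the epsilon-delta "game" for lim (x -> 2) of x^2 = 4.
# Challenger names any epsilon > 0; responder must name a delta > 0 such
# that 0 < |x - 2| < delta guarantees |x^2 - 4| < epsilon.

def responder(eps):
    # If delta <= 1 then |x + 2| < 5, so |x^2 - 4| = |x - 2| * |x + 2|
    # stays below 5 * delta; hence delta = min(1, eps / 5) wins the round.
    return min(1.0, eps / 5)

# Play a few rounds against random challenges near x = 2.
for eps in [1.0, 0.1, 1e-6]:
    delta = responder(eps)
    for _ in range(1000):
        x = 2 + random.uniform(-delta, delta)
        if x != 2:
            assert abs(x * x - 4) < eps
```

Note that every individual round is finite; the "infinity" lives only in the quantifier "for every epsilon", i.e. in the fact that the game never has to end.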
If statement A can be converted into statement B and statement B pays rent, then statement A pays rent. Your original definition is a terrible one for most purposes, because for them, no matter how big you make a finite number, it won't serve the purpose. Also, meaning is not immediate. Your sense that a word means something may arise with no perceptible delay, but meaning takes time. To use the point you raised, meaning pays rent and rent takes time to pay. Anticipated sensory experiences are scheduled to occur in the future, i.e. after a delay. The immediate sense that a word means something is not, itself, the meaning, but only a reliable intuition that the word means something. If you study the mathematics of infinity, then you will likewise develop the intuition that infinity means something. The epsilon-delta definition is meaningful because of the infinite divisibility of the reals. Unlike your original definition, this is a good definition (at least, once it's been appropriately cleaned up and made precise).
Only if the mathematical operation is performed by pure logical entailment, which -- if a meaningless definition of infinity is used and that definition is scrapped in the final statement -- it would not be. We will just go on about what constitutes a mathematical operation and such, but all I am saying is that if there is a formal manipulation rule that says something like, "You can change the infinity symbol to 'big enough'* here" (note: this is not logical entailment) then I have no objection to the use of the formal symbol "infinity."

*ETA: or just use the definition we agree on instead. This is a minor technical point, hard to explain, and I'm not doing a good job of it. I'll leave it in just in case you started a reply to it already, but I don't think it will help many people understand what I'm talking about, rather than just reading the parts below this.

For example? Although, if we agree on the definition below, there's maybe no point. That's why I said "could potentially pay rent."

Looks like we're in agreement, then, and I am not a finitist if that is what is meant by infinite sequences. But then, to take it back to the original, I still agree with Eliezer that an "infinite set" is a dubious concept. Infinite as an adverb I can take (describes a process that isn't going to end (in the sense that expecting it to end never pays rent)); infinite as an adjective, and infinity the noun, seem like reification: harmless in some contexts, but harmful in others.
A very early appearance of infinity is the proof that there are infinitely many primes. It is most certainly not a proof that there is a very large but finite number of primes.
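Euclid's argument is constructive, and the proof idea can be sketched in code. This is a hypothetical Python illustration of my own (not anything from the thread): take any finite list of primes, multiply them together and add one, and any prime factor of the result must lie outside the list.

```python
def smallest_prime_factor(n):
    """Return the smallest prime factor of n (n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime


def prime_outside(primes):
    """Euclid's construction: any prime dividing (product of primes) + 1
    leaves remainder 1 on division by each listed prime, so it cannot be
    any prime in the given list."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)
```

For example, `prime_outside([2, 3, 5, 7])` returns 211 (here 2·3·5·7 + 1 = 211 happens to be prime itself); the product-plus-one number need not be prime, but its factors are always new primes.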
I can agree with "there are infinitely many primes" if I interpret it as something like "if I ever expect to run out of primes, that belief won't pay rent." In this case, and in most cases in mathematics, these statements may look and operate the same - except mine might be slower and harder to work with. So why do I insist on it? I'm happy to work with infinities for regular math stuff, but there are some cases where it does matter, and these might all be outside of pure math. But in applied math there can be problems if infinity is taken seriously as a static concept rather than as a process where the expectation that it will end will never pay rent. Like if someone said, "Black holes have infinite density," I would have to ask for clarification. Can it be put into a verbal form at least? How would it pay rent in terms of measurements? That kind of thing.
Actually, the way I learned calculus, allowable values of functions are real (or complex), not infinite. The value of the function 1/x at x=0 is not "infinity", but "undefined" (which is to say, there is no function at that point); similarly for derivatives of functions where the functions go vertical. Since that time, I discovered that apparently physicists have supplemented the calculus I know with infinite values. They actually did it because this was useful to them. Don't ask me why, I don't remember. But here is a case where the pure math does not have infinities, and then the practical folk over in the physics department add them in. Apparently the practical folk think that infinity can pay rent. As for gravitational singularities, the problem here is not the concept of infinity. That is an innocent bystander. The problem is that the math breaks down. That happens even if you replace "infinite" with "undefined".
This isn't really correct. Allowable values of functions are whatever you want. If you define a function on R-{0} by "x goes to 1/x", it's not defined at 0; I explicitly excluded it from the domain. If you define a function on R by "x goes to 1/x"... you can't, there's no such thing as 1/0. If you define a function on R by "x goes to 1/x if x is nonzero, and 0 goes to infinity", this is a perfectly sensible function, which it is convenient to just abbreviate as "1/x". Though for obvious reasons I would only recommend doing this if the "infinity" you are using represents both arbitrarily large positive and negative quantities. (EDIT: But if you want to define a function on [0,infty) by "x goes to 1/x if x is nonzero, and 0 goes to infinity" with "infinity" now only being large in the positive direction, which is likely what's actually under consideration here, then this is not so dumb.) All this is irrelevant to any actual physical questions, where whether using infinities is appropriate or not just depends on, well, the physics of it.
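On the point about "practical folk" adding infinities: the IEEE-754 floating-point arithmetic used throughout applied computation really does include infinite values with well-defined rules, while exact division by zero stays undefined. A small Python sketch of my own, purely illustrative:

```python
import math

# IEEE-754 floats include signed infinities with defined arithmetic:
inf = math.inf
absorbs = (inf + 1 == inf)         # True: infinity absorbs finite additions
reciprocal = 1.0 / inf             # 0.0
overflow = math.isinf(1e308 * 10)  # True: float overflow yields infinity

# Exact integer division by zero, by contrast, stays "undefined" and raises:
try:
    1 / 0
    undefined = False
except ZeroDivisionError:
    undefined = True
```

So both conventions coexist in practice: "infinite value" where it is useful, "undefined" where it is not.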
They are limited by the scope of whatever theory you are working in.
Yes, and of course which theory will be appropriate is going to be determined by the actual physics. My point is just that your statement that "pure math does not have infinities" and physicists "added them in" is wrong (even ignoring historical inaccuracies).
Selective quotation. I said: That is not a statement that the field of mathematics does not have infinities. I was referring specifically to "the way I learned calculus". Unless you took my class, you don't know what I did or did not learn and how I learned it. My statement was true, your "correction" was false.
Ah, sorry then. This is the sort of mistake that's common enough that it seemed more natural to me to read it that way rather than the literal and correct way.
I might call engineers "practical folk"; astrophysicists I'm not so sure. I'd like to see their reason for doing so.
I never really got why the math is said to 'break down'. Is it just because of a divide by zero thing or something more significant? I guess I just don't see a particular problem with having a part of the universe really being @%%@ed up like that.
What I think is more likely is that the universe does not actually divide by zero, and the singularity is a gap in our knowledge. Gaps in knowledge are the problem of science, whose function is to fill them.
I'm really surprised at the amount of anti-infinitism that rolls around Less Wrong.
"Infinity-noncognitivist" would be more accurate in my case (but it all depends on the definition; I await one that I can see how to interpret, and I accept all the ones that I already know how to interpret [some mentioned above]).
It's not just you. There was just recently another thread going on about how the real numbers ought to be countable and what-not.
From your post it sounds like you in fact do not have a clear picture of infinity in your head. I have a feeling this is true for many people, so let me try to paint one. Throughout this post I'll be using "number" to mean "positive integer".

Suppose that there is a distinction we can draw between certain types of numbers and other types of numbers. For example, we could make a distinction between "primes" and "non-primes". A standard way to communicate the fact that we have drawn this distinction is to say that there is a "set of all primes". This language need not be construed as meaning that all primes together can be coherently thought of as forming a collection (though it often is construed that way, usually pretty carelessly); the key thing is just that the distinction between primes and non-primes is itself meaningful. In the case of primes, the fact that the distinction is meaningful follows from the fact that there is an algorithm to decide whether any given number is prime.

Now for "infinite": a set of numbers is called infinite if for every number N, there exists a number greater than N in the set. For example, Euclid proved that the set of primes is infinite under this definition.

Now this definition is a little restrictive in terms of mathematical practice, since we will often want to talk about sets that contain things other than numbers, but the basic idea is similar in the general case: the semantic function of a set is provided not by the fact that its members "form a collection" (whatever that might mean), but rather by the fact that there is a distinction of some kind (possibly of the kind that can be determined by an algorithm) between things that are in the set and things that are not in the set. In general a set is "infinite" if for every number N, the set contains more than N members (i.e. there are more than N things that satisfy the condition that the set encodes).

So that's "infinity", as used in standard mathematical practice. (Well, t
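This definition of an infinite set — an algorithmic membership test, plus a member beyond every bound N — can be sketched directly. A hypothetical Python illustration of my own (the function names are made up):

```python
def is_prime(n):
    """Decide membership in the set of primes: the 'distinction'
    that gives the set its meaning."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


def witness_beyond(N, in_set=is_prime):
    """Search for a member of the set greater than N.  The definition of
    'infinite' says this search succeeds for every N; for primes, Euclid's
    theorem is what guarantees the loop terminates."""
    n = N + 1
    while not in_set(n):
        n += 1
    return n
```

For instance, `witness_beyond(100)` returns 101 and `witness_beyond(1000)` returns 1009: whatever bound you name, the set of primes has members beyond it.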
I think you mean, 'determine that it does not satisfy the conclusion'.
I think my original sentence is correct; there is no known algorithm that provably outputs the answer to the question "Does N satisfy the conclusion of the conjecture?" given N as an input. To do this, an algorithm would need to do both of the following: output "Yes" if and only if N satisfies the conclusion, and output "No" if and only if N does not satisfy the conclusion. There are known algorithms that do the first but not the second (unless the twin prime conjecture happens to be true).
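The asymmetry can be made concrete with a hypothetical Python sketch of my own (`verify_yes` and the search limit are invented for illustration). Take the conclusion of the twin prime conjecture for a given N — "there is a twin prime pair beyond N": a search can confirm "Yes" by finding such a pair, but exhausting any finite search proves nothing, so the procedure can never honestly output "No".

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


def verify_yes(N, search_limit=10**6):
    """Semi-decision procedure: output 'Yes' if a twin prime pair
    beyond N is found.  It can never justify outputting 'No' --
    exhausting a finite search_limit proves nothing -- so on failure
    it returns None ('don't know') instead."""
    for p in range(N + 1, search_limit):
        if is_prime(p) and is_prime(p + 2):
            return "Yes"
    return None
```

If the twin prime conjecture is true, this always answers "Yes" eventually; if it is false, there is some N for which no amount of searching will ever settle the matter.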
You're pointing to a concept represented in your brain, using a label which you expect will evoke analogous representations of that concept in readers' brains, and asserting that that thing is not something that a human brain could represent. The various mathematical uses of infinity (infinite cardinals, infinity as a limit in calculus, infinities in nonstandard analysis, etc.) are all well-defined and can be stored as information-bearing concepts in human brains. I don't think there's any problem here.
It looks like we agree but you either misread or I was unclear: I'm not asserting that the definition of infinity I mentioned at the beginning ("a number that is big enough for its smallness to be negligible for the purpose at hand") is not something a human brain could represent. I'm saying that if the speaker considers "infinity" to be something that a human brain cannot represent, I must question what they are even doing when they utter the word. Surely they are not communicating in the sense Eliezer referred to, of trying to get someone else to have the same content in their head. (If they simply want me to note a mathematical symbol, that is fine, too.) I also agree that various uses of concepts that could be called infinity in math can be stored in human brains, but that depends on the definitions. I am not "anti-infinity" except if the speaker claims that their infinity cannot be represented in anyone's mind, but they are talking about it anyway. That would just be a kind of "bluffing," as it were. If there are sensical definitions of infinity that seem categorically different than the ones I mentioned so far, I'd like to see them. In short, I just don't get infinity unless it means one of the things I've said so far. I don't want to be called a "finitist" if I don't even know what the person means by "infinite."
Oi, that's not right. The domain of these functions is not the set of reals in [0, 1] but the set of infinite sequences of bits; while there is a bijection between these two sets, it's not the obvious one of binary expansion, because in binary, 0.0111... and 0.1000... represent the same real number. There is no topology-preserving bijection between the two sets. Also, the functions have to be continuous; it's easy to come up with a function (e.g. equality to a certain sequence) for which the given functions don't work. Of course, it happens that the usual way of handling "real numbers" in languages like Haskell actually handles things that are effectively the same as bit sequences, and that there's no way to write a total non-continuous function in a language like Haskell, making my point somewhat moot. So, carry on, then.
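The two-expansions point can be checked exactly with rational arithmetic (a small illustrative computation of my own, not from the thread): the partial sums of 0.0111... close the gap to 0.1000... = 1/2 at rate 2^-n, so both bit sequences name the same real.

```python
from fractions import Fraction


def partial_sum(n):
    """Exact value of 0.0111...1 (n ones after the leading zero bit),
    i.e. sum of 2**-k for k = 2 .. n+1."""
    return sum(Fraction(1, 2 ** k) for k in range(2, n + 2))


half = Fraction(1, 2)          # the other expansion, 0.1000...
gap = half - partial_sum(50)   # exactly 2**-51, shrinking toward zero
```

Since the gap is 2^-(n+1) after n ones, the limit of 0.0111... is exactly 1/2 — distinct bit sequences, one real number.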
Your comment is basically correct. This paper deals with the representation issue somewhat. But I think those results are applicable to computation in general, and the choice of Haskell is irrelevant to the discussion. You're welcome to prove me wrong by exhibiting a representation of exact reals that allows decidable equality, in any programming language.
Yes, a constructivist mathematician does not believe in proof by contradiction.
Huh. Good to know.
I fully agree, and this is completely in line with the points I was trying to make.

Or: "Physics is not Math"

This seems to be a common response - Tyrrell_McAllister said something similar: I take that distinction as meaning that a precise maths statement isn't necessarily reflecting reality like physics does. That is not really my point. For one thing, my point is about any applied maths, regardless of domain. That maths could be used in physics, biology, economics, engineering, computer science, or even the humanities. But more importantly, my point concerns what you think the equations are about, and how you can be mistaken about that, even in physics. The following might help clarify. A successful test of a mathematical theory against reality means that it accurately describes some aspect of reality. But a successful test doesn't necessarily mean it accurately describes what you think it does. People successfully tested the epicycles theory's predictions about the movement of the planets and the stars. They tended to think that this showed that the planets and stars were carried around on the specified configuration of rotating circles, but all it actually showed was that the points of light in the sky followed the paths the theory predicted. They were committing a mind projection 'fallacy' - their eyes were looking at points of light but they were 'seeing' planets and stars embedded in spheres. The way people interpreted those successful predictions made it very hard to criticise the epicycles theory.
The issue people are having is, that you start out with "sort of" as your response to the statement that math is the study of precisely-defined terms. In doing so, you decide to throw away that insightful and useful perspective by confusing math with attempts to use math to describe phenomena. The pitfalls of "mathematical modelling" are interesting and worth discussing, but it actually doesn't help clarify the issue by jumbling it all together yourself, then trying to unjumble what was clear before you started.
I've never gotten that impression. Proponents of epicycles were working from the assumption that celestial motion must be perfect, and therefore circular, and so were making the math line up with that. Aside from trying to fit an elliptical peg into a circular hole, they seemed to merely believe that the points of light in the sky follow the paths the theory predicts. But then, it's been a few years since I've read any of the relevant sources.

As the other commenters have indicated, I think that your distinction is really just the distinction between physics and mathematics.

I agree that mathematical assertions have different meanings in different contexts, though. Here's my attempt at a definition of mathematics:

Mathematics is the study of very precise concepts, especially of how they behave under very precise operations.

I prefer to say that mathematics is about concepts, not terms. There seems to me to be a gap between, on the one hand, having a precise concept in one's mind and, on the other... (read more)

I think the following says something similar quite nicely: — Edward Teller

Graham's point is straightforward if expressed as 'pure maths is the study of terms with precise meanings'.

Which begs the question of whether it really matters if the conceptualisation is wrong, as long as the numbers are right? Isn’t instrumental correctness all that really matters?

I'm not in the business of telling people what values to have, but if you are a physicalist, you are committed to more than instrumental correctness.

The fact that predictiveness has almost nothing to do with accuracy, in the sense of correspondence, is one of the outstanding problems with physicalism.

Relativity teaches us that "the earth goes around the sun" and "the sun goes around the earth, and the other planets move in complicated curves" are both right. So to say, "Those positions [calculated by epicycles] were right but they had it conceptualised all wrong," makes no sense.

Hence, when you say the epicycles are wrong, all you can mean that they are more complicated and harder to work with. This is a radical redefinition of the word wrong.

So, basically, I disagree completely with your conclusion. You can't say that a representation gives the right answers, but lies.

You're technically right about general relativity (so far as I grok it), but the hypothesis of geocentrism as understood pre-GR still fails hard compared to that of heliocentrism understood pre-GR. Geocentrism doesn't logically imply that the earth doesn't rotate, but that hypothesis was never taken seriously except by heliocentrists, who then found experimental evidence of that rotation. Not to mention that epicycles were capable of explaining practically any regular pattern, and thus incapable of making the novel predictions of Newtonian gravity, which gives almost the right predictions assuming heliocentrism but gives nonsense assuming geocentrism. It's far worse than just being more complicated; most epicycle-type hypotheses fail harder than neural networks once they leave their training set.

Isn’t instrumental correctness all that really matters? We might think so, but this is not true. How would Pluto’s existence have been predicted under an epicycles conceptualisation? How would we have thought about space travel under such a conceptualisation?

Your counterexamples don't seem apposite to me. Out of sample predictive ability strikes me as an instrumental good.

Formatting point: please use the "summary break" button when you have a long post.

To add to what others have already commented...

It is theoretically possible to accurately describe the motions of celestial bodies using epicycles, though one might need infinitely many epicycles, and epicycles would themselves need to be placed on epicycles. If you think there's something wrong with the math, it won't be in its inability to describe the motion of celestial bodies. Rather, feasibility, simplicity, usefulness, and other such concerns will likely be the real issues.
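The claim that epicycles-on-epicycles can describe any regular motion is essentially the Fourier series: each Fourier coefficient is one uniformly rotating circle. A rough Python sketch of my own (sampling an ellipse, then reconstructing it from its discrete Fourier transform; all names here are invented for illustration):

```python
import cmath
import math

# Sample a closed orbit (an ellipse, as a path in the complex plane)
# at M evenly spaced times.
M = 64
path = [complex(2 * math.cos(2 * math.pi * n / M),
                math.sin(2 * math.pi * n / M)) for n in range(M)]

# Discrete Fourier transform: c_k = (1/M) * sum_n path[n] * e^(-2*pi*i*k*n/M).
# Each coefficient c_k is one "epicycle": a circle of radius |c_k|
# rotating k times per orbital period.
coeffs = [sum(path[n] * cmath.exp(-2j * math.pi * k * n / M)
              for n in range(M)) / M for k in range(M)]


def epicycle_position(n):
    """Reconstruct a sampled point as the sum of the rotating circles."""
    return sum(c * cmath.exp(2j * math.pi * k * n / M)
               for k, c in enumerate(coeffs))


# Finitely many epicycles reproduce the samples to floating-point noise;
# a general smooth path needs infinitely many terms.
err = max(abs(epicycle_position(n) - path[n]) for n in range(M))
```

For this ellipse only two epicycles are actually nonzero (the forward circle of radius 1.5 and the backward circle of radius 0.5), which is why epicyclic models could fit planetary data so well while saying nothing about the underlying physics.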

While 'accurate' and 'precise' are used as synonyms in ordinary language, please never use ... (read more)

But I don't think there's anything "wrong with the math" - I even said precisely that: I was trying to talk about how people actually use them, and one of the things I was suggesting is that people do actually tend to treat them as synonymous. Isn't this a little picky? The way I used 'begs the question', in the sense of 'raises the question', is fairly common usage. Language is constantly evolving, and if you wanted to claim that people should only use terms and phrases in line with their original meanings, you'd have to throw away most language.
Language is always evolving, but many recent and current usages are still pretty sloppy. If you want to be less wrong, you need to use language more precisely. That is, don't use new usages when an older usage is more precise or accurate unless there is a real need; especially, don't use technical terms in sloppy common usages.