
I'm a mathematics undergraduate in the UK, in my final year, and I have been thinking a lot about Platonism. I largely find myself unconvinced by attempts to naturalise the metaphysics behind the ontology of mathematical objects, though I am not unsympathetic to those attempts, and of those I know, I can see their point.

I will state plainly here that as of this post I am a Platonist (by which I mean that I believe mathematical objects are abstract objects, though obviously I am unsure of all that this entails), and I want to articulate why. I will mostly be focusing on the notion that 'mathematics is just a useful tool/language, and it has no abstract independent existence outside of this viewpoint'.

Most of the posts I've seen on LessWrong talk about natural numbers as the object of our attention when we discuss mathematics, though I could have missed some. To be sure, the natural numbers are about as quintessentially 'MATHEMATICS' as you can get, but that's not what I really want to discuss. I want to talk about abstractions. Hilbert spaces, Banach spaces, measure spaces, Galois symmetries, categories, topoi, etc. are just some examples of extremely abstract notions that offer powerful and, from my perspective, necessary insight into applied mathematics.

Something that has almost plagued me over this final year is how the abstract notions of modern-day mathematics have applicability to the real world. It is not that I am saying 'mathematics fits the world so well, we are so lucky to be able to have such a versatile tool'. But I am saying 'Why should it be the case that an abstract notion even has such powerful applicability at all? To the point where you can derive iff statements about applied structures.' I don't buy that we make mathematics fit the world, per se, although we certainly do that insofar as we are trying to model something. But it's more that, when we uncover structures built into the general areas of applied math, the mathematics says that 'if we assume such and such holds about, say partial differential equations, then this implies, through a chain of abstract reasoning, we get something like a concrete result within the real world.' It's essentially this process that disturbs me. This process can fail in the real world, especially when we try to model specific phenomena, but those models have to obey PDE conditions (from our example), and this includes both right and wrong models.

So, here is the crux of my question to you, as I am interested in responses to this question:

'Why does the role of abstraction hold permanent and seemingly irrefutable sway over the domain of modelling the real world with mathematics?' If mathematics were just a tool, with no independent reality, then shouldn't the probability that we encounter this phenomenon (that is, purely abstract reasoning telling us something objective about the modelling process as a whole, and moreover providing direct utility and insight into how to find solutions to these models) be exceedingly low? If we just made a bunch of stuff up by logically following semantics and syntax through to the end, the idea that this stuff, which has no ontological independence, has direct and immediate applicability to the world seems completely the opposite of what you should expect.

I apologise if I've missed things in posting this; if it's unclear I'll re-edit and make it much clearer with examples. Feel free to tear me apart in the comments.


# 2 Answers sorted by top scoring

bhishma

### Dec 18, 2019

11

Yeah, even I used to feel the same. Wigner wrote an article about it, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"; you will definitely find it useful.

shminux

### Dec 19, 2019

7

I think your question is "why are abstractions useful?" Thinking more about it, let's start with what you mean by an abstraction. From Wikipedia:

Abstraction in its main sense is a conceptual process where general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods.
"An abstraction" is the outcome of this process—a concept that acts as a common noun for all subordinate concepts, and connects any related concepts as a group, field, or category.

One can think of an abstraction as a category in itself. You start with one domain of the territory and create a surjective (but not injective; maps are lossy) morphism that preserves the rules (arrows) governing the domain in the codomain. Then you find another such category, with a different domain but the same codomain, and notice that there is a functor that preserves these morphisms. Maybe you can find more categories like that. You end up with a natural transformation between these functors, i.e. a functor category. And that is your abstraction. You end up relating, say, electrostatics, fluid dynamics and Newtonian gravity through the codomain of the Poisson equation.
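To make that shared codomain concrete, here is the standard textbook way the three domains just mentioned instantiate one Poisson-type template (usual symbols; this is ordinary physics, not anything specific to this thread):

```latex
\nabla^{2}\varphi = f
\qquad\text{for example}\qquad
\begin{cases}
\nabla^{2}\varphi = -\rho/\varepsilon_{0} & \text{electrostatics (charge density } \rho\text{)}\\[2pt]
\nabla^{2}\Phi = 4\pi G\,\rho & \text{Newtonian gravity (mass density } \rho\text{)}\\[2pt]
\nabla^{2}\phi = 0 & \text{incompressible, irrotational flow (velocity potential } \phi\text{)}
\end{cases}
```

Three very different territories, one codomain: develop the solution theory of the abstract equation once, and every instance inherits it.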

So, if one defines an abstraction as a natural transformation between categories of maps of the territory, the question becomes, why are these natural transformations useful?

To look deeper into this, I would note that, as often discussed here, every agent is an embedded agent, meaning the algorithm it runs is based on maps that are also a (tiny) part of the territory. For these maps to be useful, the territory must be mappable to begin with, i.e. the parts of the territory that are important for the agent's survival must be predictable from the maps. Certainly there is pressure to minimize the resources used while maximizing the domains where these resources create useful maps, and one way to do that is to increase the "abstraction level", and natural transformations are pretty abstract. The broader the swaths of the territory one wants to map, the more pressure there is to create higher abstraction levels. I can imagine that, with enough optimization pressure, even higher levels of abstraction are created, like lax natural transformations.

Abstractions are not unique to humans, or to consciousness, but when they bubble up to the conscious mind we call them mathematics. Our subconscious minds are already experts at solving non-linear PDEs super quickly, say, when throwing a ball toward a target.

So, to answer a related question, the "unreasonable effectiveness" of mathematics is an artifact of optimization pressures on an embedded agent in a (partially) predictable universe.

I like this mode of thinking, and its angle is something I haven't considered before. How would you interpret/dissolve the kind of question I posed in the answer to Pattern in the comments below?

Namely:

'My point is the process of maths is (to a degree) invented or discovered, and under the invented hypothesis, where one would adhere to strict physicalist-style nominalism, the very act of predicting that the solutions to very real problems are dependent on abstract insight is literally incompatible with that position, to the point where seeing it ...

shminux

Sorry, my spam filter ate your reply notification :( To "dissolve" the math invented/discovered question: it's a false dichotomy, as constructing mathematical models, conscious or subconscious, is constructing the natural transformations between categories that allow a high "compression ratio" for models of the world. They are as much "out there" in the world as the compression would allow. But they are not in some ideal Platonic world separate from the physical one. Not sure if this makes sense. There might be a circularity, but I do not see one. The chain of reasoning is, as above:

1. There is a somewhat predictable world out there.
2. There are (surjective) maps from the world to its parts (models).
3. There are commonalities between such maps, such that the procedure for constructing one map can be applied to another map.
4. These commonalities, which would correspond to natural transformations in the CT language, are a way to further compress the models.
5. To an embedded agent these commonalities feel like mathematical abstractions.

I do not believe I have used CT to define abstractions, only to meta-model them.

Faustus2

Don't worry, it's no trouble :) Thank you, I see your reasoning more clearly now, and my thought of circularity is no longer there for me. I also see the mental distinction between compression models and Platonic abstracts.
'mathematics is just a useful tool/language, and it has no abstract independent existence outside of this viewpoint'.

I'd say it's something in our heads or machines. In part it can be grounded "in computers/programming languages", but the ease of that project may better reflect how much of that grounding has already been done in the computers/programming languages themselves.

'Why should it be the case that an abstract notion even has such powerful applicability at all? To the point where you can derive iff statements about applied structures.'

a) It was based on a form of order in our world; it is powerful because it is true.

- It is the low-hanging fruit in understanding the world.

b) It possessed order within itself, such that similar things could be built on top of it.

c) We took a part of something within us, and created it without. With this increased space it grew, within and without. (Although for parts of it to be passed on, they either have to provide clear enough instructions for people to build the model in their minds, or have enough clues that people can recreate something close enough.)

d) Once you have a clear model, you can test it. Math provides, or is part of, a powerful feedback loop.

- If you did the math right, and accounted for all the factors (well enough), then it will work (in the world). If it didn't, then you failed at one of those two things.

- If you think you did both, then that points towards a missing/new factor for you to discover.

'if we assume such and such holds about, say partial differential equations, then this implies, through a chain of abstract reasoning, we get something like a concrete result within the real world.'

Perhaps this skips over how the connection to reality/our senses is specified. (See d above.)

This is actually similar to a kind of reasoning I have undertaken, but I want to ask you what you make of the fact that such high-level abstraction has any kind of utility at all. Say one day we, all physicalists (an assumption on my part), sit down (in the present day, but without any knowledge of abstract maths), and we do the same type of physical, formal thinking that Newton or Leibniz undertook and develop calculus, which gives us the underpinning of most of modern applied mathematics. We then see how the kind of geometrical reasoning we have employed is able to tap into the 'world logic' (for want of a better phrase). If I were to come up to you and describe how we can obtain an enormous amount of world-logic insight from some utterly far-removed and esoteric notions concerning topological spaces, Sobolev spaces and complex abstract interactions in Banach spaces (or what have you), why would you, as a physicalist, take that seriously? That is: you can believe there is more work to be done, but that it is very, very unlikely to come from understanding concepts that have little or (very often) NOTHING to do with the real world. (Indeed this can go quite far; see: https://ncatlab.org/nlab/show/differential+equation)

The core of what I've been struggling with is producing a response that dissolves that question satisfactorily, and although there must be a connection between mathematics and neural structures (there is a good amount of literature on this, echoing your own thoughts), I feel this question is actually quite far removed from whether or not that is the case.

With your point b), we can recognise that it has order within itself, but then when it comes to the next step, seeing what structures work on top of it, that process is either (very broadly speaking) an act of creation by the person (leaving out the nature of that act, let's consider it neural) or simply an insight into another category of the world (Platonic). And then we ask: 'If it is neural, why SHOULD this extra addition work? Reality has no reason simply to abide by logical structure or to bend itself to fit abstract reasoning. But now, given that it does work, how are we to interpret the nature of our two choices?'

My point is the process of maths is (to a degree) invented or discovered, and under the invented hypothesis, where one would adhere to strict physicalist-style nominalism, the very act of predicting that the solutions to very real problems are dependent on abstract insight is literally incompatible with that position, to the point where seeing it done, even once, forces some drastic revisions to your own ontological model of the world.

If I've messed anything up, or if you feel I haven't seen your point, any response would be appreciated.

I think we're on the same page. Your question is something like 'why do we live in a world that's more like B than A?'

Story A:

[Physics] is a really cool bunch of fields! It's done all these really big things!

It's all because we were able to take all this data, and (eventually) figure out a lot about our universe, at a bunch of different scales. We're still working on applying some of the information, and we're working hard to get more data (on really small things, and really big things). We're also putting a little effort into refining our existing theories more, but not much because we figure future changes in that area will be small, without more data, and the more abstract things are the less they apply to (and are useful in) reality.

Story B:

[Physics] is a really cool bunch of fields! It's done all these really big things!

You'd think that we'd need lots of data to make all these discoveries, but somehow all the deep truths about the workings of the universe can be deduced if you think about numbers long enough. Weird, huh?

My point is the process of maths is (to a degree) invented or discovered, and under the invented hypothesis, where one would adhere to strict physicalist-style nominalism, the very act of predicting that the solutions to very real problems are dependent on abstract insight is literally incompatible with that position, to the point where seeing it done, even once, forces some drastic revisions to your own ontological model of the world.

One account is that the particular grouping of features into a definition is "invented", in the same way that the concept of a "tree" is invented; but there is still a pattern in the world corresponding to tree. But from your original post I think we're in agreement on this point?

the mathematics says that 'if we assume such and such holds about, say partial differential equations, then this implies, through a chain of abstract reasoning, we get something like a concrete result within the real world.' It's essentially this process that disturbs me. This process can fail in the real world, especially when we try to model specific phenomena, but those models have to obey PDE conditions (from our example), and this includes both right and wrong models.

I believe Pattern's reasoning above could be summed up by saying that abstraction is a way for us to model the real world, and the process of reasoning abstractly is a way for us to run some sort of efficient simulation with our models. (@Pattern Is that a fair one-line summary?)

In which case, my understanding of your original question is one of these two: why is it the case that the world can be *efficiently* simulated? Perhaps your question is even one level deeper: why is it the case that the world can be *simulated* at all? After all, it is possible that the only way to predict the outcome of a physical process is to observe the physical process. (Is this a fair summary of what disturbs you about the PDEs example?)

This could be rephrased slightly more concretely as a question about the Church-Turing thesis: how come there is such a thing as a *universal* Turing machine? Made even more concrete, it turns into a deep physics question: what kind of laws of physics permit the existence of a universal Turing machine? That's a deep (and technical!) question, which in this particular form was popularized by David Deutsch. This blog post by Michael Nielsen is a good general-audience introduction.
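The universality claim can be made tangible with a toy sketch: one fixed interpreter that runs *any* Turing machine supplied as data. The interpreter and the successor machine below are my own illustrative examples (not from Deutsch's or Nielsen's discussions):

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), 0 (stay) or +1 (right).
    The machine halts when it enters the state "halt"; max_steps
    bounds the simulation so non-halting machines still return.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        state, cells[pos], move = rules[(state, symbol)]
        pos += move
    # Read the tape back as a string, trimming surrounding blanks.
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Successor machine on unary numerals: scan right past the 1s,
# then write one more 1 and halt -- i.e. it computes n -> n + 1.
succ = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", 0),
}

print(run_turing_machine(succ, "111"))  # -> "1111"
```

The point of the sketch is that `run_turing_machine` never changes: every new machine is just a new `rules` dictionary, which is the data/interpreter separation underlying universality.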
