# 8

I'm a mathematics undergraduate in the UK, in my final year, and I have been thinking a lot about Platonism. I find myself largely unconvinced by attempts to naturalise the metaphysics behind the ontology of mathematical objects, though I am not unsympathetic to those attempts, and of the ones I know, I can see their point.

I will state plainly here that, as of this post, I am a Platonist (by which I mean that mathematical objects are abstract objects, though obviously I am unsure what this entails), and I want to articulate why. I will mostly be focusing on the view that 'mathematics is just a useful tool/language, and it has no abstract, independent existence outside of this viewpoint'.

Most of the posts I've seen on LessWrong take the natural numbers as the object of attention when discussing mathematics, though I could have missed some. To be sure, the natural numbers are about as quintessentially 'MATHEMATICS' as you can get, but that's not what I really want to discuss. I want to talk about abstractions. Hilbert spaces, Banach spaces, measure spaces, Galois symmetries, categories, topoi, etc. are just some examples of extremely abstract notions that offer powerful and, from my perspective, necessary insight into applied mathematics.

Something that has almost plagued me over this final year is how the abstract notions of modern mathematics have applicability to the real world. I am not saying 'mathematics fits the world so well, we are so lucky to have such a versatile tool'. I am saying 'Why should it be the case that an abstract notion has such powerful applicability at all, to the point where you can derive iff statements about applied structures?' I don't buy that we make mathematics fit the world, per se, although we certainly do that insofar as we are trying to model something. It's more that, when we uncover structures built into the general areas of applied mathematics, the mathematics says: 'if we assume such and such holds about, say, partial differential equations, then, through a chain of abstract reasoning, we get something like a concrete result within the real world.' It is essentially this process that disturbs me. The process can fail in the real world, especially when we try to model specific phenomena, but those models still have to obey the PDE conditions (from our example), and this includes both right and wrong models.

So, here is the crux of my question to you, as I am interested in responses to this question:

'Why does the role of abstraction hold permanent and seemingly irrefutable sway over the domain of modelling the real world with mathematics?' If mathematics were just a tool with no independent reality, then shouldn't the probability that we encounter this phenomenon (that is, purely abstract reasoning telling us something objective about the modelling process as a whole, and moreover providing direct utility and insight into how to find solutions to these models) be exceedingly low? If we just made a bunch of stuff up by logically following semantics and syntax through to the end, the idea that this stuff, which has no ontological independence, has direct and immediate applicability to the world seems completely the opposite of what you should expect.

I'd appreciate it if you point out anything I've missed in posting this; if it's unclear, I'll edit it to make it much clearer with examples. Feel free to tear me apart in the comments.



shminux

### Dec 19, 2019


I think your question is "why are abstractions useful?" Thinking more about it, let's start with what you mean by an abstraction. From Wikipedia:

> Abstraction in its main sense is a conceptual process where general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods.
>
> "An abstraction" is the outcome of this process—a concept that acts as a common noun for all subordinate concepts, and connects any related concepts as a group, field, or category.

One can think of an abstraction as a category in itself. You start with one domain of the territory and create a surjective (but not injective; maps are lossy) morphism that preserves the rules (arrows) governing the domain in the codomain. Then you find another such category, with a different domain but the same codomain, and notice that there is a functor that preserves these morphisms. Maybe you can find more categories like that. You end up with a natural transformation between these functors, living in a functor category. And that is your abstraction. You end up relating, say, electrostatics, fluid dynamics and Newtonian gravity through the codomain of the Poisson equation.
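As a toy illustration of one codomain serving several domains, here is a minimal sketch in Python/NumPy (my example, not part of the comment above): a single finite-difference Poisson solver reused for both electrostatics and Newtonian gravity, with the two physical theories differing only in their source term. The function name and the unit conventions (eps0 = 1, G = 1) are illustrative assumptions.

```python
import numpy as np

def solve_poisson(source, h=1.0, iters=5000):
    """Jacobi iteration for the 2-D Poisson equation
    laplacian(u) = source, with u = 0 on the boundary."""
    u = np.zeros_like(source, dtype=float)
    for _ in range(iters):
        # Discrete Laplacian rearranged: u = (neighbours - h^2 * source) / 4
        u[1:-1, 1:-1] = 0.25 * (
            u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - h**2 * source[1:-1, 1:-1]
        )
    return u

n = 33
# Electrostatics: laplacian(phi) = -rho / eps0  (units with eps0 = 1)
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0
phi_elec = solve_poisson(-rho)

# Newtonian gravity: laplacian(phi) = 4 * pi * G * density  (units with G = 1)
density = np.zeros((n, n))
density[n // 2, n // 2] = 1.0
phi_grav = solve_poisson(4 * np.pi * density)

# Same abstract structure: the solver is linear in its source, so the two
# potentials differ only by the constant factor -4 * pi.
```

The point of the sketch is that nothing about the solver knows which physical theory it is serving; the shared codomain (the Poisson equation) carries all the structure.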

So, if one defines an abstraction as a natural transformation between categories of maps of the territory, the question becomes, why are these natural transformations useful?

To look deeper into this, I would note that, as often discussed here, every agent is an embedded agent, meaning the algorithm it runs is based on maps that are themselves a (tiny) part of the territory. For these maps to be useful, the territory must be mappable to begin with, i.e. the parts of the territory that are important for the agent's survival must be predictable from the maps. Certainly there is pressure to minimize the resources used while maximizing the domains where those resources create useful maps, and one way to do that is to increase the "abstraction level"; natural transformations are pretty abstract already. The broader the swaths of the territory one wants to map, the more pressure there is to create higher abstraction levels. I can imagine that, with enough optimization pressure, even higher levels of abstraction are created, like lax natural transformations.

Abstractions are not unique to humans, or to consciousness, but when they bubble up to the conscious mind we call them mathematics. Our subconscious minds are experts at solving non-linear PDEs super quickly, say, when throwing a ball toward a target.
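To make the ball-throwing aside concrete, here is a sketch of even the drastically simplified drag-free version of the problem (air resistance is what makes the real one non-linear): choosing a launch angle so a projectile lands a given distance away. The function name and the numbers are illustrative, not from the comment.

```python
import math

def launch_angle(v, d, g=9.81):
    """Low launch angle (radians) for a drag-free projectile with speed v
    to land a horizontal distance d away on flat ground.
    Inverts the range formula: d = v**2 * sin(2*theta) / g."""
    s = g * d / v**2
    if s > 1.0:
        raise ValueError("target out of range at this speed")
    return 0.5 * math.asin(s)

theta = launch_angle(v=10.0, d=10.0)  # roughly 0.69 rad (~39 degrees)

# Plug the angle back into the range formula to confirm it hits the target
achieved = 10.0**2 * math.sin(2 * theta) / 9.81
```

A thrower's motor system approximates something like this inverse problem (plus drag, spin, and wind) without any conscious symbol manipulation at all.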

So, to answer a related question, the "unreasonable effectiveness" of mathematics is an artifact of optimization pressures on an embedded agent in a (partially) predictable universe.