(crossposted from here)

I've noticed there are certain epistemic motifs (i.e. legible & consistent patterns of knowledge production) that come up in fairly wide circumstances across mathematics and philosophy. Here's one that I think is fairly powerful:

Abstract-Concrete Cycles and Domain Expansion

Given a particular object [structure/definition] that models some concept, we can modify it by either [abstracting out some of its features] or [instantiating a more concrete version of it] as we [modify the domain of discourse] that the object operates on, accompanied by [intuition juice from the real world that guides our search]. Repeat this over various abstraction levels, and you end up with a richer set of objects.

Examples of how this plays out

a) Space

Here's an example from mathematics (inspired from here). You have some concrete notion of a thing you want to capture, say, space. So you operationalize it using some immediately obvious definition like $\mathbb{R}^n$, which, being extremely concrete, comes equipped with a bunch of implicit structures (metric, angle, explicit coordinates, etc.), many of which you probably didn't explicitly intend or which are even superfluous to your aims.

Then you abstract away the structures one-by-one, e.g.,

  • Metric spaces abstract out the notion of "distance" using metrics.
  • Inner product spaces abstract out the notion of "similarity," which cashes out to abstracting the notions of "magnitude" and "angle."
  • Topological spaces abstract out the notion of "locality" using open sets.

And with this more abstract notion of space in hand, you can project it down to more concrete structures in domains different from the one originally under consideration, effectively using it as a generator. e.g.,

  • Function spaces: "Now that I have this much less restrictive notion of space, what happens if I extend its domain of discourse to, say, infinite-dimensional objects like functions?"
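
To make that last step concrete, here is a minimal Python sketch (my illustration, not the post's): once "space" has been abstracted down to "a set with a distance function," the very same constructions apply to points in $\mathbb{R}^n$ and, after expanding the domain of discourse, to functions.

```python
import numpy as np

# The concrete structure: Euclidean distance on R^n.
def euclidean_metric(x, y):
    return float(np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float)))

# Domain expansion: the same abstract notion of "distance", now on functions,
# approximated here by the sup-metric over a sampled grid on [0, 1].
GRID = np.linspace(0.0, 1.0, 1001)

def sup_metric(f, g):
    return float(np.max(np.abs(f(GRID) - g(GRID))))

# A construction stated once at the abstract level works in both domains:
# it only uses the distance function, never coordinates or angles.
def open_ball(metric, center, radius, candidates):
    """Return the elements of `candidates` within `radius` of `center`."""
    return [c for c in candidates if metric(center, c) < radius]

print(euclidean_metric([0, 0], [3, 4]))      # 5.0
print(sup_metric(np.sin, np.cos))            # 1.0, attained at x = 0
print(open_ball(sup_metric, np.sin, 0.5, [np.cos, np.tanh, np.arctan]))
# tanh and arctan stay within 0.5 of sin on [0, 1]; cos does not.
```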

From here, the loop continues. Now that you have a concrete structure that captures what you want, but with the flavor of operating in a different domain, you may gain insight into how certain structures that were not originally under your consideration might be further abstracted away.

  • Of course, this process is guided by our [intuition about real-world stuff/aesthetic/practical utility/simplicity] rather than being a blind search in conceptspace.

After you continue this process of abstracting/concretizing your structure over many different levels of abstraction (each being conducive to describing different sorts of domains of discourse), you end up with an extremely rich family of notions of space, some far different from your original $\mathbb{R}^n$.

b) Notion of "Optimization"

In philosophy, I think the case is much more obvious. Alex Flint's ground of optimization, for example, compares different definitions of optimization primarily by means of domain-of-discourse expansion, i.e. case analysis of each definition in "weird" scenarios, seeing which notions generalize better and fit our intuitive sense of what the word "optimization" should mean.

c) Epistemic motif of "Theorem as Definition"

A theorem in one structure can become a definition in another. This is highly related to the earlier motif: we can consider some theorem as a consequence of the concrete [features/axioms] of the original structure, and view [the act of turning the theorem into a definition in a much more [relaxed/general] structure] as abstracting out [the particular concept that the theorem captured in the original structure]. e.g.,

  • Exponentiation: Start with the notion of repeated multiplication. Discover that exponentiation has a Taylor series representation. Notice that the latter has a much more general domain of discourse, and use it to generalize the notion to complex numbers, operators, etc. (sketched in code after this list).
  • Fractal Dimensions: Start with the observation that the dimension of a space corresponds to scaling exponents, and use it to define a measure of "fractal-ness."
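
As a quick sketch of the exponentiation example (my illustration, not the post's): the series $\sum_{k \ge 0} x^k/k!$ only needs addition, multiplication, and limits to make sense, so taking it as the definition immediately extends exp to square matrices, where "repeated multiplication" alone gives no meaning to non-integer exponents.

```python
import numpy as np
from scipy.linalg import expm   # library implementation, used only for comparison

def exp_via_series(A, terms=40):
    """exp(A) defined as a partial sum of the series sum_{k>=0} A^k / k!."""
    A = np.asarray(A, dtype=float)
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # incrementally builds A^k / k!
        result = result + term
    return result

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # infinitesimal generator of 2D rotations
print(exp_via_series(np.pi * J))     # ~ [[-1, 0], [0, -1]]: rotation by pi
print(expm(np.pi * J))               # SciPy agrees
```

The fractal-dimension example has the same shape: for an ordinary $d$-dimensional set, an $\epsilon$-cover needs $N(\epsilon) \sim \epsilon^{-d}$ boxes, and reading that scaling relation as a definition gives the box-counting dimension $\dim_{\mathrm{box}} S = \lim_{\epsilon \to 0} \log N(\epsilon) / \log(1/\epsilon)$, which need not be an integer.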

It's The Rising Sea!

Why does it work?

I can think of a couple of reasons why this process works (inspired from here), where "why" means why it is effective as a human process for epistemology.

Just by selection, the concepts that we end up modeling (with scientists also building intuitions about what can be modeled in the first place) tend to be ones that can be captured by a small number of desiderata.

If the desiderata rule out all possibilities or rule in too much, we can correct our intuitions/desiderata, e.g. by discovering assumptions that subtly restricted the form of our structures in ways we hadn't noticed.

And by finding such "minimal conditions" our structure must satisfy, we make it more conducive to adding new structures back in, so that it can be made more concrete for application to a new domain of discourse. From there, we can list examples and find new theorems, thus enriching the intuition that feeds back into this loop.

Comments

I know 'Theorems as Definitions' as 'French definitions'. 

In algebraic geometry the main objects of study are algebraic varieties and, more generally, schemes. Although the proper definition of schemes is famously involved, one can best think of them as spaces of solutions of polynomial equations ('zeroes of polynomials'). The natural topology on such a space (the Zariski topology) is quite weird: it is generically non-Hausdorff and can even have non-closed points. For instance, the 'right way' of saying a scheme is Hausdorff is not to ask for the underlying topological space to be Hausdorff [which only gives very trivial examples] but to ask for the diagonal map to be a closed subscheme.

This is an example of your abstract-concrete cycle: for ordinary topological spaces Hausdorff and closed diagonal are equivalent, but the latter can be defined more broadly for schemes. 
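
Spelled out, the two statements being contrasted are (standard facts, added for reference): for an ordinary topological space $X$,

$$X \text{ is Hausdorff} \iff \Delta = \{(x, x) : x \in X\} \text{ is closed in } X \times X,$$

and scheme theory takes the right-hand side as the definition itself: a morphism $f : X \to S$ is separated iff the diagonal $\Delta_{X/S} : X \to X \times_S X$ is a closed immersion.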

Grothendieck is famous for his many French definitions. Many of the constructions he proposed were inspired by constructions and theorems in classical differential and complex analytic geometry, but then ported to the algebraic and arithmetic realm.

Grothendieck's predecessors noted that many features of algebraic varieties resemble those of more traditionally geometric spaces. E.g. Andre Weil, in his Weil conjectures, observed that the number of solutions of a variety over finite fields (i.e. modulo p) is closely related to the number of higher-dimensional holes ('Betti numbers') of the complex analytic manifold associated to the set of equations of the variety. This geometric shadow was a guiding motif for Grothendieck's work. He realized that the Zariski topology did not properly capture the 'inherent' geometric structure of algebraic equations.

The naive solution to this conundrum would be to look for a topology on the set of solutions of polynomials that is Hausdorff, to replace the Zariski topology. But this is the wrong move: there is no such topology. Instead Grothendieck noted that the properties required for these geometric analogues are only a subset of the properties of a topological space. Roughly speaking, for (co)homological reasoning to go through one only needs to know when a collection of opens 'covers' another open. This is an instance of your 'Abstract-Concrete Cycle'.
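
For reference, the covering data alluded to here can be axiomatized as a Grothendieck pretopology (one standard formulation, assuming the relevant fibre products exist): to each object $U$ one assigns covering families $\{U_i \to U\}_{i \in I}$ such that

  • every isomorphism $V \to U$ is by itself a covering family;
  • covers are stable under base change: if $\{U_i \to U\}$ covers $U$ and $V \to U$ is any morphism, then $\{U_i \times_U V \to V\}$ covers $V$;
  • a cover of a cover is a cover: if $\{U_i \to U\}$ covers $U$ and each $\{U_{ij} \to U_i\}$ covers $U_i$, then $\{U_{ij} \to U\}$ covers $U$.

No points and no actual open subsets appear, which is exactly the freedom that lets étale maps play the role of open covers.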

By pursuing this analogy he generalized the notion of topological space to that of a Grothendieck topology (which leads to fancy things like sites and topoi). This led to the famous etale topology and etale cohomology, which eventually led to the proof of the Weil conjectures.

An early triumph of the Grothendieckian way of thinking was his French definition of the etale fundamental group. Naively trying to define a fundamental group for an algebraic variety fails because there aren't enough loops, hence the usual fundamental group is almost always trivial. This is Zariski topology weirdness. 

Instead, Grothendieck noted that there is a classical theorem that the fundamental group can also be seen as an automorphism group of locally trivial topological covers. He then identified a property, 'etaleness' (following a suggestion by Serre), that is the analogue of local triviality of topological covers for algebraic varieties and schemes. The étale fundamental group is then simply defined in terms of automorphisms of these etale covers. This may strike one as just a mummer's trick, but it's actually profound, because it turns out that Galois theory becomes a special case of the theory of etale fundamental groups ('Grothendieck-Galois theory').
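
In symbols (standard statements, added for reference): for a sufficiently nice connected space $X$, the classical theorem is $\pi_1(X, x) \cong \mathrm{Aut}(\tilde X / X)$, the deck-transformation group of the universal cover, equivalently the automorphism group of the fibre functor on covering spaces. Grothendieck takes the latter as the definition: $\pi_1^{\mathrm{ét}}(X, \bar x) := \mathrm{Aut}(F_{\bar x})$, where $F_{\bar x}$ sends a finite étale cover of $X$ to its fibre over $\bar x$. For $X = \mathrm{Spec}\, k$, connected finite étale covers correspond to finite separable field extensions of $k$, and the definition recovers the absolute Galois group $\mathrm{Gal}(k^{\mathrm{sep}}/k)$.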

Update: I have been reading Grothendieck, a Mathematical Portrait. A remarkable book. Recommend. One does need a serious acquaintance with scheme theory & related fields to get the most out of it.

One takeaway for me is that Grothendieck's work was more evolutionary than revolutionary.
Many ideas often associated with scheme theory were already pioneered by others, e.g.:

  • generic points and specializations (Andre Weil)
  • lifting from characteristic p to zero (Weil & Zariski)
  • the definition of the etale fundamental group & its description in terms of the complex fundamental group (Abhyankar, Zariski)
  • the concept of & need for etale cohomology (Weil*)
  • the notion of etale maps & their importance in cohomological computations (Serre)
  • the prime spectrum of a ring (Krull, many others)
  • the notion of scheme (Cartier, Serre, Chevalley, others; the name "schema" is due to Chevalley iirc?)
  • infinitesimals as key to deformations (the Italian school: famously imprecise, but they had many of the geometric ideas)
  • jet & arc schemes (pioneered by John Nash Jr., yes, the Beautiful Mind John Nash)
  • category theory & the Yoneda lemma (invented and developed in great detail by Eilenberg & Mac Lane)
  • locally ringed spaces (Cartan's school)
  • spectral sequences (invented by Leray)
  • sheaf theory and sheaf cohomology (Leray, then introduced into algebraic geometry by Serre)
  • injective/projective resolutions and the abstract approach to cohomology (a common technique in group cohomology)

More generally, the philosophy of increased abstraction and rising-sea-style mathematics was common in the 'French school', famously as espoused by Nicolas Bourbaki.

One wouldn't be wrong to say that, despite the scheme-theoretic language, 90% of the ideas in the standard algebraic geometry textbook of Hartshorne precede Grothendieck. 

As pointed out clearly by Jean-Pierre Serre, the Grothendieckian style of mathematics wasn't universally successful, as many mathematical problems & phenomena resist a general abstract framework.

[Jean-Pierre Serre favored the big tent esthetic of 'belle choses' (all things beautiful), appreciating the whole spectrum, the true diversity of mathematical phenomena, from the ultra-abstract and general to the super-specific and concrete.]

What, then, were the great contributions of Alexandre Grothendieck?

Although abstraction has become increasingly dominant in modern mathematics, most famously pioneered by the French school of Bourbaki, Grothendieck was surely the More Dakka master of this movement in mathematics, pushing the creation and utilization of towering abstractions to new heights.

Yet, in a sense, much of the impact of Grothendieck's work was only felt many decades later; indeed, much of its impact is perhaps yet to come.

To name just a few: the central role of scheme theory in the later breakthroughs of arithmetic geometry (Mazur, Faltings, Langlands, most famously Wiles), (higher) stacks, anabelian geometry, Galois-Teichmuller theory, the elephant of topos theory. There are many other fields of which I must remain silent.

On the other hand, although Grothendieck envisioned topos theory, he did not appreciate the (imho) very important logical & type-theoretic aspects of topos theory, which were pioneered by Lawvere, Joyal and (many) others. And although Grothendieck envisioned the centrality and importance of a very abstract homotopy theory, very similar in influence and character to homotopy theory today, he was weirdly allergic to the simplicial techniques that are the bread-and-butter of modern homotopy theory. Indeed, simplicial techniques lie at the heart of Jacob Lurie's work, surely the mathematician who can most lay claim to being Grothendieck's heir.

* Indeed, Weil's conjectures were much more than a set of mere guesses: they were a precise set of conjectures, of which he proved important special cases and for which he provided numerical evidence. Central to the conjectures was the realization, and proof strategy, that a conjectural cohomology theory ('Weil cohomology') would lead to a proof of the conjectures. Importantly, this involved a very precise description of the conjectured properties of this hypothetical cohomology theory. Clearly circumscribing the need and use for a hypothetical mathematical object is an important contribution.