This post describes a pattern of abstraction that is common in mathematics, which I haven't seen described in explicit terms elsewhere. I would appreciate pointers to any existing discussions. Also, I would appreciate more examples of this phenomenon, as well as corrections and other insights!

Note on prerequisites for this post: in the opening example below, I assume familiarity with linear algebra and plane geometry, so this post probably won't make much sense without at least some superficial knowledge of these subjects. In the second part of the post, I give a bunch of further examples of the phenomenon, but these examples are all independent, so if you haven't studied a particular subject before, that specific example might not make sense, but you can just skip it and move on to the ones you do understand.

There is something peculiar about the dependency structure of the following concepts in math:

  • Pythagorean theorem
  • Law of cosines
  • Angle between two vectors
  • Dot product
  • Inner product

In the Euclidean geometry of $\mathbb{R}^2$ (the plane) and $\mathbb{R}^3$ (three-dimensional space), one typically goes through a series of steps like this:

  1. Using the axioms of Euclidean geometry (in particular the parallel postulate), we prove the Pythagorean theorem.
  2. We take the right angle to have angle $\pi/2$ and calculate other angles in terms of this one.
  3. The Pythagorean theorem allows us to prove the law of cosines (there are many proofs of the law of cosines, but this is one way to do it).
  4. Now we make the Cartesian leap to analytic geometry, and start treating points as strings of numbers in some coordinate system. In particular, the Pythagorean theorem now gives us a formula for the distance between two points, and the law of cosines can also be restated in terms of coordinates.
  5. Playing around with the law of cosines (stated in terms of coordinates) yields the formula $x_1 y_1 + x_2 y_2 = \|x\| \, \|y\| \cos\theta$, where $x = (x_1, x_2)$ and $y = (y_1, y_2)$ are two vectors and $\theta$ is the angle between them (and similarly for three dimensions), which motivates us to define the dot product $x \cdot y$ (as being precisely this quantity).

In other words, we take angle and distance as primitive, and derive the inner product (which is the dot product in the case of Euclidean spaces).
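
To make step 5 concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument): it checks that the geometric quantity $\|x\| \, \|y\| \cos\theta$, with $\theta$ computed from the vectors' polar angles rather than from any dot product, agrees with the coordinate expression $x_1 y_1 + x_2 y_2$.

```python
import math
import random

# Two (almost surely non-zero) random vectors in the plane.
x = (random.uniform(-5, 5), random.uniform(-5, 5))
y = (random.uniform(-5, 5), random.uniform(-5, 5))

# Geometric side: lengths and the angle between the vectors,
# where the angle comes from polar angles, not from any dot product.
length_x = math.hypot(x[0], x[1])
length_y = math.hypot(y[0], y[1])
theta = abs(math.atan2(x[1], x[0]) - math.atan2(y[1], y[0]))

geometric = length_x * length_y * math.cos(theta)

# Algebraic side: the coordinate expression that we then adopt
# as the definition of the dot product.
algebraic = x[0] * y[0] + x[1] * y[1]

print(geometric, algebraic)  # equal up to floating-point error
```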

But now, consider what we do in (abstract) linear algebra:

  1. We have a vector space, which is a structured space satisfying some funny axioms.
  2. We define an inner product $\langle x, y \rangle$ between two vectors $x$ and $y$, which again satisfies some funny properties.
  3. Using the inner product, we define the length of a vector $x$ as $\|x\| = \sqrt{\langle x, x \rangle}$, and the distance between two vectors $x$ and $y$ as $d(x, y) = \|x - y\|$.
  4. Using the inner product, we define the angle between two non-zero vectors $x$ and $y$ as the unique number $\theta \in [0, \pi]$ satisfying $\cos\theta = \frac{\langle x, y \rangle}{\|x\| \, \|y\|}$.
  5. Using these definitions of length and angle, we can now verify the Pythagorean theorem and law of cosines.

In other words, we have now taken the inner product as primitive, and derived angle, length, and distance from it.
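
Here is a minimal sketch of this inverted direction (again my own illustration), using a hypothetical weighted inner product on $\mathbb{R}^3$: length, distance, and angle are defined from the inner product, and the law of cosines then comes out as a derived fact rather than a starting point.

```python
import math

# A hypothetical inner product on R^3 (a weighted dot product);
# any positive weights give a valid inner product.
WEIGHTS = (1.0, 2.0, 3.0)

def inner(x, y):
    return sum(w * a * b for w, a, b in zip(WEIGHTS, x, y))

# Length, distance, and angle are now *defined* from the inner product.
def length(x):
    return math.sqrt(inner(x, x))

def distance(x, y):
    return length(tuple(a - b for a, b in zip(x, y)))

def angle(x, y):
    return math.acos(inner(x, y) / (length(x) * length(y)))

x = (1.0, -2.0, 0.5)
y = (3.0, 1.0, -1.0)

# The law of cosines, now a consequence of the definitions above.
lhs = distance(x, y) ** 2
rhs = (length(x) ** 2 + length(y) ** 2
       - 2 * length(x) * length(y) * math.cos(angle(x, y)))

print(lhs, rhs)  # equal up to floating-point error
```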

Here is a shot at describing the general phenomenon:

  • We start in a concrete domain, where we have two notions $A$ and $B$, where $A$ is a definition and $B$ is some theorem. (In the example above, $A$ is length/angle and $B$ is the inner product, or rather, $B$ is the theorem which states the equivalence of the algebraic and geometric expressions for the dot product.)
  • We find some abstractions/generalizations of the concrete domain.
  • We realize that in the abstract setting, we want to talk about $A$ and $B$, but it's not so easy to see how to talk about them (because the setting is so abstract).
  • At some point, someone realizes that instead of trying to define $A$ directly (as in the concrete case), it's better to generalize/"find the principles" that make $B$ tick. We factor out these principles as axioms of $B$.
  • Finally, using $B$, we can define $A$.
  • We go back and check that in the concrete domain, we can do this same inverted process.

Here is a table that summarizes this process:

| Notion | Concrete case | Generalized case |
| --- | --- | --- |
| $A$ | primitive; defined on its own terms | defined in terms of $B$ |
| $B$ | a theorem | defined axiomatically |

In what sense is this pattern of generalization "allowed"? I don't have a satisfying answer here, other than saying that generalizing in this particular way turned out to be useful/interesting. It seems to me that there is a large amount of trial-and-error and art involved in picking the correct theorem to use as the $B$ in the process. I will also say that explicitly verbalizing this process has made me more comfortable about inner product spaces (previously, I just had a vague feeling that "something is not right").

Here are some other examples of this sort of thing in math. In the following examples, the step of using $B$ to define $A$ does not take place (in this sense, the inner product case seems exceptional; I would greatly appreciate hearing about more examples like it).

  • Metric spaces: in Euclidean geometry, the triangle inequality is a theorem. But in the theory of metric spaces, the triangle inequality is taken as part of the definition of a metric.
  • Sine and cosine: in middle school, we define these functions in terms of angles and ratios of side lengths of a right triangle. Then we can prove various things about them, like the power series expansion. When we generalize to complex inputs, we then take the series expansion as the definition.
  • Probability: in elementary probability, we define the probability of an event as the number of successful outcomes divided by the number of all possible outcomes. Then we notice that this definition satisfies some properties, namely: (1) the probability is always nonnegative; (2) if an event happens for certain, then its probability is $1$; (3) if we have some mutually exclusive events, then we can find the probability that at least one of them happens by summing their individual probabilities. When we generalize to cases where the outcomes are crazy (namely, countably or uncountably infinite), instead of defining probability as a ratio, we take the properties (1), (2), (3) as the definition.
  • Conditional probability: when working with finite sets (of equally likely outcomes), we can define the conditional probability as $P(A \mid B) = \frac{|A \cap B|}{|B|}$. We then see that if $\Omega$ is the (finite) sample space, we have $P(A \mid B) = \frac{|A \cap B|/|\Omega|}{|B|/|\Omega|} = \frac{P(A \cap B)}{P(B)}$. But now when we move to infinite sets, we just define the conditional probability as $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$ (a small finite check appears after this list).
  • Convergence in metric spaces: in basic real analysis in $\mathbb{R}$, we say that $x_n \to x$ if the sequence satisfies some epsilon condition (this is the definition). Then we can prove that $x_n \to x$ if and only if $|x_n - x| \to 0$. Then in more general metric spaces, we define "$x_n \to x$" to mean that $d(x_n, x) \to 0$. (Actually, this example is cheating a little, since we can just take the epsilon condition and swap in $d(x_n, x)$ for $|x_n - x|$.)
  • Differentiability: in single-variable calculus, we define the derivative to be $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ if this limit exists. We can then prove that $f'(x) = L$ if and only if $\lim_{h \to 0} \frac{|f(x+h) - f(x) - Lh|}{|h|} = 0$. This latter limit is an expression that makes sense in the several-variable setting (with $L$ a linear map and the absolute values replaced by norms), and is what we use to define differentiability; see the numerical sketch after this list.
  • Continuity: in basic real analysis, we define continuity using an epsilon–delta condition. Later, we prove that this is equivalent to a statement involving open sets (the preimage of every open set is open). Then in general topology we take the open-sets statement as the definition of continuity.
  • (Informal.) Arithmetic: in elementary school arithmetic, we "intuitively apprehend" the rational numbers. We discover (as theorems) that two rational numbers $\frac{a}{b}$ and $\frac{c}{d}$ are equal if and only if $ad = bc$, and that the rationals have the addition rule $\frac{a}{b} + \frac{c}{d} = \frac{ad + bc}{bd}$. But in the formal construction of number systems, we define the rationals as equivalence classes of pairs $(a, b)$ of integers (with second coordinate $b$ non-zero), where $(a, b) \sim (c, d)$ iff $ad = bc$, and define addition on the rationals by $(a, b) + (c, d) = (ad + bc, bd)$. Here we aren't really even generalizing anything, just formalizing our intuitions.
  • (Somewhat speculative.) Variance: if a random variable $X$ has a normal distribution, its probability density can be parametrized by two parameters, $\mu$ and $\sigma$, which have intuitive appeal (by varying these parameters, we can change the shape of the bell curve in predictable ways). Then we find out that $\sigma^2$ has the property $\sigma^2 = E[(X - \mu)^2]$, where $\mu = E[X]$. This motivates us to define the variance as $\operatorname{Var}(X) = E[(X - E[X])^2]$ for other random variables (which might not have such nice parametrizations); a quick simulation illustrating this also appears after the list.
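
For the conditional-probability example, here is a tiny finite check (my own illustration, with an arbitrary choice of events): on a fair six-sided die, the counting definition $\frac{|A \cap B|}{|B|}$ and the ratio-of-probabilities definition agree.

```python
from fractions import Fraction

# A fair six-sided die: the event "even" given "at least 4".
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}  # even
B = {4, 5, 6}  # at least 4

# Counting definition: |A ∩ B| / |B|.
counting = Fraction(len(A & B), len(B))

# Ratio definition: P(A ∩ B) / P(B), with P(E) = |E| / |Ω|.
ratio = Fraction(len(A & B), len(omega)) / Fraction(len(B), len(omega))

print(counting, ratio)  # both 2/3
```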
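
For the differentiability example, here is the promised numerical sketch (illustrative only, with $f(x) = x^3$ at $x = 2$): the quotient $\frac{|f(x+h) - f(x) - Lh|}{|h|}$ shrinks toward $0$ when $L$ is the true derivative $f'(2) = 12$, but not for any other value of $L$.

```python
def f(t):
    return t ** 3

x = 2.0
correct_L = 12.0  # f'(2) = 3 * 2**2
wrong_L = 11.0

# The quotient |f(x+h) - f(x) - L*h| / |h| tends to 0 exactly when L
# is the derivative; this is the form that generalizes to several variables.
for h in [0.1, 0.01, 0.001, 0.0001]:
    good = abs(f(x + h) - f(x) - correct_L * h) / abs(h)
    bad = abs(f(x + h) - f(x) - wrong_L * h) / abs(h)
    print(f"h={h:<7} correct L: {good:.6f}   wrong L: {bad:.6f}")
```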
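
And for the variance example, a quick simulation (again just an illustration, with arbitrary parameter values): sampling from a normal distribution with parameters $\mu = 3$ and $\sigma = 2$, the average squared deviation from the mean recovers $\sigma^2 = 4$.

```python
import random
import statistics

# Sample from a normal distribution with the given parameters.
mu, sigma = 3.0, 2.0
samples = [random.gauss(mu, sigma) for _ in range(100_000)]

# E[(X - E[X])^2], estimated from the sample, should be close to sigma**2.
mean = statistics.fmean(samples)
avg_sq_dev = statistics.fmean((s - mean) ** 2 for s in samples)

print(avg_sq_dev, sigma ** 2)  # approximately 4.0
```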

Comments

Russian mathematician V.I. Arnold had a semi-famous rant against taking this inversion too far. Example quote:

What is a group? Algebraists teach that this is supposedly a set with two operations that satisfy a load of easily-forgettable axioms. This definition provokes a natural protest: why would any sensible person need such pairs of operations? "Oh, curse this maths" - concludes the student (who, possibly, becomes the Minister for Science in the future).

We get a totally different situation if we start off not with the group but with the concept of a transformation (a one-to-one mapping of a set onto itself) as it was historically. A collection of transformations of a set is called a group if along with any two transformations it contains the result of their consecutive application and an inverse transformation along with every transformation.

This is all the definition there is. The so-called "axioms" are in fact just (obvious) properties of groups of transformations. What axiomatisators call "abstract groups" are just groups of transformations of various sets considered up to isomorphisms (which are one-to-one mappings preserving the operations). As Cayley proved, there are no "more abstract" groups in the world. So why do the algebraists keep on tormenting students with the abstract definition?

The 'art' in picking the correct theorem $B$ seems related to structural realism, i.e. figuring out where we are importing structure from, and how, as we port across representations.

Was this intended to gesture at this process:

1) Mathematics (Axioms -> Theorems), 2) Reverse Mathematics? (Theorems -> sets of axioms* from which they could be proved)

or this process:

1) See what may be proved in system A. 2) Create system B out of what was proved in system A, and prove things.


*made as small as possible