This is an expansion of a linkdump I made a while ago with examples of mathematicians splitting other mathematicians into two groups, which may be of wider interest in the context of the recent elephant/rider discussion. (Though probably not especially wide interest, so I'm posting this to my personal page.)

The two clusters vary a bit, but there's some pattern to what goes in each - it tends to be roughly ‘algebra/problem-solving/analysis/logic/step-by-step/precision/explicit’ vs. ‘geometry/theorising/synthesis/intuition/all-at-once/hand-waving/implicit’.

(Edit to add: 'analysis' in the first cluster is meant to be analysis as opposed to 'synthesis' in the second cluster, i.e. 'breaking down' as opposed to 'building up'. It's not referring to the mathematical subject of analysis, which is hard to place!)

These seem to have a family resemblance to the S2/S1 division, but there's a lot lumped under each one that could helpfully be split out, which is where some of the confusion in the comments to the elephant/rider post is probably coming in. (I haven't read The Elephant in the Brain yet, but from the sound of it that is using something of a different distinction again, which is also adding to the confusion). Sarah Constantin and Owen Shen have both split out some of these distinctions in a more useful way.

I wanted to chuck these into the discussion because: a) it's a pet topic of mine that I'll happily shoehorn into anything; b) it shows that a similar split has been present in mathematical folk wisdom for at least a century; c) these are all really good essays by some of the most impressive mathematicians and physicists of the 20th century, and are well worth reading on their own account.

  • Poincaré's 'Intuition and Logic in Mathematics' draws the division most famously:
“It is impossible to study the works of the great mathematicians, or even those of the lesser, without noticing and distinguishing two opposite tendencies, or rather two entirely different kinds of minds. The one sort are above all preoccupied with logic; to read their works, one is tempted to believe they have advanced only step by step, after the manner of a Vauban who pushes on his trenches against the place besieged, leaving nothing to chance.
The other sort are guided by intuition and at the first stroke make quick but sometimes precarious conquests, like bold cavalrymen of the advance guard.”
  • Felix Klein's 'Elementary Mathematics from an Advanced Standpoint' in 1908 has 'Plan A' ('the formal theory of equations') and 'Plan B' ('a fusion of the perception of number with that of space'). He also separates out 'ordered formal calculation' into a Plan C.
  • Gian-Carlo Rota made a division into ‘problem solvers and theorizers’ (in ‘Indiscrete Thoughts’, excerpt here).
  • Timothy Gowers makes a very similar division in his ‘Two Cultures of Mathematics’ (discussion and link to pdf here).
  • Vladimir Arnold's ‘On Teaching Mathematics’ is an incredibly entertaining rant from a partisan of the geometry/intuition side - it's over-the-top but was 100% what I needed to read when I first found it.
  • Michael Atiyah makes the distinction in ‘What is Geometry?’:
Broadly speaking I want to suggest that geometry is that part of mathematics in which visual thought is dominant whereas algebra is that part in which sequential thought is dominant. This dichotomy is perhaps better conveyed by the words “insight” versus “rigour” and both play an essential role in real mathematical problems.

There’s also his famous quote:

Algebra is the offer made by the devil to the mathematician. The devil says: `I will give you this powerful machine, it will answer any question you like. All you need to do is give me your soul: give up geometry and you will have this marvellous machine.’
  • Grothendieck was seriously weird, and may not fit well into either category, but I love this quote from Récoltes et semailles too much not to include it:
Since then I’ve had the chance in the world of mathematics that bid me welcome, to meet quite a number of people, both among my “elders” and among young people in my general age group who were more brilliant, much more ‘gifted’ than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle – while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things I had to learn (so I was assured), things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates almost by sleight of hand, the most forbidding subjects.
In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still, from the perspective of thirty or thirty-five years, I can state that their imprint upon the mathematics of our time has not been very profound. They’ve done all things, often beautiful things, in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have to rediscover in themselves that capability which was their birthright, as it was mine: The capacity to be alone.
  • Freeman Dyson calls his groups ‘Birds and Frogs’ (this one’s more physics-focussed).
  • This may be too much partisanship from me for the geometry/implicit cluster, but I think the Mark Kac 'magician' quote is also connected to this:
There are two kinds of geniuses: the ‘ordinary’ and the ‘magicians.’ An ordinary genius is a fellow whom you and I would be just as good as, if we were only many times better. There is no mystery as to how his mind works. Once we understand what they’ve done, we feel certain that we, too, could have done it. It is different with the magicians... Feynman is a magician of the highest caliber.

The algebra/explicit cluster is more 'public' in some sense, in that its main product is a chain of step-by-step formal reasoning that can be written down and is fairly communicable between people. (This is probably also the main reason that formal education loves it.) The geometry/implicit cluster relies on lots of pieces of hard-to-transfer intuition, and these tend to stay 'stuck in people's heads' even if they write a legitimising chain of reasoning down, so it can look like 'magic' on the outside.

Edit to add: Seo Sanghyeon contributed the following example by email, from Weinberg's Dreams of a Final Theory:

Theoretical physicists in their most successful work tend to play one of two roles: they are either sages or magicians... It is possible to teach general relativity today by following pretty much the same line of reasoning that Einstein used when he finally wrote up his work in 1915. Then there are magician-physicists, who do not seem to be reasoning at all but who jump over all intermediate steps to a new insight about nature. The authors of physics textbooks are usually compelled to redo the work of the magicians so they seem like sages; otherwise no reader would understand the physics.
41 comments

There's an obvious but cool meta-thing going on with the number 2 that might be useful to pick out. Some pieces of this thing:

All over the place, we speak in terms of dichotomies and not trichotomies or more. The reason is basically that each dichotomy corresponds to doing PCA and projecting space onto a single axis, and a one-dimensional line has two directions. This suggests that much of the interesting conversation about any given topic (i.e. axis) can be picked up by having exactly two people talk about it. Two people will always differ slightly on the axis. Adding any additional people to a conversation has rapidly diminishing returns: you may have more total disagreement, but rarely more total dimensionality in the disagreement.
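A minimal sketch of the PCA picture I have in mind, in Python (the synthetic 'opinion' data and all the names here are mine, purely for illustration):

    import numpy as np

    # Toy data: 200 people each giving 5 "opinion" scores, mostly driven by one latent disposition.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                      # one underlying disposition per person
    loadings = np.array([[1.0, -0.8, 0.6, -0.5, 0.9]])      # how that disposition shows up in each opinion
    opinions = latent @ loadings + 0.3 * rng.normal(size=(200, 5))

    # PCA by hand: centre the data, take the top singular vector, project onto it.
    centred = opinions - opinions.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    axis = vt[0]                                            # the single dominant direction
    positions = centred @ axis                              # each person reduced to one coordinate

    # The "dichotomy" is just the sign of that one coordinate.
    camp = np.sign(positions)
    print(f"share of variance on the top axis: {np.var(positions) / np.var(centred, axis=0).sum():.2f}")
    print(f"camp sizes: {np.sum(camp > 0)} vs {np.sum(camp <= 0)}")

Adding a third or fourth person mostly adds points along the same line rather than a new direction, which is the diminishing-returns part.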

Duo Talks is the idea that most productive conversations occur between two people. Even with Trio Walks, the setup is for one person to stay on the sidelines and wait for a chance to rotate in. A single conversation really only happens along one dimension, and it takes only two distinct people to detect it.

I wonder if monogamy is some kind of attractor state because interactions between two people are the most productive.

Applying the Solitaire Principle, for all the same reasons it's useful to have conversations between two people, it's most useful to draw dichotomies between two pieces of the mind instead of more. This is why we have Internal Double Crux instead of Triple or more. A conversation/internal conflict is always about something and should have a purpose, and that purpose projects the entire conversation onto the one relevant dimension, so it's really only necessary to divide into two sides along this axis. Thus we get S1/S2, Elephant/Rider, Episodic/Diachronic, etc.

I like this a lot.

There are many reasons it's so tempting to project onto a single axis but maybe the foundational reason is the dichotomy between approaching and avoiding, or if you prefer, between positive and negative reward in reinforcement learning terms. This blows up into good vs. evil, friend vs. enemy, and so forth.

Edit: Also this is why Venkatesh Rao is much more sophisticated than we are; he does PCA but projects onto 2 axes and makes a 2x2 square.

Very pleased to see all of these dichotomies collected in one place. The natural question is whether these divides can be integrated to a useful picture with more pieces.

My take on the "Two Cultures" model of problem-solvers and theory-builders: theory-building fields of mathematics like algebraic topology (say) are those where the goal is to articulate grand meta-theorems that are bigger than any particular application. This was the work of a Grothendieck.

Meanwhile, concrete problem-solving fields of mathematics like combinatorics are those where the goal is to become the grand meta-theorem that contains more understanding than any particular theorem you can prove. This was the style of an Erdos. The inarticulate grand meta-theorems lived in his cognitive strategies so that the theorems he actually proved are individually only faint impressions thereof.

Yeah, there's something less legible about combinatorics compared to most other fields of mathematics. People like Erdos know lots of important principles and meta-principles for solving combinatorial problems but it's a tremendous chore to state those principles explicitly in terms of theorems and nobody really does it (the closest thing I've seen is Flajolet and Sedgewick - by the way, amazing book, highly recommended). A concrete example here is the exponential formula, which is orders of magnitude more complicated to state precisely than it is to understand and use.
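For anyone who hasn't seen it, the rough shape of the exponential formula (my loose statement, with the precise hypotheses omitted): if F(x) = sum_{n >= 1} f_n x^n / n! is the exponential generating function counting "connected" structures on an n-element set, then

    exp(F(x)) = sum_{n >= 0} h_n x^n / n!

where h_n counts the structures you get by partitioning an n-element set into blocks and putting a connected structure on each block. All graphs from connected graphs, all permutations from cycles, and set partitions from single nonempty blocks are the standard instances, and in practice you just use it without ever writing the partition sum down.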

(Ray has been suggesting to me in person that an important chunk of the current big LW debate is not about S1/S2 but about illegibility and that sounds right to me.)

I really like the phrasing "become the grand meta-theorem."

I think I heard the "become grand meta-theorem" phrasing originally from Alon & Spencer. I actually bought the Flajolet and Sedgewick book a couple months ago (only got through the first chapter), but it was mind-boggling that something like this could be done for combinatorics.

Of course reality is self-similar, so it's not surprising that there's currently a big divide in combinatorics between what I would call the "algebraic/enumerative" style of Richard Stanley (containing the Flajolet and Sedgewick stuff), characterized by fancy algebra/explicit formulae/crystalline structures, and the "analytic/extremal" style of Erdos, characterized by asymptotic formulae and less legibility. It's surprisingly rare to see a combinatorialist bridge this gap.

I went through most of the first half of Flajolet and Sedgewick when I was 18 or so and was blown away, then recently went through the second half and was blown away in a completely different way. It's really wild. Take a look. It's where I learned the argument in this blog post about the asymptotics of the partition function.

Do you think that theory-building and problem-solving maps at all to your hammers and nails dichotomy? One would be about becoming a hammer that can hit all the nails, and the other is more about really understanding each particular nail.

I think it's more about the nature of the hammers. Theory-building hammers are legible: they're big theorems, or maybe big messes of definitions and then theorems (the term of art for this is "machinery"). Problem-solving hammers are illegible: they're a bunch of tacit knowledge sitting inside some mathematician's head.

I mostly agree. By the end of the hammers and nails post I realized the real dichotomy was between systematic (including both hammers and nails) and haphazard, and this is a different dichotomy from all the others mentioned in this thread because I will actually make a value judgment that systematic is just better.

Then again, these are just two stages of development and you can extrapolate there's some third stage that's even better than systematic that looks like haphazard genius from the outside.

I eat corn like an analyst, and I am an analyst. I also use vim over emacs, like Lisp, and find object-oriented programming weirdly distasteful.

However I don’t think analysis and algebra are usually lumped together and opposed to geometry; my understanding was that traditionally algebra, analysis, and geometry were the three main fields of math.

I tend to think of the distinctions within math as about how much we posit that we know about the objects we work with. The objects of study of mathematical logic are very general and thus can be very “perverse”; the objects of study of algebra and topology are also quite general; the objects of study of geometry are more pinned down because you have a metric; the objects of study of analysis are the “best behaved” of all, because they have smoothness and integrability properties.

I find analysis much easier than algebra because I rely a lot on the concreteness of being able to measure, estimate, and (sometimes) visualize. People who are more algebra-oriented are more likely than me to become irritated by doing fiddly computations, but they have more ability to reason about very abstract objects.

No, I also definitely wouldn't lump mathematical analysis in with algebra... I've edited the post now as that was confusing, also see this reply.

Your 'how much we know about the objects' distinction is a good one and I'll think about it.

Also vim over emacs for me, though I'm not actually great at either. I've never used Lisp or Haskell so can't say. Objects aren't distasteful for me in themselves, and I find Javascript-style prototypal inheritance fits my head well (it's concrete-to-abstract, 'examples first'), but I find Java-style object-oriented programming annoying to get my head around.

Great! These are some of my favorite essays on mathematics and I'm excited to see what other rationalists think of them.

I think many mathematicians would object to lumping algebra and analysis together; I know a lot of people who are great at algebra and terrible at analysis. My clusters here are not terribly well-formed but my rough sense is that algebra, analysis, and geometry are three fairly different things.

Somewhat weirdly, I have seen analysis usually described as belonging squarely into the intuition cluster. And I actually partially discovered my love of analysis after I read the corn-eating post on analysis vs. algebra, realized that I eat corn like an analyst but thought of myself as an algebraist, and then realized that all the algebra perspectives I like most are coming from the intuition/geometry perspective (Linear Algebra Done Right and 3Blue1Brown's videos being two of my top 3 educational resources, and both being heavily intuition-based as opposed to algebra-based).

(I do not know whether the corn-eating thing is real in any meaningful sense, but it did get me to reconsider my perspective on mathematics)

So, I was an undergrad mathematician, and planned to become an academic, but bailed out of my PhD and became a programmer instead. I made notes as I was reading the various articles.

My strong suit in maths was analysis. I just never 'got' algebra at all and didn't touch it after the first year.

Weirdly I was very good at linear algebra/matrices/spectral theory/fourier analysis. But all that seemed like a geometrical, intuitive theory about high-dimensional spaces to me. I had very strong reliable intuition there, but I never had any intuition, or idea about how one might go about acquiring one, for rings/fields/groups or mathematical logic.

I never liked any sort of symbol-manipulation. I felt I understood things if and only if I could make mental pictures of what was going on that would imply the answers 'as if by magic'.

M. Meray's endeavours seem unappealing. I appreciate them in the abstract but cannot imagine getting interested. Prof. Klein's conducting sphere seems a fascinating masterstroke.

Feynman/Einstein are definitely 'what I'd be if I was twenty times better'. I recognise their ways of thinking, at least as they explained them.

I agree wholeheartedly with Arnold's rant.

I'm ambivalent on the problem-solver/theorizer distinction. I think I'm more of a theorizer, but problem-solving is important and they both matter. I'd have been proud to have contributed in either way.

Maths is very visual for me. The symbols mean nothing without the pictures.

As a programmer, I:

loathe OO

love lisp, and found it mind blowing when I first found it. By default I use a lisp variant called Clojure both personally and professionally, although I've tried almost everything. I avoid java and c++ if I can.

have occasionally tried Haskell, and feel that I ought to understand it, but it feels like programming with one hand tied behind my back. An awful lot of extra effort for no gain.

am quite fond of python, although I use it as a watered-down lisp and avoid all its OO facilities.

adored "Why Arc isn't especially Object Oriented"

have never tried template metaprogramming, C++ is just too dirty for me, although I love C itself.

like both vi and emacs, and was originally a vi user, but these days I use emacs almost exclusively, and have done ever since I discovered what a joy it is as a lisp editor.

I think that all, with the exception of emacs, puts me strongly on the analysis/intuition side of things and weakly confirms the suggested dichotomy and its relationship to programming styles.

But it's been a long time since I ate corn-on-the-cob. When I try to visualise it I see myself eating it in rows rather than spirals. But I don't want to go out and find some, because then I'd bias the result. Somehow I have to catch myself in the act of eating it unconsciously. Any suggestions?

  • Am a computer scientist, working on AI alignment theory.
  • I'm probably one of the people where I work who is more sympathetic to MIRI-style ways of thinking about alignment.
  • Leaned towards a type of thinking that I labelled "algebraic" as a math undergrad.
  • My best course in undergrad was intro to analysis, but it was taught by a PDEs guy. Our department only had one real analyst, and was predominantly composed of algebra people.
  • My favourite take on linear algebra involves a 'geometric' approach, e.g. thinking of linear operators, not matrices, and taking this sort of view of the singular value decomposition.
  • I wish that everybody would always denote vectors with bra-ket notation.
  • My primary academic contribution to CS was to take a bunch of proofs about one family of probability distributions, and see if they worked on a different family of probability distributions (if this doesn't sound CS-y.... uh, I'm a fake CS boy).
  • I really like Haskell, and deviations from it really bother me. In particular, the bits I like are the fact that it's functional and strictly typed.
  • I mostly use Python because it's easier.
  • Object-oriented programming seems weird and creepy to me.
  • I use emacs, and have briefly tried vi-type things but they never stuck.
  • When evaluating arguments, I tend to ask questions like "is this argument symmetric in the appropriate variables", "if you take this variable to 0 or infinity, does the argument still work", or "does this type check". I could translate this into terms that make more sense for verbal/non-mathematical arguments, but honestly this is how I think of it.
  • When eating corn on the cob, I think I do it in spirals.
  • I only eat corn on the cob at my family home where I grew up, which is a different part of my life than the part that contains everything else on this list.

Looking over the post, I guess that I'm basically an algebraist except for the way I eat corn?

That post is hilarious, and fascinating.

I eat corn like an analyst, vastly prefer Lisp to Haskell, use Vim, identify much more strongly with the personality description of the analyst, and while I haven't done much higher math, have a deep and abiding love for the delta-epsilon definition of a limit.

Very curious to hear other results, either successful or not.

gjm:

I eat corn like an analyst, and

  • [+] did my PhD in a fairly analysis-y field (but also fairly geometrical, contrary to the analysis+algebra/geometry split kinda-implied by the OP here)
  • [+] prefer Lisp to Haskell but [-] feel vaguely guilty about that from time to time and feel I really "ought" to learn Haskell properly
  • [+] use Vim but [-] only because Emacs was bad for my wrists
  • [-] don't much care for fancy C++ template metaprogramming but [+] also don't much care for hardcore OO programming, though [-] I don't by any means object to OO, "design patterns", etc.

That is an amazing post.

Eating corn on the cob is messy and gets stuff stuck in my teeth. It’s also slow. I always find a knife (even just a plastic butter knife), cut the corn off, and eat it with a fork or spoon. What category does that fit in? Until I started doing this, I think I kept experimenting with eating in different patterns. I have no idea what it’s like to eat corn without trying to optimize the process.

gjm:

My feeling is that that's probably analysis-style rather than algebra-style. (Even though the actual order of corn-kernel removal is more like that of algebraists.) Are any of the other distinctions that allegedly correlate with it ones that you can match up with your life? Of course they won't be if you're not a mathematics/software type.

(It would be very interesting to know whether the algebra/analysis divide among mathematicians is a special case of something that applies to a much broader range of people, and corn-eating might be a way to explore that. But I don't think cornology is far enough advanced yet to make confident conjectures about what personality features might correlate with different modes of corn-eating.)

I'm a software engineer and my degree in college required a good chunk of advanced math. I am currently in the process of trying to relearn the math I've forgotten, plus some, so I'm thinking that if this analysis/algebra dichotomy points at a real preference difference, knowing which I am might help me choose more effective learning sources.

But I find it hard to point to one category or another for most aspects. Even the corn test is inconclusive! (I agree that it sounds more like an analysis thing to do.)

  • I love the step-by-step bits of algebra and logic, but I also love geometry.
  • I think I do tend to form an "idiosyncratic mental model of specific problems." As I come to understand problems more, I feel like they have a quality or character that makes them recognizable to me. I did best in school when teaching myself from outside sources and then using the teacher's methods to spot check and fill in gaps in my models.
  • I think object oriented programming is very useful, and functional programming is very appealing.
  • I use(d) vi/vim because that's what I know well enough to function in. I barely touched emacs a couple times, was like, "dafuq is this?" and went back to vim. I never gave emacs a fair chance.
  • I think I lean towards 'building up' my understanding of things in chunks, filling in a bigger picture. But the skill of 'breaking down' massive concepts into bite-sized chunks seems like an important way to do this!

My tentative self-diagnosis is that I have a weak preference for analysis. Reading more of the links in the OP might help me confirm this.

I just start gnawing on the corn cob somewhere at random, like the horrible physicist I am :) But the 'analysis' style makes more sense to me of the two, it had never even occurred to me that you could eat corn in the 'algebra' style.

I also think about linear algebra in a very visual way. I'm missing that for a lot of group theory, which was presented to us in a very 'memorise this random pile of definitions' way. Some time I want to go back and fix this... when I can get it to the top of the very large pile of things I want to learn.

That's one of the more useful posts I've read in a while since it gives me a way to consolidate a bunch of other loose thoughts that have been kicking around. Thanks.

I'm on the boring side of all dichotomies in the OP, and the one with corn too. Funnily, my visual imagination is pretty good (mental rotation etc.) I just never seem to use it for math or programming, it's step-by-step all the way.

I hate eating corn on the cob, I don't remember the last time I did it, and I can't even really inner sim doing it. Mathematically I spend a lot of time talking about algebra but am also, I think, better at analysis than other mathematicians would predict based on my reputation.

Ah, that probably needs clarifying... I was using 'analysis' in the sense of 'opposed to synthesis' as one of the dichotomies, rather than the mathematical sense of 'analysis'. I.e. 'breaking into parts' as opposed to 'building up'. That's pretty confusing when one of the other dichotomies is algebra/geometry!

I agree that algebra and (mathematical) analysis are pretty different and I wouldn't particularly lump them together. I'd personally probably lump it with geometry over algebra if I had to pick, but that's likely to be a feature of how I learn and really it's pretty different to either.

Having been a geometer that migrated to computer science via formal logic, I can testify to this division - to some extent.

When I first learnt formal logic and then machine learning, I had the same plodding, 'algebra' approach. But now that I've grasped it better, I've started to develop an intuition in these areas, that can shortcut most of the plodding approach (and it's so much more fun).

I think the difference might be more in the way the ideas are communicated. You can communicate semi-rigorous geometric ideas in a (somewhat) intuitive way, and have other geometers grasp them, at least enough that they can re-create them rigorously if needed. But algebraic ideas have to be more explicit if you want anyone beyond your immediate circle to get them.

See for instance Bourbaki, where the internal discussions were filled with intuition and imagery, but where the written outputs were famously tedious and rigorous.

I'm a math/econ undergrad, and I've found that using geometry and imagery to contextualize all my classes is the easiest way for me to really understand a subject.

To use a small example: Learning things like the chain rule or the product rule in calculus became trivial once I learned via this method. However, that is not a way of teaching that is present where I'm learning. I've had little (but not zero) success in finding resources on my own that choose to communicate ideas in this way, or that help me hone my visual-math reasoning skills (1 2). I feel like learning other ways just requires too much memorization and doesn't easily slot into my intuition. As a result, whenever something doesn't intuitively translate to imagery, I feel like I'm plodding along. Are there books, lectures, sequences, or anything out there that I could use? Anything you could send my way would be really appreciated.
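To give a flavour of the kind of picture I mean for, say, the product rule: think of u(x)·v(x) as the area of a rectangle with sides u and v. Nudge x a little and one side grows by du while the other grows by dv, so the area picks up a strip of size v·du, a strip of size u·dv, and a tiny du·dv corner that shrinks faster than everything else. That is exactly d(uv) = u·dv + v·du, and once you've seen the rectangle you don't need to memorize the formula.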

Ok, here’s a 2x2 that captures a lot of the variation in OP:

abstract/concrete x intuitive/methodical.

Intuitive vs. Methodical is what Atiyah, Klein, and Poincare are talking about. Abstract vs Concrete is what Gowers, Rota, and Dyson are talking about.

Abstract and intuitive is like Grothendieck.

Concrete and intuitive is like geometry or combinatorics.

Concrete and methodical is like analysis.

Abstract and methodical — I don’t know what goes in this space.

This seems good. I was definitely getting the sense there were at least two axes, and these seem to capture a lot of it.

Could Abstract/Methodical be something like Russell and Whitehead's Principia Mathematica?

Also, I'm interested that Concrete/Methodical is analysis, given the Corn post. I would have expected it to be Intuitive? (I don't actually do higher math, so I don't know from personal experience.)

Promoted to frontpage.

(You mention being unsure about whether it's a good fit for the frontpage. We've been having a debate about how intuition works lately on LW, and I think a post that brings in a bunch of solid data from a healthy field like mathematics is absolutely appropriate, and really appreciated.)

For this reason, actually, I've curated the post. The other variables include that it's short and readable.

This post is great, and is probably one of my favorite things on LessWrong 2.0 so far. Thank you a lot for writing this, and I am looking forward to reading all the essays in full when I find the time for that.

There are two similar clusters/tensions in arts:

  • visual art: on one hand you have to design the "big picture", with all its equilibria, balances and tensions; on the other hand you have to design the local and fine details, which is something less imaginative and more formal and technical, with strict rules (for anatomy, shadows,...)
  • creative writing: your story needs to have emotional tensions on the scale of the general plot, but also needs to be realistic and credible on the scale of more detailed single events and interactions, which must respect some stricter constraints
  • music composition: you need to design a general theme and mood, and then you have to articulate the detailed development of the melodies and rhythms, which need to observe stricter rules in order to work appropriately

In the chakra system, the 5th (throat) is associated with algebra, while 6th (third eye) is associated with geometry. (Yet another place where this distinction has been noted.) I could probably go off on an intuition-based rant about what this might imply, which should tell you which one I am.

I'd also predict birds prefer games with some variance in them (card games), and frogs prefer deterministic games (chess, go).

Ka-kaw!

The chakra thing sounds right; another way of putting it is that algebra is more verbal & geometry is more visual/spatial. (IMO, analysis is visual/spatial too.)

I'm not sure what my brain is doing when it thinks about analysis but I'm not convinced it's visuospatial.

More concretely, let's suppose I'm trying to analyze the asymptotic behavior of some function which is a sum of terms that have different growth rates, say f(x) = x^2 + 1/x. What I could be doing, if I were doing this visually, is visualizing the asymptotic behavior of x^2 ("grows fast as x gets big") as a curve that curves up really fast, and similarly for 1/x ("grows fast as x gets small, goes to zero as x gets big") as a curve that starts big and gets small.

But I think that's not what I'm actually doing first, although that is a mode of thought I can use and find helpful. I think I am actually working with something like a primitive felt sense of bigness and smallness, which is not about visual tallness (to triangulate, another metaphor for bigness that isn't visual is weight: x^2 gets "heavy" and 1/x gets "light"). Not sure though, because the visual thought also happens pretty quickly after this and everything is correlated (heavy things are large in my visual field, etc.).

I've referenced the Grothendieck quote in this post many times since it came out, and the quote itself seems important enough to be worth curating.

I've also referenced this post a few times in a broader context around different mathematical practices, though definitely much less frequently than I've referenced the Grothendieck quote. 

[anonymous]:

Late to commenting on this post, but where would Grigori Perelman, the prover of the Poincare conjecture, fall then? I remember this quote from his biography:

Golovanov, who studied and occasionally competed alongside Perelman for more than ten years, tagged him as an unambiguous geometer: Perelman had a geometry problem solved in the time it took Golovanov to grasp the question. This was because Golovanov was an algebraist. Sudakov, who spent about six years studying and occasionally competing with Perelman, claimed Perelman reduced every problem to a formula. This, it appears, was because Sudakov was a geometer: his favorite proof of the classic theorem above was an entirely graphical one, requiring no formulas and no language to demonstrate. In other words, each of them was convinced Perelman’s mind was profoundly different from his own. Neither had any hard evidence. Perelman did his thinking almost entirely inside his head, neither writing nor sketching on scrap paper. He did a lot of other things—he hummed, moaned, threw a Ping-Pong ball against the desk, rocked back and forth, knocked out a rhythm on the desk with his pen, rubbed his thighs until his pant legs shone, and then rubbed his hands together—a sign that the solution would now be written down, fully formed. For the rest of his career, even after he chose to work with shapes, he never dazzled colleagues with his geometric imagination, but he almost never failed to impress them with the single-minded precision with which he plowed through problems. His brain seemed to be a universal math compactor, capable of compressing problems to their essence. Club mates eventually dubbed whatever it was he had inside his head the “Perelman stick”—a very large imaginary instrument with which he sat quietly before striking an always-fatal blow.

from Perfect Rigor.

These essays had a pretty large impact on how I go about learning mathematics; I always had an easier time when formulas or arguments could be mapped onto visual structure. In fact, before writing this comment (and in general when constructing arguments) I imagined a mind map containing all the relevant ideas and relations I wanted to portray. I am now (somewhat poorly) attempting to translate my 3-D visual argument into a linear verbal one.

Something else to be noted is visual reasoning and complementary cognitive artifacts seem to go hand in hand. Consider that learning to use an abacus can allow someone to simulate an abacus in their mind and produce the outputs of an abacus without needing to actually have one. A similar thing can be done with a slide rule. This practice can also produce other, positive effects on certain parts of cognition*.

I would be surprised if the skill of being able to construct complementary cognitive artifacts wasn't potentially helpful in many domains. I don't know how one would go about learning this, but it seems like something to consider as having positive value if investigated.

*Those papers are the first things that came up with a google search. So I reserve the right to be wrong about the exact consequences.

This is a very valuable effort in outlining a hypothesis, and using the author’s wide-ranging taste and knowledge to pull loads of sources together. Definitely helped me a bit think about mathematics and thought, and some of my friends too. I've especially thought about that Grothendieck quote a lot.