While reading Bachelard (one of the greatest philosophers of science of all time), I came across this fascinating passage:[1]

From now on, an axiomatics accompanies the scientific process. We have written the accompaniment after the melody, yet the mathematician plays with both hands. And it’s a completely new way of playing; it requires multiple planes of consciousness, a subconscious that is affected yet active. It is far too simple to constantly repeat that the mathematician doesn’t know what he manipulates; actually, he pretends not to know; he must manipulate the objects as if he didn’t know them; he represses his intuition; he sublimates his experience.

Gaston Bachelard, Le nouvel esprit scientifique (1934), p. 52

Here Bachelard is analyzing the development of non-Euclidean geometries. His point is that the biggest hurdle to discovering these new geometries was psychological: Euclidean geometry is such a natural fit with our immediate experience that we intuitively take it to be the essence of geometry. It is no longer a tool or a concept engineered for practical applications, but a real ontological property of the physical world.

Faced with such a psychologically entrenched concept, what recourse do we have? Formalization, answers Bachelard.

For formalization explicitly refuses to acknowledge our intuitions of things, the rich experience we always integrate into our concepts. Formalization and axiomatization play a key role here not because we don’t know what our concepts mean, but because we know it too well.

It’s formalization as suspension of intuition.

What this suspension gives us is a place to explore the underlying relationships and properties without the tyranny of immediate experience. Thus delivered from the “obvious”, we can unearth new patterns and structures that in turn alter our intuitions themselves!

A bit earlier in the book, Bachelard presents this process more concretely, highlighting the difference between Lobatchevsky’s exploration of non-Euclidean geometries and proofs by contradiction:

Indeed, we not only realize that no contradiction emerges, but we even quickly feel that we are facing an open deduction. Whereas a problem attacked through proof by contradiction moves quickly towards a contradiction where the absurd comes about, the deductive creation of the Lobatchevskian dialectic anchors itself more and more concretely in the reader’s mind.

Gaston Bachelard, Le nouvel esprit scientifique (1934), pp. 46-47

Of course, formalization has many other uses. But I still find this Bachelardian function enlightening. It not only points at the constructivist nature of our models of the world; it also gives us a concrete tool to realize the perpetual rectification which Bachelard sees as the core process of scientific discovery and progress.

This post is part of the work done at Conjecture.

  1. ^

    Note that I translated all quotes myself — for better or worse — because I’m reading the book in French and Bachelard’s philosophy of science is criminally untranslated anyway.


Surprisingly, this echoes a recent thought of mine: "frames blind us". It comes to mind when I think of how traditional agent models (RL, game theory...) blinded us for years to their implicit assumption of no embeddedness. As with the non-Euclidean shift, the change has come from relaxing the assumptions.
This post seems to complement your good old Traps of Formalization. Maybe it's a Hegelian dialectic: 
I) intuition domination (system 1)
II) formalization domination (system 2)
III) modulable formalization (improved system 1/cautious system 2)
Deconfusion is reached only when one is able to go from I to III.

That definitely feels right, with a caveat that is dear to Bachelard: this is a constant process of rectification that repeats again and again. There is no ending, or the ending is harder to find than we think.

it comes to mind when I think of how traditional agent models (RL, game theory...) blinded us for years to their implicit assumption of embeddedness.

Most things are embedded, so it's the right assumption to choose, if you have to choose one.

Agreed. It's the opposite assumption (aka no embeddedness) that I meant; fixed.

I like this perspective! The idea of formalization as suspension of intuition reminds me of the story of the "Gruppenpest" in the development of quantum mechanics. The abstraction of groups (as well as representations and matrices) was seen by many as non-physical and unintuitive. But it turned out the resulting abstractions of gauge theories and symmetries were more fundamental objects than their predecessors.[1][2][3][4]

It also reminds me of a view I've been told many times that mathematical formalization/modeling is the process of forgetting details about a problem until its essential character is laid bare. I think it's important to emphasize that formalization is only a partial suspension or redirection of intuition (which seems to be what Bachelard is actually implying), since the goal of formalization typically isn't to turn the process into something doable by a mechanical proof checker. Formalization removes all the "distractions" that block you from seeing underlying regularities, but you still need to leverage your intuition to get a grasp on those regularities. As you say in the post:

What this suspension gives us is a place to explore the underlying relationships and properties without the tyranny of immediate experience. Thus delivered from the “obvious”, we can unearth new patterns and structures that in turn alter our intuitions themselves!

 

  1. ^

    https://www.researchgate.net/publication/234207946_From_the_Rise_of_the_Group_Concept_to_the_Stormy_Onset_of_Group_Theory_in_the_New_Quantum_Mechanics_A_saga_of_the_invariant_characterization_of_physical_objects_events_and_theories

  2. ^

    https://ncatlab.org/nlab/show/Gruppenpest

  3. ^

    https://hsm.stackexchange.com/questions/170/how-did-group-theory-enter-quantum-mechanics

  4. ^

    https://www.math.columbia.edu/~woit/wordpress/?p=191

Upon reading the title I actually thought the article would argue the exact opposite, that formalization affects intuition in a negative way. I like non-Euclidean geometry as a particular example where formalization actually helped discovery.

But this is definitely not always true. For instance, if you wanted to intuitively understand why addition of naturals is commutative, maybe to build intuition for recognizing similar properties elsewhere, would this formal proof really help?

plus_comm =
fun n m : nat =>
nat_ind (fun n0 : nat => n0 + m = m + n0)
  (plus_n_O m)
  (fun (y : nat) (H : y + m = m + y) =>
   eq_ind (S (m + y))
     (fun n0 : nat => S (y + m) = n0)
     (f_equal S H)
     (m + S y)
     (plus_n_Sm m y)) n
     : forall n m : nat, n + m = m + n

This is as formal as it gets: a 100% machine-checkable proof with no room for human error.
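
For contrast, here is a minimal sketch of the same theorem proved interactively with tactics (plus_comm' is a name I made up; it relies on the same standard-library lemmas plus_n_O and plus_n_Sm that appear in the printed term above). The script at least mirrors how a human reasons by induction, even though it elaborates to a term much like the one above:

(* A sketch for contrast: the same theorem proved with tactics. *)
Theorem plus_comm' : forall n m : nat, n + m = m + n.
Proof.
  intros n m.
  induction n as [| n' IH].
  - (* Base case: after simpl the goal is m = m + 0. *)
    simpl. rewrite <- plus_n_O. reflexivity.
  - (* Inductive step: rewrite with IH : n' + m = m + n',
       then close with plus_n_Sm, instantiated to S (m + n') = m + S n'. *)
    simpl. rewrite IH. rewrite plus_n_Sm. reflexivity.
Qed.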

I think formalization is just a tool that may or may not be helpful depending on your goal, and the real question is how to tell ahead of time what level of formalization will be helpful.

Completely agree! The point is not that formalization or axiomatization is always good, but rather to elucidate one counterintuitive way in which it can be productive, so that we can figure out when to use it.

Also I just found that you already argued this in an earlier post, so I guess my point is a bit redundant.

Anyway, I like that this article comes with an actual example, we could probably use more examples/case studies for both sides of the argument.

I am not familiar with Bachelard, but I suspect that by now we understand rather better how humans make progress in understanding the world. 

But first, the very idea that formalization would have helped discover non-Euclidean geometries earlier seems counter to the empirical observation that Euclid himself formalized geometry with 5 postulates; how much more formal can it get? Compared to the rest of the science of the time, it was a huge advance. He also saw that the 5th one did not fit neatly with the rest. Moreover, non-Euclidean geometry was right there in front of him the whole time: spheres are all around. And yet the leap from a straight line to the great circle, and the realization that his other 4 postulates work just fine without the 5th, had to wait some two millennia. 

In general, what you (he?) call "suspension of intuition" seems to me to be more like the emergence of a different intuition after a lot of trying and failing. I think that the recently empirically discovered phenomenon of "grokking" in ML provides a better model of how breakthroughs in understanding happen. It is more of a Hegelian/Kuhnian model of phase transitions after a lot of data accumulation and processing. 

It is a fascinating topic, why in some cases these transitions happen quickly and in other cases take forever, and how they can be intentionally sped up. This is not necessarily grouped by field, either. Progress in fundamental physics was at breakneck speed in the first 30 years of the 20th century, then slowed down, and is at a crawl now, though glimpses of things to come are there again. Biology was stagnant for a long time after Galen, then there was an age of breakthroughs. I suspect this applies to many other fields as well. Psychology had a leap with Freud, and we are still not very far from his models. 

In retrospect, it does not feel like "formalization explicitly refuses to acknowledge our intuitions of things, the rich experience we always integrate into our concepts" is an essential step. It is more like "after a new concept emerges, it feels like it emerged after someone replaced intuition with formalization", but that was not the actual process that led to the emergence.

It is more of a Hegelian/Kuhnian model of phase transitions after a lot of data accumulation and processing.

But in the case of hyperbolic geometry, the accumulation of "data" came from working out the consequences of varying the axioms, right? So I don't think this necessarily contradicts the OP. We have a set of intuitions, which we can formalize and distill into axioms. Then by varying the axioms and systematically working out their consequences, we can develop new intuitions.

Well, hyperbolic geometry was counterintuitive (the relevant 2d manifold does not embed isometrically into 3d Euclidean space), but spherical geometry was not. Euclid had all the knowledge needed to construct spherical geometry. In fact, most of it was constructed experimentally due to the need to describe the celestial sphere and to navigate the seas; it just wasn't connected to the 4 postulates. Once this connection is made and the 5th postulate is shown to be independent of the first 4, arising (at least informally) as the limit of spherical geometry on an infinitely large sphere, the step toward hyperbolic geometry is quite natural. 
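
To make that limiting step concrete, here is the standard spherical excess formula (Girard's theorem), stated informally: for a geodesic triangle of area A on a sphere of radius R, the angle sum is α + β + γ = π + A/R². As R → ∞, the excess A/R² vanishes and we recover the Euclidean angle sum implied by the 5th postulate; flipping the sign of the curvature turns the excess into the angle deficit of hyperbolic geometry.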

the accumulation of "data" came from working out the consequences of varying the axioms

I don't know if that is how it worked out in this case. It does not seem to be how our understanding advances in general. In the natural sciences it tends to be driven by experiment, which is enabled by a mix of knowledge and technology. The revolutionary Hodgkin and Huxley model of action potential propagation depended on advances in technology/electrophysiology (20 µm silver electrodes, availability of giant squid axons), in mathematics and EE (circuit analysis), and something else that allowed them to ask the right questions. It is not clear to me that it was a break from intuition to axioms in this case; it seems unlikely.

Yeah, I think the "varying the axioms" thing makes more sense for math in particular, not so much the other sciences. As you say, the equivalent thing in the natural sciences is more like experimentation.

Maybe we can roughly unify them? In both cases, we have some domain where we understand phenomena well. Using this understanding, we develop tools that allow us to probe a new domain which we understand less well. After repeated probing, we develop an intuition/understanding of this new domain as well, allowing us to develop tools to explore further domains.

There is definitely the step of 

develop tools that allow us to probe a new domain which we understand less well. After repeated probing, we develop an intuition/understanding of this new domain

but I don't see how it can be unified with the OP's thesis.

I'm saying the study of novel mathematical structures is analogous to such probing. At first, one can only laboriously perform step-by-step deductions from the axioms, but as one does many such deductions, intuition and understanding can be developed. This is enabled by formalization.

That certainly makes sense. For example, there are quite a few abstraction steps between the Fundamental theorem of calculus and certain commutative diagrams. 

Thanks for your thoughtful comment!

First, I want to clarify that this is obviously not the only function of formalization. I feel like this might address a lot of the points you raise.

But first, the very idea that formalization would have helped discover non-Euclidean geometries earlier seems counter to the empirical observation that Euclid himself formalized geometry with 5 postulates; how much more formal can it get? Compared to the rest of the science of the time, it was a huge advance. He also saw that the 5th one did not fit neatly with the rest. Moreover, non-Euclidean geometry was right there in front of him the whole time: spheres are all around. And yet the leap from a straight line to the great circle, and the realization that his other 4 postulates work just fine without the 5th, had to wait some two millennia. 

So Euclid formalized our geometric intuitions, the obvious and immediate shapes that naturally make sense of the universe. This use of formalization was to make more concrete and precise some concepts that we had but that were "floating around". He did it so well that these concepts and intuitions acquired an even stronger "reality" and "obviousness": how could you question geometry when Euclid had made so tangible the first intuitions that came to your mind?

According to Bachelard, the further formalization, or rather axiomatization, of geometry, which stripped down the apparently simple concepts of points and lines to make them algebraically manipulable, was a key part of getting out of this conceptual constraint.

That being said, I'd be interested in an alternative take or evidence that this claim is wrong. ;)

In general, what you (he?) call "suspension of intuition" seems to me to be more like the emergence of a different intuition after a lot of trying and failing. I think that the recently empirically discovered phenomenon of "grokking" in ML provides a better model of how breakthroughs in understanding happen. It is more of a Hegelian/Kuhnian model of phase transitions after a lot of data accumulation and processing. 

This strikes me as a false comparison/dichotomy: why can't both be part of scientific progress? Especially in physics and chemistry (the two fields Bachelard knew best), there are many examples of productive formalization/axiomatization as suspension of intuition:

  • Boltzmann's work, which generally started from mathematical building blocks, built structures from them, and then interpreted them. See this book for more details on this view.
  • Quantum mechanics went through that phase, where the half-baked models based on classical mechanics didn't work well enough, and so there was an effort at formalization and axiomatization that revealed the underlying structure without as much pollution by macroscopic intuition.
  • The potential function came from a purely mathematical and formal effort to compress the results of classical mechanics, and ended up being incorporated into the core concepts of physics.

I've also found that, on inspection, models of science based on the gathering of a lot of data rarely fit the actual history. Notably, Kuhn's model contradicts the history of science almost everywhere, and he gives a highly biased reading of the key historical events that he leverages.

That's an interesting way of thinking about it. I'm reminded of how computers have revolutionized thought over the past century.

Most humans have tended to think primarily by intuition. Glorified pattern-matching, with all the cognitive biases that entails.

Computers, in contrast, have tended to be all formalization, no intuition. They directly manipulate abstract symbols at the lowest level, symbols that humans only deal with at the highest levels of conscious thought.

And now AI is bringing things back around to (surprisingly brittle) intuition. It will be interesting, at least, to see how newer AI systems will bring together formalized symbolic reasoning with pattern-matching intuition.