Epistemic status: exploratory
In 1907, Ross Harrison grew a frog nerve in vitro, inventing modern tissue culture. He extracted cells from the relevant part of a frog embryo, put them in a contraption from bacteriology called a hanging drop, and observed as the cells grew and differentiated into nerves.
Based on his paper and later comments, Harrison only wanted to settle a controversy about the cellular nature of nerves. Where others had to cut frogs at different stages of development and interpret their dead bodies, Harrison could just watch the nerve grow. But he did more than that — he shattered many assumptions about the conditions and necessities of life, by making cells live and grow outside the body.
When I read about it, that’s what I found exciting: Harrison had intervened in the world, in doing so uncovering deep principles of cell biology. This was no passive measure to confirm a hypothesis, but a wrestling with nature to make her reveal her secrets.
I started to see this everywhere: chemical properties revealed through synthesis, natural selection probed by Darwin’s experiments, complexity classes characterized by artificial problems… I realized that many of the experiments and ideas that excited me involved wrestling with nature to discover its secrets. And understanding the possibilities and limits of this “scientific wrestling” will definitely prove a powerful tool in any epistemic toolkit, even more so in a field like Alignment where passive observation is hopeless.
So in this post, I discuss five ways I've noticed in which such wrestling can create knowledge, with examples from chemistry, cell biology, complexity theory, evolutionary biology, network architecture, and more.
Thanks to Flora for a great discussion on synthesis and for helping me clarify the point of this post.
Studying hard-to-observe processes through instantiation
Many fundamental processes we want to understand are just hard to observe: what’s happening inside the body (because of opacity), genes (because of scale), natural selection (because of temporal and spatial scale), chemical bonds (because of microscopic scale)... Yet we can reveal these processes by instantiating them in settings where they can be observed.
The already mentioned invention of tissue culture by Ross Harrison served exactly this purpose for internal body processes: to make them happen within a transparent glass enclosure. Follow-up work on keeping cultures alive for longer and longer periods further helped make the underlying processes unfold at a speed and scale suited to observation.
(Culturing Life: How Cells Became Technologies, Hannah Landecker, 2007)
Herein lies the important shift from in vivo to in vitro. In observing the living subject over time and the living process as it happened, the assumptions embedded in histological practice were confounded. One did not have to kill the animal or the tissue to observe the development course or experimental alteration of internal structures and processes. Internal processes could be placed on the exterior, and watched, given the appropriate technical substitution of particular functions of the body: asepsis, fluid, structural support, warmth. In substituting a glass enclosure and a drop of lymph for the body, something opaque was replaced by something transparent, and the enclosure did not have to be opened or halted in order to observe what was going on inside it. In not just taking the animal body apart, but leaving it apart, cellular life that was autonomous, external, and dynamic came into being for biology.
Gregor Mendel’s experiments on pea plants provide another great example. By creating a bare-bones setting and carefully controlling the breeding, Mendel extracted fundamental principles about inheritance and genetics, without ever observing genes.
A few other examples:
- Darwin leveraged knowledge and experiments about breeding when inventing evolutionary theory, because he understood that selection by humans was an instantiation of a more general selection process.
- Chemistry abounds with examples, where reactions were and are used to reveal underlying affinities and bonding mechanisms that couldn’t (and sometimes still can’t) be observed directly.
To summarize, hard-to-observe phenomena can be revealed to the scientific eye by creating controlled and scaled versions of them.
This in turn points to the main risk: that the artificial instantiation comes apart from the natural process. This shouldn't be a problem for my examples above, because they either directly interact with the natural phenomenon (chemistry, inheritance, tissue culture) or instantiate the same general mechanism (breeding). Yet it's good to remember that these are just proxies, especially when interpreting results.
Revealing principles through invention
Not all created proxies look as natural as tissue cultures or inheritance experiments — sometimes their artificiality itself can be productive. The idea is to make something up that directly aims for an important aspect of the phenomenon at hand, without having to deal with the constraints nature puts on its creations.
My first example definitely looks weird, but I think it works: artificial-looking problems in complexity theory. You see, my first impression when I learned about complexity theory years ago was that people got obsessed with irrelevant toy problems, like computing the parity of a binary string or coloring a graph using 3 colors, but only in a certain sequence, and never in some god-forbidden pairs. Some of these looked anything but natural, in the sense that I didn’t expect any programmer to ever have to solve them. How did working on that help with understanding computation and its cost?
What I missed was the paradigm of complexity classes. These are sets of problems which satisfy a certain resource constraint: the problems solvable in polynomial time for example, or the ones taking logarithmic space. And these classes are characterized by their complete problems — problems of the class to which every other problem can be reduced (under the right constraints).
So if a weird problem is complete for some class, studying it actually tells us something about the whole class (for example, proving that a single NP-complete problem can’t be solved in polynomial time would settle P vs NP). And sometimes artificial problems even create a new complexity class, if they end up complete for an unknown and interesting set of problems.
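The machinery that makes this work is the reduction: a cheap transformation turning instances of one problem into instances of another while preserving the answer. Here is a minimal sketch (my own illustration, not from the post) of a textbook example: Independent Set and Clique reduce to each other by taking the graph's complement, so anything we learn about one problem transfers to the other. The function names and the brute-force checkers are of course hypothetical scaffolding for the demo.

```python
from itertools import combinations

def complement(n, edges):
    """The reduction itself: complement a graph on vertices 0..n-1 (O(n^2))."""
    full = {frozenset(p) for p in combinations(range(n), 2)}
    return full - {frozenset(e) for e in edges}

def has_independent_set(n, edges, k):
    """Brute-force check: some k vertices with no edge between any pair."""
    es = {frozenset(e) for e in edges}
    return any(all(frozenset(p) not in es for p in combinations(s, 2))
               for s in combinations(range(n), k))

def has_clique(n, edges, k):
    """Brute-force check: some k vertices that are pairwise connected."""
    es = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in es for p in combinations(s, 2))
               for s in combinations(range(n), k))

# A 5-cycle: G has an independent set of size k exactly when
# its complement has a clique of size k.
n, edges = 5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
for k in range(1, 5):
    assert has_independent_set(n, edges, k) == has_clique(n, complement(n, edges), k)
```

The brute-force checkers take exponential time, but the reduction is polynomial, which is the point: it ties the two problems' difficulty together, and chains of such reductions are what let a single complete problem speak for an entire class.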
Another example, this time more experimental, is synthesis in chemistry. Many syntheses don’t aim to recreate natural substances, but instead to make something that was never observed, and might never have existed, like molecular hydrogen within buckminsterfullerene.
Why? Many reasons: the fun of creation, practical applications, testing a theory, or helping make further molecules. But in general, synthesis confirms rather than invalidates; it shows what can be done, confirms the design, and reveals what is possible at the extreme.
To summarize, making up new things (from computational problems to molecules) can reveal deep principles of the underlying phenomenon by pushing it to its limits or by isolating one aspect of it.
How can it fail? Both examples show the same risk: focusing too much on the creation itself instead of treating it as a tool to investigate the original phenomenon.
Revealing properties through change
What if we’re interested in understanding a concrete natural object or phenomenon, like a molecule or a cell? Then one form of scientific wrestling consists in altering this object to learn how it works.
Still in chemistry for example, synthesis can help differentiate structurally equivalent isomers — molecules with the same composition and the same structure up to specific permutations. The trick is that in some cases, the symmetry can be broken by a synthesis operation (say, replacing a hydrogen atom with a bigger group). Depending on the isomer, the group has access to different replacement sites, which leads to different numbers of distinct possible structures (technically, up to symmetry). So counting the number of different isomers after the synthesis reveals the initial structure.
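The counting argument can be made concrete with the classic case it comes from: Körner's method for disubstituted benzene (my choice of example, matching the general argument above rather than quoting it). The three isomers with substituents on adjacent, alternating, or opposite ring positions have the same composition, but adding one more identical substituent yields different numbers of distinct products. A small sketch, treating the ring's rotations and reflections as the symmetries:

```python
def symmetries():
    """The 12 symmetries of a hexagonal ring: 6 rotations, each optionally reflected."""
    for r in range(6):
        for flip in (1, -1):
            yield lambda i, r=r, flip=flip: (flip * i + r) % 6

def canonical(pattern):
    """A canonical representative of the pattern's class under ring symmetries."""
    return min(tuple(sorted(g(i) for i in pattern)) for g in symmetries())

def count_products(pattern):
    """Number of distinct molecules obtained by adding one more substituent."""
    free = set(range(6)) - set(pattern)
    return len({canonical(set(pattern) | {p}) for p in free})

# Ortho {0,1}, meta {0,2}, para {0,3}: same composition, but substitution
# yields 2, 3, and 1 distinct products respectively — which is how the
# initial structure can be read off without ever seeing the molecule.
print(count_products({0, 1}), count_products({0, 2}), count_products({0, 3}))
```

Two patterns count as the same molecule exactly when some rotation or reflection maps one onto the other; the `canonical` function just picks a unique representative of each symmetry class so the set comprehension counts classes, not raw position sets.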
Beyond this example, philosopher of chemistry Joachim Schummer argues that chemistry studies exactly these “changeabilities” of chemical substances:
(Knowing-Through-Making in Chemistry and Biology, Joachim Schummer, 2021)
However, at its core, chemistry is about chemical properties which are about how one or more chemical substances can react to form one or more other chemical substances (Schummer 1998). That is, all chemical properties are chemical reactivities, or more generally, changeabilities (which is beyond the received philosophy of science). (Footnote: Schummer, like every philosopher of chemistry, can’t resist a diss on traditional philosophy of science, that is, philosophy of physics.)
Cell fusion in tissue culture is a version of this scientific wrestling in biology. The natural boundaries and compatibilities of different cells were examined by fusing different somatic cells, then cells from different individuals, then cells from different species. Each success unveiled new insights about the plasticity of the cell, and the fundamental compatibility between cells from vastly different origins.
To summarize, changing natural objects or phenomena can reveal their underlying structure and how they can be combined to make other objects.
Paradoxically, this seems to me the approach least likely to fail, because there is no real proxy involved.
Shattering assumptions through successful design
Nothing dispels an impossibility as well as creating the impossible. This means that designing new objects and phenomena can upend long-held assumptions of the field.
As I mentioned in the introduction, this is precisely what Harrison’s first tissue culture did. And this specific field of study did it again (Oops), at least twice more!
- Harrison’s experiment and follow-up work by Alexis Carrel and others destroyed the early 20th century belief that complex life processes could only happen inside the body.
- Alexis Carrel’s work on making an immortal culture, and the 1940s advances in freezing and cloning that led to immortal cell lines like HeLa, shifted assumptions about the lifespan of cells and other biological material.
- Starting in the 60s, the work on cell fusion (including between species) shattered the assumed fundamental differences between individuals and species that immune rejection and sexual infertility implied.
Another example from chemistry is the synthesis of urea from inorganic components, which invalidated vitalist claims that organic molecules were special.
Lastly, I think of all the attempts to push the boundaries in engineering: building bigger monuments, faster CPUs and GPUs, more resistant materials… I expect that in many cases, each success shifted what the field considered possible.
To summarize, building what is considered impossible precipitates a reconsideration of long-held assumptions and a release from limiting constraints.
The way I expect this particular scientific wrestling to fail is to keep trying to make what’s actually impossible, which never ends well. This can be addressed by focusing on ideas that haven't been tried to death and should provide new information (for example, a new technique or an unexplored theoretical path). If you want a perpetual motion machine, you better start with an insight in thermodynamics rather than a random “innovative” design that 2483 people came up with before.
Uncovering constraints through difficulties in design
Sometimes, we end up learning about the world as a consequence of purely practical aims. The point was to solve a concrete problem, to make things work. Yet what parts of the solution work and which don’t can uncover the underlying constraints of the problem, which sometimes are fundamental principles.
Tissue culture is at least partially like that: most of its proponents wanted to make cultures work in order to solve other problems, not to directly probe biological principles (although Carrel wanted to study biological time and operationalize Bergson’s duration, because why not).
Another set of fields where I expect this to happen is engineering disciplines before they had powerful scientific theories. Nowadays a lot of engineering is based on mathematical models and simulations of the underlying physics or biology; but that’s a recent innovation, and one that isn’t even applied to all subfields (software engineering, for example).
Yet I had trouble finding good concrete examples of this.
- When I read about cathedral builders, it sounds like they learned from previous buildings and from what happened during the decades-long constructions, but guilds of master builders were notoriously secretive, so I don’t know to what extent they did that.
- Looking quickly at the history of material science, it sounds like the initial formalization started by explaining empirical metallurgical knowledge; but I didn’t invest enough time to be confident of this.
- Barriers for P vs NP in complexity theory, which are theorems about the inability of specific proof techniques to separate P and NP, might count. But it’s debatable and debated whether they’re fundamental insights into computation or just contingent on our approaches.
- And there’s also John Day’s Patterns in Network Architecture, which attempts to create a paradigm of network design based on 60 years of building and breaking networks; I plan to analyze this in more detail for my study of paradigms, but I haven’t done it yet.
To summarize, solving a concrete, practical problem can be a tool for understanding the underlying structure of the solution space and the phenomena at hand, by finding constraints and their extent.
This approach has two main risks that I can see:
- Because it’s motivated by solving practical problems, this approach risks stopping early when things work decently well (one of Day’s claims is that the ARPANET project got things too right too quickly, which resulted in being satisfied and not iterating further on the fundamental design).
- If this approach unearths a constraint, it’s possible that it doesn’t capture an important underlying property, just the inadequacy of current methods. I mentioned above the barriers to P vs NP, which don’t sound that fundamental to me or many complexity theorists.
Here is a bunch of references and topics that might be relevant to scientific wrestling, but which I haven’t explored:
- Ian Hacking’s Representing and Intervening is supposed to be a classic of philosophy of science on intervention and active experiments, but his focus seems to be more on physics and realism. So I didn’t prioritize reading it.
- Physics arguably contains examples of scientific wrestling, like the first nuclear reaction, the experiments around electricity, and particle accelerators. But knowing the tendency in philosophy of science to overemphasize physics, I decided to explore different examples.
- Another topic that came to mind was economics, but I already had my hands full with reading on tissue culture and biology.
- More out there, creating to understand makes me think of art, but I plan to investigate the comparative epistemologies of art and science at another time.