After trying to discover why the LW wiki “definition” of Reductionism appeared so biased, I concluded from the responses that it was never really intended as a definition of the Reductionist position itself, but as a summary of what is considered to be wrong with positions critical of Reductionism.

The argument goes like this. “Emergentism”, as the critical view is often called, points to the properties that emerge from a system when it is assembled from its elements, properties which the elements do not themselves show. From such considerations it identifies various ways in which research programmes based on a reductionist approach may distort priorities and underestimate difficulties. So far, this is all a matter of degree and eventually each case must be settled on its merits. However, it gets philosophically sensitive when Emergentists claim that a Reductionist approach may be unable in principle to 'explain' certain emergent properties.

The response to this claim (I think) goes like this. (1) The explanatory power of a model is a function of its ingredients. (2) Reductionism includes all the ingredients that actually exist in the real world. Therefore (3) Emergentists must be treating the “emergent properties” as extra ingredients, thereby confusing the “map” with the “territory”. So Reductionism is, in effect, defined by EY and others as not treating emergent properties as extra ingredients.

At this point it is important to distinguish “Mind theory” from other fields where Reductionism is debated. In this field, Reductionists apparently regard Emergentism as a form of disguised Vitalism/Dualism - if emergent properties can’t be explained by the physical ingredients, they must exist in some non-physical realm. However, Emergentism can apply equally well to everything from chess playing programs to gearbox vibrations, neither of which involve anything like mysterious spiritual substances, so this can hardly be the whole story. And in fact I would argue that the reverse is the case: Vitalists or “substance Dualists” are actually unconscious Reductionists as well: when they assume an extra ingredient is necessary to account for the things which they believe Physicalism cannot explain, they are still reducing a system to its ingredients. Emergentists by contrast reject premise (1) of the previous paragraph, that the explanatory power of a model is a function of its ingredients. Thus it seems to me that the real difference between Reductionists & Emergentists is a difference over the nature of explanation. So it seems worthwhile looking into some of the different things that can be meant by “explanation”.

For simplicity, let us illustrate this by the banal example of a brickwork bridge. The elements are the bricks and their relative positions. Our reductionist R points out that these are the only elements you need - after all, if you remove all the bricks there is nothing left - and so proposes to become an expert in bricks. Our (Physicalist) Emergentist E suggests that this won’t be of much use without a knowledge of the Arch (an emergent feature). R isn't stupid and agrees that this would be extremely useful, but points out that if no expert in Arch Theory is to hand, such expertise isn’t strictly necessary given the very powerful computer available: it's not an inherent requirement. Simply solving the force balance equations for each brick will establish whether a given structure will fall into the river. Isn’t that an explanation?

Not in my sense, says E, as to start with it doesn’t tell me how the bridge will be designed, only how an existing design will be analysed. So R explains that the computer will generate structures randomly until one is found that satisfies the requirements of equilibrium. When E enquires how stability will be checked, R replies that the force balance will be checked under all possible small deviations from the design position.
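(To make R's proposal a little more concrete, here is a minimal Python sketch of "generate structures randomly until one satisfies equilibrium", with the bridge reduced to a toy grid of unit bricks. The grid size, the support rule and the sampling scheme are all invented simplifications for illustration, not real structural mechanics.)

```python
import random

# Toy stand-in for the brickwork bridge: a "design" is a set of unit bricks
# on a small grid.  Columns 0 and SPAN-1 are the river banks; the columns in
# between are over the water, so any brick there must be held up from below.
SPAN = 5          # grid columns, banks included
MAX_HEIGHT = 3    # grid rows

def supported(x, y, bricks):
    """Crude per-brick 'force balance': a brick stands if it rests on a bank,
    or on a brick directly or diagonally beneath it."""
    if y == 0:
        return x == 0 or x == SPAN - 1      # only the banks give ground support
    return any((x + dx, y - 1) in bricks for dx in (-1, 0, 1))

def in_equilibrium(bricks):
    return all(supported(x, y, bricks) for x, y in bricks)

def spans_gap(bricks):
    """A usable bridge needs at least one brick in every column."""
    return all(any(bx == col for bx, _ in bricks) for col in range(SPAN))

def random_structure(rng):
    """R's generator: each grid cell holds a brick with probability 1/2."""
    return {(x, y) for x in range(SPAN) for y in range(MAX_HEIGHT)
            if rng.random() < 0.5}

rng = random.Random(0)
for trial in range(50_000):
    candidate = random_structure(rng)
    if in_equilibrium(candidate) and spans_gap(candidate):
        print(f"standing design found after {trial + 1} trials:", sorted(candidate))
        break
else:
    print("no standing design found within the trial budget")
# (R's further stability check - force balance under small deviations from the
#  design position - is omitted: this toy grid has no notion of a small displacement.)
```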

E isn’t satisfied. To claim understanding, R must be able to apply the results of the first design to new bridges of different span, but all (s)he can do is repeat the whole process from scratch every time.

On the contrary, replies R, this being the age of Big Data, the computer can generate solutions in a large number of cases and then use pattern recognition software to extract rules that can be applied to new cases.

Ah, says E, but explaining these rules means hypothesising more general rules from which these rules can be derived, using appropriate Bayesian reasoning to confirm your hypothesis.

OK, replies R, my program has a heuristic feature that has passed the Turing Test. So anything you can do along these lines, it can do just as well.

So using R’s approach, explanation even in E’s most general sense can always be arrived at by a four-stage process: (1) construct a model using the basic elements applicable to the situation, (2) fill a substantial chunk of solution space, (3) use pattern recognition to extract pragmatic rules, (4) use hypothesis generation and testing to derive general principles from the rules. It may be a trivial illustration, but it seems to me that in a broad sense this sort of process must be applicable in almost any situation.
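(Continuing the toy sketch above - it reuses `random_structure`, `in_equilibrium`, `spans_gap`, `SPAN` and `MAX_HEIGHT` from there, and everything specific in it is again an invented simplification - stages (2) to (4) might look something like this.)

```python
# Stage (2): fill a chunk of solution space by collecting many standing designs.
def sample_solutions(n_wanted, rng, max_trials=2_000_000):
    solutions = []
    for _ in range(max_trials):
        candidate = random_structure(rng)
        if in_equilibrium(candidate) and spans_gap(candidate):
            solutions.append(candidate)
            if len(solutions) == n_wanted:
                break
    return solutions

# Stage (3): pattern recognition, reduced here to the crudest possible rule
# extraction: which grid cells occur in every standing design, and which in none?
def extract_rules(solutions):
    all_cells = {(x, y) for x in range(SPAN) for y in range(MAX_HEIGHT)}
    always = set.intersection(*solutions)
    never = all_cells - set.union(*solutions)
    return always, never

# Stage (4): hypothesis generation and testing, again in toy form: propose a
# general rule ("every standing design rests on both banks and has a raised
# brick over the centre column") and check it against a fresh, independent sample.
def hypothesis_holds(bricks):
    centre = SPAN // 2
    return ((0, 0) in bricks and (SPAN - 1, 0) in bricks
            and any(x == centre and y > 0 for x, y in bricks))

rng = random.Random(1)
train = sample_solutions(50, rng)
always, never = extract_rules(train)
print("cells present in every sampled design:", sorted(always))
print("cells present in no sampled design:  ", sorted(never))
test = sample_solutions(50, rng)
print("fraction of fresh designs satisfying the hypothesis:",
      sum(hypothesis_holds(s) for s in test) / len(test))
```

In this toy the cells that stage (3) recovers are (essentially) the two bank bricks plus a raised brick over the centre of the gap - the first glimmerings of the "Arch" that E was pointing to, appearing as a regularity of solution space rather than as an extra ingredient.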

How should we interpret this conclusion? R would say that it proves that “explanation” can be arrived at using a Reductionist model. E would say it proves the inadequacy of Reductionism, since Reductionist steps (1) & (2) have to be supplemented by Integrationist steps (3) & (4): the rules found at step (3) are precisely “emergent features” of the solution space. Moreover, pattern recognition is not a closed-form process with repeatable results. (Is it?) On the other hand the patterns identified in solution space might well be derivable in closed form directly from higher-level characteristics of the system in question (such as constraints in the system).

I would say that the choice of interpretation is a matter of convention, though I own up that I find the Emergentist mind-set more helpful in the fields I have learnt something about. What really matters is a recognition of the huge difference between “providing a solution” and “generalising from solution space” as types of explanation. The “Emergentist” label is a reminder of that difference. But call yourself a “Reductionist” if you like so long as you acknowledge the difference.

It seems to me that the sort of argument sketched here provides useful pointers to help recognize when “Reductionism” becomes “Greedy Reductionism”(A). For example, consider the claim that mapping the Human Connectome will enable the workings of the brain to be explained. Clearly, the mapping is just step (1). Consider the size of the Connectome, and then consider the size of the solution space of its activity. That makes step (1) sound utterly trivial compared with step (2). This leaves the magnitude of steps (3) & (4) to be evaluated. That doesn’t mean the project won’t be extremely valuable, but it puts the time-frame of the claim to provide real “understanding” into a very different light, and underlines the continued value of working at other scales as well.

(A): See e.g. fubarobfusco's comment on my earlier discussion.

23 comments
TimS:

The problem with the label "emergence" isn't that the phenomenon does not occur. The problem is when people use the label "emergence" as a semantic stop sign, ending attempts at further explanation.

Airplanes flying through the air are an emergent property of quantum mechanics. That sentence standing alone tells you nothing useful about airplanes or quantum mechanics.


Also, discussion posts don't use Markdown. (I think they use HTML, but don't quote me).

Airplanes flying through the air are an emergent property of quantum mechanics.

Not entirely. Gravity plays its part here, and gravity is not inside QM.

The sentence is fixed by replacing "quantum mechanics" with "elementary particles and fundamental fields". There isn't a good explanation yet as to how gravity is related to quantum mechanics, but we're pretty sure that they are related.

They do indeed use HTML.

TAG:

The problem is when people use the label “emergence” as a semantic stop sign, ending attempts at further explanation.

But there is nothing about "emergence" that makes it uniquely misusable that way.

What someone who says intelligence is an emergent property might mean is that there is not one "weird trick" to it. That is a reasonable thing to assert, even if it is not an explanation. It is also not an explanation to say (only) that intelligence is reductionistic, computational, etc. There is a reductionistic style of explanation, but "is reductionistic" is not an explanation.

There might be a problem that some concepts are used as semantic stop signs, but that problem is not restricted to "emergence". There might be a problem that emergence is not applicable to our universe, but that is not a problem with the concept of emergence: one has to at least consider possible ways the universe might be in order to rule out the incorrect ones. There might be a problem with seeking emergentist explanations after they have been definitively ruled out ... but there is still ongoing debate about that.

The problem is when people use the label "emergence" as a semantic stop sign

Agreed, which is why I was trying to replace it by a "proceed with caution" sign with some specific directions.


One lives & learns - thanks.

[anonymous]:

I think they use HTML

[This comment is no longer endorsed by its author]

Your treatment seems to paint Reductionism as an epistemic thesis about explanation. That's consistent with some philosophical treatments, but there's also a (merely) ontological thesis that goes by the name "Reductionism". I think EY has an ontological thesis in mind, although I'm very tentative about that.

I think he might have an ontological claim in mind as well, although I can't see how anyone could get at the ontology without going through the epistemology.

In the 4-stage process described in the article, stage 4 (hypothesis generation) is the one that I think provides the most definitive and unbridgeable difference between R and E. The proverbial Newton's apple story offers a good example. Millions of people had seen apples fall from trees. But the hypothesis of gravity could only have come into being through inductive, not deductive logic. Stages 1-3 can be, in principle, written into a computer program. I can't imagine how a hypothesis generation machine could arrive at the conclusion that Newton came to upon witnessing the apple fall.

I find your brickwork bridge overly complex. I propose a simpler example (borrowed from Minsky's SOM): How can a box made of six boards hold a mouse when a mouse could just walk away from any individual board? No individual board has any "containment" or "mouse-tightness" on its own. So is "containment" an emergent property?

Of course, it is the way a box prevents motion in all directions, because each board bars escape in a certain direction. The left side keeps the mouse from going left, the right from going right, the top keeps it from leaping out, and so on. The secret of a box is simply in how the boards are arranged to prevent motion in all directions!

That's what containing means. So it's silly to expect any separate board by itself to contain any containment, even though each contributes to the containing. It is like the cards of a straight flush in poker: only the full hand has any value at all.

"The same applies to words like life and mind. It is foolish to use these words for describing the smallest components of living things because these words were invented to describe how larger assemblies interact. Like boxing-in, words like living and thinking are useful for describing phenomena that result from certain combinations of relationships. The reason box seems nonmysterious is that everyone understands how the boards of a well-made box interact to prevent motion in any direction. In fact, the word life has already lost most of its mystery — at least for modern biologists, because they understand so many of the important interactions among the chemicals in cells. But mind still holds its mystery — because we still know so little about how mental agents interact to accomplish all the things they do."

-Minsky

At this point it is important to distinguish “Mind theory” from other fields where Reductionism is debated. In this field, Reductionists apparently regard Emergentism as a form of disguised Vitalism/Dualism - if emergent properties can’t be explained by the physical ingredients, they must exist in some non-physical realm.

Standard philosophical emergentism is explicitly a form of dualism..

"As a theory of mind (which it is not always), emergentism differs from idealism, eliminative materialism, identity theories, neutral monism, panpsychism, and substance dualism, whilst being closely associated with property dualism. " (WP)

..but standard emergentism has a clause that rogerS omits: emergent properties aren't just higher-level properties not had by their constituents, they are higher-level properties which cannot be explanatorily reduced to their constituents.

However, Emergentism can apply equally well to everything from chess playing programs to gearbox vibrations, neither of which involve anything like mysterious spiritual substances, so this can hardly be the whole story.

"Emergentism" can only be applied to gearboxes if the irreducbility clause is dropped. The high-level behaviour of a mechanism is always reducible to its the behaviour of its parts, because a mechanism is built up out of parts, and reduction is therefore, literally, reverse engineering.

But being able to offer an uncontentious definition of emergentism does not prove there is nothing contentious about it. It's a kind of inverted straw man.

The high-level behaviour of a mechanism is always reducible to the behaviour of its parts, because a mechanism is built up out of parts, and reduction is therefore, literally, reverse engineering.

This characterization isn't universally accepted. What if you simply can't anticipate or compute the high-level effect due to the sheer complexity and lack of total knowledge? For instance, the experience of pain can alter human behaviour, but the lower-level chemical reactions in the neurons that are involved in the perception of pain are not the cause of the altered behaviour, as the pain itself has causal efficacy. According to the principles of emergence, the natural world is divided into hierarchies that have evolved over evolutionary time (Kim, 1999; Morowitz, 2002). Reductionists advocate the idea of 'upward causation' by which molecular states bring about higher-level phenomena, whereas proponents of emergence accept 'downward causation' by which higher-level systems influence lower-level configurations (Kim, 1999).

Have a read:

Reductionism and complexity in molecular biology

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1299179/

Some highlights

The constituents of a complex system interact in many ways, including negative feedback and feed-forward control, which lead to dynamic features that cannot be predicted satisfactorily by linear mathematical models that disregard cooperativity and non-additive effects...
An additional peculiarity of complex biological systems is that they are open—that is, they exchange matter and energy with their environment—and are therefore not in thermodynamic equilibrium...
In the past, the reductionist agenda of molecular biologists has made them turn a blind eye to emergence, complexity and robustness, which has had a profound influence on biological and biomedical research during the past 50 years.
The number of new drugs that are approved by the US Food and Drug Administration has declined steadily from more than 50 drugs per annum 10 years ago to less than 20 drugs in 2002. This worrying trend has persisted despite continuous mergers and acquisitions in the industry and annual research and development expenditures of approximately US$30 billion. Commentators have attributed this poor performance to a range of institutional causes . . . However, there is probably a more fundamental reason for these failures: namely, that most of these approaches have been guided by unmitigated reductionism. As a result, the complexity of biological systems, whole organisms and patients tends to be underrated (Horrobin, 2001). Most human diseases result from the interaction of many gene products, and we rarely know all of the genes and gene products that are involved in a particular biological function. Nevertheless, to achieve an understanding of complex genetic networks, biologists tend to rely on experiments that involve single gene deletions. Knockout experiments in mice, in which a gene that is considered to be essential is inactivated or removed, are widely used to infer the role of individual genes. In many such experiments, the knockout is found to have no effect whatsoever, despite the fact that the gene encodes a protein that is believed to be essential. In other cases, the knockout has a completely unexpected effect (Morange, 2001a). Furthermore, disruption of the same gene can have diverse effects in different strains of mice (Pearson, 2002). Such findings question the wisdom of extrapolating data that are obtained in mice to other species. In fact, there is little reason to assume that experiments with genetically modified mice will necessarily provide insights into the complex gene interactions that occur in humans (Horrobin, 2003).
Another defect of reductionist thinking is that it analyses complex network interactions in terms of simple causal chains and mechanistic models. This overlooks the fact that any clinical state is the end result of many biochemical pathways and networks, and fails to appreciate that diseases result from alterations to complex systems of homeostasis. Reductionists favour causal explanations that give undue explanatory weight to a single factor.
TAG:

The key word is "mechanism", meaning something deliberately and consciously constructed out of parts. No organism is a mechanism in that sense.

Your characterization is far from universally accepted.

See Mechanisms in Science, Stanford Encyclopedia of Philosophy

https://plato.stanford.edu/entries/science-mechanisms/#ProUndMai

TAG:

What I say is valid given my definition of mechanism.

Ok then what in the world do you mean by “consciously” or “parts”? And why do you think we can’t make biological organisms? Is there something magical about them?

Google Craig Venter “synthetic life”

So emergence cannot be present in a mechanism if I “deliberately” make something but it can be if I make a mistake? So an emergent property is just anything accidental? Is software made from parts? So if I make a software application that has unexpected properties, whether they are emergent or not all depends on how conscious I was of them when I set out?

TAG:

I don't mean anything exotic or hard to guess. Just soldering components onto a board, that kind of thing.

And why do you think we can’t make biological organisms?

We can't right now.

So emergence cannot be present in a mechanism if I “deliberately” make something but it can be if I make a mistake?

"Emergence" can be present anywhere if you define it broadly enough. It's not much of a win to prove that emergence is ubiquitous by trivialising the term.

"As a theory of mind (which it is not always), emergentism differs from idealism, eliminative materialism, identity theories, neutral monism, panpsychism, and substance dualism, whilst being closely associated with property dualism. " (WP)

As a theory exclusively of the mind, I can see that emergentism has implications like property dualism, but not as a theory that treats the brain just as a very complex system with similar issues to other complex systems.

"Emergentism" can only be applied to gearboxes if the irreducibility clause is dropped. The high-level behaviour of a mechanism is always reducible to its the behaviour of its parts.

My point is that depends if by "behaviour" you mean "the characteristics of a single solution" or "the characteristics of solution space". In the latter case the meaning of "reduction" doesn't seem unambiguous to me.

The practical debate I have in mind is whether multibody dynamics can answer practical questions about the behaviour of gearboxes under conditions of stochastic or transient excitation with backlash taken into account, the point being that the solution space in such an application can be very large.

In the context of the mind-body problem, the contentious claim of emergentists is that mental properties can't be reduced to physical properties in principle. There could be any number of in-practice problems involved in understanding complex systems in terms of their parts. No actual reductionists think that all sciences should be replaced by particle physics, because they understand these in-practice problems. The contentiousness is all about the in-principle issues.

Reducing to "physical properties" is not necessarily the same as to "the physical properties of the ingredients". I would have thought physicalists think mental properties can be reduced to physical properties, but reductionists identify these with the physical properties of the ingredients. I suppose one way of looking at it is that when you say "in principle" the principles you refer to are physical principles, whereas when emergentists see obstacles as present "in principle" when certain kinds of complexity are present they are more properly described as mathematical principles.

Mental events can certainly be reduced to physical events, but I would take mental properties to be the properties of the set of all possible such events, and the possibility of connecting these to the properties of the brain's ingredients even in principle is certainly not self-evident.

Reducing to "physical properties" is not necessarily the same as to "the physical properties of the ingredients".

Well, no, but reducing to the properties of (and some suitable well behaved set of relations between) the smallest ingredients is what reductionists mean by reductionism.

I would have thought physicalists think mental properties can be reduced to physical properties, but reductionists identify these with the physical properties of the ingredients

I would have thought reductionists think they can be reduced and identity theorists think they are already identical.

I suppose one way of looking at it is that when you say "in principle" the principles you refer to are physical principles, whereas when emergentists see obstacles as present "in principle" where certain kinds of complexity are involved, the principles in question are more properly described as mathematical.

"in principle" means in the absence of de-facto limits in cognitive and/or computational power.

Mental events can certainly be reduced to physical events, but I would take mental properties to be the properties of the set of all possible such events, and the possibility of connecting these to the properties of the brain's ingredients even in principle is certainly not self-evident.

Errrr...you believe in Token Identity but not Type Identity???