Reply to The Futility of Emergence

In The Futility of Emergence, Eliezer takes an overly critical position on emergence as a theory. In this (short) article, I hope to challenge that view.

Emergence is not an empty phrase. The statements "consciousness is an emergent phenomenon" and "consciousness is a phenomenon" are not the same; the former conveys information that the latter does not. When we say something is emergent, we refer to a well-defined concept.

From Wikipedia:

> Emergence is a phenomenon whereby larger entities arise through interactions among smaller or simpler entities such that the larger entities exhibit properties the smaller/simpler entities do not exhibit.

"A is an emergent property of X" means that A arises from X in a way that is contingent on the interaction of X's constituents (and not on those constituents themselves). If A is an emergent property of X, then the constituents of X do not possess A. A comes into existence as a categorial novum at the inception of X. The difference between system X and its constituent components with regard to property A is a difference of kind and not of degree; X's constituents do not possess A in some tiny magnitude—they do not possess A at all.

> Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem.

This is blatantly untrue; size and mass, for example, are properties of elementary particles.

> You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there's no detailed internal model to manipulate. Those who proffer the hypothesis of "emergence" confess their ignorance of the internals, and take pride in it; they contrast the science of "emergence" to other sciences merely mundane.

I respectfully disagree.

When we say A is an emergent property of X, we say that X is more than the sum of its parts. Aggregating and amplifying the properties of X's constituents does not produce the properties of X. The proximate cause of A is not the constituents of X themselves—it is the interaction between those constituents.

Emergence is testable and falsifiable, and it makes advance predictions: if I say A is an emergent property of system X, then I am saying that none of the constituent components of X possess A (in any form or magnitude).

Statement: "consciousness (in humans) is an emergent property of the brain."

Prediction: "individual neurons are not conscious to any degree."

Observing a supposed emergent property in a system's constituent components falsifies the theory of emergence (as far as that particular phenomenon is concerned).

The strength of a theory is not what it can predict, but what it can't. Emergence excludes a lot of things; size and mass are not emergent properties of atoms (elementary physical particles possess both of them). Any property that the constituents of X possess (even to an astronomically lesser degree) is not emergent. This excludes a whole lot of properties: size, mass, density, electrical charge, etc. In fact, based on my (virtually non-existent) knowledge of physics, I suspect that no fundamental or derived quantity is an emergent property (I once again reiterate that I don't know physics).

Emergence does not function as a semantic stopsign or curiosity stopper for me. When I say consciousness is emergent, I have provided a skeletal explanation (at the highest level of abstraction) of the mechanism of consciousness. I have narrowed my search; I now know that consciousness is not a property of neurons, but arises from the interaction thereof. To use an analogy that I am (somewhat) familiar with: saying a property is emergent is like saying an algorithm is recursive. Both provide a high-level abstract description of their subject, and both convey (non-trivial) information about it. In the former case, we convey that the property arises from the interaction of the constituent components of a system (and is not reducible to the properties of those constituents). In the latter case, we specify that the algorithm operates by taking as input its own output on other instances of the problem. In neither case do you have enough information to construct the phenomenon or the algorithm, but you know more about both than you did before, and the knowledge you have gained is non-trivial.
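To make the recursion half of the analogy concrete, here is a minimal quicksort sketch (in Python, purely illustrative). Knowing only that quicksort is recursive already tells you to expect this shape: a base case, plus self-calls on smaller instances of the problem. It tells you nothing about pivots or partitioning, just as "emergent" tells you nothing about which interactions matter.

```python
def quicksort(xs):
    """Sort a list of comparable items."""
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    # The self-calls are what "quicksort is recursive" promises you
    # will find: the algorithm applied to smaller instances of itself.
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```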

Before: "Human intelligence is an emergent product of neurons firing."

After: "Human intelligence is a product of neurons firing."

How about this:

Before: "The quicksort algorithm is a recursive algorithm."

After: "The quicksort algorithm is an algorithm."

Before: "Human intelligence is an emergent product of neurons firing."

After: "Human intelligence is a magical product of neurons firing."

This seems to work just as well:

Before: "The quicksort algorithm is a recursive algorithm."

After: "The quicksort algorithm is a magical algorithm."

Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?

It seems clear to me that in both cases, the original statement conveys more information than the edited version. I argue that this is the same for "emergence"; saying a phenomenon is an emergent property does convey useful non-trivial information about that phenomenon.

I shall answer the question below:

> If I showed you two conscious beings, one which achieved consciousness through emergence and one that did not, would you be able to tell them apart?

Yes. For the being that achieved consciousness through means other than emergence, I know that its constituents are themselves conscious.

Emergent consciousness: A human brain.

Non-emergent consciousness: A hive mind.

The constituents of the hive mind are by themselves conscious, and I think that's a useful distinction.

"Emergence" has become very popular, just as saying "magic" used to be very popular. "Emergence" has the same deep appeal to human psychology, for the same reason. "Emergence" is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Emergence is popular because it is the junk food of curiosity. You can explain anything using emergence, and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they've taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of "science" but still the same species psychology.

Once again, I disagree with Eliezer. Describing a phenomenon as emergent is (for me) equivalent to describing an algorithm as recursive; it merely provides a relevant characterisation to distinguish the subject (phenomenon/algorithm) from other subjects. Emergence is nothing magical to me; when I say consciousness is emergent, I carry no illusions that I now understand consciousness, and my curiosity is not sated—but, I argue, I am now more knowledgeable than I was before; I have an abstract conception of the mechanism of consciousness. It is very limited, but it is better than nothing. Telling you quicksort is recursive doesn't tell you how to implement quicksort, but it does (significantly) constrain your search space; if you were going to run a brute-force search of algorithm design space to find quicksort, you now know to confine your search to recursive algorithms. Telling you that quicksort is recursive brings you closer to understanding quicksort than being told it's just an algorithm. The same is true for saying consciousness is emergent. You now understand more about consciousness than you did before; you know that it arises as a categorial novum from the interaction of neurons. Describing a phenomenon as "emergent" does not convey zero information, and thus I argue the category is necessary. Emergence is only as futile an explanation as recursion is.

Now that I have (hopefully) established that emergence is a real theory (albeit one with limited explanatory power, not unlike describing an algorithm as recursive), I would like to add something else. The above is a defence of the legitimacy of emergence as a theory; I am not necessarily saying that emergence is correct. It may be the case that no property of any system is emergent, and as such all properties of systems are properties of at least one of their constituent components. The question of whether emergence is correct (whether there exists at least one property of at least one system, not necessarily consciousness or intelligence, that is not a property of any of its constituent components) is an entirely different question, and is neither the thesis of this write-up nor a question I am currently equipped to tackle. If it is of any relevance, I do believe consciousness is at least a (weakly) emergent property of sapient animal brains.

Part of The Contrarian Sequences

Comments

My feeling about this is similar to philh's. You absolutely can define "property P of system S is emergent" to mean "S has property P but the constituent parts of S don't", and that definition is not 100% useless -- but (1) it isn't very much use and (2) this is not the only way in which "emergent" gets used, and some of the other ways are worse.

You object to the idea that a definition like yours makes "size" and "mass" emergent properties. Indeed it doesn't make having a size and having a mass emergent; but it does make e.g. having a mass greater than a kilogram or being more than five centimetres across an emergent property of a human brain, just as much as being able to appreciate Mozart is one.

Likewise, it makes being suitable for living in an emergent property of a house, and being able to run Microsoft Windows an emergent property of a computer, and so forth. This definition of "emergent" applies not only to the mysterious and exciting cases like consciousness, where it can seem hard to conceive how P could arise from the unpromising elements of S, but also to the mundane and boring cases like having mass >1kg, where it's perfectly obvious how it does; and my impression is that no one ever actually uses the word "emergent" to describe the latter -- unless it's precisely because they've just given a nice simple definition of "emergence" that happens to cover the mundane as well as the mysterious cases.

Other definitions of "emergence" -- such as one can find e.g. on the Wikipedia page on the subject -- attempt to rule out the boring cases by stipulating that the emergent property must be of a "different kind" from those of the constituent parts of the system, or that there must be "radical novelty", or whatever. These extra requirements are unfortunately unclear and fuzzy in much the way Eliezer complains that "emergent" itself is.

Without such extra requirements, though, to say e.g. that consciousness is an emergent property of the brain tells us no more than that the brain is conscious and its neurons aren't. That's not nothing but it's not exactly news, and attaching the word "emergent" to it doesn't tell us much, and I'm pretty sure most people using the word are trying to convey something more.

So what would be the point of "emergence" with this very broad definition? I can see one kinda-plausible purpose. We might hope to get something out of a very general investigation into how combinations of things acquire properties that the individual things lack, starting with very simple systems and hoping to build up to brains or societies or whatever, and "emergence" -- with something very like your sense -- would be a fine name for the object of this very general investigation.

But, alas, it seems almost certain to me that this very general investigation would be too general, and would not lead to anything very useful. There are so very very many ways in which things combine and interrelate, and it seems unlikely that there's all that much to be said about them at that level of generality. It's not that there's no interesting work to be done that falls under "emergence" as defined; it's that there's so much, and so diverse, that grouping it all under that heading doesn't achieve anything beyond just doing the work for its own sake. Where's the actual theory? I don't see any sign that there is one or ever will be one. But maybe I'm wrong. Do we see e.g. lots of cases of different people at the Santa Fe Institute finding unexpected applications of one another's work in superficially-different fields? That would be some evidence that there's a real subject here.

I'm going to settle on a semi-formal definition of emergence which I believe is consistent with everything above, and run through some examples, because I think your post misrepresents them and emergence is interesting in these cases.

Preliminary definition: a "property" is a function mapping some things to some axis.

Definition: a property is called emergent if a group of things is in its domain while the individual things in the group are not in its domain.

This isn't the usual use of "property" but I don't want to make up a nonsense word when a familiar one works just as well. In this case, "weighs >1kg" either isn't a property, or everything is in its domain; I'd prefer to say weight is the only relevant property. Either way this is clearly not emergent because the question always makes sense.
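As a minimal sketch of this definition (in Python; the property and part names are invented for illustration, not taken from anything above), one can model a "property" as a partial function with an explicit domain, and call it emergent for a group exactly when the group is in the domain while no individual member is:

```python
# Toy model of the definition above. All names are illustrative assumptions.

def weight_kg(thing):
    """Weight: defined for single parts and for groups alike."""
    if isinstance(thing, list):
        return sum(weight_kg(part) for part in thing)
    return thing["weight_kg"]

def runs_windows(thing):
    """Only defined for assemblies; asking it of a lone part is treated
    as outside the domain (the question doesn't make sense)."""
    if not isinstance(thing, list):
        raise TypeError("only defined for assemblies of parts")
    kinds = {part["kind"] for part in thing}
    return {"cpu", "motherboard", "ram"} <= kinds

def in_domain(prop, thing):
    """A thing is in a property's domain if the question makes sense for it."""
    try:
        prop(thing)
        return True
    except TypeError:
        return False

def is_emergent(prop, group):
    """Emergent: defined on the group, undefined on every individual part."""
    return in_domain(prop, group) and all(not in_domain(prop, p) for p in group)

cpu = {"kind": "cpu", "weight_kg": 0.1}
board = {"kind": "motherboard", "weight_kg": 1.0}
ram = {"kind": "ram", "weight_kg": 0.05}
computer = [cpu, board, ram]

print(is_emergent(weight_kg, computer))     # False: weight makes sense for a lone part
print(is_emergent(runs_windows, computer))  # True: only the assembly is in the domain
```

On this toy model, the dispute below is then entirely about where the domain boundary sits: if you decide the question "can this CPU run Windows?" does make sense (answer: no), the property stops being emergent under this definition.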

Being suitable for living in is a complicated example, but in general it is not an emergent property. In particular, you can still ask how suitable for living in half a house is; it's still in the domain of the "livability" property even if it has a much lower value. This is true all the way down to a single piece of wood sticking out of the ground, which can maybe be leaned against or used to hide from wind. If you break the house down into pieces like "planks of wood" and "positions on the ground" then I think it's true, if trivial, that livability is an emergent property of a location with a structure in some sense--it's the interactions between those that make something a house. And this gives useful predictions, such as "just increasing quality of materials doesn't make a house good" and "just changing location doesn't make a house good" even though both of these are normal actions to take to make a house better.

Being able to run Microsoft Windows is an emergent property of a computer, in a way that is very interesting to me if I want to build a computer from parts on NewEgg, which I've done many times. It has often failed for silly reasons, like "I don't have one of the pieces needed for the property to emerge." Like the end of the housing example, I think this is a simple example where we understand the interactions, but it still is emergent, and that emergence again gives concrete predictions, like "improving the CPU's ability to run Windows doesn't help if it stops it interacting the right way," which with domain knowledge becomes "improving the CPU's ability to run Windows doesn't help if you get a CPU with the wrong socket that doesn't fit in your motherboard."

I think this is useful, and I think it's very relevant to a lot of LW-relevant subjects.

If intelligence is an emergent property, then just scaling up or down processing power of an emergent system may scale intelligence up and down directly, or it might not--depending on the other parts of the system.

If competence is an emergent property, then it may rely on the combination of behavior and context and task. Understanding what happens to competence when you change the task even in practical ways through e.g. transfer learning is the same understanding that would help prevent a paperclip maximizer.

If ability to talk your way out of a box is an emergent property, then the ability to do it in private chat channels may depend on many things about the people talking, the platform being used to communicate, etc. In particular, it also predicts quite clearly that reading the transcripts might not at all convince the reader that the AI role-player could talk their way out of the box. It also suggests that the framing and specific setup of the exercise might be important, and that if you want to argue that a specific instance of making it work is convincing, there is a substantial amount of work remaining.

This is getting a bit rambly so I'm going to try to wrap it up.

With a definition like this, saying something has an emergent property relies on and predicts two statements:

  1. The thing has component parts

  2. Those component parts connect

These statements give us different frameworks for looking at the thing and the property, by looking at each individual part or at the interactions between sets of parts. Being able to break a problem down like this is useful.

It also says that answering these individual questions like "what is this one component and how does it work" and "how do these two components interact" are not sufficient to give us an understanding of the full system.

Perhaps I'm misunderstanding something, but it seems to me that (1) one can adjust the domain on which any "property" is defined ad lib, (2) in many cases there's a wide range of reasonable domains, (3) whether some property is "emergent" according to your definition is strongly dependent on the choice of domain, and (4) most of the examples discussed in this thread are like that.

Human consciousness is pretty much a paradigmatic example of an emergent property. Is there a function that tells us how conscious something is? Maaaybe, but if so I don't see any particular reason why its domain shouldn't include things that aren't conscious at all (which get mapped to zero, or something extremely close to zero). Like, say, neurons. But if you do that then consciousness is no longer "emergent" by your definition, because then the brain's constituent parts are no longer outside the domain of the function. (Is it silly to ask "Is a neuron conscious"? Surely not; indeed at least part of the point here is that we're pretty sure neurons aren't conscious. And in order to contrast conscious humans with other not-so-conscious things, we surely want to ask the question about chimpanzees, rhesus monkeys, dogs, rats, flies, amoebas -- and if amoebas, why exactly not neurons?)

Size is pretty much a paradigmatic example of a not-interestingly-emergent property. (I hesitate to call anything flatly not-emergent.) Well, OK. So what's the size of a carbon atom? An electron? There are various measures of size we can somewhat-arbitrarily decide to use, but they don't all give the same answer for these small objects and I think it's clearly defensible to claim that there is no such thing as "the size" of an electron. In which case, boom, "size" is an emergent property in your sense.

I don't see how being able to run Windows is emergent in your sense. Can my laptop run Windows? Yes. Can its CPU on its own run Windows? No. Can the "H" key on its keyboard run Windows? No. The natural domain of the function seems to be broad enough to include the components of my laptop. Hence, no emergence.

Maybe I am misunderstanding what you mean by "domain"?

I do agree that the fact that a computer may fail to be able to do something we want because of a not-entirely-obvious interaction between its parts is an important fact and has something to do with some notion of "emergence". But I don't see what it has to do with the definition you're proposing here. The relevant fact isn't that ability-to-run-Windows isn't defined for the components of the machine, it's that ability-to-run-Windows depends on interactions between the components. Which is true, and somewhat interesting, but an entirely different proposition. Likewise for your other examples where some-sort-of-emergence is an important fact -- in all of which I agree that the interactions and context-dependencies you're drawing attention to are worth paying attention to, but I don't see that they have anything much to do with the specific definition of emergence you proposed, and more importantly I don't see what the notion of emergence actually adds here. Again, I'm not denying that the behaviour of a system may be more than a naive sum of the behaviours of its components, I'm not denying that this fact is often important -- I just think it's not exactly news, and that generally when people talk about something like consciousness being "emergent" they mean something more, and that if we define "emergence" broadly enough to include all these things then it looks to me like too broad a category for it to be useful (e.g.) to think of it as a single coherent field that merits study, rather than an umbrella description for lots of diverse phenomena with little in common.

Honestly I think that comment got away from me, and looking back on it I'm not sure that I'd endorse anything except the wrap up. I do think "from a quantum perspective, size is emergent" is true and interesting. I also think people use emergence as a magical stopword. But people also use all kinds of technical terms as magical stopwords, so dismissing something just on those grounds isn't quite enough--but maybe there is enough reason to say that this specific word is more confusing than helpful.

I'm not sure that I disagree with you here, but I do feel like you largely missed the point of the original essay. A large part of what you're saying seems to be that the word "emergence" doesn't mean to you what Eliezer says it often means; it doesn't function for you in the way that Eliezer says it often functions. Which... okay, but that doesn't mean he's wrong about it functioning that way for others.

In fact, I feel like Eliezer replied to this back in 2009: http://lesswrong.com/lw/113/esrs_comments_on_some_eyoblw_posts/

"I disagree with The Futility of Emergence," says ESR. Yea, many have said this to me. And they go on to say: Emergence has the useful meaning that... And it's a different meaning every time. In ESR's case it's:

> "The word 'emergent' is a signal that we believe a very specific thing about the relationship between 'neurons firing' and 'intelligence', which is that there is no possible account of intelligence in which the only explanatory units are neurons or subsystems of neurons."

Let me guess, you think the word "emergence" means something useful but that's not exactly it, although ESR's definition does aim in the rough general direction of what you think is the right definition...

So-called "words" like this should not be actually spoken from one human to another. It is tempting fate. It would be like trying to have a serious discussion between two theologians if both of them were allowed to say the word "God" directly, instead of always having to say whatever they meant by the word.

Indeed, your understanding of emergence seems not quite the same as ESR's understanding of emergence.

It's worth noting that in the past ten years, the Wikipedia article seems to have been edited. The definition Eliezer quoted is no longer present; at a guess, it's been replaced with the more specific definition that you quoted. It may be that the word now has a consistent, non-magical meaning in the popular imagination and the essay is out of date. But your own essay does nothing to convince me of that.

As a single data point: I think DragonGod's definition of emergence is exactly what mine has been, and while I would have agreed with Eliezer's assessment of ESR's comment, I think DragonGod's sense that they're the same would convince me the difference is just poor writing.

> Indeed, your understanding of emergence seems not quite the same as ESR's understanding of emergence.

Is this really true? I do not see the difference between my understanding of emergence and ESR's. What is the difference between:

> "The word 'emergent' is a signal that we believe a very specific thing about the relationship between 'neurons firing' and 'intelligence', which is that there is no possible account of intelligence in which the only explanatory units are neurons or subsystems of neurons."

and:

> "A is an emergent property of X" means that A arises from X in a way that is contingent on the interaction of X's constituents (and not on those constituents themselves). If A is an emergent property of X, then the constituents of X do not possess A. A comes into existence as a categorial novum at the inception of X. The difference between system X and its constituent components with regard to property A is a difference of kind and not of degree; X's constituents do not possess A in some tiny magnitude—they do not possess A at all.

?

Do the two models of emergence cause us to anticipate differently? Are the predictions of the two models any different? Is there anything that ESR's model predicts that my model fails to predict, or my model predicts and ESR's model fails to predict?

As far as I can tell, the two models are functionally the same. They predict the same set of experimental results. They constrain anticipations in exactly the same manner. Even in their internal makeup, the two models are not different.

ESR's definition of emergence is isomorphic to mine.

> Let me guess, you think the word "emergence" means something useful but that's not exactly it, although ESR's definition does aim in the rough general direction of what you think is the right definition...

Nope. There is no vagueness here. ESR and I agree on the meaning of emergence. I explicitly specified what I meant by emergence, and it agrees with what ESR says. There is a definition of emergence on Wikipedia, and I wholly agree with it.

Well, I actually think both definitions are kind of vague and hard to apply, and since they're written somewhat differently my intuition is that they're not describing the same thing by default, and if you think they are then you need to argue it.

Conditioned on "X possesses A", the definitions seem to be "there is no possible account of A in which the only explanatory units are the constituents of X", compared to "the constituents of X do not possess A". I don't think it's obvious that these are the same.

I might guess that ant colonies have properties which are emergent by your definition but not ESR's, but I'm not confident.

All that said, that was a fairly throwaway line, and if I was wrong about it then that doesn't particularly change my feelings.

> Well, I actually think both definitions are kind of vague and hard to apply, and since they're written somewhat differently my intuition is that they're not describing the same thing by default, and if you think they are then you need to argue it.

Sorry, but I don't think that's how burden of proof works. You're the one claiming that the definitions are different, so you are the one who needs to defend that claim.

If you feel they are different, then provide an example.

> ...if I was wrong about it then that doesn't particularly change my feelings.

What would change your feelings?

I'm going to mostly disengage now, I expect continuing would be a waste of time. But it seems virtuous of me to answer this question, at least:

> What would change your feelings?

One thing that would change my feelings would be if someone (possibly you) looked at Eliezer's original essay and summarized it in a way that seemed true to the spirit, and did the same with yours, and yours felt like it engaged with Eliezer's. I'm aware that this is vague. I think that's pretty inevitable.

I think there is an obvious conflict of interest in me engaging in that particular exercise.