Today's post, The Futility of Emergence, was originally published on 26 August 2007. A summary:

 

The theory of "emergence" has become very popular, but is just a mysterious answer to a mysterious question. After learning that a property is emergent, you aren't able to make any new predictions.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Mysterious Answers to Mysterious Questions, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

12 comments

After learning that a property is emergent, you have learned nothing that you did not know before.

Agree connotationally but object denotationally. Learning that a property is emergent implies that extrapolating the phenotype from the underlying rules requires nontrivial processing time.

That's a fair point, actually. More generally, to say that a property emerges from the interaction of N nodes in a system, rather than being directly instantiated in one node of that system, is to say something nontrivial. And it's not inane to use "emergent" as shorthand for that.

I hadn't thought of that, and phrased that way, it even suggests a context in which one might care whether a property is emergent: an emergent feature of a program is more difficult to maintain than a directly-coded one.
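To make that concrete, here is a minimal sketch using Conway's Game of Life as the example system (my choice of illustration, not one from the thread). The glider is "emergent" in exactly the sense above: no individual cell or rule mentions gliders, yet after four steps the whole pattern has moved one cell diagonally, and the only general way to discover that is to actually run the rules.

```python
from collections import Counter

def step(live):
    """One Game of Life generation on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next generation if it has 3 live neighbors,
    # or 2 live neighbors and is already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):  # one full glider period
    state = step(state)

# The glider reappears shifted by (+1, +1): extrapolating this "phenotype"
# from the underlying rules took four generations of actual processing.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in `step` refers to a glider; the diagonal motion is a property of the interaction of many cells, which is the nontrivial sense of "emergent" the comment points at.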


This is not always true. There are (desired) emergent phenomena that are robust to many minor tweaks to the underlying elements. For instance, neural nets are effective largely independently of the details of the individual neurons.

I would have said that you care whether a phenomenon is emergent because studying individual elements carefully is usually a bad way to understand emergent processes. We understand macroscopic hydrodynamics much better than we understand the precise forces between water molecules.

Just ask a sysadmin.

A denial-of-service attack is an "emergent phenomenon" that doesn't involve any kind of complicated coordination whatsoever. No one request causes the server to become overloaded and time out a large number of other requests. But tens of thousands of requests all at once can do so.
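The overload in that example can be sketched with toy numbers (the capacity and timeout figures below are invented for illustration): a server clears a fixed number of requests per tick, and anything queued longer than the timeout is dropped. No single request contributes overload; only the aggregate does.

```python
CAPACITY = 100   # requests the server clears per tick (hypothetical)
TIMEOUT = 5      # ticks a request will wait before timing out (hypothetical)

def timeouts(n_requests):
    """How many of n_requests, all arriving at once, will time out."""
    servable = CAPACITY * TIMEOUT  # the most the server can clear in time
    return max(0, n_requests - servable)

print(timeouts(1))       # 0: one request is harmless
print(timeouts(400))     # 0: still within capacity
print(timeouts(20_000))  # 19500: overload appears only in aggregate
```

Each request is identical and individually innocuous; the failure is a property of their number, which is the sense in which the attack "emerges".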

Good point. I'll edit it.

emergent implies that extrapolating the phenotype from the underlying rules requires nontrivial processing time

Or might even be incomputable?

Maybe, but probably not. Suppose our universe turns out to be Turing-equivalent; does this imply that there are no emergent phenomena?

does this imply that there are no emergent phenomena?

Maybe not, but it makes emergence a somewhat less useful concept.

Ever since I saw that post, I've wanted specific examples of the supposed idiotic uses of "emergence" as an explanation. EY just alluded to a lot of idiots he's met, but memories of such exchanges are notoriously unreliable in terms of over-emphasizing just how bad people's arguments were. Can anyone cite an otherwise-correct-thinking person who did what EY is describing?

(I just hope people are more willing to call such arguments deluded than they were when I said the same of Marcello's "complexity strategy"...)

My mother, who has a Master's degree in biology and extensive experience in field research (wildlife biology), once suggested during a discussion we had that the way we experience what we call free will might be an "emergent property". I pointed out that everything in the brain is a result of neurons firing, and so saying that any process was emergent was just repeating something we already knew. My exact phrase was "correct, but unhelpful".

The book Foundations of Neuroeconomics (less wrong review), which generally seems like a useful book, says that some psychological phenomena might in principle not be amenable to reduction to lower-level science because they are 'emergent'. This is a side point in the introduction, and it doesn't seem to affect the rest of the book.