Almost one year ago, in April 2007, Matthew C submitted the following suggestion for an Overcoming Bias topic:

"How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct [...], while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect—"

I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn't do that topic until I'd started on the Mind Projection Fallacy sequence, which wouldn't be for a while...

But now it's time to begin addressing this question.  And while I haven't yet come to the "materialism" issue, we can now start on "reductionism".

First, let it be said that I do indeed hold that "reductionism", according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.

This seems like a strong statement, at least the first part of it.  General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?

On the other hand, we are never going back to Newtonian mechanics.  The ratchet of science turns, but it does not turn in reverse.  There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.

"To hell with what past civilizations thought" seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.

And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.

I once met a fellow who claimed that he had experience as a Navy gunner, and he said, "When you fire artillery shells, you've got to compute the trajectories using Newtonian mechanics.  If you compute the trajectories using relativity, you'll get the wrong answer."

And I, and another person who was present, said flatly, "No."  I added, "You might not be able to compute the trajectories fast enough to get the answers in time—maybe that's what you mean?  But the relativistic answer will always be more accurate than the Newtonian one."

"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."

"If that were really true," I replied, "you could publish it in a physics journal and collect your Nobel Prize." 
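(As a back-of-the-envelope aside, not part of the original exchange: here is roughly how small the relativistic correction actually is at artillery speeds. The muzzle velocity is an assumed round figure.)

```python
import math

c = 299_792_458.0   # speed of light in m/s (exact, by definition)
v = 1_000.0         # assumed muzzle velocity of an artillery shell, m/s

# Lorentz factor: the ratio by which relativistic predictions
# differ from Newtonian ones at speed v.
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Fractional correction: on the order of parts per trillion.
correction = gamma - 1.0
print(f"relativistic correction: {correction:.2e}")
```

The Newtonian answer differs from the relativistic one around the twelfth decimal place; it is never *more* accurate, merely cheaper to compute.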

Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider.  Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.

But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC.  A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.

So is the 747 made of something other than quarks?  No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747.  The map is not the territory.

Why not model the 747 with a chromodynamic representation?  Because then it would take a gazillion years to get any answers out of the model.  Also we could not store the model on all the memory on all the computers in the world, as of 2008.

As the saying goes, "The map is not the territory, but you can't fold up the territory and put it in your glove compartment."  Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory.  The scale of a map is not a fact about the territory, it's a fact about the map.

If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions.  Better predictions than the aerodynamic model, in fact.

To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift.  There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings.  It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.

"What?" cries the antireductionist.  "Are you telling me the 747 doesn't really have wings?  I can see the wings right there!"

The notion here is a subtle one.  It's not just the notion that an object can have different descriptions at different levels.

It's the notion that "having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.

It's not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought.  Rather we, for our convenience, use different simplified models at different levels.

If you looked at the ultimate chromodynamic model, the one that contained only elementary particle fields and fundamental forces, that model would contain all the facts about airflow and lift and wing positions—but these facts would be implicit, rather than explicit.

You, looking at the model, and thinking about the model, would be able to figure out where the wings were.  Having figured it out, there would be an explicit representation in your mind of the wing position—an explicit computational object, there in your neural RAM.  In your mind.

You might, indeed, deduce all sorts of explicit descriptions of the airplane, at various levels, and even explicit rules for how your models at different levels interacted with each other to produce combined predictions—

And the way that algorithm feels from inside, is that the airplane would seem to be made up of many levels at once, interacting with each other.

The way a belief feels from inside, is that you seem to be looking straight at reality.  When it actually seems that you're looking at a belief, as such, you are really experiencing a belief about belief.

So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level.  The airplane is too large.  Even a hydrogen atom would be too large.  Quark-to-quark interactions are insanely intractable.  You can't handle the truth.

But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces.  You can't handle the raw truth, but reality can handle it without the slightest simplification.  (I wish I knew where Reality got its computing power.)

The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.

This, as I see it, is the thesis of reductionism.  Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.  Understanding this on a gut level dissolves the question of "How can you say the airplane doesn't really have wings, when I can see the wings right there?"  The critical words are really and see.

158 comments

This denial that "higher level" entities actually exist causes a problem when we are supposed to identify ourselves with such an entity. Does the mind of a cognitive scientist only exist in the mind of a cognitive scientist?

The belief that there is a cognitive mind calling itself a scientist only exists in that scientist's mind. The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist.

TheAncientGeek: That observation runs headlong into the problem, rather than solving it.
entirelyuseless: Exactly. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist." Let's reword that. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING 'undecatillion swarms of quarks' not having any beliefs, with a belief that there is a cognitive mind calling itself a scientist that only exists in the undecatillion swarms of quarks's mind." There seems to be a logic problem there.
rkyeun: Composition fallacy. Try again.
entirelyuseless: Nope. There is no composition fallacy where there is no composition. I am replying to your position, not to mine.
Max Hodges: Answering the question of who is experiencing the illusion [of self] or interpreting the story is much more problematic. This is partly a conceptual problem and partly a problem of dualism. It is almost impossible to discuss the self without a referent, in the same way that it is difficult to think about a play without any players. Second, as the philosopher Gilbert Ryle pointed out, in searching for the self, one cannot simultaneously be the hunter and the hunted, and I think that is a dualistic problem if we think we can objectively examine our own minds independently, because our mind and self are both generated by the brain. So while the self illusion suggests an illogical tautology, I think this is only a superficial problem. -Bruce Hood

One minor quibble; how do we know there is any most basic level?

Agreed. Why would we believe a quark is not "emergent"? Could be turtles all the way down....

[anonymous]: Because a level being more basic means it's made of (or described by, if you're not a patternist) fewer bits of information, and the only way there can be less than 1 bit is if there's nothing at all.

Levels are an attribute of the map. The territory only has one level. Its only level is the most basic one.

Let's consider a fractal. The Mandelbrot set can be made by taking the union of infinitely many iterations. You could think of each additional iteration as a better map. That being said, either a point is in the Mandelbrot set or it is not. The set itself only has one level.
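(A minimal sketch of that membership test, added for illustration; the iteration cap is an arbitrary choice, standing in for "how fine a map we bother to draw".)

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z**2 + c. Escaping past |z| = 2 proves c is outside
    the set. More iterations give a finer map; the set itself is fixed."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False   # provably escapes: not in the set
    return True            # never escaped: in the set, as far as this map can tell

print(in_mandelbrot(0j))       # 0 -> 0 -> 0 -> ... never escapes
print(in_mandelbrot(1 + 0j))   # 1 -> 2 -> 5 -> ... escapes quickly
```

Raising `max_iter` sharpens the boundary of the map; nothing about the set changes.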

[anonymous]: Interesting analogy!
Ronny: Because things happen: if there were no most basic level, figuring out what happens would be an infinite recursion with no base case. Not even the universe's computation could find the answer.
Basil Marte: There isn't, and the article is committing a type error. The territory isn't a map; reality isn't a model/theory. Unless you are using a model to approximate the behavior of a system that is of exactly the same kind, i.e. using a computational model to approximate another computational thingy, in which case you could indeed have a model that exactly coincides with what it is meant to describe. This may even be useful, e.g. in cryptography. But this is an edge case.

Yet something in the real world makes it tractable to create the "map" -- to find those hidden class variables which enable Naive Bayes.

Our brain and senses are made out of fundamental particles too, and the image of a plane with wings is the result of the interaction between the fundamental particles out there with the fundamental particles in us.

So I would say the plane image is an effect, not a primary, but that does not make it any less real than the primary.  It is a real thing, just as real, that just happens to be further down the chain of cause and effect.

Reductionism does have a caveat, and this is "a fact about maps" and not "a fact about the territory": the real world level can be below the algorithm. Example: a CD. A chromodynamic model would spend immense computing resources simulating the heat and location and momentum and bonds of a slew of atoms (including those in the surrounding atmosphere, or the plasticizer would boil off). In reality there are about four things that matter in a CD: you can pick it up, it fits into a standard box, it fits into a standard reader tray, and when...

bigjeff5: I think the point is that the model of four elements we use to describe the CD is also contained within the chromodynamic model - the four elements are a less accurate abstraction of the chromodynamic model, even if we don't recognize it as such when we use the more abstract model. In the same way, Newtonian Mechanics is a less accurate abstraction of Special Relativity. Therefore, no matter how precise Newtonian Mechanics is, it does not match up exactly with reality. Because it is an abstraction, it contains inaccuracies. The SR version of the same process will always be more accurate than the NM version, though the SR version is also probably not completely accurate. I don't think that is true. For Lisp to mean anything to any machine, it must first be compiled into the machine language of that particular machine. Because this process is fundamentally different for different types of machines, the way the same Lisp behaves on each machine will be highly dependent on its specific translation into machine language. In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine. The difference may not be enough to take any note of, but it is still there. This is similar to calculating the trajectory of an artillery shell with Newtonian Mechanics vs Special Relativity. The difference between the two will be so small that it is almost unmeasurable, but there will definitely be a difference between them.
[anonymous]: I am going to have to disagree here. A given Lisp will require a Bounded-Tape Turing Machine of tape size N and head state count M and symbol table Q. If an ARM processor running Windows NT can supply that, Lisp is possible. If an x86 running Unix can supply that, Lisp is possible. If Lisp behaves differently from the mathematical ideal on any machine, that means the machine is incapable of supplying said Turing machine. "If the Lisp is untrue to the Specification, that is a fact about the Implementation, not the Mathematics behind it."
DSimon: What about the speed of operation? The specification does not set any requirements for this, and so two different Lisp implementations which differ in that property can both be correct yet produce different output.
[anonymous]: Even if it runs at one clock cycle per millennium, it would still theoretically be able to run any given program, and produce exactly the same output. The time function is also external to the Lisp implementation; it is a call to the OS, so output that prints the current time doesn't count.
DSimon: I think we may have to taboo "output", as the contention seems to be about what is included by that word.
[anonymous]: Given a program P consisting of a linear bit-pattern, that is fed into virtual machine L, and produces a linear bit-pattern B written to a section of non-local memory location O. During the runtime of P on L, the only interaction with non-local memory is writing B to O. No bits are passed from non-local memory to local memory. For all L: if and only if L is true to the specification, then for any P there is only one possible B.
* P is the Lisp program source code, which does not read from stdin, keyboard drivers, web sockets or any similar source of external information.
* L is a Lisp (virtual) machine.
* B is some form of data, such as text, binary data, images, etc.
* O is some destination: could be stdout, screen, speakers, etc.
DSimon: Ah, ok, I find nothing to disagree with there. Looking back up the conversation, I see that I was responding to the word "behavior". So it comes down to: does the "behaviour" of a Lisp implementation include anything besides the output? Which effectively comes down to what question we're trying to answer about Lisp, or computation, or etc. The original question was about whether a Lisp machine needs to include abstractions for the substrate it's running on. The most direct answer is "No, because the specification doesn't mention anything about the substrate." More generally, if a program needs introspection it can do it with quine trickery, or more realistically just use a system call. Bigjeff5 responded by pointing out that the choice of substrate can determine whether or not the Lisp implementation is useful to anybody. This is of course correct, but it is a separate issue from whether or not the Lisp abstraction needs to include anything about its own substrate; a Lisp can be fast or slow, useful or useless, regardless of whether or not its internal abstractions include a reference to or description of the device it is running on.
laofmoonster: Is it fair to call the CD data a map in this case? (Perhaps that's your point.) The relationship is closer to interface-implementation than map-territory. Reductionism still stands, in that the higher abstraction is a reduction of the lower. (Whereas a map is a compression of the territory, an interface is a construction on top of it.) Correct Lisp should be implementation-agnostic, but it is not implementation-free.

This is a situation where a lot of confidence seems appropriate, though of course not infinite confidence. I'd put the chance that Eliezer is wrong here at below one percent.

Perplexed: I really have no idea what Eliezer being wrong on this would mean. Is the subject matter of this posting the nature of the territory, or is it advice on the best way to construct maps? What conceivable observations might cause you to revise that 1% probability estimate up to, say, 80%? As I see it, reductionism is not a hypothesis about the world; it is a good heuristic to direct research.
ata: I take the main thesis as being summed up by this sentence around the end: Specific non-reductionist hypotheses, in the extremely unlikely event that any are supported by evidence, could cast doubt on reductionism. We'd need to find a specific set of circumstances under which reality appears to be computing the same entities at multiple levels simultaneously and applying different laws at each level, or we'd need to find fundamental laws that talk about non-fundamental objects. For example, if the Navy gunner were actually correct that you need to use Newtonian mechanics instead of relativity in order to get the right answer when computing artillery trajectories (given the further unlikely assumption that we couldn't find a simpler explanation for this state of affairs than "physical reductionism as a whole is wrong").
Perplexed: Ok, let me try to construct an example of a non-reductionist hypothesis. Eliezer says that it would be a claim that higher levels of simplified multilevel models are out there in the territory. So, as a multi-level model, let us take (low-level) QCD+electroweak; (mid-level) nucleons, mesons, electrons, neutrinos, photons; (high-level) atomic theory with 92 kinds of atoms + photons. Now as I understand it, reductionism forbids me to believe that photons and electrons - entities which exist in higher level models - are actually out there in the territory. What am I doing wrong here? Could you maybe give me an example of a hypothesis which a reductionist ought to disbelieve?
ata: As I understand it, photons and electrons are identified as elementary particles in the Standard Model. Wouldn't that be considered the lowest level?
Perplexed: Sure, they exist in both the lowest (so far) level and in the next level up. But Eliezer wants to forbid things at "higher levels of simplified multilevel models" from existing out there in the territory. If that doesn't include electrons in this example, then I don't know what it includes. I don't understand exactly what it is that is forbidden. Is it type errors - confusing map entities with territory entities? Is it failing to yet be convinced by what someone else thinks is the best low-level model? Is it somehow imagining that, say, atoms still exist in the territory while simultaneously imagining that atoms are made of more fundamental things which also exist in the territory? It seems to me that the definition of reductionism that Eliezer has given is completely useless, because no one sane would proclaim themselves a non-reductionist. He is attacking a straw-man position, as far as I can see.
taryneast: AFAICS, he is not "forbidding" a plane's wing from existing at the level of quarks. He's just saying that "plane's wing" is a label that we are giving to "that bunch of quarks arranged just so over there", as opposed to "that other bunch of quarks arranged just so over there" that we call "a human". The arrangement of a set of quarks does not have a fundamental "label" at the most basic level. The classification of the first bunch o' quarks (as separate from the second) is something that we do on a "higher level" than the quarks themselves.
bigjeff5: You're confusing the map and the territory. The territory is only quarks (or whatever quarks may be made of). There is nothing else; it's just a big mass of quarks. The map is the description of this bunch of quarks as a human, while that bunch is an airplane. There was a time when physicists thought that earth, air, water, and fire were the reality - that they were fundamental. Then they discovered molecules, and they thought those were fundamental. Then they discovered atoms, and thought those were fundamental. Etc. on down until the current (I think, I'm not a physicist) belief that quarks are fundamental. At no point did reality change. Reality did not change when we discovered rocks were made up of molecules - the map was simply inaccurate. The reality was that rocks were always made up of molecules. The same when we discovered that molecules were made of atoms. It was always true; our map was simply not as accurate as we thought it was. You could quite accurately say the map is wrong because it does not perfectly reflect reality, but the map is extremely useful, so we should not discard it. We should simply recognize that it is a map, it is not the territory. It's a representation of reality, it is not what is real. We know Newtonian Mechanics is a less accurate map than Special Relativity, but it is more useful than SR in many cases because it doesn't have the detail cluttering up the map that SR has. Yeah, it's less precise, but for calculating the trajectory of an artillery shell it is more than good enough. The different levels are maps; there is only one territory.
DanielLC: It's also leptons.
Sniffnoy: In short, you seem to be confusing {A} with A.
Perplexed: Too short. But intriguing. Please explain.
Sniffnoy: What I mean is, your objection doesn't hold water because raw objects at lower levels can always be put in a wrapper to be made suitable for use at a higher level. E.g. if we consider an elementary-particles level, and a general-particles-which-for-now-we-will-consider-as-sets-of-particles level (yes, I realize this almost certainly does not actually work in actual physics), then in the higher level we have proton = {up_1, up_2, down}, and electron_H = {electron_L}. But for most purposes the distinction between electron and {electron} is irrelevant, so we elide it. Your point seems to me analogous to the statement "But 2 can't be the rational number {...,(-4,-2),(2,1),(-2,-1),(4,2),...}, it's the integer {...(1,-1),(2,0),(3,1),...}!"
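Sniffnoy's wrapper point can be sketched concretely (an illustrative toy, not real physics; the particle names are placeholders):

```python
# Toy model of "levels as wrappers": a higher-level particle is modeled
# as a frozenset of lower-level constituents.
proton = frozenset({"up_1", "up_2", "down"})   # composite of three quarks
electron_H = frozenset({"electron_L"})         # "composite" of one element

# The wrapper {A} is not literally the same object as A itself...
assert electron_H != "electron_L"

# ...even though for most purposes we elide the distinction.
assert set(electron_H) == {"electron_L"}
```

The higher-level electron reduces to the lower-level one, but the two tokens are distinct objects in the model, exactly as closely related as A and {A}.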
Perplexed: Ah! Good point. And now that it is explained, good analogy. I still have some reservations about Eliezer's approach to reductionism/anti-holism and his equation of the idea of "emergence" with some kind of mystical mumbo-jumbo. But this is a complicated subject, and philosophers of science much more careful than myself have addressed it better than I can. Thank you, though, for pointing out that my argument in this thread can be refuted so easily, simply by taking Eliezer a little less literally. Electrons at one level reduce to electrons at a lower level. But the two uses of the word 'electron' in the above sentence refer to different (though closely related) entities. As closely related as A and {A}. You are right. Cool.
timtyler: Strong emergence [http://en.wikipedia.org/wiki/Emergence#Strong_and_weak_emergence] is mystical mumbo-jumbo. I don't think scientists should waste too much of their terminology on that sort of thing, though.
timtyler: "Reductionism" has come to have two meanings: "Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents." (http://en.wikipedia.org/wiki/Reductionism) This post is about the second meaning. But that meaning is silly, useless, and redundantly duplicates other terms for such nonsense - such as reducibility and irreducibility. We should kill off that meaning - and reclaim the meaning of the term that is useful and sensible. Posts like this one - which use the second meaning - are part of the problem.
simplicio: Why is it silly to say that higher level phenomena reduce, in principle, to ontologically fundamental particle fields?
timtyler: This discussion is about the term "reductionism" - which is obviously some kind of philosophy about "reducing" things - but the cited definitions differ on the details of exactly what the term means. The first meaning just states the obvious, IMO. Also, other terms have that kind of nonsense covered. There is no need to overload the perfectly useful and good term "reductionism" with something that is only useful for the refutation of nonsense. It just causes the type of mix-up that you see in this thread.
simplicio: I understand, I just don't get why you object to reductionism as exemplified by the second definition. It seems to me a fairly reasonable philosophical position.
timtyler: I object to that terminology because it overloads a useful term which is used for something else, without having a good excuse for doing so. Call the idea that invisible pixies push atoms around "irreducibility" - or something else - anything! IMO, "Reductionism" and "Holism" should be reserved for the Hofstadter-favoured sense of those words - or you have a terminological mess: http://i93.photobucket.com/albums/l76/orestesmantra/MU.jpg
simplicio: Oh, I see. Thanks for clarifying.
Perplexed: You are confusing me, Tim. Above you seemed to be criticizing the usefulness of the second meaning. Now, you seem to be criticizing the usefulness of the first. Which do you find useless: the label for a methodology, or the label for a hypothesis about the possibility of hierarchical explanations?
timtyler: (a) - good; (b) - not needed. (Ref for a and b: http://en.wikipedia.org/wiki/Reductionism) Reductionism and Holism should be the names of strategies for analysing complex systems by reducing them to the interactions of their parts - or considering them as high-level entities - respectively. The other terminology - the kind used in this post - is very bad. People should not overload such useful terminology - unless there really is no other way.
Perplexed: One windmill I try to avoid attacking is the dictionary. I would suggest you spend a few extra syllables and refer to (a) as "methodological reductionism" and (b) as "philosophical (or ontological) reductionism". I understand the badness of needless overloading, but I'm not sure I agree that (b) is "useless" simply because its validity is obvious to you. Would you also advocate abandoning the term "atheism"? My problem with philosophical reductionism is I don't know whether it is a claim about the territory or a convention about maps. If it is a claim about the territory, I certainly remain unconvinced, having not yet glimpsed the territory.
timtyler: One can't just let dictionary authors rule language. When they get scientific things wrong, responsible individuals should put up a fight. Look at what is happening to "epigenesis" - for example. Or "emergence".
timtyler: That is likely to lead off topic. If the atheists and agnostics could sit down and decide what those terms actually meant, it would certainly help. Meanwhile, call me an adeist.

When an image you are looking at is altered due to viewing it through a pane of coloured glass, you don't suddenly start calling it "the map" instead of "the territory."

So why is it that, when it passes through our eyes and brain, it suddenly becomes "the map," when the brain is made of the same fundamental stuff (quarks etc.) as the glass?

Perplexed: I would say that the stuff making up "the map" is not stuff inside the brain. Instead, it is stuff inside the mind, and the mind is "emergent from" the brain (or, if you prefer, the mind "reduces to" the brain). The neurons in the brain reduce (through several levels) to brain quarks. The map ideas in the mind also reduce to brain quarks, but they do so in an odd way. I choose to label that kind of oddness "emergence", but the local powers-that-be seem to disapprove of this terminology.
taryneast: The image that you see contains far less information than the actual stuff that makes up the original "image and coloured glass" objects that exist in front of you. That is why the image in your head is a map, not territory. You also have "territory" that makes up your head... but that doesn't mean that everything represented inside your little piece of territory is also territory. After all, you can store a map in your glovebox. Does the glovebox turn a map of England into England itself, simply because a glovebox is part of the territory?

Our brain and senses are made out of fundamental particles too, and the image of a plane with wings is the result of the interaction between the fundamental particles out there with the fundamental particles in us.

Ian C - are you claiming that there are no maps, just lots of territory, some of which refers to other bits of territory? While probably accurate, this doesn't seem very useful if we're trying to understand minds. I don't think Eliezer ever claims that maps are stored in the glove compartments of cars in the car park, just outside The Territory. ...

Ben Jones - yes, I'm saying there's just lots of territory. I think it's useful to understanding minds, because (if correct) it means they don't work by making an internal mirror of reality to study, but rather they just "latch on" to actual reality at a certain point. The role of the brain in that case would not be to "hold" the internal mirror copy, but to manipulate reality to make it amenable to latching.

I always found Hofstadter's take on the issue illuminating.

Disappointingly, dictionaries and encyclopaedias today seem to have defined reductionism and holism away from Hofstadter's usage - to the detriment of both of the terms involved.

Ian - if minds don't create their own distinct internal maps, but simply 'latch on' to what's actually there, then how do you explain the fact that maps can be wrong? In fact, how do you explain any two people holding two opposed beliefs?

Sensory perception isn't like a photograph - low-resolution but essentially representative. It's like an idiot describing a photograph to someone who's been blind all their life. This is why we get our maps wrong, and that is why it's useful to think in terms of map and territory - so that we can try and draw better ones.

Ben Jones: "if minds don't create their own distinct internal maps, but simply 'latch on' to what's actually there, then how do you explain the fact that maps can be wrong? In fact, how do you explain any two people holding two opposed beliefs?"

Different people have different eyes, nervous systems and brains, so the causal path from the primary object to the part of reality in their brain to which they are latching on can be different.

I agree sensory perception is not like a photograph, but I don't think it's like an idiot trying to explain to us. I... (read more)

0Perplexed10yWhen you first mentioned "latching" my initial reaction was as negative and incredulous as Ben Jones's was. Now I recognize that this idea is Kripke's - he explains intentionality as a chain of causal links between territory and map. I see why Kripke went that way, but the whole enterprise turns my stomach. Where is Descartes when we need him? Intentionality carries no mystery in a model where map is distinct from territory, with no attempt being made to embed map in territory. It only becomes problematic when naive reductionism demands that our models must capture the act of modeling. And then we proceed to tie ourselves completely in knots when we imagine that this bit of self-reference contains the secret of consciousness. Can't we just pretend that our minds reside outside the physical universe when discussing epistemology? It makes things much simpler. Then we can discuss the reductionist science of cognition by allowing some minds back into the universe to serve as objects of study. :)

At present, we cannot generate accurate quantum mechanical descriptions of atoms more complex than hydrogen (and, if we fudge a bit, helium). Any attempt to do so, because of the complexity and intractability of the equations involved, produces results that are less accurate than our empirically-derived understanding.

Even if we ignore the massive computational problems with trying to create a QM model of an airplane, such a model is guaranteed to be less accurate than the existing higher-order models of aerodynamics and material science.

We presume that our... (read more)
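The intractability half of this point is easy to make concrete with back-of-the-envelope arithmetic. A sketch (the 10-points-per-axis grid resolution is an arbitrary illustrative assumption, not anything from the thread):

```python
# Why a brute-force QM model of a macroscopic object is out of reach in
# practice: a many-body wavefunction assigns a complex amplitude to every
# joint configuration, so storage grows exponentially with particle count.

def amplitudes(n_particles, points_per_axis=10):
    """Complex amplitudes needed for n particles, each with 3 spatial
    degrees of freedom discretized to points_per_axis grid points."""
    return (points_per_axis ** 3) ** n_particles

print(amplitudes(1))   # 1000 -- trivial
print(amplitudes(10))  # 10^30 -- already far beyond any conceivable computer
```

Note that this illustrates only the computational point; it says nothing about which model would be more accurate in principle, which is the part of the claim later commenters dispute.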

I'm surprised that this point is controversial enough that Eliezer felt the need to make a post about it, and even more surprised that he's catching heat in the comments for it. This "reductionism" is something I believe down to the bone, to the extent that I have trouble conceptualizing the world where it is false.

8kremlin8yAfter talking to some non-reductionists, I've come to this idea about what it would mean for reductionism to be false: I'm sure you're familiar with Conway's Game of Life? If not, go check it out for a bit. All the rules for the system are on the pixel level -- this is the lowest, fundamental level. Everything that happens in Conway's Game of Life is reducible to the rules regarding individual pixels and their color (white or black), and we know this because we have access to the source code of Conway's Game, and it is in fact true that those are the only rules. For Conway's Game to be non-reductionistic, what you'd have to find in the source code is a set of rules that override the pixel-level rules in the case of high-level objects in the game. Eg "When you see this sort of pixel configuration, override the normal rules and instead make the relevant pixels follow this high-level law where necessary." Something like that. It's an overriding of low-level laws when they would otherwise have contradicted high-level laws.
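This test can be made fully concrete, since the "source code" of Life fits in a dozen lines. A minimal sketch (using the standard glider coordinates):

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) cells.
    These cell-level rules are the ONLY rules: nothing below refers
    to gliders or any other high-level object."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3. That's all.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: after 4 steps the same shape reappears shifted by (1, 1),
# even though no rule anywhere mentions "glider".
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

For the game to be non-reductionistic in the sense described above, `step` would need an extra clause that pattern-matches whole configurations and overrides the cell-level result.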

The essential idea behind reductionism, that if you have reliable rules for how the pieces behave then in principle you can apply them to determine how the whole behaves, has to be true. To say otherwise is to argue that the airplane can be flying while all its constituent pieces are still on the ground.

But if you can't do a calculation in practice, does it matter whether or not it would give you the right answer if you could?

And there goes Caledonian again, completely misrepresenting Eliezer's claims.

His arguments are completely baseless. Of course it would be very, very, very hard to make a QM model of an airplane, and attempting it now would fail miserably - Eliezer wouldn't dispute that.

But to say that a full-fledged QM model would be guaranteed to be less accurate than current models is downright preposterous.

Caledonian's job is to contradict Eliezer.

I'm surprised that this point is controversial enough that Eliezer felt the need to make a post about it, and even more surprised that he's catching heat in the comments for it. This "reductionism" is something I believe down to the bone, to the extent that I have trouble conceptualizing the world where it is false.

Seconded.

I suppose the next post is on how a non-reductionist universe would overwhelmingly violate Occam's Razor?

1taryneast10yHmmm... from my understanding, Occam's Razor is not actually a Law, just an overwhelmingly useful Heuristic. Thus, I'm not sure that "violating" Occam's Razor means more than just saying that something is "far less likely". I don't believe it can be used to prove that a non-reductionist universe is "not true".
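This reading matches how the razor is usually formalized: in a Solomonoff-style prior, complexity is penalized, not forbidden. A toy sketch (the bit counts below are made up purely for illustration):

```python
# Occam's Razor as a weighting, not a law: a hypothesis whose shortest
# description takes L bits gets prior weight proportional to 2^-L.
# A non-reductionist universe isn't thereby proven impossible -- it is
# just heavily penalized for every extra bit of special-case law.

def occam_prior(description_length_bits):
    return 2.0 ** -description_length_bits

simple = occam_prior(10)    # hypothetical 10-bit theory
complex_ = occam_prior(40)  # hypothetical 40-bit theory
print(simple / complex_)    # 2^30: about a billion times less likely, not "false"
```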

Caledonian's job is to contradict Eliezer.

Not even that -- it's as if he and other commenters (e.g. Unknown in this case) are simply demanding that Eliezer express his points with less conviction.

If you think Eliezer is wrong, say so and explain why. Merely protesting that he is "confident beyond what is justified", or whatever, amounts to pure noisemaking that is of no use to anyone.

Slightly off-topic. I am a bit new to all this. I am a bit thick too. So help me out here. Please.

Am I right in understanding that the map/territory analogy implies that the map is always evaluated outside the territory?

I guess, I'm asking the age old Star Trek transporter question. When I am beamed up, which part of which quark forms the boundary between me and Scotty.

I wish I knew where Reality got its computing power. Hehe, good question that one. Incidentally, I'd like to link this rather old thing just in case anyone cares to read more about reality-as-computation.

Ian C - well put. My point is that since there is, at least, some distortion between mind and world (hence this very blog), it's useful to think in terms of map and territory. At the simplest level, it stops us confusing the two. If you have a wrong belief, saying 'my mind is part of reality!' doesn't make it any less wrong. Agreed?

I don't believe there's the outside world, and then an idiot distortion layer, and then our unfortunate internal model.

That was exactly the situation I found myself in at about 3am on Sunday morning.

Ben Jones: "If you have a wrong belief, saying 'my mind is part of reality!' doesn't make it any less wrong. Agreed?"

I agree that there is a difference between the object in the mind and the object in the world, but I wouldn't call it distortion any more than a chair is a distortion of the table next to it. They are both just different parts of reality. But if your mind can only be aware of the chair then you must discover the table by deduction, which is what someone trying to "correct" the chair would do also. So yes, I guess it makes... (read more)

I agree that there is a difference between the object in the mind and the object in the world, but I wouldn't call it distortion any more than a chair is a distortion of the table next to it.

But the chair isn't seeking to imitate the table. That's one thing that minds do that nothing else does - form abstract representations. It's not magic, but it's a pretty impressive trick for a couple of pounds of quivering territory.

Besides, you've already acknowledged that the mental concept has a causal link with the object itself. Chairs aren't causally linked to t... (read more)

Ben Jones: "But the chair isn't seeking to imitate the table."

But the mind isn't seeking to imitate reality either. The mind seeks to provide awareness of reality, that is all. In taking the data of the senses and processing it only following the laws of cause and effect, it achieves this goal (because the output of the pipeline remains reality).

The idea that it is trying to imitate (and the associated criticisms like map, territory and distortion) come from looking at the evolved design after the fact and assuming how it is supposed to work without taking a wide enough view of all the ways awareness of reality could be implemented.

'I wish I knew where Reality got its computing power.'

Assume Reality has gotten computing power and that it makes computations. Computation requires time. Occurrence would require the time required for the occurrence plus the time necessary for Reality to make the computation for that occurrence. The more complex the occurrence, the more computing power or the longer the computation time needed, or both. Accounting for that seems a challenge that cannot be overcome.

Alternatively, let's assume Reality did not get computing power and that it does not make computations.... (read more)

But to say that a full-fledged QM model would be guaranteed to be less accurate than current models is downright preposterous.

No, it follows directly from our inability to simulate 'complex' atoms. If we can't represent the basic building blocks of matter correctly, how are we supposed to represent the matter?

A correct model of physics would, given enough computational power, allow us to perfectly simulate everything in reality, on every level of reality. QM is known not to be correct; it is in fact known to be incorrect in the ultimate sense. It is merely the most correct model we possess.

"However, reductionism is incapable of explaining the real world."

Is that the argument against Reductionism? That there are things it can't, as yet, explain? That's the same position the Intelligent Design people put forward. Your post is a big fat Semantic Stop Sign.

No, we don't understand protein folding yet. Precedent suggests that one day, we probably will, and it probably won't be down to some mystical emergent phenomenon. It'll be complicated, subtle, amazing, and fully explicable within the realms of reductionist science.

A quick Google search turns up:

But the crystal growth depends strongly on temperature (as is seen in the morphology diagram). Thus the six arms of the snow crystal each change their growth with time. And because all six arms see the same conditions at the same times, they all grow about the same way.... If you think this is hard to swallow, let me assure you that the vast majority of snow crystals are not very symmetrical.

It's not that reductionism is wrong, but rather that it's only part of the story. Additional understanding can be gleaned through a bottom-up, emergent explanation which is orthogonal to the top-down reductionist explanation of the same system.

It is important to take seriously the reality of higher level models (maps). Or alternatively to admit that they are just as unreal, but also just as important to understanding, as the lower level models. As Aaron Boyden points out, it is not a foregone conclusion that there is a most basic level.

Reductionism IS the bottom-up, emergent explanation. It tries to reduce reality to basic elements that together produce the phenomena of interest - you can't get any more emergent than that.

From the Wikipedia definition for "reductionism":

"Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents."

and

"The limit of reductionism's usefulness stems from emergent properties of complex systems which are more common at certain levels of organization."

Rafe, do you mean that as a criticism? Because usefulness and reality are very different things. There are two things that can make a reductionist model less useful:

  1. It requires much more computational power. This has been discussed already.
  2. Even modest mistakes at lower levels can have drastic effects at higher levels.

Both, you'll notice, are practical problems pertaining to the model, and don't invalidate the principle.
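The second practical problem can be illustrated with any chaotic system; the logistic map is a standard toy example (the parameter 4.0 puts it in the chaotic regime):

```python
# How modest low-level mistakes wreck high-level predictions: in a chaotic
# system, the separation between two nearly identical states grows roughly
# exponentially until it saturates at the size of the system itself.

def max_divergence(x0, eps, steps):
    """Largest gap between trajectories started at x0 and x0 + eps
    under the chaotic logistic map x -> 4x(1 - x)."""
    a, b, gap = x0, x0 + eps, 0.0
    for _ in range(steps):
        a = 4.0 * a * (1.0 - a)
        b = 4.0 * b * (1.0 - b)
        gap = max(gap, abs(a - b))
    return gap

print(max_divergence(0.3, 0.0, 50))    # 0.0: identical inputs, identical outputs
print(max_divergence(0.3, 1e-12, 50))  # order 1: a trillionth-scale error has
                                       # grown to dominate the answer
```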

So human brains are themselves models of reality.

Do you have a deterministic view of the world, i.e. believe reality is there, independently of our existence or of our interactions with it?

Have you ever wondered what information is, at the physical level... what is it that our brains are actually modelling?

Simply because particles are the smallest things does not mean they are the only things. Particles are defined by how they act. How a particle will act can only be determined by taking into account the particles surrounding it. And to fully examine those particles, their surrounding particles must be examined. And so on and so forth...

As you move up in scale, new rules and attributes emerge that do not exist at the smaller scales. You can speculate about whether or not these new things might have been deduced as possibilities from quantum laws. But short o... (read more)

Wockyman: It's not that they're the smallest, as such.

Yes, how a particle acts is affected by those around it. But the idea is that if you know the basic rules, then knowing those rules, plus which particles are where around it lets you predict, in principle, given sufficient computational power, stuff about how it will act. In other words, the complicated stuff that emerges arises from the more basic stuff.

Think of it this way: You know cellular automata? Especially Conway's Game of Life? Really simple rules, just the grid, cells that can be on and off... (read more)

But the way physics really works, as far as we can tell, is that there is only the most basic level - the elementary particle fields and fundamental forces.

To clarify (actually, to push this further): there is only one thing (the universe) - because surely breaking the thing down into parts (such as objects) which in turn lets you notice relations between parts (which in turn lets you see time, for example) -- surely all that is stuff done by modelers of reality and not by reality itself? I'm trying to say that the universe isn't pre-parsed (if that makes any sense...)

0byrnema11yAs modelers of reality, we parse the world into fundamental particles and forces. You would claim that these distinctions are ultimately inherent features of the model and not necessarily defining reality. I understand that a person might look at a car and see "mode of transportation" while another way of looking at the car is as a "particular configuration of quarks", in which case the distinction between a car and a tree does seem arbitrarily modeler-dependent. But I would not go so far as to say that reality itself is featureless. Where would you begin to argue that there are no inherent dichotomies? Even if there is only one type of thing 'x', our reality (which is, above all, dynamic) seems to require a relationship and interaction between 'x' and ' ~x'. I'd say, logically, reality needs at least two kinds of things.
0xrchz11yLogic can only compel models. You seem to be saying "Let x denote the universe. ~x is then a valid term. So ~x must denote something that isn't x, thus there are two things!" There are surface problems with this such as that x may not be of type boolean, and that you're just assuming every term denotes something. But the important problem is simpler: we can use logic to deduce things about our models, but logic doesn't touch reality itself (apart from the part of reality that is us). What do you mean by "reality is dynamic"? Have you read Timeless Physics [http://lesswrong.com/lw/qp/timeless_physics/]?
-2byrnema11ySo I infer from the above that you have no logical arguments to support that reality is "one thing". I would think only an agnostic position on the nature of reality would be consistent with the nihilist stance you are representing.

Reductionism is great. The main problem is that by itself it tells us nothing new. Science depends on hypothesis generation, and reductionism says nothing about how to do that in a rational way, only how to test hypotheses rationally. For some reason the creative side of science -- and I use the word "creative" in the generative sense -- is never addressed by methodology in the same way falsifiability is:

http://emergentfool.com/2010/02/26/why-falsifiability-is-insufficient-for-scientific-reasoning/

We are at a stage of historical enlightenment ... (read more)

8Jack11yReally? I think of reductionism as maybe the greatest, most wildly successful abductive tool in all of history. If we can't explain some behavior or property of some object it tells us one good guess is to look to the composite parts of that thing for the answer. The only other strategy for hypothesis generation I can think of that has been comparably successful is skepticism (about evidence and testimony). "I was hallucinating." and "The guy is lying" have explained a lot of things over the years. Can anyone think of others?
4JGWeissman11yYou may be interested in Science Doesn't Trust Your Rationality [http://lesswrong.com/lw/qb/science_doesnt_trust_your_rationality/], in which Eliezer suggests that science is a way of identifying the good theories produced by a community of scientists who on their own have some capacity to produce theories, and that Bayesian rationality is a systematic way of producing good theories. Oh, and Welcome to Less Wrong! [http://lesswrong.com/lw/b9/welcome_to_less_wrong/] You have identified an important point in your first few comments, and I hope that is a predictor of good things to come.
0whowhowho8yAn automated theory generator would be worth a Nobel.
2TheOtherDave8ySo, the introduction of "automated" to this discussion feels like a complete non sequitur to me. Can you clarify why you introduce it?
0whowhowho8yIf you have a "systematic" way of "producing" something, (JGWeissman) surely you can automate it.
0TheOtherDave8yAh. OK, thanks for clarifying.
1[anonymous]8yI could call a procedure "systematic" even if one of the steps used a human's System 1 as an oracle, in which case it'd be hard to automate that as per Moravec's paradox.
0whowhowho8yI would not call such a procedure systematic. Who would? Here's a system for success as an author: first have a brilliant idea...it reads like a joke, doesn't it?
1[anonymous]8yI wasn't thinking of something that extreme; more like the kind of tasks people do on Mechanical Turk.
-2whowhowho8yIs there anything non systematic by that definition? In what way does it promote Bayesianism to call it systematic?
2TheOtherDave8yWell, I have no idea if it "promotes Bayesianism" or not, but when someone talks to me about a systematic approach to doing something in normal conversation, I understand it to be as opposed to a scattershot/intuitive approach. For example, if I want to test a piece of software, I can make a list of all the integration points and inputs and key use cases and build a matrix of those lists and build test cases for each cell in that matrix, or I can just construct a bunch of test cases as they occur to me. The former approach is more systematic, even if I can't necessarily automate the test cases. I realize that your understanding of "systematic" is different from this... if I've understood you, if I can't automate the test cases then this approach is not systematic on your account.
-1whowhowho8yCan there be a scattershot or intuitive scientific method?
1TheOtherDave8yWell, first of all, we should probably clarify that the original claim was that Bayesian rationality was a systematic way of producing good theories, and therefore presumably was meant to contrast with scattershot or intuitive ways of producing good theories, rather than to contrast with a scattershot or intuitive scientific method... just in case any of our readers lost track of the original question. But to answer your question... I wouldn't think so, in that an important part of what X needs to have before I'm willing to call X a scientific method is a systematic way of validating and replicating results. That said, I would say it's possible for a scientific method to embed a scattershot or intuitive approach to producing theories. Indeed, the history of the scientific method as applied by humans has done this pretty ubiquitously.
1whowhowho8yThat just makes matters worse. Bayes might systematically allow you to judge the relative goodness of various theories, once they have been produced, but it doesn't help at all in producing them. You can't just crank the handle on Bayes and get relativity.
1TheOtherDave8yI'm not sure what you mean by "worse" here. To my mind, challenging the original claim as false is far superior to failing to engage with it altogether, since it can lead to progress. In that vein, perhaps it would help if you returned to JGWeissman's original comment [http://lesswrong.com/lw/on/reductionism/1q93] and ask them to clarify what makes Bayesian rationality "a systematic way of producing good theories," so you can either learn from or correct them on the question.
2[anonymous]8ySee TheOtherDave [http://lesswrong.com/lw/on/reductionism/8epl]. See E.T. Jaynes calling certain frequentist techniques “ad-hockeries”. EDIT: BTW, I didn't have Bayesianism in mind when I replied to this ancestor [http://lesswrong.com/lw/on/reductionism/8ekh] -- I should stop replying to comments without reading their ancestors first.
1private_messaging8yIt feels like you use 'questions' a lot more than usual, and it looks very much like a rhetorical device because you inject counter points into your questions. Can you clarify why you do it? (see what I did there?) Sidenote: Actually, questions are often a sneaky rhetorical device - you can modify the statement in the way of your choosing, and then ask questions about that. You see that in political debates all the time.
0Vaniver8yAgreed that questions can be used in underhanded ways, but this example does seem more helpful at focusing the conversation than something like: That could easily go in other directions; this makes clear that the question is "how did we get from A to B?" while sharing control of the topic change / clarification.
0TheOtherDave8ySure, I'd be happy to: because I want answers to those questions. For example, whowhowho's introduction of "automated" did in fact feel like a non sequitur to me, and I wanted to understand better why they'd introduced it, to see whether there was some clever reasoning there I'd failed to follow. Their answer to my question clarified that, and I thanked them for the clarification, and we were done. You asked a question. I answered it. It really isn't that complicated. That said, I suspect from context that you mean to imply that you did something sneaky and rhetorical just then, just as you seem to believe that I do something sneaky and rhetorical when I ask questions. If that's true, then no, I guess I don't see what you did there. Yes. So are statements.
2shminux8yHere is one, completely automated [http://snarxiv.org/].
0Morendil11yAgreed: we need more posts on abductive reasoning specifically.

Probably no one will ever see this comment, but.

"I wish I knew where reality got its computing power."

If reality had less computing power, what differences would you expect to see? You're part of the computation, after all; if everything stood still for a few million meta-years while reality laboriously computed the next step, there's no reason this should affect what you actually end up experiencing, any more than it should affect whether planets stay in their orbits or not. For all we know, our own computers are much faster (from our perspective) than the machines on which the Dark Lords of the Matrix are simulating us (from their perspective).

3Perplexed10yIf reality were computed in reverse chronological order, what differences would you expect to see? Suppose our universe was produced by specifying some particular final state, and then repeatedly computing predecessor states according to some deterministic laws of nature. Would we experience time backward? Or would we still experience it forward (the reverse of the direction of the simulation) because of some time asymmetry in the physical laws or in the entropy of the initial vs final states? Everyone always assumes that the simulation will proceed "forward". Is that important? I honestly don't know.
3imaxwell10yYou can go one step further. If folks like Barbour are correct that time is not fundamental, but rather something that emerges from causal flow, then it ought to be that our universe can be simulated in a timeless manner as well. So a model of this universe need not actually be "executed" at all---a full specification of the causal structure ought to be enough. And once you've bought that, why should the medium for that specification matter? A mathematical paper describing the object should be just as legitimate as an "implementation" in magnetic patterns on a platter somewhere. And if it doesn't matter what the medium is, why should it matter whether there's a medium at all? Theorems don't become true because someone proves them, so why should our universe become real because someone wrote it down? If I understand Max Tegmark correctly, this is actually the intuition at the core of his mathematical universe hypothesis [http://en.wikipedia.org/wiki/Mathematical_universe_hypothesis] (Wikipedia, but with some good citations at the bottom), which basically says: "We perceive the universe as existing because we are in it." Dr. Tegmark says that the universe is one of many coherent mathematical structures, and in particular it's one that contains sentient beings, and those sentient beings necessarily perceive themselves and their surroundings as "real". Pretty much the only problem I have with this notion is that I have no idea how to test it. The best I can come up with is that our universe, much like our region of the universe, should turn out to be almost but not quite ideal for the development of nearly-intelligent creatures like us, but I've seen that suggested of models that don't require the MUH as well. Aside from that, I actually find it quite compelling, and I'd be a bit sad to hear that it had been falsified. 
Interestingly enough, a version of the MUH showed up in Dennis Paul Himes' An Atheist Apology [http://www.cookhimes.us/dennis/aaa.htm].
-1Perplexed10yI'm pretty sure that the idea has occurred to just about everyone who has wondered whether the meanings of the intransitive verb "to exist" in mathematics and philosophy might have anything in common. Tegmark deserves some credit though for writing it down.
1Bruno Mailly2yFrom the inside we can't judge the relative speed or power, but we can judge the efficiency. And it's abysmal: the jumps from quarks to particles to atoms to molecules to cells to animals to stars to galaxies each throw orders of magnitude around like it's nothing. What could this possibly tell us?
* Reality just has that much resource.
* The result of our reality was not designed.
* The lords of the matrix are not very bright.

Sounds like one of the central tenets of Discordianism. There is no such thing as wings, identity, truth, the concept of equality. These are all abstract concepts that exist only in the mind. "Out there" in "True" reality, there is only chaos (not necessarily of the random kind, just of the meaningless/purposeless kind).

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.

Can you handle the truth then? I don't understand the notion of truth you are using. In everyday language, when a person states something as "true", it doesn't usually need to be grounded to logic in order to work for a practical purpose. But you are making extr... (read more)

0thomblake9yA belief is true when it corresponds to reality. Or equivalently, "X" is true iff X. In the map/territory distinction, reality is the territory. Less figuratively, reality is the thing that generates experimental results. From The Simple Truth [http://yudkowsky.net/rational/the-simple-truth]:
1DSimon9yI don't follow why you claim that reductionism and realism are incompatible. I think this may be because I'm very confused when I try to figure out, from context, what you mean by "realism", and I strongly suspect that that's because you don't have a definition of that word which can be used in tests for updating predictions, which is the sort of thing LWers look for in a useful definition. Basically, I'm inclined to agree with you when you say: This is a really good reason in my experience for not getting into long discussions about "But what is reality, really?"
0DSimon9yActually, this may be a good point for me to try to figure out what you mean by "realism", because here you seem to have connected that word to some but not all strategies of problem-solving. Can you give me some specific examples of problems which the mind tends to use realism in solving, and problems where it doesn't?
1Tuukka_Virtaperko9yI got "reductionism" wrong, actually. I thought the author was using some nonstandard definition of reductionism, which would have been something to the effect of not having unnecessary declarations in a theory. I did not take into account that the author could actually be what he says he is, no bells and whistles, because I didn't take into account that reductionism could be taken seriously here. But that just means I misjudged. Of course I am not necessarily even supposed to be on this site. I am looking for people who might give useful ideas for theoretical work which could be useful for constructing AI, and I'm trying to check whether my approach is deemed intelligible here. "Realism" is the belief that there is an external world, usually thought to consist of quarks, leptons, forces and such. It is typically thought of as a belief or a doctrine that is somehow true, instead of just an assumption an AI or a human makes because it needs to. Depending on who labels themself as a realist and what mood he is in, this can entail that everybody who is not a realist is considered mistaken. An example of a problem whose solution does not need to involve realism is: "John is a small kid who seems to emulate his big brother almost all the time. Why is he doing this?" Possible answers would be: "He thinks his brother is cool" or "He wants to annoy his brother" or "He doesn't emulate his brother, they are just very similar". Of course you could just brain scan John. But if you really knew John, that's not what you would do, unless brain scanners were about as common and inexpensive as laptops. And have much better functionality than they currently do. In the John problem, there's no need to construct the assumptions of a physical world, because the problem would be intelligible even in the case you meet John in a dream. You can't take any physical brain scanner with you in a dream, so you can't brain scan John. 
But you can analyze John's behavior with the same criteria a
2DSimon9yYou seem to be overthinking this. Reductionism is "merely" a really useful cognition technique, because calculating everything at the finest possible level is hopelessly inefficient. Perhaps a practical simple example is needed: An AI that can use reductionism can say "Oh, that collection of pixels within my current view is a dog, and this collection is a man, and the other collection is a leash", and go on to match against (and develop on its own) patterns about objects at the coarser-than-pixel size of dogs, men, and leashes. Without reductionism, it would be forced to do the pattern matching for everything, even for complex concepts like "Man walking a dog", directly at the pixel level, which is not impossible but is certainly a lot slower to run and harder to update. If you've ever refactored a common element out in your code into its own module, or even if you've used a library or high-level language, you are also using reductionism. The non-reductionistic alternative would be something like writing every program from scratch, in machine code.
[0] Tuukka_Virtaperko (9y): Okay. That sounds very good. And it would seem to be in accordance with this statement: if reductionism does not entail that I must construct the notion of a territory and include it in my conceptualizations at all times, it's not a problem. I now understand even better why I was confused by this. This kind of reductionism is not reductive physicalism. It's hardly a philosophical statement at all, which is good. I would say that "the notion of higher levels being out there in the territory" is meaningless, but expressing disbelief in that notion is apparently intended to convey approximately the same meaning.

RP doesn't yet actually include reduction; it's about next on the to-do list. Currently it includes an emergence loop that is based on the power set function. The function produces a staggering amount of information in just a few cycles. It seems to me that this is because instead of accounting for the emergence relations the mind actually performs, it accounts for all defined emergence relations the mind could perform. So the theory is clearly still under construction, and it doesn't yet have any kind of algorithmic part. I'm not much of a coder, so I need to work with someone who is. I already know one mathematician who likes to do this stuff with me. He's not interested in the metaphysical part of the theory, and even said he doesn't want to know too much about it. :) I'm not guaranteeing RP can be used for anything at all, but it's interesting.

One way of tracing the, uhm, data, I guess, might be to say: we see, naively, a chair. And we know that underneath the chair, out there, at the bottom level we're aware of, are energy fields and fundamental forces. And those concepts, like the chair, correspond to a physics model, which is in turn a simplification/distillation of vast reams of recorded experimental data into said rules/objects, which is in turn the actual results of taking measurements during experiments, which in turn are the results of actual physical/historical events. So the reductionist model, fields and forces, is I think still a map of experimental results, tagged with interpretations that tie them together.

[This comment is no longer endorsed by its author]
[0] Voltairina (9y): Er, I guess I should say it's strictly /not/ an attempt at a simplified description, but a minimal description which can still account for everything...

Whatever the bottom level of our understanding of the map, even a one-level map is still above the territory, so there are still levels below that which carry back, presumably, to territory. We find some fields-and-forces model that accounts for all the data we're aware of. But it's always going to be possible, less likely the more data we get, that something flies along and causes us to modify it. So, if we wanted to continue the reductionistic approach about the model we're making of our world, stripping away higher-level abstractions, we'd say that ... (read more)

[0] Voltairina (9y): Like, I can draw a picture of a face in increasingly finer detail, down to "all the detail I see", but it's still going to contain unifying assumptions (like a vector representation of a face, versus the data, which may be pixellated, made up of specific individual measurement events). Or I can show a chart of where and how all the nerves are excited in my eyes, which is the 'raw data' level stuff that I have access to about what's 'out there', for which the simplest explanation is most probably a face. Actually it's kind of interesting to think of it that way, because a lot of our raw mental data is 'vectored' already. But whenever we do a linear regression on a dataset, that's also a reduction-to-a-vector of something.

This post represents, for me, the typical LW response to something like the Object Oriented Ontologies of Paul Levi Bryant and DeLanda. These ontologies attempt to give things like numbers, computations, atoms, fundamental particles, galaxies, higher-level laws, fundamental laws, concepts, referents of concepts, etc. equal ontological status. Hence they are strictly against making a distinction between map and territory: there is only territory, and all things that are, are objects.

I'm a confident reductionist, model/reality (Bayesian) type guy. I'm no... (read more)

Does the reductionist model give different predictions about the world than the non-reductionist model? If so, are any easily checked?

Solomonoff Induction, insofar as it is related to interpretations at all, rejects the 'many-worlds interpretation', because the valid (non-falsified) code strings are the ones whose output began with the actual experimental outcome, rather than listing all possible outcomes; i.e., they are very much Copenhagen-like.

Has this point ever been answered? If we are content with the desired output appearing somewhere along the line, as opposed to at the start, then the simplest theory of everything would be printing enough digits of pi, and our universe would be described somewhere down the line.
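The prefix-versus-"somewhere down the line" distinction this comment turns on can be illustrated with a toy example (this is not actual Solomonoff induction, just a demonstration of the matching criterion). A short program really can print the digits of pi, computed here via Machin's formula with integer arithmetic; under substring matching it "explains" almost any short observation, while under from-the-start matching it does not.

```python
# Toy illustration: a pi-digit printer contains almost any short digit
# string as a substring, but Solomonoff-style induction scores programs
# on how they account for the observation sequence from the start.

def pi_digit_string(ndigits):
    """First ndigits decimal digits of pi as a string (no decimal point),
    via Machin's formula: pi = 16*atan(1/5) - 4*atan(1/239)."""
    guard = 10                       # extra digits to absorb floor-division error
    scale = 10 ** (ndigits + guard)

    def atan_inv(x):
        # scale * arctan(1/x), from the series 1/x - 1/(3x^3) + 1/(5x^5) - ...
        power = scale // x
        total = power
        n = 1
        while power:
            power //= x * x
            n += 2
            total += power // n if n % 4 == 1 else -(power // n)
        return total

    pi_scaled = 16 * atan_inv(5) - 4 * atan_inv(239)
    return str(pi_scaled // 10 ** guard)[:ndigits]

digits = pi_digit_string(30)           # "314159265358979..."
observation = "2653"                   # a made-up "experimental record"

print(observation in digits)           # True:  appears somewhere down the line
print(digits.startswith(observation))  # False: pi does not predict it up front
```

So "print pi" only wins under the weak criterion; requiring the program's output (or probability assignment) to fit the observations from the beginning removes the loophole the comment describes.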

[+2] Eliezer Yudkowsky (8y): Solomonoff induction is about putting probability distributions on observations: you're looking for the combination of the simplest program that puts the highest probability on observations. Technically, the original SI doesn't talk about causal models you're embedded in, just programs that assign probabilities to experiences.

Generalizing somewhat: for QM as it appears to humans, the generalized-SI-selected hypothesis would be something along the lines of one program that extrapolated the wavefunction, then another program that looked for people inside it and translated the underlying physics into the "observed data" from their perspective, then put probabilities on the sequences of data corresponding to integral squared modulus. Note that you also need an interface from atoms to experiences just to e.g. translate a classical atomic theory of matter into "I saw a blue sky", and an implicit theory of anthropics/sum-probability-measure too, if the classical universe is large enough to have more than one copy of you.
[+1] Kawoomba (8y): Thanks for this. I'll mull it over.
[+1] private_messaging (8y): Here's a rebuttal: http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncensored_thread/c89ymip
[+2] whowhowho (8y): It isn't at all clear why all that would add up to something simpler than a single-world theory.
[+9] Eliezer Yudkowsky (8y): Single-world theories still have to compute the wavefunction, identify observers, and compute the integrated squared modulus. Then they have to pick out a single observer with probability proportional to the integral, peek ahead into the future to determine when a volume of probability amplitude will no longer strongly causally interact with that observer's local blob, and eliminate that blob from the wavefunction. Then translating the reductionist model into experiences requires the same complexity as before.

Basically, it's not simpler, for the same reason that in a spatially big universe it wouldn't be 'simpler' to have a computer program that picked out one observer, calculated when any photon or bit of matter was moving away and wasn't going to hit anything that would reflect it back, and then eliminated that matter.
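The "(integral) squared modulus" recurring in this thread is the Born rule: each outcome's probability is the squared modulus of its complex amplitude, normalized over all outcomes. A minimal sketch with made-up two-outcome amplitudes:

```python
# Born rule sketch: probability of an outcome is |amplitude|^2, normalized.
# The amplitudes below are invented for illustration.
amplitudes = {"up": 0.6 + 0.0j, "down": 0.0 + 0.8j}

norm = sum(abs(a) ** 2 for a in amplitudes.values())
probs = {k: abs(a) ** 2 / norm for k, a in amplitudes.items()}
print(probs)  # up: 0.36, down: 0.64 (approximately)
```

Note the phase information is discarded by the squared modulus: the 0.8j amplitude contributes exactly as much probability as a real 0.8 would, which is part of why translating amplitudes into "observed data" needs a separate interface program in the comment's construction.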

This website is doing amazing things to the way I think every day, as well as occasionally making me die of laughter.

Thank you, Eliezer!

[+2] wedrifid (7y): But you got better: http://www.youtube.com/watch?v=xzYO0joolR0

"having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory

Why do we distinguish “map” and “territory”? Because they correspond to “beliefs” and “reality”, and we have learnt elsewhere in the Sequences that

my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.

Let’s apply that test. It isn’t only predictions that apply at different levels; so do the results. We can have right or... (read more)

[0] [anonymous] (7y):

"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."

[extreme steelman mode on]

By “relativity” he must have meant the ultrarelativistic approximation, of course.

[extreme steelman mode off]

:-)

Should one really be so certain that there are no higher-level entities? You said that simulating higher-level entities takes fewer computational resources; so perhaps our universe is a simulation, and the creators, in an effort to save computational resources, made the universe do computations on higher-level entities when no one was looking at the "base" entities. Far-fetched, maybe, but not completely implausible.

Perhaps if we start observing too many lower-level entities, the world will run out of memory. What would that look like?

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.

Less Wrong's "The Futility of Emergence" article argues against using the word "emergence", claiming that it provides no additional information. The argument went that literally everything is an emergent property, since everything can be boiled down to ... (read more)

Minsky writing in Society of Mind might bring some light here (paraphrasing):

How can a box made of six boards hold a mouse when a mouse could just walk away from any individual board? No individual board has any "containment" or "mouse-tightness" on its own. So is "containment" an emergent property?

Of course it is the way a box prevents motion in all directions: each board bars escape in a certain direction. The left side keeps the mouse from going left, the right from going right, the top keeps it from leaping ... (read more)
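Minsky's box can be sketched in a few lines: "containment" is nothing over and above the conjunction of six one-direction constraints. The direction names and the one-board-blocks-one-direction simplification are my own framing of the paraphrase above.

```python
# Minsky's mouse-in-a-box: no single board contains the mouse, yet nothing
# beyond the six boards is needed. Each board blocks exactly one direction.
DIRECTIONS = {"left", "right", "front", "back", "up", "down"}

def escape_routes(boards):
    """Directions the mouse can still take, given which boards are present."""
    return DIRECTIONS - boards

print(escape_routes(DIRECTIONS))           # empty set: contained
print(escape_routes(DIRECTIONS - {"up"}))  # remove a board, the mouse leaps out
```

"Containment" here is just `escape_routes(...) == set()`: a fact about the arrangement of the parts, not an extra ingredient in the territory.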

This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

The higher levels could have been, though. The fact that we have high-level abstractions in our heads does not by itself mean that there is nothing corresponding to them in the territory. (To make that argument is a version of the fallacy that, since there is a form of probability in the map, there can be none in the territory.)

Tangential to the main point: one hypothesis for why the artillery gunner thought that "general relativity gives you the wrong answer" is that maybe he had experience with software that could run in either a "Newtonian mode" or a "GR mode", and the software had to make approximations for the relativistic calculation to be roughly tractable (approximations which might nonetheless be useful for roughly solving problems where relativistic effects matter, but which would only reduce accuracy in non-relativistic situations).

Now, the "GR mode" (with approximations) would be a diffe... (read more)
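For concreteness, here is a back-of-the-envelope check on the thread's artillery dispute (a sketch; the 1000 m/s muzzle velocity is an assumed round number for a fast shell). The relativistic correction factor gamma differs from 1 by a few parts in 10^12, far below any gun's precision, which is why the relativistic answer is never the *wrong* one, merely unnecessary precision.

```python
# Size of the relativistic correction for an artillery shell.
import math

c = 299_792_458.0   # speed of light, m/s
v = 1000.0          # assumed muzzle velocity, m/s

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(gamma - 1.0)  # roughly 5.6e-12
```

At these speeds gamma - 1 is well approximated by v^2 / (2 c^2), so the Newtonian trajectory and the relativistic one agree to about twelve significant figures.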