This is really my attempt at approaching eliminative materialism, and reading Paul Churchland's or Daniel Dennett's papers would probably serve you better for getting the point. I'm just writing to organize my thoughts.

Background

There are three big problems in science: the universe, life, and consciousness. We have good theories of the universe on the macro and micro scales, and of the problem of its origin. They are not the final word, but we have a good sense of what any future updated theories would be like: mechanistic, mathematical, probably using real, complex, and discrete numbers.

A theory of life is still in the works, though there are encouraging attempts. The descriptive theory of life, including how living things are physically constructed, is now complete except in the details. We know that it would be built from evolution, thermodynamics, chemistry, and of course, mathematics. The engineering theory of life is still largely missing. We do not know how to create life; at most we can fork the genetic code and make small modifications and mixings. We don't even know whether a robot is alive.

A theory of consciousness is at an even earlier stage. There are some basic studies of the description of consciousness, and there are dozens of hazy philosophical theories that need to be made quantitative using future data.

Consciousness-free

One problem with consciousness is its paradoxical qualities, which create questions that seem both compelling and malformed:

  • "Why am I me instead of someone else?"
  • "If Pinkie is copied, which one is the real Pinkie?"
  • "Is the feeling of blue same for everyone?"
  • "How does one freely choose?"

Now compare them with analogous questions from universe and life:

  • "Why is this rock this rock instead of that rock?"
  • "If this book is copied, which one is the real book?"
  • "Is this website the same website on every computer?"
  • "How does a slime mold decide which way to go?"

The analogous questions lose their mystery and become mundane, confused, or fascinating yet scientifically analyzable.

Possibly the problem is with the understanding of consciousness itself, which is too confused. I propose to remove consciousness from explanations of the behavior of living things (human or not) as much as possible. If it can be fully removed, then the problem of consciousness is solved. If it can't be fully removed, then what remains concentrates the effort toward a solution.

A sketch

As a sketch of how such a removal might be done, consider a fully physical explanation of how humans talk, which is currently infested with consciousness. The standard account is that there is a consciousness that feels something, formulates that feeling into words and sentences, and then expresses them. Unconscious speaking is considered nonsense, meaningless, noise. It doesn't have to be this way.

The Heptapods

The Heptapods from Story of Your Life (Ted Chiang, 1998) are an example of a "free-will-free" form of life. Their language has determinism baked into it, just as human language has free will baked into it.

What kind of universe could produce two kinds of life such that one is deterministic in language, but the other is free in language? And in such proximity too, such that they can actually meet each other and share the same physical space and physical laws?

To answer such questions the way a physicist would, one needs a physics of language. What is a language, according to a physicist?

Physical linguistics

What is a deterministic language, and what is a free-will language? How would a description of free will emerge in a deterministic system such as our universe? And most importantly, how does a universal language, a symbolic system that can model the physical world it is in, emerge in a deterministic world?

This is analogous to the problem of zombie language: I once read that in a world containing only philosophical zombies, human languages would probably not have evolved to talk about consciousness and inner experiences, because there would be no such thing to talk about. This argument is dumb, since human languages already talk about many nonexistent things, but it points at an interesting question: how would a deterministic system evolve a language that talks about things happening inside it?

Physical self-referential science

In the same spirit, what kind of deterministic universe would have little bundles of matter inside it that behave roughly the same as some other patches of that universe? We call these little bundles of matter "computers running physical simulations", or "a human brain thinking about science", or maybe even "a lion brain predicting which way an antelope will probably go next".

If such explanations can be done in detail, that would be a self-reference in physics: a physical system (our universe) containing a substantial description of itself (the explanation), as well as an account of why that description is likely to exist in the first place (an explanation of why a physical world is likely to contain its own description).
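To make the kind of self-reference I mean a bit more concrete, here is a deliberately tiny toy sketch (nothing but an illustration; every name in it is made up): a deterministic toy "universe" whose state includes a small sub-part that is updated by the same rule as everything else and therefore keeps tracking another patch of the same system.

```python
# Toy sketch: a deterministic "universe" that contains a coarse description of
# part of itself. All names (rule, World, step) are made up for illustration.

from dataclasses import dataclass

def rule(x: int) -> int:
    """The universe's fixed, deterministic update rule for one cell."""
    return (3 * x + 1) % 17

@dataclass
class World:
    environment: list[int]  # the "outside" patches of the universe
    model: list[int]        # a little bundle of matter that mirrors them

def step(w: World) -> World:
    """One tick of the whole universe; the model part is updated by the same
    rule, so it keeps behaving roughly the same as the patch it describes."""
    return World([rule(x) for x in w.environment],
                 [rule(x) for x in w.model])

w = World(environment=[2, 5, 11], model=[2, 5, 11])
for _ in range(5):
    w = step(w)
    assert w.model == w.environment  # the description stays in sync
```

Of course the hard part, which this toy skips entirely, is accounting for why such a synchronized sub-part would be likely to arise on its own rather than being written in by hand.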

10 comments

I agree that "Why is this rock this rock instead of that rock?" is a good place to start, even if they aren't perfectly analogous. Now, it isn't entirely clear what is being asked. The first question that we could be asking is: "Why is this rock the way that it is instead of the way that rock is?", in which case we could talk about the process of rock formation and the rock's specific history. Another question we could be asking is, "Why is this rock here at this time instead of that rock?" and again we'd be talking about history and how people or events moved it. We could even make anthropic arguments, "This rock isn't a million degrees because if it were that hot it would not longer be a rock" or "This rock isn't a diamond and this is unsurprising as they are rare". Here we'd be asking, "Given a random rock, why are we most likely to be observing certain characteristics?"

One difference with the human example is that the human is asking the question, "Why am I me instead of someone else?" So you can also reason about your likely properties on the basis of being the kind of being who is asking that question. Here the question is interpreted as, "Why is the entity asking this question this entity instead of another entity?".

Another issue which becomes clearer is the symmetry. Barack Obama might ask, "Why am I me instead of the Pope?" whilst at the same time the Pope asks, "Why am I me instead of Barack Obama?". So even if you had been someone else, you might very well have been asking the same question. I think this ties well into the notion of surprise. Let's suppose a million people receive a social security number and you receive 235,104. You might argue, "How surprising, there was only a one in a million chance of receiving this number!" However, you could have said this regardless of which number you'd been given, so it isn't that surprising after all.

Another question that could be asked is, "Why is my consciousness receiving the qualia (subjective experience) from this physical body?" In this case, the answer depends on your metaphysics. Materialists would say this is a mistaken question, as qualia don't exist. Christianity might say it's because God chose to attach this soul to this body. Other spiritual theories might have souls floating around which inhabit any body that is free (although this raises questions such as what happens if no soul chooses to inhabit a body, and which soul gets to inhabit which body). Lastly, there are theories like property dualism, where consciousness is a result of the mental properties of particles, so that the consciousness corresponding to any one particular body couldn't be attached to anyone else without breaking the laws of the universe. So, as described in my post Natural Structures and Definitions, this last interpretation is one of those questions that is conditionally meaningful to ask.

I really like the analogous questions translation. That was illuminating. Thanks for writing this up.

I'm glad you liked it. I was expecting some harsh words like "that's nothing new" or "that's nihilistic and thus immoral".

Both eliminative materialism and reductionism can acknowledge that consciousness is not necessary for explanation and seek a physical explanation. But while eliminativists conclude that there is no such thing as consciousness, reductionists say we simply would have discovered that consciousness is different from what we might have initially thought and is a physical phenomenon. Is there a reason you favor the former?

One might think eliminativism is metaphysically simpler, but reductionism doesn't really posit more stuff; it's more like allowing synonyms for various combinations of the same stuff.

Reductionism seems much more charitable. If you can interpret someone either as talking falsely nearly all the time or as often speaking truth, even if some of what they said would need to be revised, I think you’d need a compelling reason to attribute the false claims.

Reductionism also seems necessary to make sense of our values, which often make essential reference to consciousness. How would an eliminativist make sense of suffering being bad if there's no such thing as conscious suffering? Strictly speaking, a classical hedonic utilitarian who is an eliminative materialist seems committed to the view that nothing really matters and everything is permitted.

I am not fully committed to eliminative materialism, just trying to push it as far as possible, as I see it as the best chance at clarifying what consciousness does.

As for the last paragraph, if your analysis is correct, then it just means that a classical hedonic utilitarian + eliminative materialist would be a rare occurrence in this world, since such agents are unlikely to behave in ways that keep them existing.

If the project of eliminative materialism is fully finished, it would completely remove value judgments from human language. In the past, human languages referred to the values of many things: animals, plants, mountains, rivers, and more. This has progressively narrowed, and now in Western languages only the values of biological neural networks carried in animal bodies are referred to. If this continues, it could lead to a language that does not refer to any value at all, though I don't know what that would be like.

The Heptapod language seems to be value-free, describing the past and the future in the same factual way. Human languages describe only the past factually, but the future valuefully. A value-free human language could be like the Heptapod language. In Story of Your Life, the human linguist protagonist who struggled to communicate with the Heptapods underwent a partial transformation of mind, and came to sometimes see the past and future in the same descriptive, value-free way. She mated with her spouse and conceived a child who she knew would die in an accident, and she did it not because of a value calculation. An explanation of "why she did it" must instead be like:

  • On a physical level, because of atoms and stuff.
  • On a conscious level, because that's the way the world is. To see the future and then "decide" whether to play it out or not is not physically possible.
TAG:

If the project of eliminative materialism is fully finished, it would completely remove value judgments from human language.

Because values are intrinsically non-physical? Because agents don't have preferences? Because agents don't want to talk about preferences?

In a language consistent with deterministic eliminative materialism, value judgments don't do anything, because there are no alternative scenarios to judge between.

I am not sure about nondeterministic eliminative materialism. Still, if consciousness and free will can be eliminated, even with true randomness in this world, value judgments still seem to not do anything.

TAG:

Suppose I build a deterministic agent which has a value function in the most literal sense, i.e. it has to call the function to get the values of various alternative actions in order to make a decision about which to perform. Would you still say it has no use for value judgements?
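Concretely, such an agent could be as small as this throwaway sketch (all names made up):

```python
# A fully deterministic agent that literally calls a value function over
# alternative actions before acting. All names here are made up.

def value(action: str) -> float:
    """Deterministic value function over candidate actions."""
    return {"left": 1.0, "right": 3.0, "wait": 0.5}[action]

def decide(actions: list[str]) -> str:
    """Evaluate every alternative and pick the highest-valued one."""
    return max(actions, key=value)

print(decide(["left", "right", "wait"]))  # always prints "right"
```

The alternatives being evaluated exist only inside the agent's computation, and the determinism of the whole system doesn't make that computation useless to the agent.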

An agent, an entity that acts, cannot say "what will be, will be", because it makes decisions, and because the decisions it makes are a component of the future. If it does not know which decision it will make before it makes it, it will be in a state of subjective uncertainty about the future. Subjective uncertainty and objective determinism are quite compatible.

I think it is possible that you are being misled by fictional evidence. In Arrival, the Heptapods' knowledge of the future is a straightforward extension of a fixed future, but everything we know indicates considerable barriers between determinism and foreknowledge.

One might think eliminativism is metaphysically simpler, but reductionism doesn't really posit more stuff; it's more like allowing synonyms for various combinations of the same stuff.

I don't think Occam's razor is the main justification for eliminativism. Instead, consider the allegory of the wiggin: if a category is not natural, useful, or predictive, then in common English we say that the category "isn't real".

TAG:

A category made up of (1) the Statue of Liberty, (2) the current Pope, and (3) my toothbrush, for all its insane bagginess and poor fit to reality, is made up of things which themselves exist. So it's much too hasty to conclude lack of reality from poor fit.

Yes, I do think consciousness is such a category. The OP mentions, under the heading of consciousness, issues of what I would call personal identity and qualia. I can't think of any reason why having the one would grant you the other.