1

Let us consider a world where it is well known that light is a wave, characterized by wavelength. White light is known to be a mix of many different waves, as shown by a prism separating sunlight into a rainbow. Physicists measure and catalog the wavelengths of different colors, such as red (620–750 nm), green (495–570 nm), and yellow (570–590 nm).

Across the street from the physicists is a painters' guild. They also deal a lot in colors, and they know well that mixing red and green yields yellow. They even demonstrated it to the physicists with a system of two prisms, baffles, and mirrors. And yet a theory explaining how mixing two pure waves, one of 700 nm and one of 500 nm, results in a wave of 580 nm remains elusive. Many different non-linear effects were hypothesized, but none was confirmed.

2

To us, this mystery is of course completely transparent. We now know that the sensation of color is determined not by the wavelength itself, but by the ratio of activation of three different types of photoreceptor cone cells in the retina [0]. Each of those cell types is sensitive to a large, overlapping swath of wavelengths, so a pure yellow and a combination of pure red and green just happen to produce identical activation patterns.
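Here is a minimal sketch of that effect, assuming made-up Gaussian sensitivity curves: the cone peak wavelengths below are roughly realistic, but the widths, the Gaussian shape, and the particular red and green wavelengths are all invented for the toy model.

```python
import numpy as np

# Toy Gaussian approximations of the three cone sensitivity curves.
# Peak wavelengths are roughly realistic; the widths (and the Gaussian
# shape itself) are invented for illustration.
PEAKS  = {"S": 445.0, "M": 535.0, "L": 565.0}  # nm
WIDTHS = {"S": 30.0,  "M": 45.0,  "L": 50.0}   # nm

def cone_response(cone, wavelength_nm):
    """Relative activation of one cone type by monochromatic light."""
    return np.exp(-((wavelength_nm - PEAKS[cone]) / WIDTHS[cone]) ** 2)

def activations(spectrum):
    """Cone activations for a spectrum given as {wavelength_nm: intensity}."""
    return {c: sum(i * cone_response(c, wl) for wl, i in spectrum.items())
            for c in PEAKS}

# Pure "yellow" light at 580 nm.
yellow = activations({580.0: 1.0})

# Solve a 2x2 linear system for the intensities of red (650 nm) and
# green (530 nm) light that reproduce the yellow's L and M activations.
A = np.array([[cone_response("L", 650.0), cone_response("L", 530.0)],
              [cone_response("M", 650.0), cone_response("M", 530.0)]])
b = np.array([yellow["L"], yellow["M"]])
red_i, green_i = np.linalg.solve(A, b)
mix = activations({650.0: red_i, 530.0: green_i})

for c in ("L", "M", "S"):
    print(f"{c}: pure 580 nm = {yellow[c]:.4f}, red+green mix = {mix[c]:.4f}")
```

The L and M activations match by construction, and the S channel is effectively zero for both spectra, so in this toy model the eye has no way to tell the two lights apart.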

To reach this understanding, we had to look away from light itself and into the mechanism we used to observe it: our eyes and brains.

3

It seems to me that something similar happens when we attempt to explain consciousness. We have some strong intuitions about what is and what isn't conscious, and there are attempts to model those intuitions [1] [2]. I feel that there can't be a theory of consciousness without a good look at the mechanism that generates those intuitions: our brain.

4

Humans evolved as social organisms, and the ability to function in a group was critical for survival. An "agent detector" that can distinguish a fellow human from inanimate objects and animals is an extremely useful adaptation. (This detector can further usefully distinguish friend from foe, a.k.a. ingroup/outgroup, but that is beyond the scope of this text.) The intuition that someone or something is conscious is merely this agent detector being activated in our brain. And the study of consciousness (like the study of color) should mostly produce answers about our detectors rather than about the conscious objects themselves.

In a world with a wide variety of agents, calling someone "conscious", just like calling someone/something "attractive" or "tasty", reveals as much about yourself as about what you describe.

References

[0] https://en.wikipedia.org/wiki/Cone_cell

[1] https://www.lesswrong.com/posts/wqDRRx9RqwKLzWt7R/nonperson-predicates

[2] https://en.wikipedia.org/wiki/Integrated_information_theory

Comments

Welcome to LW! First I was like "hmm, new user, first post is about consciousness, expectations low" but this is actually pretty good.

Not sure our detector of other-consciousness is very informative. People got into relationships with frickin ELIZA. But examining our detector of self-consciousness is definitely the right way.

Thanks! Delurking after a long time to try and give something useful back to the community.

I would count this success of ELIZA as evidence that this detector is quite a simple heuristic. Not sure if that is what you meant by "not informative".

Actually, this simplicity, together with the complexity of the prediction modules the detector activates (as per Said Achmiz's comment below), could be contributing to the confusion surrounding it.

I think it goes like this.

Intuition 1: I know I'm conscious

Intuition 2: This thing seems to behave like me

Conclusion: This thing is probably conscious

You're right that intuition 2 could come from a pretty simple detector. But that doesn't say much about where intuition 1 comes from.

I think intuition 1 would be correct by definition, since the colloquial meaning of the word "conscious" is close to "having the same introspective experience as me", with "experience" roughly being "information reaching the story-building module of the brain".

Possibly related to this is Dennett’s notion of the intentional stance. The idea is that when we describe something as a goal-directed agent, what that cashes out to is the fact that it is—for us!—computationally/cognitively less expensive to model that entity/object/phenomenon/something as an agent, than it is to model it as an artifact designed for some purpose (the design stance) or simply in terms of physical laws (the physical stance). In other words, modeling that entity as an agent lets us make better predictions about the entity’s behavior than the other two ways of modeling. As in the OP, this is at least partly a fact about us and not about the entity in question (because the reason it’s inexpensive for us to model agents is that we can make efficient use of our mental modules which evolved to do just that).

(Also, seconding cousin_it’s comment.)

Yes, using the right tools for explaining and predicting is what recognizing a fellow intelligence boils down to. Other than that, humans have no more "inherent intelligence" than other natural phenomena. That's why something like Integrated Information Theory has no chance of success.

The mystery of consciousness isn't how it can feel, but rather why something that can feel exists at all. It's not mere emergence; it's radical emergence. We already have models of how the brain works, but there is no hint as to how an arrangement of physical matter produces subjective experience.