Minimum computation and data requirements for consciousness.

by daedalus2u · 23rd Aug 2010 · 83 comments

Consciousness is a difficult question because it is poorly defined and because it is the subjective experience of the entity experiencing it. Because an individual experiences their own consciousness directly, that experience is always richer and more compelling than the perception of consciousness in any other entity; your own consciousness always seems more “real” and richer than the would-be consciousness of another entity.

Because the experience of consciousness is subjective, we can never “know for sure” that an entity is actually experiencing consciousness. However, there are certain computational functions that must be accomplished for consciousness to be experienced. I am not attempting to discuss all of the necessary computational functions here; this is just a first step at enumerating some of them and considering the implications.

First, an entity must have a “self detector”: a pattern-recognition computational structure which it uses to recognize its own state of being an entity, and of being the same entity over time. If an entity is unable to recognize itself as an entity, then it can't be conscious that it is an entity. To rephrase Descartes, "I perceive myself to be an entity, therefore I am an entity." It is possible to be an entity and not perceive that one is an entity; this happens in humans, but rarely. Other computational structures may be necessary as well, but without the ability to recognize itself as an entity, an entity cannot be conscious.

All pattern recognition is inherently subject to errors, of type 1 (false positives) or type 2 (false negatives). In pattern recognition there is an inherent trade-off between false positives and false negatives. Reducing both simultaneously is much more difficult than trading off one for the other.
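The trade-off can be illustrated with a minimal sketch of a threshold-based detector. All scores, labels, and thresholds here are hypothetical, chosen only to show how tightening the threshold trades false positives for false negatives.

```python
# Toy illustration of the false-positive / false-negative trade-off
# in a threshold-based pattern detector. All numbers are hypothetical.

def classify(scores, threshold):
    """Flag every signal whose score exceeds the threshold as 'entity detected'."""
    return [s > threshold for s in scores]

def error_counts(scores, labels, threshold):
    """Count (false positives, false negatives) at a given threshold."""
    predictions = classify(scores, threshold)
    false_pos = sum(p and not l for p, l in zip(predictions, labels))
    false_neg = sum(l and not p for p, l in zip(predictions, labels))
    return false_pos, false_neg

# Hypothetical detector scores and ground truth (True = real entity).
scores = [0.1, 0.4, 0.45, 0.6, 0.7, 0.9]
labels = [False, False, True, False, True, True]

# A strict threshold misses a real entity (false negative);
# a lax threshold raises false alarms (false positives).
print(error_counts(scores, labels, 0.65))  # → (0, 1)
print(error_counts(scores, labels, 0.3))   # → (2, 0)
```

Moving the threshold slides errors from one column to the other; shrinking both columns at once requires a better detector, not a better threshold.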

I suspect that detection of external entities evolved before the detection of internal entities: for predator avoidance, prey capture, offspring recognition and mate detection. Social organisms can also recognize other members of their group. The objectives in recognizing other entities vary, and so the fidelity of recognition required varies as well. Entity detection is followed by entity identification and sorting of the entity into classes so as to determine how to interact with it: opposite-gender mate, reproduce; offspring, feed and care for; predator, run away.

Humans have entity detectors, and those detectors exhibit both false positives (detecting an entity (spirit) in trees, rocks, rivers and other inanimate objects; pareidolia) and false negatives (not recognizing a particular ethnic group as fully human). Evolution tends to bias detection toward false positives: a false alarm in detecting a predator is a lot better than failing to detect a predator.

How the human entity detector works is not fully understood; I have a hypothesis, which I outline in my blog post on xenophobia.

I suggest that when two humans meet, they unconsciously do the equivalent of a Turing Test, the objective being to determine whether the other entity is “human enough”. Essentially they try to communicate, and if the error rate is too high (due to non-consilience of communication protocols), then xenophobia is triggered via the uncanny valley effect. In the context of this discussion, the entity detector defaults to non-human (or non-green-beard; see below).

I think the uncanny valley effect is an artifact of the human entity detector (which evolved only to facilitate the survival and reproduction of the humans carrying it). Killing entities that are close enough to be potential competitors and rivals, but not so close that they might be kin, is a good evolutionary strategy; the functional equivalent of the mythic green beard gene (where the green beard gene causes both the expression of a green beard and the compulsion to kill all without a green beard). Because humans evolved a large brain only recently, the “non-green-beard detector” can't be instantiated by neural structures directly and completely specified by genes, but must have a learned component. I think this learned component is the mechanism behind cultural xenophobia and religious bigotry.

Back to consciousness. Consciousness implies the continuity of entity identity over time: the individual that is conscious at time=1 is self-perceived to be “the same” individual that is conscious at time=2. What does this actually mean? Is there a one-to-one correspondence between the two entities? No, there is not. The entity at time=1 will evolve into different entities at time=2 depending on the experience-path the entity takes.

If a snapshot of the entity at time=1 is taken, duplicated into multiple exact copies, and each copy is allowed to have different experiences, then at time=2 there will be multiple different entities derived from the original. Is any one of these entities “more” like the original? No; they are all equally derived from the original and all equally like it. Each of them will have the subjective experience that it is derived from the original (because it is), the subjective experience that it is the original (because as far as each one knows, it is), and the subjective experience that all the others must therefore be impostors.
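The duplication thought experiment can be sketched as a toy model in which an entity is identified with its accumulated experience history. The class and event names are hypothetical; the point is only that each diverged copy passes its own continuity check equally well.

```python
import copy

# Toy model of the duplication thought experiment: an "entity" is just
# its accumulated experience history. All names here are hypothetical.

class Entity:
    def __init__(self, history):
        self.history = list(history)

    def experience(self, event):
        self.history.append(event)

    def believes_it_is(self, snapshot):
        # Each copy checks itself against the snapshot: its history
        # *begins with* the original's history, so every copy
        # concludes "I am the original."
        return self.history[:len(snapshot.history)] == snapshot.history

original = Entity(["born", "learned to read"])
snapshot = copy.deepcopy(original)      # the time=1 snapshot

copy_a = copy.deepcopy(snapshot)        # duplicate into exact copies
copy_b = copy.deepcopy(snapshot)
copy_a.experience("visited Mars")       # different experience-paths
copy_b.experience("stayed home")

# Both diverged copies pass the continuity check equally well...
print(copy_a.believes_it_is(snapshot))   # True
print(copy_b.believes_it_is(snapshot))   # True
# ...yet they are now distinct entities.
print(copy_a.history == copy_b.history)  # False
```

Neither copy has any internal evidence privileging it over the other; "being the original" is not a property the check can distinguish.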

This seeming paradox arises because of the limited resolution of the human entity detector. Can that detector distinguish between extremely similar versions of entities? To do pattern recognition, one must have a pattern for comparison. The more exacting the comparison, the more exacting the reference pattern must be. In the limit, a perfect comparison requires an exact and complete representation; it takes a complete, 100%-fidelity emulation of the entity (so as to be able to do a one-to-one comparison). I think this relates to what Nietzsche was talking about when he said:

“Whoever battles with monsters had better see that it does not turn him into a monster. And if you gaze long into an abyss, the abyss will gaze back into you.”

To be able to perceive anything, one must have the data for the pattern recognition necessary to detect whatever is being perceived. If the pattern-recognition computational structures are unable to identify something, that something cannot be perceived. To perceive the abyss, you must have a mapping of the abyss inside of you. Because humans have self-modifying pattern-recognition structures, those structures self-modify to become better at detecting whatever is being observed. As you stare into the abyss, your brain becomes more abyss-like to optimize abyss detection.

With the understanding that to detect an entity one must have pattern recognition that can recognize that entity, the appearance of continuity of consciousness is seen to be an artifact of human pattern recognition. Human entity detection necessarily compares the observed entity with a reference entity. When that reference entity is the self, there is always a one-to-one correspondence between what is observed to be the self and the reference entity (which is the self), so the observed entity is always identified as the self. It is not that there is actual continuity of a self-entity over time; rather, there is the illusion of continuity because the reference is changing exactly as the entity is changing.

There are some rare cases where people feel “not themselves” (depersonalization disorder): they think they are a substitute, or dead, or somehow not the actual person they once were. This dissociation sometimes happens due to extreme traumatic stress, and there is some thought that it is protective: dissociate the self so that the self is not there to experience the trauma and be irreparably injured by it (this may not be correct). This may also be what happens during anesthesia.
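The self-referential comparison above can be sketched in a few lines. This is a toy model under the assumption that the reference pattern updates in lockstep with the self; the class and state names are hypothetical.

```python
# Toy model of self-detection with a self-updating reference pattern.
# Because the reference *is* the current self, the comparison always
# succeeds, producing the appearance of continuity even as the entity
# changes arbitrarily. All names are hypothetical.

class SelfDetector:
    def __init__(self, state):
        self.state = state          # the entity's current state
        self.reference = state      # the pattern used to recognize "me"

    def change(self, new_state):
        self.state = new_state
        self.reference = new_state  # the reference changes exactly as the self does

    def recognizes_self(self):
        return self.state == self.reference

d = SelfDetector("state at t=1")
print(d.recognizes_self())  # True

d.change("state at t=2")    # an arbitrarily large change...
print(d.recognizes_self())  # True: the self-check still passes
```

A detector whose reference was frozen at t=1 would eventually report a mismatch; it is only the lockstep update that makes the check trivially succeed, which is the illusion of continuity in this model.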

I think this solves the “problem” of uploading: how can the uploaded entity be identical to the non-uploaded entity? The actual answer is that it can't be, but that doesn't matter. If the uploaded entity “feels” or “believes” it is the same entity as the non-uploaded entity, then as far as the uploaded entity is concerned, it is. I appreciate that this may not be a satisfactory solution to anyone but the uploaded entity, because it implies that continuity of consciousness is merely an illusion, essentially a hallucination caused by a defect in entity-detection pattern recognition. The data from the non-uploaded entity doesn't really matter; all that matters is whether the uploaded entity “feels” it is the same.

I think that in a very real sense, those who seek personal immortality via cryonics or via uploading are pursuing an illusion: the perpetuation of the illusion of self-entity continuity. It is the same illusion those who believe in an immortal soul are pursuing, and the same illusion the ancient Egyptians pursued via mummification. If the entity is to be self-identical in perpetuity, then it cannot change. If it cannot change, then it cannot have new experiences. If it has new experiences and changes, then it is not the same entity that it was before those changes.

In terms of an AI: the AI can only be conscious if it has an entity detector that detects itself and uses itself as the pattern for that detection. It can only be conscious of aspects of itself that its entity detector has access to. For example, humans are not conscious of the data processing that goes on in the visual cortex. Why? Because the human entity detector does not attempt to map that conceptual space. If the AI's entity detector doesn't map part of its own computational equipment, then the AI won't be conscious of that part of its own data processing either.
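The claim that consciousness extends only as far as the self-model's coverage can be sketched as follows. The module names are entirely hypothetical; the sketch only shows that an introspective report is limited to whatever subset of the machinery the self-model actually maps.

```python
# Toy sketch: an agent's introspection covers only the subsystems its
# self-model (entity detector) actually maps. All names are hypothetical.

processing_modules = {
    "planner":       "deliberative search over actions",
    "language":      "sequence prediction",
    "visual_cortex": "early feature extraction",  # unmapped, as in humans
}

# The self-model maps only a subset of the processing machinery.
self_model = {"planner", "language"}

def introspective_report():
    """Report only the processing the self-model has access to."""
    return {name: desc for name, desc in processing_modules.items()
            if name in self_model}

print(sorted(introspective_report()))  # ['language', 'planner']
# The visual cortex does real work, but never appears in the report:
print("visual_cortex" in introspective_report())  # False
```

On this picture, extending what the agent is "conscious of" means extending the self-model's coverage, not adding processing power.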

A recipe for Friendly AI might be to program the AI to use the coherent extrapolated volition (CEV) of a select group of humans as its reference entity for entity detection. In effect, that may be what some cultures are already trying to accomplish through ancestor and hero worship: attempting to mold future generations by holding up ideals as examples. That may be analogous to what EY was getting at in discussing what he wants to protect. If the AI were given a compulsion to become ever more like the CEV of its reference entity, there would be limits to how much it could change.

That might be a better use for the data that some humans want to upload in trying to achieve personal immortality (they can't achieve personal immortality, because the continuity of entity identity is an illusion). If their coherent extrapolated volition could be “captured”, combined, and then used as the reference entity for the AI, it might be a good idea; the AI would then be no worse (and no better) than the sum of those individuals. Of course, how to select those individuals is the tricky part. Anyone who wants to be selected is probably not suitable; we do not want individuals who seek to acquire power by clawing their way to the top of the social power hierarchy by forcing others down (in a zero-sum manner). The mind-set I think is most appropriate is that of a parent nurturing their child: not to live vicariously through the child, but for the child's benefit. A certain turn-over per generation would keep the AI connected to present humanity while still allowing for change.

I think allowing “wild-type” AI (individuals who upload themselves) is probably too dangerous, and is really just a monument to their egotistical fantasy of entity continuity; just like the Pyramids, but a pyramid that could change into something fatally destructive (uFAI).

There are some animals that “think” and act like they are people, such as dogs that have been completely acclimated to humans. What has happened is that the dog is using a “human-like” self-representation as the reference for its entity pattern recognition, but because of its limited cognitive capacities it doesn't recognize that the humans it observes are different from itself. An AI could likewise be designed to think it was human (once we knew how to design any AI at all, designing it to think it was human would be easy).

Humans can do this too (emulate another entity such that they think they are that entity); I think that is in essence what Stockholm syndrome causes. Under severe trauma, following dissociation and depersonalization, the self reforms, but in a pattern that matches, identifies with, and bonds to the perpetrator of the trauma. The traumatized person has attempted to emulate the “green-beard persona” to avoid the death and abuse being perpetrated upon them by the person with the “green beard”.

This may be the solution to Fermi's Paradox: there may be no galaxy-spanning AIs because by the time civilizations can accomplish such things, they realize that continuity of entity identity is an illusion and have grown beyond wanting to spend effort on illusions.
