algekalipso

Comments

Neural Annealing: Toward a Neural Theory of Everything (crosspost)

The histogram of CSHW amplitudes seems to have very little information content, while the entire matrix of just-noticeable-differences of our experience seems to have a whole lot of information. If CSHWs are so important to determine a "brain state", where is all the missing information?

Two points here. First, according to the theory, as Mike points out, the overall "mood" of the state is largely encoded in the low-frequency harmonics, while the higher-frequency ones are more important for semantic information. In a sense, you can think of the lower-frequency harmonics as creating a set of buckets in which to put, juggle, and recombine the information provided by the higher-frequency harmonics. Hence, while the specific information content of the experience might require a very fine level of resolution, both the valence and the broad information-processing steps might not. And second, there is more to the CSHWs than just the histogram of amplitudes. There is also a matrix of phase-locking relations between them, which increases the overall information content by a large amount.
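To make that comparison concrete, here is a minimal numerical sketch (toy signals and made-up parameters, not QRI's actual CSHW pipeline): the amplitude profile of N harmonic modes contributes N numbers, whereas the phase-locking matrix adds on the order of N² pairwise relations.

```python
# Toy comparison of the information carried by the CSHW amplitude profile
# versus the pairwise phase-locking matrix. The signals below are synthetic
# sinusoids with phase drift; nothing here is QRI's actual analysis pipeline.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n_modes, fs = 8, 250                     # 8 harmonic modes, 250 Hz sampling (made up)
t = np.arange(0, 10, 1 / fs)             # 10 seconds of toy data

freqs = np.linspace(2, 40, n_modes)      # nominal frequency of each mode
amps = rng.uniform(0.5, 2.0, n_modes)    # nominal amplitude of each mode
drift = np.cumsum(rng.normal(0, 0.05, (n_modes, t.size)), axis=1)
modes = amps[:, None] * np.cos(2 * np.pi * freqs[:, None] * t + drift)

# (1) The amplitude "histogram": one number per mode.
amplitude_profile = np.abs(hilbert(modes, axis=1)).mean(axis=1)

# (2) The phase-locking matrix: one number per *pair* of modes.
phases = np.angle(hilbert(modes, axis=1))
plv = np.abs(np.exp(1j * (phases[:, None, :] - phases[None, :, :])).mean(axis=-1))

print("amplitude profile entries:", amplitude_profile.size)         # n_modes
print("unique phase-locking pairs:", n_modes * (n_modes - 1) // 2)  # n_modes choose 2
```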

Neural Annealing: Toward a Neural Theory of Everything (crosspost)

I'd mention that Steven Lehar foreshadowed the paradigm in his Directional Harmonic Theory of neurocomputation. I recommend reading his book "The Grand Illusion" for abundant phenomenological data in favor of this flavor of neurocomputation.

Subagents, introspective awareness, and blending

Definitely. I'll probably be quoting some of your text in articles on Qualia Computing soon, in order to broaden the bridge between LessWrong-consumable media and consciousness research.

Of all the articles linked, perhaps the best place to start would be the Pseudo-time Arrow. Very curious to hear your thoughts about it.

Subagents, introspective awareness, and blending

Sure! It is "invariance under an active transformation". The more energy is trapped in phenomenal spaces that are invariant under active transformations, the more blissful the state seems to be (see "analysis" section of this article).

Subagents, introspective awareness, and blending

Really great post!

Andrés (Qualia Computing) here. Let me briefly connect your article with some work that QRI has done.

First, we take seriously the view of a "moment of experience" and study the contents of such entities. In Empty Individualism, every observer is a "moment of experience" and there is no continuity from one moment to the next; the illusion of continuity is caused by the recursive and referential way the content of experience is constructed in brains. We also certainly agree that you can be aware of something without being aware of being aware of it. As I will get to, this is an essential ingredient in the way subjective time is constructed.

The concept of blending is related to our concept of "The Tyranny of the Intentional Object". Indeed, some people are far more prone to mistaking logical or emotional thoughts for revealed truth; introspective ability (which can be cashed out as the rate at which awareness of having just been aware arises) varies between people and is trainable to an extent. People who are systematizers can develop logical ontologies of the world that feel inherently true, just as empathizers can experience a made-up world of interpersonal references as revealed truth. You could describe this difference in terms of whether blending happens more frequently with logical or emotional structures. But empathizers and systematizers (and people high on both traits!) can, in addition, be highly introspective, meaning that they recognize those sensations as aspects of their own mind.

The fact that each moment of experience can incorporate informational traces of previous ones allows the brain to construct moments of experience with all kinds of interesting structures. Of particular note is what happens when you take a psychedelic drug. The "rate of qualia decay" lowers due to a generalization of what in visual phenomenology is called "tracers". The disruption of inhibitory control signals from the cortex leads to the cyclical activation of the thalamus* and thus the "re-living" of previous contents of experience in high-frequency repeating patterns (see the "tracers" section of this article). On psychedelics, each moment of experience is "bigger".

You can formalize this by representing each moment of experience as a connected network, where each node is a quale and each edge is a local binding relationship of some sort (whether one is blending or not may depend on the local topology of the network). In the structure of the network you can encode the information pertaining to many constructed subagents; phenomenal objects that feel like "distinct objects/realities/channels" would be explained in terms of clusters of nodes in the network (e.g. subsets of nodes such that the clustering coefficient within them is much larger than the average clustering coefficient of other subsets of nodes of similar size). As an aside, dissociatives in particular drastically change the size of clusters, which is phenomenally experienced as "being aware of more than one reality at once".
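As a toy illustration of that cluster criterion (the graph, names, and numbers below are made up solely to make the clustering-coefficient idea concrete, not to model any real experience):

```python
# Toy network where nodes stand in for qualia and edges for local binding
# relations. Two densely wired subsets play the role of distinct phenomenal
# "objects"; their internal clustering coefficient is much higher than that
# of a random subset of the same size. Everything here is invented data.
import random
import networkx as nx

random.seed(0)

object_a = [f"a{i}" for i in range(6)]
object_b = [f"b{i}" for i in range(6)]

# Densely bound cluster A.
G = nx.relabel_nodes(nx.gnp_random_graph(6, 0.9, seed=1), dict(enumerate(object_a)))
# Densely bound cluster B, merged into the same experience-network.
H = nx.relabel_nodes(nx.gnp_random_graph(6, 0.9, seed=2), dict(enumerate(object_b)))
G.add_edges_from(H.edges)

# A diffuse, weakly bound background.
background = [f"bg{i}" for i in range(12)]
G.add_nodes_from(background)
for _ in range(15):
    G.add_edge(random.choice(object_a + object_b), random.choice(background))

def internal_clustering(nodes):
    """Average clustering coefficient using only edges inside the subset."""
    return nx.average_clustering(G.subgraph(nodes))

print("object A:", round(internal_clustering(object_a), 2))
print("object B:", round(internal_clustering(object_b), 2))
print("random subset:", round(internal_clustering(random.sample(list(G.nodes), 6)), 2))
```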

You can encode time-structure into the network by looking at the implicit causality of the network, which gives rise to what we call a pseudo-time arrow. This model can account for all of the bizarre and seemingly unphysical experiences of time people report on psychedelics. The linked article explains in detail how, e.g., thought-loops, moments of eternity, and time branching can be expressed in the network, and how they emerge recursively from calls to previous clusters of sensations (as information traces).
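A crude way to picture the encoding, purely as a toy rendering of the idea rather than the model in the linked article: treat "carries an information trace of" as a directed edge, and read the time-structure off the shape of the resulting graph.

```python
# Schematic reading of the pseudo-time arrow: a directed edge u -> v means
# "v carries an information trace of u". A chain reads as ordinary temporal
# flow, a cycle as a thought-loop, and multiple outgoing edges as branching
# time. This is only a toy encoding of the idea.
import networkx as nx

chain     = nx.DiGraph([(0, 1), (1, 2), (2, 3)])          # ordinary succession
loop      = nx.DiGraph([(0, 1), (1, 2), (2, 0)])          # thought-loop
branching = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 4)])  # time branching

def describe(g):
    if not nx.is_directed_acyclic_graph(g):
        return "contains a cycle (thought-loop-like time)"
    if max(dict(g.out_degree()).values()) > 1:
        return "branches (time experienced as splitting)"
    return "a single chain (ordinary pseudo-time arrow)"

for name, g in [("chain", chain), ("loop", loop), ("branching", branching)]:
    print(name, "->", describe(g))
```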

Even more strange, perhaps, is the fact that a slow rate of qualia decay can give rise to unusual geometry. In particular, if you saturate the recursive calls and bind together a network with a very high branching factor, you get a hyperbolic space (cf. The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes).
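A back-of-the-envelope way to see why a high branching factor points toward hyperbolic geometry (this is just the standard counting argument, not anything specific to the DMT article): the number of nodes within r steps of the root of a b-ary tree grows exponentially with r, like the area of a hyperbolic disk, while a Euclidean disk only grows polynomially.

```python
# Standard counting argument: within r steps of the root, a b-ary tree holds
# ~b^r nodes (exponential, like hyperbolic disk area ~e^r), while a Euclidean
# disk of radius r only holds ~r^2 worth of area.
import math

def tree_nodes_within(branching, radius):
    return sum(branching ** k for k in range(radius + 1))

for r in range(1, 7):
    print(f"r={r}: 3-ary tree={tree_nodes_within(3, r):>4}  "
          f"hyperbolic ~{math.exp(r):8.1f}  euclidean ~{math.pi * r ** 2:6.1f}")
```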

That said, perhaps the most important aspect of the investigation has been to encounter a deep connection between felt sense of wellbeing (i.e. "emotional valence") and the structure of the network. From your article:

For instance, you might notice sensations in your body that were associated with the emotion, and let your mind generate a mental image of what the physical form of those sensations might look like. Then this set of emotions, thoughts, sensations, and visual images becomes “packaged together” in your mind, unambiguously designating it as a mental object.

The claim we would make is that the very way in which this packaging happens gives rise to pleasant or unpleasant mental objects, and that this is determined by the structure (rather than the "semantic content") of the experience. Evolution made it such that thoughts that refer to things that are good for the inclusive fitness of our genes get packaged in more symmetrical, harmonious ways.

The above is, however, just a partial explanation. In order to grasp the valence effects of meditation and psychedelics, it will be important to take into account a number of additional paradigms of neuroscience. I recommend Mike Johnson's articles: A Future for Neuroscience and The Neuroscience of Meditation. The topic is too broad and complex for me to cover here right now, but I would advance the claim that (1) when you "harmonize" the introspective calls of previously-experienced qualia you increase the valence, and (2) the process can lead to "annealing" where the internal structure of the moments of experience is highly symmetrical, and, for reasons we currently don't understand, this appears to co-occur in a 1-1 fashion with high valence.
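To make the "structure rather than semantic content" point concrete, here is one toy structural proxy for symmetry: counting the automorphisms of a small binding network. This is only meant to illustrate the kind of quantity being gestured at, not QRI's actual valence measure.

```python
# Toy structural proxy: count graph automorphisms of a small binding network.
# A 6-cycle (highly symmetrical) has 12 automorphisms; the same six nodes
# wired irregularly have only 2. Illustrative only; not a real valence metric.
from networkx import Graph, cycle_graph
from networkx.algorithms.isomorphism import GraphMatcher

def automorphism_count(g):
    return sum(1 for _ in GraphMatcher(g, g).isomorphisms_iter())

ring = cycle_graph(6)
irregular = Graph([(0, 1), (0, 2), (0, 3), (3, 4), (4, 5)])

print("ring:", automorphism_count(ring))            # 12
print("irregular:", automorphism_count(irregular))  # 2
```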

I look forward to seeing more of your thoughts on meditation (and hopefully psychedelics, too, if you have personal experience with them).

*The specific brain regions mentioned are a likely mechanism of action but may turn out to be wrong upon learning further empirical facts. The general algorithmic structure of psychedelic effects, though, where every sensation "feels like it lasts longer", will have the same downstream implications for the construction of the structure of moments of experience either way.

State your physical account of experienced color

I have seen this argument before, and I must confess that I am very puzzled about the kind of mistake that is going on here. I might call it naïve functionalist realism, or something like that. In "standard" naïve realism, people find it hard to dissociate their experiences from an existing mind-independent world, and so they go on to interpret everything as "seeing the world directly, nothing else, nothing more." Naïve realists will interpret their experiences as direct, unmediated impressions of the real world.

Of course this is a problematic view, and there are killer arguments against it. For instance, hallucinations. However, naïve realists can still come back and say that you are talking about cases of "misapprehension", where you don't really perceive the world directly anymore, and that this does not mean you "weren't perceiving the world directly before." But here the naïve realist has simply not integrated the argument in a rational way. If you need to explain hallucinations as "failed representations of true objects", you no longer need to restate, in addition, one's previous belief in "perceiving the world directly." Now you end up having two ontologies instead of one: inner representations and also direct perception. And yet you only need one: inner representations.

Analogously, I would describe your argument as naïve functionalist realism. Here you first see a certain function associated with an experience, and you decide to skip the experience altogether and simply focus on the function. In itself, this is reasonable, since the data can be accounted for with no problem. But when I mention LSD and dreams, suddenly that becomes part of another category, like a "bug" in one's mind. So here you have two ontologies, where you can certainly explain it all with just one.

Namely, green is a particular quale, which gets triggered under particular circumstances. Green does not refer to the wavelength of light that triggers it, since you can experience it without such light. To instead postulate that these cases are in fact just "bugs" of the original function, but that the original function is in and of itself what green is, simply adds another ontology on top of the one that, taken on its own, can already account for the phenomena.

State your physical account of experienced color

With the aid of qualia computing and a quantum computer, perhaps ;-)

State your physical account of experienced color

Both you and prase seem to be missing the point. The experience of green has nothing to do with wavelengths of light. Wavelengths of light are completely incidental to the experience. Why? Because you can experience the qualia of green thanks to synesthesia. Likewise, if you take LSD at a sufficient dose, you will experience a lot of colors that are unrelated to the particular input your senses are receiving. Finally, you can also experience such color in a dream. I did that last night.

The experience of green is not the result of information-processing that works to discriminate between wavelengths of light. Instead, the experience of green was recruited by natural selection to be part of an information-processing system that discriminates between wavelengths of light. If it had been more convenient, less energetically costly, more easily accessible in the neighborhood of exploration, etc., evolution would have recruited entirely different qualia to achieve the exact same information-processing tasks color currently takes part in.

In other words, stating what stimuli trigger the phenomenology is not going to help at all in elucidating the very nature of color qualia. For all we know, other people may experience feelings of heat and cold instead of colors (locally bound to objects in their 2.5D visual field), and still behave reasonably well as judged by outside observers.

State your physical account of experienced color

Quantum mechanics by itself is not an answer. A ray in a Hilbert space looks less like the world than does a scattering of particles in a three-dimensional space. At least the latter still has forms with size and shape. The significance of quantum mechanics is that conscious experiences are complex wholes, and so are entangled states. So a quantum ontology in which reality consists of an evolving network of states drawn from Hilbert spaces of very different dimensionalities, has the potential to be describing conscious states with very high-dimensional tensor factors, and an ambient neural environment of small, decohered quantum systems (e.g. most biomolecules) with a large number of small-dimensional tensor factors. Rather than seeing large tensor factors as an entanglement of many particles, we would see "particles" as what you get when a tensor factor shrinks to its smallest form.

[...]

Once this is done, the way you state the laws of motion might change. Instead of saying 'tensor factor T with neighbors T0...Tn has probability p of being replaced by Tprime', you would say 'conscious state C, causally adjacent to microphysical objects P0...Pn, has probability p of evolving into conscious state Cprime' - where C and Cprime are described in a "pure-phenomenological" way, by specifying sensory, intentional, reflective, and whatever other ingredients are needed to specify a subjective state exactly.

You are hitting the nail on the head. I don't expect people on LessWrong to understand this for a while, though. There is actually a good reason why the cognitive style of rationalists, at least statistically, is particularly ill-suited for making sense of the properties of subjective experience and how they constrain the range of possible philosophies of mind. The main problem is the axis of variability of "empathizer vs. systematizer." LessWrong is built on a highly systematizing meme-plex that attracts people who have a motivational architecture particularly well suited for problems that require systematizing intelligence.

Unfortunately, recognizing that one's consciousness is ontologically unitary requires a lot of introspection and trusting one's deepest understanding over the conclusions that one's working ontology suggests. Since LessWrongers have been trained to disregard their own intuitions and subjective experience when thinking about the nature of reality, it makes sense that the unity of consciousness will be a blind spot for as long as we don't come up with experiments that can show the causal relevance of such unity. My hope is to find a computational task that consciousness can achieve at a runtime complexity that would be impossible for a classical neural network implemented within the known physical constraints of the brain. However, I'm not very optimistic this will happen any time soon.

The alternative is to lay out specific testable predictions involving the physical implementation of consciousness in the brain. I recommend reading David Pearce's physicalism.com, which outlines an experiment that would convince any rational quantum-mind skeptic that the brain is indeed a quantum computer.

State your physical account of experienced color

I am super late to the party. But I want to say that I agree with you and I find your line of research interesting and exciting. I myself am working in a very similar space.

I own a blog called Qualia Computing. The main idea is that qualia actually plays a causally and computationally relevant role. In particular, it is used in order to solve Constraint Satisfaction Problems with the aid of phenomenal binding. Here is the "about" of the site:

Qualia Computing? In brief, epiphenomenalism cannot be true. Qualia, it turns out, must have a causally relevant role in forward-propelled organisms, for otherwise natural selection would have had no way of recruiting it. I propose that the reason why consciousness was recruited by natural selection is found in the tremendous computational power that it affords to the real-time world simulations it instantiates through the use of the nervous system. Moreover, the specific computational horsepower of consciousness is phenomenal binding: the ontological union of disparate pieces of information by becoming part of a unitary conscious experience that synchronically embeds spatiotemporal structure. While phenomenal binding is regarded as a mere epiphenomenon (or even as a totally unreal non-happening) by some, one needs only look at cases where phenomenal binding (partially) breaks down to see its role in determining animal behavior.

Once we recognize the computational role of consciousness, and the causal network that links it to behavior, a new era will begin. We will (1) characterize the various values of qualia in terms of their computational properties, and (2) systematically explore the state-space of possible conscious experiences.

(1) will enable us to recruit the new qualia varieties we discover thanks to (2) so as to improve the capabilities of our minds. This increased cognitive power will enable us to do (2) more efficiently. This positive-feedback loop is perhaps the most important game-changer in the evolution of consciousness in the cosmos.

We will go from cognitive sciences to actual consciousness engineering. And then, nothing will ever feel the same.

Also, see: qualiacomputing.com/2015/04/19/why-not-computing-qualia/

I'm happy to talk to you. I'd love to see where your research is at.
