In “Freedom under naturalistic dualism” I argued that consciousness is radically noumenal: it is the most real (perhaps the only real) thing in the Universe, but also totally impossible for others to observe (non-phenomenal). In my view this strongly limits our knowledge of sentience, with important consequences for animal welfare, which I will discuss in this post.

We have direct access to our own stream of consciousness, and given our physical similarity to other humans and the existence of language, we can confidently accept the consciousness of other humans and their reports of their mental states.

Under physicalist epiphenomenalism (which is the standard approach to the mind-matter relation), the mind is superimposed on reality, perfectly synchronized with it, and parallel to it. Understanding why some physical systems make an emergent consciousness appear (the so-called “hard problem of consciousness”), or finding a procedure that quantifies the intensity of the consciousness emerging from a physical system (the so-called “pretty hard” problem of consciousness), is impossible: the most Science can do is build a Laplace demon that replicates and predicts reality. But even the Laplacian demon is impotent to assess consciousness; in fact, regarding Artificial Intelligence we are in the position of the Laplace demon: we have the perfectly predictive source code, but we don’t know how to use this full scientific knowledge of the system for consciousness assessment.

In my view, Integrated Information Theory (IIT) is the best theory of consciousness available, because it recognizes that a theory of consciousness can only be the formalization (ideally by mathematical axiomatization) of our prior intuitions. The testing of any theory of consciousness can only be done on a very limited “circle of epistemic trust”: the set of beings so similar to us that we can accept their consciousness as obvious, and that can report to us, so we can compare predictions with pseudo-observations (that is, trustworthy accounts of experience; I call reports on states of consciousness “pseudo-observations” because the only full observations of consciousness are those of one’s own states of consciousness). Beyond humans, our understanding of other minds decays exponentially. We don’t know, and we really cannot know, “What Is It Like to Be a Bat”.

Moral weights depend on the intensity of conscious experience. Surprisingly, moral weight estimates often suggest some degree of conservation of consciousness: when you examine the tables, ten animals with 100-gram brains have roughly the same moral weight as one animal with a 1-kilogram brain. For me this is absurd. The organization of matter into larger and more complex structures is what (likely) creates consciousness. The maximum amount of consciousness you can make with 1.2 kg of biological matter is that of a human brain, by a large margin.

That is, for me it is obvious (remember, “obvious” is the most I can say in a world of noumenal consciousness: no observations are available) that consciousness intensity grows far more than linearly with the number of nodes, connections, and the speed of the underlying neural network: it is strongly super-additive. Any plausible estimate of its intensity should recognize the only real intuition we share: that consciousness is related to complexity, and economies of scale in consciousness are large.
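To make the contrast concrete, here is a toy sketch in Python of the two scaling rules; the functional forms and the exponent are illustrative assumptions only, not estimates of anything:

```python
# Toy comparison of two scaling rules for consciousness intensity.
# ASSUMPTION: the functional forms and the exponent are purely illustrative.

def intensity_linear(brain_mass_kg: float) -> float:
    """'Conservation' view: intensity proportional to brain mass."""
    return brain_mass_kg

def intensity_superadditive(brain_mass_kg: float, alpha: float = 2.0) -> float:
    """Super-additive view: intensity grows faster than linearly (here, quadratically)."""
    return brain_mass_kg ** alpha

# Ten animals with 100 g brains vs. one animal with a 1 kg brain.
print(10 * intensity_linear(0.1), intensity_linear(1.0))            # 1.0 vs 1.0: conserved
print(10 * intensity_superadditive(0.1), intensity_superadditive(1.0))  # 0.1 vs 1.0: large brain dominates
```

Under the linear rule the near-conservation seen in the moral weight tables falls out automatically; under the super-additive rule the single large brain dominates, which is the intuition defended here.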

In my view it is likely that large vertebrates can feel direct physical pain with an intensity commensurate with that of humans, because we both have large and complex brains, and pain and pleasure are very simple functions. I can accept some “saturation” of the super-additivity of sentience regarding pain for large vertebrates. In any case, the deep extension of the moral circle (beyond the large vertebrates) requires a relatively clear measure of brain complexity and some hypotheses about the relation between that measure and sentience intensity.
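The “saturation” idea can be sketched in the same toy style: a hypothetical pain-intensity curve that is super-additive for small brains but flattens once brains are large. Again, the functional form and constants are assumptions for illustration, not estimates:

```python
import math

# Hypothetical pain-intensity curve: super-additive at small brain sizes,
# saturating for large vertebrates. Constants are illustrative only.
def pain_intensity(brain_mass_kg: float, alpha: float = 2.0, cap: float = 1.0) -> float:
    raw = brain_mass_kg ** alpha              # super-additive regime
    return cap * (1 - math.exp(-raw / 0.05))  # flattens toward `cap`

for mass in (0.005, 0.1, 0.4, 1.2):  # from a mouse-sized brain up to a human-sized one
    print(mass, round(pain_intensity(mass), 3))
```

With such a curve, brains above a few hundred grams all sit near the cap, which is one way to cash out “pain commensurate with that of humans” for large vertebrates.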

The easy world of “one man, one vote” that ethicists are used to for very similar (human) beings cannot be extended. Before extending the moral circle, we need at least clear and distinct (i.e., quantitative) hypotheses on how brain complexity and size relate to consciousness.

Unlike John Maynard Keynes, I am totally in favor of being as precisely wrong as possible.

13 comments
TAG

Under physicalist epiphenomenalism (which is the standard approach to the mind-matter relation), the mind is superimposed on reality, perfectly synchronized with it, and parallel to it.

Under dualist epiphenomenalism, that might be true. Physicalism has it either that consciousness is nonexistent rather than causally idle (eliminativism), or identical to physical brain states (and therefore sharing their causal powers).

Understanding why some physical systems make an emergent consciousness appear (the so-called “hard problem of consciousness”), or finding a procedure that quantifies the intensity of the consciousness emerging from a physical system (the so-called “pretty hard” problem of consciousness), is impossible:

You could have given a reason why.

You cannot know more than Laplace’s demon, and the demon cannot assess consciousness. This is analyzed in detail in “Freedom under Naturalistic dualism”.

Ann

"Moral weights depend on intensity of conscient experience." - Just going to note that I've no particular reason to concede this point at the moment, so don't directly consider the next question a question of moral weight; I'd rather disassociate it first:

Is there ... any particular reason to expect intensity of conscious experience to grow 'super-additively', such that a tiny conscious mind experiences 1 intensity unit, but a mind ten times as large experiences (since you reject linear, we'll step up to the exponential) 1024 intensity units? Given our general inability to exist as every mass of brain, what makes this more intuitive than no, marginal, or linear increase in intensity?

Personally, I would be actively surprised to spend time as a lower-brain-mass conscious animal and report that my experiences were (exceptionally) less intense. Why do our intuitions differ on this?

It depends on the type of animal. It might well be that social animals feel pain very differently than non-social animals.

The Anterior Cingulate Cortex plays a key role in the emotional response to pain, part of what makes pain unpleasant.

https://www.perplexity.ai/search/Find-evidence-supporting-_ZlYNrCuSSK5HNQMy4GOkA 

Not all mammals have an Anterior Cingulate Cortex. For birds, there is an analogous structure, Nidopallium Caudolaterale, that has a comparable function but is present primarily in social birds. 

I'm not saying that other animals don't respond to pain, but the processing and the association of pain with social emotions (which non-social animals presumably lack) is missing. 

Ann

That certainly seems distinct from brain mass, though (except that it takes a certain amount to implement in the first place). I'd expect similar variation in felt pain from becoming humans with different neurologies; I know there are many reported variations in perception of felt pain within our species already.

Indeed. Women are known to report higher pain sensitivity than men. It also decreases with age. There are genes that are known to be involved. Anxiety increases pain perception, good health reduces it. It is possible to adapt to pain to some degree. Meditation is said to tune out pain (anecdotal evidence: I can tune out pain from, e.g., small burns).

I mostly agree: while consciousness intensity is the ontological basis of moral weights, there are other relevant layers. On the other hand, consciousness looks to be some function of integrated information and computation in a network.

IIT, for example, suggests some entropic, combinatorial measure that would very likely explode.
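As a rough illustration of that explosion (this is not a computation of IIT's Φ, only a count of the bipartitions such a measure has to consider):

```python
# Number of ways to split an n-node system into two non-empty parts:
# the Stirling number S(n, 2) = 2**(n-1) - 1. IIT-style measures search over
# partitions like these, which is one reason they blow up combinatorially.
def bipartitions(n_nodes: int) -> int:
    return 2 ** (n_nodes - 1) - 1

for n in (10, 20, 50, 100):
    print(n, bipartitions(n))
# 10 -> 511; 20 -> 524,287; 50 -> ~5.6e14; 100 -> ~6.3e29
```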

In any case, we are trapped in our own existence, so intersubjective comparison is both necessary and mostly dependent on intuition.

Because in the limit your intuition is that the experience of an electron is nonexistent. The smaller the brain, the closer to inanimate matter.

Ann

But that's in the limit. A function of electron = 0, ant = 1, cockroach = 4, mouse = 300 fits it just as well as electron = 0, ant = 1, cockroach = 2, mouse = 2^75, as does electron = 0, ant = 100, cockroach = 150, mouse = 200.

What about an iPhone? It looks similar to an ant in terms of complexity; less annoying, too…

Ann

Suppose my intuition is that the 'conscious experience' of 'an iPhone' varies based on what software is running on it. If it could run a thorough emulation of an ant and have its sensory inputs channeled to that emulation, it would be more likely to have conscious experience in a meaningful-to-me way than if nobody bothered (presuming ants do implement at least a trivial conscious experience).

(I guess that there's not necessarily something that it's like to be an iPhone, by default, but the hardware complexity could theoretically support an iAnt, which there is something that it's like to be?)

This is also my intuition: the intensity of experience depends on the integrated information flow of the system, and the nature of the experience depends on the software details.

Then iPhones have a far more limited maximum-intensity experience than ants, and ants’ maximum experience intensity is only a fraction of that of a mouse.

Moral weights depend on the intensity of conscious experience.

Wow, that seems unlikely. It seems to me that moral weights depend on emotional distance from the evaluator. For some, they're able to map intensity of conscious experience to emotional sympathy (up to a point; there are no examples and few people who'll claim that something that thinks faster/deeper than them is vastly more important than them).