Note: This is meant as an accessible introduction to an interesting, novel, and not widely known idea about digital sentience, due primarily to Andrés Gómez Emilsson and David Pearce. See the links at the bottom for the original arguments on which this summary is based; I do my best to portray them as accurately as possible.

Note 2: We use terms like consciousness, sentience, and experience interchangeably. Each refers to intrinsic “what-it's-like”-ness.

Part 1: Reimagining the Problem

tl;dr: The debate about digital sentience is often framed entirely in terms of the algorithms a computer executes, implicitly assuming that consciousness is computational. If we instead assume that consciousness is the intrinsic nature of the physical, then we can rule out systems that cannot create unified, bound experiences as candidates for sentience, without any reference to the algorithms they execute.

Introduction

It seems that people who think about whether modern digital computers can have subjective experience often run into the following two contradictory intuitions:

First, that computers can't be conscious, since they are simply wires and logic gates sending 1s and 0s according to the user's instructions. It feels like there is something about biological brains that is fundamentally different, as demonstrated by how strange it sounds prima facie to say that a computer can experience pain or pleasure.

Second, that consciousness must come from the brain, and the brain is entirely governed by physical laws. Therefore we should be able to specify how the brain processes information and emulate its algorithms on a sufficiently powerful computer. Such a computer would then, in theory, also emulate the brain's conscious experience.

Of course, these are just intuitions, not rigorous arguments, yet both feel quite compelling. Since their conclusions are contradictory, at least one of the arguments must fail. To understand more precisely where, it is useful to formulate and label their assumptions more formally.

The first intuition relies on two assumptions: that computers are simply wires and logic gates (A1), and that wires and logic gates cannot have subjective experience (A2). The second relies on three assumptions: that human consciousness comes from the brain (B1), that the human brain can be entirely described by the laws of physics (B2), and that a computer can simulate how a brain processes information (B3).
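Schematically, writing S for the claim that computers can have subjective experience, the two arguments have the form:

$$A_1 \wedge A_2 \;\Rightarrow\; \neg S, \qquad B_1 \wedge B_2 \wedge B_3 \;\Rightarrow\; S$$

Since S and ¬S cannot both hold, at least one of the arguments must rest on a false assumption or an invalid inference.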

Given that the two conclusions cannot both be true, it seems as if one of these assumptions must be false. That may be the case. Alternatively, with a shift in perspective on how consciousness fits into the physical world, we can argue that every assumption is true, and that the conclusion of the second intuition (that computers can have subjective experience) simply does not follow from them. In particular, we will see how (B3) is insufficient for concluding that the computer experiences consciousness. This opens a pathway to arguing against computer sentience without contradicting any of the seemingly plausible assumptions.

To understand how this works, first we need to go into detail on the shift in perspective—a theory called non-materialist physicalism. 

Non-materialist Physicalism

David Pearce presents the case for non-materialist physicalism from a starting point of two plausible assumptions.

The first assumption is that we only have access to the external world through the subjective lens of consciousness; an objective external world cannot be perceived directly. This entails that the complex world we see and interact with exists for us directly only as contents of consciousness. The important implication of this assumption is that we experience multiple objects within a single unified consciousness, as opposed to experiencing individual “pixels of experience,” an idea we will henceforth refer to as “mind dust.” This binding of objects in consciousness is why consciousness is useful, and thus when we talk about consciousness, we refer to this unified experience.

The second assumption is often called physicalism, the idea that our world is entirely reducible to the laws of physics. There is no non-physical entity with causal power, and nothing is missing from the picture of reality given by physical laws. This assumption is important because it means that consciousness must be able to be described at the level of the mathematical laws of physics, and all truths about consciousness must in principle be deducible from such laws. 

Both of these assumptions seem compelling to most people, although plenty of philosophers disagree with them. Many arguments for and against them can be found elsewhere online. 

From these assumptions, most people then run into the hard problem of consciousness: how does consciousness emerge in physical systems? Yet the existence of this problem relies on one additional, often-unstated assumption: that the nature of the physical is non-experiential. That is, that what the laws of physics describe are fields of insentience.

Non-materialist physicalism drops this assumption in favor of the intrinsic nature argument, which assumes that the laws of physics describe fields of sentience.[1] That is, the intrinsic nature of the physical is experiential. This assumption dissolves the hard problem of consciousness but presents a new problem, often called the phenomenal binding problem: why are we more than “mind dust”? How do we come to experience a unified consciousness containing multiple phenomenal objects?

In the next part of this sequence, we will address how and why biological minds are capable of binding, and whether computers can in theory do it as well. First, we revisit the second argument from the introduction from the perspective of non-materialist physicalism, to see in more detail how it is invalid.

Returning to the Argument

Once we assume that consciousness is the intrinsic nature of the physical, the conclusion that computers can have subjective experience no longer follows from (B1), (B2), and (B3), since that inference implicitly treats consciousness as computational: as the result of the execution of certain algorithms in the brain.

This cannot be the case, since much of what the brain does relies on executing algorithms within our consciousness. When we make conscious decisions, we compare sensory representations of each option that we are able to hold within our consciousness. In this way, conscious experience is the substrate in which the algorithms run, and the information processing that happens within conscious experience cannot itself explain consciousness.

To make this distinction clearer, consider the equivalent argument with the brain replaced by a quantum computer. The second intuition above is like arguing that a quantum computer is quantum because of the algorithms it executes, and so, if we were to run those algorithms on a classical computer, it would become quantum. This, of course, is not the case. Whether a computer is quantum is a matter of how its algorithms are implemented, not of which algorithms it runs, just as consciousness is a substrate in which algorithms are implemented, rather than the result of running certain algorithms.
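To make the analogy concrete, here is a minimal Python sketch (all class and function names are hypothetical, invented purely for illustration): the same algorithm-level description can be executed by different backends, and nothing in that description determines whether the underlying substrate is quantum.

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """An abstract substrate that can run a search over n items."""
    @abstractmethod
    def search(self, oracle, n):
        """Return an index i in [0, n) with oracle(i) == True, or None."""

class ClassicalBackend(Backend):
    def search(self, oracle, n):
        # Brute force: O(n) oracle calls on classical hardware.
        for i in range(n):
            if oracle(i):
                return i
        return None

class SimulatedQuantumBackend(Backend):
    def search(self, oracle, n):
        # Even a backend that stepped through Grover's algorithm in
        # simulation would remain classical: "being quantum" is a property
        # of the physical implementation, not of the algorithm description
        # being executed. Here we simply fall back to brute-force search.
        return ClassicalBackend().search(oracle, n)

# The same algorithm-level call works on either backend:
assert ClassicalBackend().search(lambda i: i == 7, 16) == 7
assert SimulatedQuantumBackend().search(lambda i: i == 7, 16) == 7
```

The algorithm-level interface is identical in both cases; the property of interest lives one level down, in how the search is physically realized.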

To be clear, this alone does not refute the conclusion of the second intuition, that computers can have conscious experience. Nor does it rule out arguments of that form: we can still make determinations about whether a system has subjective experience by studying the algorithms it executes. For example, it seems clear that a simple lookup table does not have the internal representations required for any sort of consciousness, no matter what its input-output mapping is (see the sketch below). It is just that this functionalist approach is quite difficult to reason about in more generality, especially in the case of a complex neural network. The non-materialist physicalist worldview presents a much more tractable approach to the problem: studying systems at the implementation level and finding necessary conditions for phenomenal binding. Systems that do not meet these conditions can then be ruled out as sentient.
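As an illustration, here is a minimal sketch (a hypothetical toy example, not anyone's proposed test) of two systems with identical input-output behavior: a lookup table that stores every answer directly, and a function that computes the same answers via intermediate internal values. The difference between them lives entirely in their internal structure, which is exactly what an analysis of the input-output mapping alone cannot see.

```python
# System 1: a lookup table for XOR. Every answer is stored directly;
# there are no intermediate internal representations.
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor_lookup(a, b):
    return XOR_TABLE[(a, b)]

# System 2: the same mapping computed via internal intermediate values.
def xor_computed(a, b):
    not_both = 1 - (a & b)    # intermediate value: NAND(a, b)
    either = a | b            # intermediate value: OR(a, b)
    return not_both & either  # XOR as a composition of the two

# Identical input-output behavior, different internal structure:
assert all(xor_lookup(a, b) == xor_computed(a, b)
           for a in (0, 1) for b in (0, 1))
```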

In the next part of the sequence, we will see how and why biological brains create bound experiences, and investigate why digital computers cannot. 


These ideas come from:

https://qualiacomputing.com/2015/04/19/why-not-computing-qualia/

https://qualiacomputing.com/2022/06/19/digital-computers-will-remain-unconscious-until-they-recruit-physical-fields-for-holistic-computing-using-well-defined-topological-boundaries/

https://magnusvinding.com/2021/02/15/conversation-with-david-pearce/

https://www.hedweb.com/quora/index.html

Digital Sentience: Can Digital Computers Ever “Wake Up”?
