I’ve claimed that p-zombies require exponentially more resources than phenomenally conscious agents and used this to justify that AGI will necessarily be phenomenally conscious. I’ve previously supported this claim by pointing to a related result in integrated information theory showing that p-zombies constructed as feed-forward networks are exponentially larger than behaviorally equivalent conscious networks, but now I’d like to see if I can get a similar result in noematology.

NB: This account is provisional and not sufficiently well-founded to constitute a proof given my assumptions. That said, it should be precise enough to be falsifiable if wrong.

First, some notation. Given an agent A, let a behavior be a pair {x, A(x)}, where x is the world state before the agent acts and A(x) is the world state after. For example, if A takes a sip of tea while holding a teacup, then x is “holding a cup of tea” and A(x) is “taking a sip of tea”. If we have a second agent, Z, we may wish to ask whether, given the same initial world state x, Z(x)=A(x), viz. whether Z and A exhibit the same behavior when presented with x. In an obvious sense this question is meaningless: A and Z cannot be said to observe the exact same reality, nor is there any epistemically valid method within phenomenalism by which we may directly compare two world states. But given an observer agent O, we can ask whether A(x)=Z(x) with respect to O’s experience of A(x) and Z(x), and this is what we will mean when we ask if A exhibits the same behavior as Z.

Now let A be a phenomenally conscious agent and Z a p-zombie of A, meaning that Z and A cannot be distinguished based on observed behavior and Z is not phenomenally conscious (although it is, of course, cybernetic). More formally, we are supposing that A and Z agree on the set of behaviors X such that for all {x, y} in X, y=A(x)=Z(x). I then claim that Z is exponentially larger than A in the cardinality of X, where by “larger” I mean physically made of more stuff.
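
To make this notation concrete, here’s a minimal Python sketch of the comparison; the names (agrees_on, observer) and the toy world states are my own illustrative choices, not part of the formalism:

```python
# A minimal sketch of the notation above. World states are just hashable values,
# and agents are functions from a world state x to a world state y.

def agrees_on(A, Z, X, observer):
    """Check whether agents A and Z exhibit the same behavior on every initial
    world state in X, as judged by an observer's experience of the results."""
    return all(observer(A(x)) == observer(Z(x)) for x in X)

# Toy usage: world states are strings, and the observer experiences them directly.
A = lambda x: "taking a sip of tea" if x == "holding a cup of tea" else x
Z = lambda x: "taking a sip of tea" if x == "holding a cup of tea" else x
observer = lambda y: y

print(agrees_on(A, Z, ["holding a cup of tea"], observer))  # True
```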

The short explanation of my reasoning is that Z must do with only the ontic what A does with ontology. Unfortunately this hides a lot of my understanding about what phenomenal consciousness does and how it works, so instead I’ll explain by relating A and Z to computational models that can describe them and use those to show that Z must be exponentially larger than A.

Since Z is not phenomenally conscious it cannot create information within itself, because it has no feedback process with which information could be created (if it did, it would be phenomenally conscious and no longer a p-zombie). Computationally this describes the class of computers we call deterministic finite automata, or DFAs. A, on the other hand, can create information via feedback; this gives it “memory”, which we can model as a Turing machine with a finite tape (the tape must be finite because A is embodied in the real world rather than being a theoretical construct). Other models are possible, but these should suffice for our purposes.
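
Here’s a rough sketch of the two models as I’m using them; the class names and the binary tape are simplifying assumptions of mine, not claims about how any particular agent is built:

```python
# Sketch of the two computational models used in the argument.

class DFA:
    """Memoryless: the response to x is a pure table lookup, with no internal
    feedback process that could create new information."""
    def __init__(self, table):
        self.table = table          # maps world state x -> response y
    def __call__(self, x):
        return self.table[x]

class FiniteTapeMachine:
    """Has feedback: a fixed, small controller plus a finite read/write tape,
    so it can encode x internally and compute with it."""
    def __init__(self, controller, tape_length):
        self.controller = controller
        self.tape = [0] * tape_length
    def __call__(self, x):
        # The controller may read and rewrite the tape while producing y.
        y, self.tape = self.controller(x, self.tape)
        return y

# Toy usage of the memoryless model:
Z = DFA({"holding a cup of tea": "taking a sip of tea"})
print(Z("holding a cup of tea"))
```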

Since Z is a DFA, it must have at least one state for each world state x it encounters to ensure the correct mapping from x to y for each {x, y} in X; thus Z has at least O(|X|) states. This puts a lower bound on the size of Z, so to show that Z is exponentially larger than A in terms of |X| we need to show that A has an upper bound on its size of O(lg|X|) states and tape length. We will do this by constructing a phenomenally conscious version of Z, i.e. by converting it from a DFA to a Turing machine. It’s easier to understand how to convert a finite Turing machine into a DFA, though, so we’ll do that first and then reverse the process to argue that A need be no larger than O(lg|X|).
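
Before moving on, here’s a toy illustration of the counting claim above, with an entirely hypothetical behavior set: the most direct construction of Z is a lookup table with one entry per behavior, so its size grows linearly with |X|:

```python
# Sketch of the counting argument: the simplest p-zombie is a lookup table with
# one entry (state) per behavior in X. The build_zombie helper and the toy X
# are illustrative only.

def build_zombie(X):
    """X is a set of behaviors {(x, y)}; return a DFA-style lookup table."""
    return dict(X)   # one table entry per world state x

X = {(f"world-state-{i}", f"response-{i}") for i in range(1000)}
Z = build_zombie(X)
print(len(Z))        # 1000 entries: state count is on the order of |X|
```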

Since A is a finite Turing machine, it can be converted into a DFA by an algorithm analogous to loop unrolling that creates at least one state in the DFA for each possible configuration of the tape. Let us denote this DFA-ized version of A as A’. Given that A has a tape of length n (and assuming a binary tape alphabet), A’ must contain at least 2^n states. Supposing A consists of a state machine of fixed size independent of |X| and always encodes information about each world state x on the tape, then A will need a tape of length at least lg|X| to encode every x, and A’ will need at least 2^(lg|X|)=|X| additional states to replace the tape. This is consistent with our lower bound on the size of Z: A’, like Z, is a DFA that reproduces X, and it needs O(|X|) states to do so.
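
A sketch of the unrolling count, assuming a binary tape alphabet (my simplification); the numbers are only meant to show the growth rate:

```python
# Turning a machine with an n-cell binary tape into a DFA means giving the DFA
# one state per possible tape configuration: up to 2**n of them. So a tape of
# length lg|X| unrolls into roughly |X| states.

from itertools import product
from math import log2, ceil

def unrolled_state_count(tape_length):
    """Count the tape configurations the DFA must represent as states."""
    return sum(1 for _ in product([0, 1], repeat=tape_length))   # == 2**tape_length

size_of_X = 1024
tape_length = ceil(log2(size_of_X))       # lg|X| cells suffice to encode any x
print(tape_length)                        # 10
print(unrolled_state_count(tape_length))  # 1024, i.e. on the order of |X|
```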

Reversing this algorithm, we can construct a Turing machine Z’ from Z by using the tape to encode the O(|X|) states of Z in at least O(lg|X|) tape cells, where Z’ otherwise consists of a constant-sized finite state machine that emulates Z. Since Z’ is a Turing machine it meets our requirements for phenomenal consciousness, and its size of O(lg|X|) gives a lower bound on the size of A. A also has a natural upper bound of O(|X|), since A can simply be constructed as Z with a tape; so A has between O(lg|X|) and O(|X|) states and tape length.
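
A sketch of this encoding step, again assuming a binary encoding; the helper names are illustrative:

```python
# Instead of materializing |X| DFA states, Z' keeps a constant-size controller
# and stores the index of the current DFA state in binary on the tape, which
# takes only ceil(lg|X|) cells.

from math import log2, ceil

def tape_cells_needed(num_dfa_states):
    return ceil(log2(num_dfa_states))

def encode_state(index, cells):
    """Write a DFA state index onto a binary tape of the given length."""
    return [(index >> i) & 1 for i in range(cells)]

num_states = 1024                      # on the order of |X|
cells = tape_cells_needed(num_states)  # 10 cells, on the order of lg|X|
print(cells, encode_state(513, cells))
```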

This unfortunately does not give us the desired result that A has an upper size bound of O(lg|X|); it only says that A is no larger than Z and that Z is at most exponentially larger than A in terms of |X|. Nonetheless I believe it to be true that A has at most O(lg|X|) states and tape length, because this is what we empirically see when constructing, for example, less phenomenally conscious state-machine solutions to programming problems versus more phenomenally conscious versions using nested loops or recursion. I also suspect it’s possible to prove an upper bound for A lower than O(|X|) and closer to O(lg|X|), but I have not found such a proof in the literature nor proved it myself. There may alternatively be a weaker relationship between the sizes of A and Z that has the same consequence, that Z must be much larger than A, but is quantitatively less than exponential.
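
To illustrate the empirical observation above, here is the kind of contrast I have in mind, on a toy problem of my own choosing (nothing hangs on the particular problem):

```python
# Toy problem: respond to the k-th observation (k from 0 to N-1) with k % 3.
# The "state machine" version materializes one table entry per observation,
# while the feedback version keeps only a small amount of mutable state.

N = 10_000

# Unrolled, DFA-like solution: size grows linearly with the number of cases.
unrolled = {k: k % 3 for k in range(N)}

# Feedback-based solution: constant-size code plus a counter of lg(N) bits.
class Counter:
    def __init__(self):
        self.k = 0
    def respond(self):
        y = self.k % 3
        self.k += 1
        return y

c = Counter()
assert [c.respond() for _ in range(N)] == [unrolled[k] for k in range(N)]
```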

(If you’re interested in collaborating with me on this please reach out, since this would be a nice result to have locked down for when I publish my work on formally stating the AI alignment problem! I believe it’s possible, but I’m rusty enough on automata theory that the effort I would have to put in to prove this is high. If you already have automata theory fresh in your mind then this is likely more straightforward.)

Finally we need to connect this notion of number of states and tape length to physical size. This is straightforward, but to be explicit about it: each state or tape position needs to be embodied in physical stuff. Even assuming a single transistor or a single bit of storage is all that is needed for each state or tape position, this means we can use our size calculations to estimate and compare the relative physical sizes of A and Z by setting up a correspondence between theoretical state and tape constructs and physical computer implementations. This lets us conclude at least a weak version of what we set out to prove: that a p-zombie Z of a phenomenally conscious agent A is no smaller than A in terms of |X| as measured in physical stuff, is probably much larger than A, and is exponentially larger than A if we can prove that A has at most O(lg|X|) states and tape length.
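
A back-of-the-envelope version of this correspondence, assuming (generously) one bit of physical storage per state or tape cell; the controller size is a made-up constant, and only the growth rates matter:

```python
# Rough arithmetic behind the physical-size comparison. The numbers are
# illustrative; the point is linear versus logarithmic growth in |X|.

from math import log2, ceil

def zombie_bits(size_of_X):
    return size_of_X                                  # O(|X|) states, one bit each

def conscious_bits(size_of_X, controller_bits=1000):
    return controller_bits + ceil(log2(size_of_X))    # fixed controller + O(lg|X|) tape

for n in (10**3, 10**6, 10**9):
    print(n, zombie_bits(n), conscious_bits(n))
# As |X| grows, Z's physical size grows linearly while A's (conjectured) size
# grows only logarithmically beyond its fixed controller.
```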

Comments

What do you mean by "phenomenally conscious"? You seem to be suggesting that any automaton with memory is "phenomenally conscious", which does violence to the usual use of the word "conscious" and would make even a vending machine "phenomenally conscious".

I wrote a series of posts explaining this, the last of which is here. You are correct that I would possibly consider a vending machine phenomenally conscious, which is why I am often careful to talk about "phenomenal consciousness" rather than "consciousness" in the naive sense.