NB: Originally posted on Map and Territory on Medium, so some of the internal series links go there.

Last time we performed a reduction of the phenomenon of conscious self experience and through it discovered several key ideas. To refresh ourselves on them:

  • Things exist ontologically as patterns within our experience of them.
  • Things exist ontically as clusters of stuff within the world.
  • Things exist ephemerally through chains of experiences creating the perception of time.
  • Through ephemeral existence over time, things can feed back experiences of themselves to themselves, making them cybernetic, and in so doing create information.
  • Things can exist within information, those things can experience themselves, and it’s from those information things that ontology, and thus consciousness, arises.

We covered all of these in detail except the last one. We established that the feedback of things created from the information of feedback gives rise to ontology by noting that information things have ontological existences that transcend their ontic existences even as they are necessarily manifested ontically. From there I claimed that, since people report feeling as if they experience themselves as themselves, consciousness depends on and is thus necessarily created by the ontological experience of the self. Unfortunately this assumes that our naive sense of self could not appear any other way, and in the interest of skepticism we must ask: can we be sure there is not some more parsimonious way to explain consciousness that does not depend on ontological self experience?

I think not, but some philosophers disagree. Consider the idea of p-zombies: philosophical “zombies” that are exactly like “real” people in all ways except that they are not really conscious. Or consider John Searle’s Chinese room, where a person who cannot understand Chinese is nevertheless able to mimic a native Chinese speaker via mechanistic means. In each of these cases we are presented with a world that looks like our own except that consciousness is not necessary to explain the evidence of consciousness.

Several responses are possible, but the one I find most appealing is the response from computational complexity. In short, it says that p-zombies and Chinese rooms are possible by unrolling the feedback loops of ontological self experience, but this requires things that we think are conscious, like people, to produce exponentially more entropy, be exponentially larger, or run exponentially slower than what we observe. Given that people are not observed to be exponentially hotter, larger, or slower, it must be that they are actually conscious. Other arguments similarly find that things that theoretically look like conscious entities while not being conscious are not possible without generating observable side-effects.
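To make the unrolling intuition concrete, here is a minimal sketch in Python. The toy process is my invention and stands in for ontological self experience only schematically: a small stateful feedback loop can be replaced by a stateless lookup table over every possible input history, but the table, like the p-zombie, pays an exponential price for faking the loop.

```python
# A toy illustration of the complexity response, under invented assumptions:
# a stateful feedback process can be "unrolled" into a stateless lookup table,
# but the table must cover every possible input history, so it grows as 2^n
# while the feedback loop itself stays constant-size.
from itertools import product

def feedback_process(inputs):
    state = 0
    for x in inputs:
        state = (state + x) % 2  # the loop feeds its state back to itself
    return state

n = 16
# The "zombie" version: precompute an answer for every possible history.
lookup_table = {history: feedback_process(history)
                for history in product((0, 1), repeat=n)}

print(len(lookup_table))  # 65536 entries at n=16, doubling with each added step
```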

So if reports of feeling as though consciousness includes experiencing the self as the self describe a necessary condition of consciousness, then ontological self experience must be necessary to consciousness. This is not to say it is a sufficient condition to explain all of consciousness, though, since that would require explaining many details specific to the way consciousness is embodied, so we properly say that ontological self experience explains phenomenal consciousness rather than the phenomenon of consciousness in general. Nevertheless there is much we can do with our concept of phenomenal consciousness that will take us in the direction of addressing AI alignment.

Qualia and Noemata

To begin, let’s return to our reduction of {I, experience, I} in light of our additional understanding. We now know that when we say “I experience myself” we really mean “I experience myself as myself”, so it seems our normalized phenomenon should be {I, experience, I as I}. We could have instead written this as {I, experience as I, I} since it is through self experience that the I sees the ontic self as an ontological thing, but the former notation is useful because it exposes something interesting we’ve been assuming but not yet explored: that the subject of a phenomenon can experience an ontological thing as object. Yet how can it be that a thing that exists only within experience can become the object of experience when experience happens between two ontic things?

The first part of the answer you already know: the ontological existence of a thing necessitates ontic manifestation. Like with the computer document, a thing might have an ontological existence apart from its ontic existence, but ontological existence implies ontic existence since otherwise there is no stuff to be the object of any experience, and for it to be otherwise would be to suppose direct knowledge of ontology, which we already ruled out by choosing empiricism without idealism. Thus an ontological thing is also an ontic thing, the ontic thing can be the object of experience, and so the ontological thing can be the object of experience. But understanding only that ontological things can be the object of experience in this way fails to appreciate how deeply ontology is connected to intentionality.

Notice that in order to talk about a phenomenon as an intentional relation we must identify the subject, experience, and object. That is, we, ourselves phenomenological subjects, see a thing that we call subject, see a thing that we call object, and see them interacting in some way that we can reify as a thing that we call an experience, i.e. we see the members of the intentional relation ontologically. If we don’t do this we fail to observe the phenomenon as an intentional relation and thus as a phenomenon, because if we fail to see the phenomenon as an ontological thing we have no knowledge of it as a thing. We can then only be affected by it via direct experience of the ontic, in the same way rocks and trees are affected by phenomena without knowing they exist or are being affected by experiences. This means that the object of experience, insofar as the subject can consider it the object of experience, has ontological existence by virtue of being the object of experience, even if that ontological existence is not or cannot be seen by the subject of the experience. Thus of course “I as I” can be the object of I’s experience, because we have already proved it so by considering the possibility that it is.

So if “I as I” can be the object of the I’s experience of itself, why bother to think of the phenomenon this way rather than as {I, experience as I, I}? By way of response, consider how the I comes to have ontological existence: the I experiences the ontic I, this creates a feedback loop of experience over time that allows the creation of information, and then an ontic thing that the I can experience emerges from that information. That information-based, ontic thing carries with it the influence of the ontic I — just as the bit expressing the state of the throttle in the steam engine is created via the governor’s feedback loop and carries with it the influence of the reality of the steam engine’s configuration — so it is causally linked to the I’s ontic existence. We then call this “I as I” because it is the thing through which the I experiences itself as a thing by the I making the phenomenon of experience of the self as ontic thing the object of experience. Thus by thinking of the phenomenon of self experience as {I, experience, I as I} we see it has another form, namely {I, experience, {I, experience, I}}.
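To see the nesting at a glance, here is a purely illustrative sketch in Python; the Thing and Phenomenon types are inventions for this post, not a formal apparatus the argument depends on.

```python
# Toy formalization of the intentional relation {subject, experience, object}.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Thing:
    name: str

@dataclass(frozen=True)
class Phenomenon:
    subject: Thing
    experience: str
    obj: Union[Thing, "Phenomenon"]  # the object may itself be a phenomenon

I = Thing("I")

# Cybernetic self experience: the I experiences its ontic self directly.
ontic_self_experience = Phenomenon(I, "experience", I)

# Conscious self experience: the I takes its own self experience as object,
# i.e. {I, experience, {I, experience, I}}.
self_as_self = Phenomenon(I, "experience", ontic_self_experience)
```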

This highlights the structural difference between consciousness and cyberneticness. A cybernetic thing, as far as it is cybernetic, only experiences its ontic self directly via its feedback loops over itself. A conscious thing, though, can also experience its ontic self indirectly through feedback loops over the things created from the information in its cybernetic feedback loops. It’s by nesting feedback loops that the seed of phenomenal consciousness is created, and we give the things created by these nested feedback loops and the phenomena that contain them special names: noemata and qualia, respectively.
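The same structural difference can be sketched as code. In this invented example the inner loop is merely cybernetic, experiencing the plant directly, while the outer loop experiences only the inner loop’s record of that experience, i.e. a noema; everything about the plant, gains, and update rules is an assumption made for illustration, not a model of any real mind.

```python
# A minimal sketch of nested feedback loops.

class Plant:
    """Ontic stuff: a state nudged by a control signal plus constant drift."""
    def __init__(self):
        self.state = 0.0

    def step(self, control):
        self.state += control + 0.1  # drift the controller must fight

class InnerLoop:
    """Cybernetic: experiences the plant directly and records that experience
    as information (the log created by its own feedback)."""
    def __init__(self, plant, target=1.0):
        self.plant, self.target = plant, target
        self.log = []

    def step(self):
        error = self.target - self.plant.state
        self.plant.step(0.5 * error)
        self.log.append(error)

class OuterLoop:
    """Noematic: its object is not the plant but the inner loop's record of
    experiencing the plant, which it feeds back into the inner loop."""
    def __init__(self, inner):
        self.inner = inner

    def step(self):
        if len(self.inner.log) >= 2 and abs(self.inner.log[-1]) > abs(self.inner.log[-2]):
            self.inner.target *= 0.99  # crude correction based on the record

plant = Plant()
inner = InnerLoop(plant)
outer = OuterLoop(inner)
for _ in range(20):
    inner.step()
    outer.step()
```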

“Noema” is Greek for both thought and the object of thought, and the project of phenomenology was started when Husserl saw what we have now seen by building on Brentano’s realization that mental phenomena, also known as qualia, are differentiated from other physical phenomena by having noemata as their objects. That noemata are phenomena themselves, and specifically the phenomena of cybernetic things, was to my knowledge first well understood by Hofstadter (though Dretske seems to have been the first to take a stance substantially similar to mine), since it required the insights of control theory and the other fields that make up cybernetics to understand that mental phenomena are not something special but a natural result of nested feedback. This is also why early phenomenologists, like Husserl, tended towards idealism, while later ones opposed it: without an understanding of cybernetics it was unclear how to ground phenomenology in physical reality.

To reach the fairly unorthodox position I’ve presented here, it took an even deeper knowledge of physics to trust that we could make intentionality as central to epistemology as we have and maintain an existentialist stance. As such you may be left with the feeling that, while none of what I have presented so far is truly novel, I have left unaddressed many questions about this worldview. Alas my goal is not to provide a complete philosophical system but to address a problem using this philosophical framework, so I have explained only what I believe is necessary to that end. We move on now to less well-trodden territory.

Noematology

Having identified noemata as the source of consciousness, we find that our view of consciousness is necessarily noematological, i.e. it is based on an account of noemata. This invites us to coin “noematology” as a term to describe our study of the phenomena of consciousness through the understanding of noemata we have just developed. The term also conveniently seems little used and so affords us a semantic greenfield for our technical jargon, one that avoids some of the associations people may have with related terms like “qualia”, so we take it up in the spirit of clarity and precision.

Noematology, despite being newly minted, already contains several results. The first of these follows immediately from the way noemata arise. Noemata, being simply the result of nested feedback, appear everywhere. That is to say, noemata are so pervasive that our theory of phenomenal consciousness is technically panpsychic. Specifically, since all things are cybernetic, all things must also contain in their self experiences information out of which things emerge, and those information things must themselves be cybernetic insofar as they are things, thus they are noemata, hence all things must be phenomenally conscious. Of course not all things are equally phenomenally conscious just as not all things are equally cybernetic: some things produce more, and more heavily used, noemata than others, just as some things produce more, and more heavily used, information than others. Ideas like integrated information theory act on this observation to offer a measure of consciousness that lets us say, for example, that mammals are more conscious than trees and that rocks have a consciousness measure near zero. Integrated information theory also shows how the panpsychism of phenomenal consciousness is vacuous: it’s true, but only because it pushes most of the things one might like to claim via panpsychism out of the realm of philosophy and into the realm of science and engineering. Put another way, consciousness may be everywhere in everything, but it’s still hard to be conscious enough for it to make much of a difference.
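As a gesture at what such a graded measure looks like, consider the following toy. To be clear, this computes plain mutual information between two binary units, not IIT’s Φ; the coupling rule and numbers are invented, and the point is only that even a simple information measure separates a rock-like system from a more integrated one.

```python
# NOT IIT's Phi: just mutual information between two binary units, as a crude
# stand-in for "how much the parts' states inform each other".
import math
import random
from collections import Counter

def mutual_information(pairs):
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())

random.seed(0)

# "Rock": two units that ignore each other entirely.
rock = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(10_000)]

# "Integrated thing": the second unit usually reflects the first.
coupled = []
for _ in range(10_000):
    a = random.randint(0, 1)
    b = a if random.random() < 0.9 else 1 - a
    coupled.append((a, b))

print(f"rock:    {mutual_information(rock):.3f} bits")     # ~0 bits
print(f"coupled: {mutual_information(coupled):.3f} bits")  # ~0.5 bits
```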

That the manifestation of consciousness, especially the consciousness of things like humans, is complicated gives purpose to noematology because it helps us see insights that are normally occluded by implementation details. For example, that noemata are created by the nesting of feedback loops within feedback loops immediately implies the existence of meta-noemata created by the nesting of feedback loops within noemata. And if this nesting can be performed once, it can be performed many times, until there is not enough negentropy left to produce even one bit of information from an additional nesting. These multiple orders of noemata can then be used to explain the qualitative differences observed during human psychological development and to show that higher-order noemata, which we might also call the expression of higher-order consciousness, are necessary to create qualia like tranquility and cognitive empathy, but these topics are beside our current one.
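Reusing the toy Phenomenon type from the earlier sketch, orders of noemata are just repeated nesting; the depth cap below is an arbitrary stand-in for the point where no negentropy remains to fund another bit, not a physical calculation.

```python
# Orders of noemata as repeated nesting of the toy Phenomenon type above.
def nest(subject, base, order, max_order=64):
    """Order 0 is the cybernetic phenomenon itself; each further order takes
    the previous phenomenon as its object. max_order is an arbitrary budget
    standing in for the negentropy limit on further nesting."""
    phenomenon = base
    for _ in range(min(order, max_order)):
        phenomenon = Phenomenon(subject, "experience", phenomenon)
    return phenomenon

first_order_noema = nest(I, ontic_self_experience, 1)  # {I, experience, {I, experience, I}}
meta_noema = nest(I, ontic_self_experience, 2)         # a noema whose object is a noema
```

For now we turn our attention to the relationship between noemata, axias, and ethics because it will ground our discussion of AI alignment.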

Axiology, Ethics, and Alignment

Philosophy is composed of the study of several topics. Naturally, there is some disagreement on what those topics are, what to call them, and how they relate, but I tend to think of things in terms of epistemology, ontology, and axiology — the study of how we know, the study of what we know, and the study of why we care. All three are tightly intertwined, but if I had to give them an ordering, it would be that epistemology precedes ontology precedes axiology. That is, our epistemological choices largely determine our ontological choices and those in turn decide our axiological choices. Thus it should come as no surprise that I had to address epistemology and ontology before I could talk about axiology.

Of course the irony is that we actually investigate philosophy the other way around because first we ask “why?” by wanting to know, then we ask “what?” by knowing, and only finally can we ask “how?” by considering the way we came to know. The map, if you will, is drawn inverted relative to the orientation of the territory. So in some ways we have been studying axiology all along because axiology subsumes our founding question — why? — but in other ways we had to hold off talking about it until we had a clear understanding of how and what it means to ask “why?”. With that context, let’s now turn to axiology proper.

Axiology is formally the study of axias or values, just as ontology is the study of ontos or being and epistemology is the study of episteme or knowledge. An axia, then, is something of value that we care about. Put another way, since it’s the object of a phenomenally conscious experience of caring, it’s a noema to which we ascribe telos or purpose, so we might think of axiology as teleological noematology. But to bother to think of something is to give it sufficient telos that it was thought of rather than not, so in fact all noemata we encounter are axias by virtue of being thought of. Non-teleological noemata still exist in this view, but only so long as they remain unconsidered, thus for most purposes noematology and axiology concern the same thing, and the choice of which term to use is mostly a matter of whether we wish to emphasize traditional axiological reasoning or not.

To make this concrete, consider the seemingly non-teleological, value-free thought “this is a pancake”. Prior to supposing the existence of the pancake there could have been a thought about the pancake which was valueless because it existed but was not the object of any experience, but as soon as it was made object it took on purpose by being given the role of object in an intentional relation by the subject experiencing it. From there the subject may or may not assign additional purpose to the thought through its experience of it, but it at least carries with it the implicit purpose of being the object of experience. As with thoughts of pancakes, so too with all thoughts, thus all thoughts we encounter are also values.

Within this world of teleological noemata we now consider the traditional questions of axiology. To these I have nothing special to add other than to say that, when we take noemata to be axias, most existing discussions of preferences, aesthetics, and ethics are unaffected. Yet I am motivated to emphasize that noemata are axias because it encourages a view of axiology that is less concerned with developing consistent systems of values and more concerned with accounts that can incorporate all noemata/axias. This is important because the work of AI alignment is best served by being maximally conservative in our assumptions about the sort of conscious thing we need to align.

For example, when working within AI alignment, in my view it’s best to take a position of moral nihilism — the position that no moral facts exist — because then even if it turns out moral facts do exist we will have built a solution to alignment which is robust not only to uncertainty about moral facts but also to the undesirability of moral facts. That is, it will be an alignment solution which will work even if it turns out that what is morally true is contrary to human values and thus not what we want an aligned AI to do. Further, if we assume to the contrary that moral facts do exist, we may fail to develop a sufficiently complete alignment solution because it may depend on the existence of moral facts, and if we turn out to be mistaken about this, such a solution may fail catastrophically.

Additionally, we may fail to be sufficiently conservative if we assume that AI will be rational or boundedly rational agents. Under MIRI’s influence, the assumption that any AI capable of posing existential risk will be rational has become widespread within AI safety research, via the argument that any sufficiently powerful AI would instrumentally converge to rationality so that it does not get Dutch booked or otherwise give up gains. But if AGI were to be developed first using machine learning or brain emulation, then we may find ourselves in a world where AI is strong enough to be dangerous but not strong enough to be even approximately rational. In such a case MIRI’s agent foundations research program might not be of direct use because it makes overly strong assumptions about how AI will reason, though it would likely offer useful inspiration about how to align agents in general. In the event that we need to align non-rational AI, addressing the problem from axiology and noematology may prove fruitful since it makes fewer assumptions than decision theory for rational agents.

Even if we allow that an AI capable of posing an existential threat would be rational, there is still the axiological question of how to combine the values of humans to determine what it would mean for an AI to be aligned with our specific values. To date there have been some proposals, and it may be that this problem can be offloaded to AI, but even if we can ask AI to provide a specific answer we still face the metaethical questions of how to verify if the answer the AI finds is well formed and how to ensure the AI will find a well formed answer. In view of this we might say that alignment asks one of the questions at the heart of metaethics — how do we construct an ethical agent? — and solving AI alignment will necessarily require identifying a “correct” metaethics. In this case the study of AI alignment is inseparable from axiology and, in my view, noematology, so these are important lenses through which to consider AI alignment problems in addition to decision theory and machine learning.

These are just some of the topics we wish to address with our noematological perspective. And there are of course many topics outside AI alignment on which it touches, some of which I have already explored on this blog and others which I have considered only in personal conversations or during meditation. I will have more to say on these topics in the future, especially as they relate to AI alignment, but for now this completes our introduction to existential phenomenology and noematology. On to the real work!
