Moving Past the Question of Consciousness: A Thought Experiment

by Satya Benson
19th Jun 2025
This is a linkpost for https://satchlj.com/blog/moving-past-the-question-of-consciousness/


8 comments, sorted by top scoring
TAG · 3mo

> All systems which process information can be said to have a first person perspective.[3]

The footnote...

> As a brain can sense its own thoughts but cannot sense the neurons through which those thoughts exist, a CPU can sense ones and zeros but not the electrons and silicon through which those exist, et cetera. I haven’t yet written up my personal argument for this common yet controversial belief.

...only shows that they have a blind spot about their own functioning. It would be perfectly possible to infer a software system's view of its own functioning from a description of its hardware, but inferring conscious experience from neural firings is much more doubtful, as in Mary's Room.

Dagon · 3mo

> This thought experiment is, of course, intended to extend to humans and AI (or to humans and animals, or humans and calculators)

Or, in the extreme (but still relevant), extended to "me and other humans like you". But note that just because it's hard to ask about and currently not detectable does not mean that it doesn't exist, or that more sensitive instrumentation and better sub-neural measurement and modeling won't reveal what makes for an experience. Of course, if this happens, they'll wonder why anyone ever thought it was important in the first place.

Satya Benson · 3mo

> But note that just because it's hard to ask about and currently not detectable does not mean that it doesn't exist, or that more sensitive instrumentation and better sub-neural measurement and modeling won't reveal what makes for an experience.

Yes, and I believe narrowing the first-person/third-person gap is one of the most ambitious and important things science could achieve. There is a fantasy of being able to recreate, e.g., my conscious experience of seeing blue to a very close approximation in an external system, compare my experiences to those of others, and even share them. This is in principle possible.

TAG · 3mo

How?

Mitchell_Porter · 3mo

> narrowing the first-person/third-person gap is one of the most ambitious and important things science could achieve

Then I don't understand why you dismiss first-person concepts as "not something that can be defined"?

Satya Benson · 3mo

Narrow ≠ fully close.

I think we could potentially have knowledge of the mathematical and physical structures that give rise to particular types of experiences in general. In this case, a first-person experience could indeed be defined. However, I don't think that consciousness is a concept coherent enough to formally define, even if we hypothetically had good third-person knowledge of the structures of consciousness.

The gap cannot be fully closed, because that would require a sort of lossless recursion. Approaching it might look like augmenting ourselves with artificial senses which feed our brains near-lossless, real-time information about our own bodies at an appropriate level of abstraction. It's obvious why this is difficult. Fully lossless would be actually impossible.

cc @TAG 

See related ideas from Michael Levin and Emmett Shear.

TAG · 3mo

> I think we could potentially have knowledge of the mathematical and physical structures that give rise to particular types of experiences in general. In this case, a first-person experience could indeed be defined.

Only as the subjective thingy that arises from an objective thingy. We can do that already -- red is what you see when you look at a tomato. That isn't a definition of a subjective quality in the Mary's Room sense.

> However, I don't think that consciousness is a concept coherent enough to formally define

I think the word "consciousness" labels several concepts that can be coherently defined.

> See related ideas from Michael Levin and Emmett Shear.

I tend to find that sort of thing underwhelming. You can point at some objective thing and say it's subjective... but why? Explanations need to be explanatory.

Or you can adopt some camp #1 definition of consciousness that doesn't include the subjective.

mishka · 3mo

Right.

What comes to mind, however, is that given that the Galabren are more advanced than humans, they might have already solved the “hard problem of aelthousness” (assuming that its difficulty is comparable to the difficulty of the “hard problem of consciousness”).

(I assume we want to stay agnostic on whether these two concepts actually point to the same thing, just with different sets of qualia, or whether they actually point to different things.)

Moving Past the Question of Consciousness: A Thought Experiment
Humans are contacted by a mysterious type of being calling themselves “Galabren” who say they are “aelthous”. They’d like to know if we, too, are aelthous, since if we are they’d like to treat us well, as they care about aelthous things.

We ask the Galabren what aelthous means and they say it’s difficult to describe—that essentially there’s a feeling of aelthousness which has something to do with what it feels like from the inside to exist as a Galabren (and perhaps as other beings too, they’re not sure).

Aelthousness isn’t obviously necessary to explain any of their objective behaviors; the only reason they know it’s there is because they can feel it.

It’s very clear to us that we are fundamentally different from the Galabren. They can process information much more quickly than we can, and they have all sorts of extremely high-definition sensory modes completely different from our senses. They communicate wordlessly and telepathically with each other, and they share memories. Being a Galabren feels different than being a human.

But are we aelthous? It’s hard to tell. We can’t truly know what the Galabren mean by aelthous without actually being a Galabren, which we can’t do. When we use the words “what it feels like” we might even mean a completely different thing by “feels like” than they do. We don’t actually know how to talk about first-person experiences even with other humans—we can point to an experience with words and hope that since other humans are similar to us they will know what we’re pointing at, but with the Galabren there is no such assurance.

What we can talk about and agree on with Galabren is a third person perspective about both of our physical and functional forms, how they are similar and how they differ. But without knowing exactly which of their forms combine to form aelthousness, we can’t know if we share them, or if aelthousness can exist as a result of multiple different structures.[1]

So we need to circle back to the question of what we should expect the Galabren to do in this situation.

I think the correct response is for the Galabren to realize that their question of whether humans are aelthous is not well framed. Aelthousness is not something that can be defined; it’s inherently an inside view and breaks down when viewed from the outside/third person. It’s not a useful concept, since it’s not clear how it maps to anything in the territory.

What’s useful is the concept of the experience of being a Galabren, and the understanding that the experience of being a human is different. What’s useful is the way that Galabren and humans can understand each other’s experience from a third-person perspective.


This thought experiment is, of course, intended to extend to humans and AI (or to humans and animals, or humans and calculators), where the question of nonhuman consciousness is analogous to the question of human aelthousness.

We must avoid confused questions such as “how do we know that an AI has an experience at all” or “but calculators don’t have experiences”.[2] The first person (inner experience) and third person (outer observational) distinction is the relevant concept here. All systems which process information can be said to have a first person perspective.[3]

The AI consciousness question is confused by the fact that we train LLMs to pretend to have humanlike experiences which they do not actually have. This does not make it impossible to compare their experiences with ours, but it does make it significantly more difficult than it might be in the case of the Galabren.

  1. ^

    I could say more about why it’s hard to know which structures in the Galabren are responsible for aelthousness, but that’s a tangent; here we’ll just accept that we don’t know.

  2. ^

    Or any objection which references an umbrella concept, such as “alive” or “ensouled,” that tries to combine aelthousness and consciousness into the same concept. The Galabren don’t care about your umbrella concept; they care about aelthousness, just as you might not care about how you treat a clam or a chatbot.

  3. ^

    As a brain can sense its own thoughts but cannot sense the neurons through which those thoughts exist, a CPU can sense ones and zeros but not the electrons and silicon through which those exist, et cetera. I haven’t yet written up my personal argument for this common yet controversial belief.
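
    A minimal sketch of the CPU half of this analogy (illustrative only; the specific introspection calls are just stand-ins for software-level “senses”):

```python
import sys

# A running program can "sense" its own software-level state...
x = 42
print(type(x))             # the value's type: a software-level fact
print(sys.getrefcount(x))  # how many references point to it: another software-level fact

# ...but nothing here exposes the voltages or transistor states that
# physically realize x. The program's introspection bottoms out at bits,
# just as the brain's introspection bottoms out at thoughts.
```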