I agree that both of these responses try to get around the hypothetical a little, but I think they’re both really sensible practical suggestions and I strongly agree with where you landed.
Interesting thought.
I explicitly left zombies out of the post since the possibility of zombies is contentious and my intuition about their moral status is much less clear.
You might enjoy reading through one of the papers I linked in the post by Joshua Shepherd, where he lands on a view on which consciousness is not necessary for moral status. The thought is crystallised by thinking about robots (which, unlike zombies, are not exact duplicates of humans), and he notes that the intuition is still unclear:
From the paper:
> Imagine that you are an Earth scientist, eager to learn more about the makeup of these robots. So you capture a small one—very much against its protests—and you are about to cut it open to examine its insides, when another robot, its mother, comes racing up to you, desperately pleading with you to leave it alone. She begs you not to kill it, mixing angry assertions that you have no right to treat her child as though it were a mere thing, with emotional pleas to let it go before you harm it any further. Would it be wrong to dissect the child? (2019, 28).
> Kagan offers a non-necessitarian judgment: ‘I find that I have no doubt whatsoever that it would be wrong to kill (or, if you prefer, to destroy) the child in a case like this. It simply doesn’t matter to me that the child and its mother are “mere” robots, lacking in sentience... For you to destroy such a machine really would be morally horrendous’ (28).
> In response, Kriegel (forthcoming) offers the opposite judgment:
> No matter how many experiential terms the vignette is surreptitiously peppered with (“desperately,” “angry,” “emotional”), and how many automatized projections it counts on from what such behavior in conscious beings indicates about their likely experiential state, one would have to be seriously confused to think that one is in any way harming a collection of metal plates by intervening in the metal’s internal organization (forthcoming).
> When cases generate sharply conflicting judgments across a set of very sharp philosophers, it can be difficult to know how to proceed.
This is a great post, but I think the argument anchors too much on valence, which is a questionable requirement, and the thrust of your argument goes through without it.
Concretely, imagine a philosophical Vulcan: a creature exactly like a human, with rich conscious experience but no valence. Would it be permissible to kill 5 Vulcans to save 1 human? This isn’t obvious to me at all. Intuitively, the fact that Vulcans have rich inner conscious experience means their lives have intrinsic value, even if this experience isn’t valenced.
To be sure, I think you can just modify your argument to avoid mentioning valence. Roughly,
I agree that the terminology is useful to bracket metaphysical discussion of LLM mental states but I’d just caution us as a community to use the term ‘quasi-belief’ really carefully. Specifically, I could see it being employed to import heavyweight metaphysical assumptions that aren’t justified or are lightly argued for.
Concretely, there are two potential ways to use it:
1. As a neutral placeholder that brackets the question of whether LLM states are genuine beliefs, without committing either way.
2. As a substantive claim that LLM states are merely quasi-beliefs and not genuine beliefs.
I think 1) is totally fine and is the intended usage. 2) is only fine if it’s backed up with some solid argument.
To be sure, your post and the Chalmers paper use it correctly as 1) but I could see its meaning slipping to 2) as it gets more widely deployed.
> A crux seems to be that with chroma world, I think experience is of difference between chroma, assuming chromatic realism (real quiddities). Whereas the Russellian monist wants the experience to be of identity of internal chroma. This is perhaps a disagreement on the phenomenology / epistemology relationship.
I agree this is the crux. The Russellian Monist wants to say there’s an intrinsic component to experience which is not exhausted by the structure.
I’d also recommend this paper on Russellian Monism by Chalmers. And I’d add Frankish, alongside Dennett, to your list of illusionists to explore; see e.g. this paper against panpsychism.
From my side you’ve motivated me to read more of Sellars’ work which I wasn’t familiar with before this exchange.
I think the chroma toy example is nicely illustrative and I’m happy to grant most of it, with a clarification: the environmental chroma stand in relations which allow information to be transferred, but we don’t actually have access to the quiddities in the environment, because those quiddities are not what constitutes your internal states.
> I don't think in this world, the experience of color is found in the chroma within. You could imagine a strange situation where an alien was in a room, and the whole room including the alien's body got a north-ish / equ-ish chroma swap instantaneously. They wouldn't notice a thing. If you just changed the chroma within them, they would notice. If you just changed the chroma in the room outside their body, they would also notice. That indicates that the experience is more "of" relations between chroma, rather than of the chroma within.
I agree there’s no noticeable difference (provided you also rotate their memories of the previous internal state as part of the global rotation). But this is exactly the type of internal quiddity permutation that Russellian Monists think matters metaphysically even if it doesn’t make a noticeable information-based difference in the structure.
I’m fully aligned that our behaviour and utterances are determined by informational content coming from the environment. The map gets optimised for the territory by way of information transfer/optimisation and the brain reads the map. I imagine it like a neural net where the information flows into the network and fixes the relations between nodes in the structure — if you moved the nodes around in relation to each other you’d get different experiences as the structure changes. But in my view if you permuted the nodes themselves you’d also get a change in what’s constituting the structure so the intrinsic part of the experience would change.
I think where we differ is that I don’t think the structure provides the whole story. It fixes an equivalence class of possible categorical assignments, e.g. RGB or R’G’B’ or XYZ, but each of these assignments is a modally potent metaphysical possibility that could have obtained.
At the meta level:
I’m not able to carve out as much time/energy as I’d like to keep up my end of the exchange over the next few weeks, so I might take the opportunity to gracefully tap out and make this my last reply (or maybe one more if you have any loose ends you’d like to close off).
On the whole, I’ve really appreciated how much time and effort you’ve taken to model my point of view and then crash test it. The whole exchange has forced me to clarify the view in my mind and highlighted some pressure points that I need to continue thinking about and further read up on.
I also stick by my original comment that the exchange has been a small update towards illusionism for me. Previously I held that illusionism had some coherence problems, but I think your “many worlds” interpretation, where all orbits are real, smooths over some of those issues in my mind and makes it an avenue worth exploring.
Anyway, thanks for the exchange!
I'm on board with pretty much your whole picture of how we acquire content. Rich information from the environment passes through an information bottleneck at the optic nerve and then gets reconstructed by the brain, which suggests there's some representation going on.
I'm even happy to grant (most of) your steelman of Russellian Monism. There's structuralism about physics and underlying quiddities which serve as the relata. The brain then represents the incoming information stream in a certain way to generate conscious experience.
Where we differ is how the quiddities enter the story. The quiddities "in the world" don't need to travel from the photons all the way through the optic nerve to get represented by the brain. I agree this doesn't make any sense. Rather, information travels through the network and is realised by the quiddities in the neuronal substrate. The visual system builds a world-tracking representation (like in normal cog-sci) and the Russellian move is just to say the states that instantiate the structure have an intrinsic/qualitative nature.
To give a concrete example, in prosopagnosia I might lose the ability to recognise a face as “my friend’s face” at a high representational level. But that doesn't mean the basic colours, blobs, etc. stop appearing in my visual field; it just means the system is no longer organising the low-level representations into a higher-order, world-directed concept like "my friend's face". The intrinsic qualitative properties of the base are still "there"; they're just being represented differently by the brain.
> It could of course be the case that their brains have differing underlying relata. But perhaps "these are not the quiddities you are looking for". The way red looks would seem to be a representation of a red object, and that representation can postulate quiddities; the primary intension of that representation can connect with the secondary intension of actual quiddities in the red object. But on the direct realist analogy, it can't connect with the quiddities of the brain.
I think there are two concepts here that should be distinguished:
1. The environmental concept of red, anchored to the property on worldly objects ("the red on the stop sign").
2. The phenomenal concept of red, anchored to how red is presented in my experience ("the way red feels to me").
Imagine we implemented an exact copy of my brain in a silicon twin. The categorical base properties in my neurons are R and in the silicon twin they're S. We both look at a red stop sign and say "the stop sign looks red" so at the level of the environmental concept we both latch onto the primary intension of "the red on the stop sign" and use the public word 'red' to denote it.
But for phenomenal concepts, the primary intension of "the way red feels to me" would differ between me and the twin, as it's anchored to the categorical base our internal states are realised in. They'd also have different secondary intensions, as the content is realised in different categorical bases, R vs S.
> This is possibly a misinterpretation on my part, and perhaps the Russellian monist wants the representation of intrinsic properties of color to point at the brain somehow, so that the original and the twin could actually have different intrinsic properties corresponding to color perception. I don't see how to square this with the direct realism analogy, although maybe I'm not adequately exploring alternative ways to operationalize direct acquaintance.
I think this is right. I'm broadly happy with your picture of direct realism about world-directed content. On my view, the acquaintance relation is with the internal state that realises the content.
I should clarify my view a little here. Roughly I’m committed to two things:
* Intrinsic/categorical properties exist and they are qualitative.
* Conscious experience of a quality consists in representing the quality where further conditions obtain.
I’m deliberately not going to endorse a detailed view on when “further conditions obtain” because I think basically any good cog-sci theory of consciousness could be ported in here, e.g. RPT, GWT, etc., and I’m happy to just let disputes among these be settled empirically by whatever best fits the data.
I think you’re circling a genuine pressure point on my view, which is more epistemic than semantic, namely the Awareness Problem. Roughly, the objection says that if qualities exist as intrinsic properties of the categorical base and the brain is “aware” of the qualities, then a qualitative zombie is conceivable: the quality could be present while the structure of the brain conceivably fails to become aware of it. The original objection targets a Higher-Order-Thought view of panqualityism where the quality needs to be quoted or indexed by a HOT for the brain to be aware of it. This feels reminiscent of the “tokening” objection you’re pushing, where the brain’s structure and the categorical base are two separate kinds of stuff and the brain needs to “reach across” to token the base, in a Cartesian dualist sort of way.
I don’t think this is the correct route. On my view the brain is part of the categorical qualitative base and it’s just in those qualitative states. It’s not a separate type of stuff, so I don’t think it needs to “reach across” by indexing or quoting the qualities in a special way; it just needs to minimally represent them.
There’s an interesting recent paper, Rosenberg (2025), which argues that certain brain states represent the qualities at first order. The analogy is a projector film reel[1] which is capable of producing a coloured picture when it’s projected onto the wall in the right way. By contrast, the HOT is like a sticky note stuck on the reel saying “this film plays X”, which doesn’t add anything to the actual content. I don't need to literally token the state with a higher thought like "boy am I in some pain right now!" to be feeling pain. The first-order representation of pain seems to be doing the work. I don’t think this fully solves the problem, but it does make the “qualitative zombie” feel less compelling to me. If we have qualities in the base and the structural machinery to represent them in the right way, it’s hard, for me at least, to conceive of a scenario where the result is a zombie with no awareness of the quality. In fact, I’d be inclined to treat the zombie as an absence of the categorical base with the structure/relations intact, i.e. OSR.
Again, I don't think this view is without challenges, but I think it has real theoretical parsimony that makes it attractive. It takes phenomenal consciousness seriously, it doesn't force us to bite counter-intuitive bullets like panpsychism, and it fits squarely into a naturalist/monist picture.
> The minds could be directly acquainted with color qualia that have no physical definition. But then who am I talking to? The mind isn't causing any talking. It's a bit like the deterministic MMORPG situation. (I realize this gets into "standard problems with Chalmers-type views" territory and is less of a knock-down semantic argument.)
One of the main motivations for Russellian views is to provide a natural story for where qualia sit. They’re not “causal” in the sense of meddling with the physics but rather “constitutive” in terms of populating the structure. So Russellian views are typically thought to evade epiphenomenalism objections.
[1] The analogy is imperfect and runs straight into your “implementation details” objection, but it serves to illustrate the point about tokening.
When I said “they don’t have direct access to R” this was imprecise and invited reading R as an implementation detail. It’s not an implementation detail so I should clarify precisely what I mean here.
The phenomenal character of red in my experience and the categorical base property R are the same thing. So when I have a red experience I am, in a sense, directly acquainted with R as this quality in experience (which is the primary intension). What I meant to deny is not direct acquaintance with R, but rather transparent a priori access to R under a physical/structural description that would let you derive what the phenomenal character is like. In other words, you need to be literally tokening the property R from a first-person perspective to experience the phenomenal character of red. The secondary intension just rigidly designates the property R across all possible worlds.
The 256-bit vs 128-bit float example is disanalogous because there’s a structural implementation difference in the host system which is causing the change. R and R’ have no internal structure with which to differ; instead they differ intrinsically. Think of it like the mass role in physics. If we switched the intrinsic property mass m with m’, such that F = ma now reads F = m’a, the physicist would say that nothing has changed: whether m or m’ is playing the mass role leaves the third-person physical observables untouched. The Russellian move is to say that there's still a further fact about the categorical properties m and m' in virtue of which they differ intrinsically.
> I think on this view you still have trouble "referring to R", on relatively standard semantic views like "you refer to things by saying information specifying them", requiring a significant weakening of semantics to get the references to work out.
Your semantics is doing a lot of work here, and I wouldn't grant that it's standard. If you build semantics in a way that reference always goes via informational differences in the structure, then of course reference to intrinsic properties will look impossible. From my perspective, this is baking the illusionist/structuralist conclusion into the semantics and I'd treat that as a bug rather than a feature.
This Kripke-Chalmers style 2D semantics is motivated by exactly these kinds of cases, where we do seem to latch onto things whose underlying microstructure we don't know (e.g. water/H2O). And I'd argue that this acquaintance-based form of semantics works for all sorts of things, like "this pain", "that red" and arguably even "that object", without needing informational differences to specify them up to some structural isomorphism.
Regarding the term relata, I'm just using it to mean "things" which stand in relation to each other. On view 1), ontic structural realism says there are no "things" which stand in relation to each other; it's the relations themselves which exist, and that's all. Your view sounds to me like it leans towards 1), with some epistemic humility about whether a richer structure like 2) really underlies reality.
I share your intuition that OSR sounds like a metaphysically dubious hypothesis even if it's methodologically useful, and given that your semantics is strongly information-based, it's unsurprising that talk of "things" or "relata" feels slippery: your semantic machinery doesn't have the resources to pick them out. That's exactly why I'm inclined to bring in acquaintance-based reference to intrinsic categorical properties as an extra ingredient.
> I'm dubious about the existence of "multiple substances" in the classical philosophical sense. There is a "syndiffeonesis" argument that for things to be different, they have to have something in common. And as long as they have something in common, what is the meaning of claiming they have "multiple substances"?
I’m happy to grant that talking about “phenomenal substance” vs “physical substance” in the Cartesian sense is not well-formed. What matters more for me is just the distinction between intrinsic properties and relational/structural properties. Once we’ve granted that reality is not purely structural and that there are some categorical/intrinsic aspects to reality, the distinction between 2) and 3) starts to collapse: they both allow relata with an intrinsic nature; 3) just treats our first-person acquaintance with experience as evidence about what that intrinsic nature is like.
So I think there are two cruxes:
1. Semantic: whether reference must always go via informational differences in the structure, or whether acquaintance-based reference to intrinsic properties is legitimate.
2. Metaphysical: whether reality is purely structural (OSR) or also contains relata with an intrinsic/categorical nature.
I take your view as saying: we have epistemic access to the structural relations, and it’s at least plausible that they’re populated by some kind of relata, even if our information-based semantics can’t get a clean handle on them. If that’s right, then there’s actually some convergence in our views: the phenomenal realist just wants to say “yes, there are such relata and their intrinsic nature is presented to us in experience.”
I’d reject the analogy between Vulcans and video game characters.
Vulcans can freely interact with their environments and have goals/desires which can be promoted or thwarted. It’s not clear that either of these holds for video game characters.
If you made the character sophisticated enough that the algorithm in the game realised a conscious mind capable of interacting with the environment and having goals, then I think I’d bite the bullet and say it’s wrong to kill the character gratuitously.