Searle puts a man who doesn't speak Chinese into a room with a rulebook. The man receives prompts in Chinese and must manipulate these symbols, according to the rulebook, to produce plausible-sounding responses in Chinese. The people outside are convinced that the person inside the room understands Chinese, but the man inside, as we've said, doesn't.
I wonder if the Chinese speakers outside, confused by the sudden appearance of a mysterious box with a man inside, might at least be somewhat concerned with this man's wellbeing. Did Searle leave him any food or water? How long has he been in there?
Maybe they do just that: they send in a prompt asking if he's hungry, "奇怪人,你饿吗?" ("Strange man, are you hungry?"), and then, hold on, how does the man reply? Let's see what the rulebook has to say.
Maybe the rulebook says that whenever a question is asked of the person inside the room, he should just respond with how he is feeling. So, upon receiving this prompt in Mandarin enquiring whether the strange man inside is hungry, the rulebook guides the man to say yes, yes he is hungry, thank fucking god for food. And if the Chinese speakers then follow up with a question about what food he likes, does the rulebook then let him say that he loves Peking duck with extra hoisin sauce?
This has to be a slippery slope to something outside Searle's original experiment. For the room to demonstrate this kind of understanding of Chinese, the rulebook would need to take the man's internal states as a semantic input. But then it would no longer be a view of language as pure syntactic manipulation.
So the rules would probably have to be more deterministic. What if Searle chose one answer for each possible question in the world, and wrote all of them down in the rulebook? In that case, the rulebook should probably lean towards more ambiguous answers, like:
'Are you okay?' — 'I don't know'.
'Why are you in China?' — 'I don't know'.
'Who put you in this box?' — 'I don't know'.
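If we caricatured this deterministic rulebook in code, it would be nothing more than a lookup table. Here is a toy sketch; the entries and the default answer are my inventions, not anything Searle specified:

```python
# A toy sketch of the deterministic rulebook: every anticipated question
# maps to exactly one pre-written answer. Entries are illustrative only.
RULEBOOK = {
    "你还好吗?": "我不知道。",            # 'Are you okay?' -> 'I don't know.'
    "你为什么在中国?": "我不知道。",       # 'Why are you in China?' -> 'I don't know.'
    "谁把你放进这个箱子里?": "我不知道。",  # 'Who put you in this box?' -> 'I don't know.'
}

def room(prompt: str) -> str:
    # Pure symbol manipulation: the man's internal state never appears here.
    return RULEBOOK.get(prompt, "我不知道。")  # unseen questions get ambiguity too
```

Notice that nothing about the man (his hunger, his fear) can ever enter the function.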
But can this lookup table (or any other set of deterministic mappings) still constitute an understanding of Chinese? I can't figure out how to rescue Searle's room from two failure modes. The way I see it, the room holds up perfectly fine for questions like
'What is the colour of the sky?' — 'Blue'
'What is two plus two?' — 'Four'
but breaks down for questions that refer to the subjectivity of the person inside the room. In those cases, demonstrating a strong understanding of Chinese would require the rulebook to take input from the man's internal states (whether he is actually hungry, thirsty, held against his will) in order to answer truthfully. But then the rulebook is really performing an act of translation: rendering the question from Chinese into English, asking the man what he thinks, then translating his answer back into Chinese. And that is something other than what Searle intended, which is a picture of language as pure symbolic manipulation, one that cannot operate on semantic input from the man inside the room.
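Put concretely (again a hypothetical sketch, my framing rather than Searle's): the moment the rulebook consults the man, its signature changes, and the extra argument is precisely the semantic input the room is not allowed to have.

```python
from dataclasses import dataclass

@dataclass
class InternalState:
    # Hypothetical internal states of the man in the room.
    hungry: bool
    thirsty: bool
    held_against_his_will: bool

def room_with_translation(prompt: str, state: InternalState) -> str:
    # No longer pure syntax: the answer depends on how the man feels.
    # This is translation in disguise: Chinese in, a query against the
    # man's actual condition, Chinese back out.
    if prompt == "奇怪人,你饿吗?":  # 'Strange man, are you hungry?'
        return "我饿了。" if state.hungry else "我不饿。"  # 'I'm hungry.' / 'I'm not hungry.'
    return "我不知道。"  # 'I don't know.'
```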
In the second failure mode, Searle suggests that an understanding of Chinese can still be simulated without input from the man inside the room. We say instead that a sufficiently sophisticated rulebook can pass off the appearance of understanding Chinese, plausibly answering all sorts of possible questions. In that case, believe it or not, we're just talking about philosophical zombies.
Because if you believe in the logical possibility of zombies without subjective experience, zombies that can plausibly answer the question 'are you hungry?' the same way humans can, then you are actually conceding Searle's view of language. You're saying that one can simulate subjectivity within a language one fundamentally does not understand.
For consider the p-zombie that appropriately replies 'I am hungry' when asked. Perhaps signals from its physical body (low blood sugar, an empty stomach) trigger its brain to produce this correct linguistic output, all without any accompanying subjective experience of hunger. But how would the zombie distinguish a sarcastic 'are you hungry' (as a way of asking 'are you angry?') from an enquiring 'are you hungry' that means 'what do you want to eat?'. Responding appropriately in each of these contexts has to require an understanding of the contextual cues that signal what is actually being asked.
Can these zombies appear to understand language without any subjectivity? Searle supposes so: language understanding, on his view, can be simulated without subjectivity. And Chalmers thinks so too.
But surely we can agree that language is not a lookup table. That is a perspective fundamentally divorced from how language is used in our day-to-day lives. The success of large language models should be an indication of why these thought experiments feel increasingly… superficial? For what does subjectivity really mean, if LLMs can convincingly respond to our prompts, including those that seemingly defer to their subjectivity, their judgement? A stochastic parrot that writes code for you, reviews essays on your behalf, decides what is good and what is bad? At what point does simulation flip over into actuality?
In fact, doesn't the success of GPT models, built on the self-attention mechanism, suggest something else about how language is understood: as a matter of what is being focused on? Can't we take that as a form of subjectivity?
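To gesture at what 'what is being focused on' means mechanically, here is a minimal sketch of scaled dot-product self-attention; the shapes and toy numbers are mine, and this is not a claim about any particular GPT's internals:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Minimal scaled dot-product self-attention over a sequence of token
    # embeddings X (shape: tokens x dim). Returns new representations and
    # the attention weights: for each token, how much every other token
    # is "focused on".
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax rows
    return weights @ V, weights

# Toy usage: 4 tokens, 8-dimensional embeddings, random projections.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(weights.round(2))  # each row sums to 1: a distribution of "focus"
```

Each row of the weight matrix is a distribution over the sequence, a statement of how much every other token matters to this one right now. Whether that counts as focus, let alone subjectivity, is exactly the question.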
Meaning doesn't exist outside of rooms, only inside them. The Chinese room is empty; the man inside is long gone, having starved to death from neglect. Are we dragging his corpse around in a box? We're dragging his corpse around in a box.