Even if we assume (or it becomes true) that LLMs are genuinely minds and all that, it still seems similarly bad to use them like this. LLM-generated text is not your testimony, regardless of whether or not it is 'testimony'.
I like the analogy of a LARP. Characters in a book don't have reputations or human-like brain states that they honestly try to represent - but a good book can contain interesting, believable characters with consistent motivations, etc. I once participated in a well-organized fantasy LARP in graduate school. I was bad at it, but it was a pretty interesting experience. In particular, people who are good at it are able to act in character and express thoughts that "the character would be having", which are not identical to the logic and outlook of the player (I was bad at this, but other players could do it, I think). In my case, I noticed that the character imports a bit of your values, which you sometimes break in-game if it feels appropriate. You also use your cognition to further the character's cognition, while rationalizing their thinking in-game. It obviously feels different from real life: it's explicitly a setting where you are allowed and encouraged to break your principles (like you are allowed to lie in a game of werewolf, etc.), and you understand that this is low-stakes, and so you don't engage the full mechanism of "trying as hard as possible" (to be a good person, to achieve good worlds, etc.).

But also, there's a sense in which a LARP seems "Turing-complete", for lack of a better word. For example, in this LARP the magical characters (not mine) collaboratively solved a logic puzzle to reverse-engineer a partially known magic system and became able to cast powerful spells. I could also imagine modeling arbitrarily complex interactions and relationships in an extended LARP. There would probably always be some processing cost to add the extra modeling steps, but I can't see how this would impose any hard constraints on some measure of "what is achievable" in such a setting.
I don't see hard reasons why e.g. a village of advanced LLMs could not have equal or greater capability than a group of smart humans playing a LARP. I'm not saying I see evidence they do - I just don't know of convincing systematic obstructions. I agree that modern LLMs seem unable to do some things humans could do even in a LARP (some kind of theory of mind, explaining a consistent thinking trace that makes sense to a person upon reflection, etc.), but again, a priori this might just be a skill issue.
So I wonder in the factorization "LLM can potentially get as good as humans in a LARP" + "sufficiently many smart humans in a long enough LARP are 'Turing complete up to constant factors' " (in the sense of in principle being able to achieve, without breaking character, any intellectual outcome that non-LARP humans could do), which part would you disagree with?
"Potentially get as good as humans" I of course think in general, as I think we're by default all dead within 100 years to AGI. If you mean actual current LLMs, I'm pretty sure no they cannot. See https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce and https://www.lesswrong.com/posts/5tqFT3bcTekvico4d/do-confident-short-timelines-make-sense
I would point you for example to low-sample-complexity learning that humans sometimes do, and claim that LLMs don't do this in the relevant sense and that this is necessary for getting good. See also this thread: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce?commentId=dqbLkADbJQJi6bFtN
I see another obstruction in attention span. I strongly suspect that whenever an LLM is tasked with writing the next token, attention mechanisms compress all potentially relevant information into less than a hundred thousand numbers, preventing the model from taking many nuances into account when writing the token. A human brain, on the other hand, takes into account billions of bits of information stored in neuron activations.
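To make the comparison concrete, here is a rough back-of-the-envelope sketch of that gap. The hyperparameters (d_model, n_layers) and the neuron count are illustrative assumptions, not figures for any particular model or any precise neuroscience claim:

```python
# Back-of-the-envelope comparison of the per-token information bottleneck in a
# transformer vs. the state a human brain carries. All numbers are assumptions
# chosen only for illustration.

d_model = 12_288   # assumed residual-stream width
n_layers = 96      # assumed number of transformer layers

# The next-token prediction is read off the residual stream at the final
# position: d_model numbers.
output_bottleneck = d_model

# Even counting that position's residual state at every layer, the per-token
# working state stays around a million numbers.
per_position_state = d_model * n_layers

# A human brain has on the order of 8.6e10 neurons; even one bit of activation
# state per neuron dwarfs the per-token budget above.
brain_neurons = 86_000_000_000

print(f"numbers feeding the next-token prediction: {output_bottleneck:,}")
print(f"per-position residual state across all layers: {per_position_state:,}")
print(f"human neurons (order of magnitude): {brain_neurons:,}")
```

Under these assumed numbers, the point is only about orders of magnitude: the state available at the moment of writing a token is tens of thousands to about a million numbers, versus billions of concurrently active units in a brain.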
@TsviBT, we had a warning shot of LLMs becoming useful in research, or writing a coherent short story, in @Tomás B.'s experiment (the post on LW describing it was removed for an unknown reason).
The claim that "the thought process behind words—the mental states of the mind and agency that produced the words" does not exist seems phenomenologically contradicted by just interacting with LLMs. I expect your counterargument to be an appeal to some idiosyncratic meanings of words like thoughts or mind states, and my response to be something in the direction of 'planes do fly'.
Why LLM it up? Just give me the prompt. One reason not to is that you are often broadly unable to trace the thoughts of an LLM, and if the specific human-AI interaction leading to some output has nontrivial context and length, you would also be unable to get an LLM to replicate the trace without the shared context.
I expect your counterargument to be an appeal to some idiosyncratic meanings of words like thoughts or mind states
I think you're just thinking of some of the thought processes, but not about the ones that are most important in the context of writing and reading and public communication. I have no big reason to suspect that LLMs don't have "mental states" that humans do in order to proximally perform e.g. grammar, sufficient distinctions between synonyms, facts like "Baghdad is the capital of Iraq", and a huge variety of algorithms. I do think they lack the distal mental states, which matter much more. Feel free to call this "idiosyncratic" but I sure hope you don't call anything that's insufficiently concrete / specific / explicit / well-understood "idiosyncratic", because that means something totally different, and lots of centrally important stuff is insufficiently concrete / specific / explicit / well-understood.
I would describe that position as "I suspect LLMs don't have distal/deep mental states, and as I mostly care about these distal mental states/representations, LLMs are not doing the important parts of thinking".
Also, my guess is you are partially wrong about this. LLMs learn deep abstractions of reality; as these are mostly non-verbal / somewhat far from "tokens", they are mostly unable to explain or express them in words - similarly to humans' limited introspective access.
I do think they lack the distal mental states, which matter much more.
Out of interest, do you have a good argument for this? If so, I’d be really interested to hear it.
Naively, I’d think your example of “Baghdad is the capital of Iraq” encodes enough of the content of ‘Baghdad’ and ‘Iraq’ (e.g. other facts about the history of Iraq, the architecture in the city, etc.) to meaningfully point towards the distal Baghdad and Iraq. Do you have a different view?
I mean the deeper mental algorithms that generate the concepts in the first place, which are especially needed to do e.g. novel science. See https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
and
https://www.lesswrong.com/posts/5tqFT3bcTekvico4d/do-confident-short-timelines-make-sense
See also this thread: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce?commentId=dqbLkADbJQJi6bFtN
LLM text categorically does not serve the role for communication that is served by real text.
The main thesis is missing some qualifier about what kind of text you are talking about.
There are many kinds of communication where the mental state of the writer matters very little and I would be interested in the text even knowing it was generated by an LLM (though I'd prefer to know it was generated by an LLM).
In particular, for most kinds of communication, the text says something about the world, and I care about how well the text matches the world much more than I care about whether it was produced by some human-like idea generator:
(In all those cases I want to know what GPT-7's contribution is. I think it's bad to be misled about what was LLM-generated.)
There are also situations where I am sympathetic to your point:
Given that current LLMs are weak at the sort of text that blogposts are, I think "just don't use LLMs to generate blogposts" is a reasonable heuristic. But my understanding is that you are making much more general claims that I think are wrong and will become obviously wrong if/when applied to expensive-to-run-and-smart LLMs.
I largely agree but your two lists are missing a bunch of really important cases that I gestured at, e.g.
[retracted due to communication difficulty. self-downvoted.]
The intro sounded promising, but almost immediately you're massively overclaiming. Maybe someone will pick it apart, but you're doing a lot of it, and I don't feel like it right now. Many sentences are similar to ones that are true, but taken literally they imply or directly state things that are somewhere between not-established and clearly-false. E.g., as StanislavKrym mentions: "Temporally: ...that is carrying out investigations." - this is just obviously not true in some cases. I do agree that there's some form of claim like your title that is plausible. Anthropic's recent paper seems to imply it's not consistently not testimony. I could buy that it's not systematically able to be testimony when trained on generative modeling of something else. Many of your subclaims are reasonable; please fix the ones that aren't.
(Note, posted by private request: The OP is primarily motivated by questions about human discourse--what it's for, how it works, how to care about it more. LLMs are the easiest contrast / foil.)
Obviously I disagree, but mainly I want to say, if you're trying to actually communicate with me personally, you'll have to put more effort into thinking about what I'm saying, e.g. seeing if you can think of a reasonable interpretation of "investigate" that LLMs probably / plausibly don't do.
What is YOUR interpretation of investigation that LLMs don't do? Whatever it is, either you are mistaken or your interpretation doesn't count this stuff (which I obtained by prompting GPT to do a task from the Science Bench) as an investigation.
E.g. the sort of investigation that ends in the creation of novel interesting concepts: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce?commentId=dqbLkADbJQJi6bFtN
[edit: retracted due to communication difficulty]
I agree they rarely do that and are not driven to it. I knew you meant that, but I'm not playing along with your use of the word because it seems to me to be an obvious rules-lawyering of what investigation is, in what is in my opinion confusingly called motte-and-bailey; if you were willing to try to use words the same way as everyone else you could much more easily point to the concept involved here. For example, I would have straightforwardly agreed if you had simply said "they do not consistently seek to investigate, especially not towards verbalizing or discovering new concepts". But the overclaim is "they do not investigate". They obviously do, and this includes all interpretations I see for your word use - they do sometimes seek out new concepts, in brief flashes when pushed to do so fairly hard - and if you believe they do not, it's a bad sign about your understanding; but they also obviously are not driven to it or grown around it in the way a human is, so I don't disagree with your main point, only with word uses like this.
Just FYI instead of doing this silently, this comment thread is pretty close to making me decide to just ban you from commenting on my posts.
...I will update to be less harsh rather than being banned, then. surprised I was even close to that, apologies. in retrospect, I can see why my frustration would put me near that threshold.
I don't think I mind harshness, though maybe I'm wrong. E.g. your response to me here https://www.lesswrong.com/posts/zmtqmwetKH4nrxXcE/which-side-of-the-ai-safety-community-are-you-in?commentId=hjvF8kTQeJnjirXo3 seems to me comparably harsh, and I probably disagree a bunch with it, but it seems contentful and helpful, and thus socially positive/cooperative, etc. I think my issue with this thread is that it seems to me you're aggressively missing the point / not trying to get the point, or something, idk. Or just talking about something really off-topic even if superficially on-topic in a way I don't want to engage with. IDK.
[ETA: like, maybe I'm "overclaiming"--mainly just by being not maximally precise--if we look at some isolated phrases, but I think there's a coherent and [ought to be plausible to you] interpretation of those phrases in context that is actually relevant to what I'm discussing in the post; and I think that interpretation is correct, and you could disagree with that and say so; but instead you're talking about something else.]
[ETA: and like, yeah, it's harder to describe the ways in which LLMs are not minds than to describe ways in which they do perform as well as or better than human minds. Sometimes important things are hard to describe. I think some allowance should be made for this situation.]
Temporally: there's no mind that is carrying out investigations.
- It won't correct itself, run experiments, mull over confusions and contradictions, gain new relevant information, slowly do algorithmically-rich search for relevant ideas, and so on. You can't watch the thought that was expressed in the text as it evolves over several texts, and you won't hear back about the thought as it progresses.
You just have to prompt an LLM like Claude, Grok, or GPT-5-thinking with a complex enough task, like one task from the Science Bench. GPT-5-thinking lays out the stuff it did, including coding and correcting itself. As for gaining new information, one could also ask the model to do something related to a niche topic and watch it look up relevant information. The ONLY thing GPT-5 didn't do was learn anything from the exchange, since nobody bothered to change the neural network's weights to account for the new experience.
Please stop conflating the plausible assumption that LLM-generated text is likely a mix-and-match of arguments already said by others with the less plausible assumption that an LLM doesn't have a mind. However, the plausible assumption has begun to tremble since we had a curated post whose author admitted to generating it by using Claude Opus 4.1 and substantially editing the output.
In the discussion of the Buck post and elsewhere, I’ve seen the idea floated that if no one can tell that a post is LLM-generated, then it is necessarily ok that it is LLM-generated. I don’t think that this necessarily follows - nor does its opposite. Unfortunately I don’t have the horsepower right now to explain why in simple logical reasoning, and will have to resort to the cudgel of a dramatic thought experiment.
Consider two LessWrong posts: a 2000-digit number that is easily verifiable as a Collatz counterexample, and a collection of first-person narratives of how human rights abuses happened, gathered by interviewing Vietnam War vets at nursing homes. The value of one post doesn’t collapse if it turns out to be LLM output; the other collapses utterly - and this is unconnected to whether you can tell that they are LLM output.
The Buck post is of course not at either end of this spectrum, but it contains many first-person attestations - a large number of relatively innocent “I think”s, but also lines like “When I was a teenager, I spent a bunch of time unsupervised online, and it was basically great for me.” and “A lot of people I know seem to be much more optimistic than me. Their basic argument is that this kind of insular enclave is not what people would choose under reflective equilibrium.” that are much closer to the Vietnam vet end of the spectrum.
EDIT: Buck actually posted the original draft of the post, before LLM input, and the two first-person accounts I highlighted are present verbatim, and thus honest. Reading the draft, it becomes a quite thorny question to adjudicate whether the final post qualifies as “generated” by Opus, but this will start getting into definitions.
It seems to me like both this post and the discussion around Buck's post are less about LLM-generated content and more about lying.
Opus giving a verifiable mathematical counterexample is clearly not lying. Saying "I think" is on somewhat shakier but mostly fine ground. LLMs saying things like "When I was a teenager" when not editing a human's account is clearly lying, and lying is bad no matter who does it, human or not. Extensively editing personal accounts indeed gets into very murky waters.
However, the plausible assumption has begun to tremble since we had a curated post whose author admitted to generating it by using Claude Opus 4.1 and substantially editing the output.
TBF, "being a curated post on LW" doesn't preclude something from also being a mix-and-match of arguments already said by others. One of the most common criticisms of LW I've seen is that it's a community reinventing a lot of philosophical wheels (which personally I don't think is a great dunk; exploring and reinventing things for yourself is often the best way to engage with them at a deep level).
I think not passing off LLM text as your own words is common good manners for a number of reasons - including that you are taking responsibility for words you didn't write and possibly didn't even read closely enough, so it's going to be on you if someone reads too much into them. But it doesn't really require any assumptions about LLMs themselves, their theory of mind, etc. Nearly the same would apply to hiring a human ghostwriter to expand on your rough draft; it's just that that has never been a problem until now because ghostwriters cost a lot more than a few LLM tokens.
One might think that text screens off thought. Suppose two people follow different thought processes, but then they produce and publish identical texts. Then you read those texts. How could it possibly matter what the thought processes were? All you interact with is the text, so logically, if the two texts are the same then their effects on you are the same.
But, a bit similarly to how high-level actions don’t screen off intent, text does not screen off thought. How you want to interpret and react to text, and how you want to interact with the person who published that text, depend on the process that produced the text. Indeed, "[...] it could be almost anything, depending on what chain of cause and effect lay behind my utterance of those words".
This is not only a purely propositional epistemic matter. There is also the issue of testimony, narrowly: When you publicly assert a proposition, I want you to stake some reputation on that assertion, so that the public can track your reliability on various dimensions. And, beyond narrow testimony, there is a general sort of testimony—a general revealing of the "jewels" of your mental state, as it were, vulnerable and fertile; a "third-party standpoint" that opens up group thought. I want to know your belief-and-action generators. I want to ask followup questions and see your statements evolve over time as the result of actual thinking.
The rest of this essay will elaborate this point by listing several examples/subcases/illustrations. But the single main point I want to communicate, "on one foot", is this: We care centrally about the thought process behind words—the mental states of the mind and agency that produced the words. If you publish LLM-generated text as though it were written by someone, then you're making me interact with nothing.
(This is an expanded version of this comment.)
LLM text is structurally, temporally, and socially flat, unlike human text.
This could have been ~~an email~~ a prompt.
We have to listen to each other's utterances as assertions.
Because we have to listen to each other's utterances as assertions, it is demanded of us that when we make utterances for others to listen to, we have to make those utterances be assertions.
If you wouldn't slash someone's tires, you shouldn't tell them false things.
If you wouldn't buy crypto on hype cycles, then you shouldn't share viral news. I learned this the hard way:
In the introduction, I used the example of two identical texts. But in real life the texts aren't even identical.
If you're asking a human about some even mildly specialized topic, like history of Spain in the 17th century or different crop rotation methods or ordinary differential equations, and there's no special reason that they really want to appear like they know what they're talking about, they'll generally just say "IDK". LLMs are much less like that. This is a big difference in practice, at least in the domains I've tried (reproductive biology). LLMs routinely give misleading / false / out-of-date / vague-but-deceptively-satiating summaries.
In order to make our utterances be assertions, we have to open them up to inquiry.
In order to make our utterances be useful assertions that participate in ongoing world co-creation with our listeners, we have to open up the generators of the assertions.
A sentence written by an LLM is said by no one, to no one, for no reason, with no agentic mental state behind it, with no assertor to participate in the ongoing world co-creation that assertions are usually supposed to be part of.