Crosspost from my blog.
Synopsis
- When we share words with each other, we don't only care about the words themselves. We care also—even primarily—about the mental elements of the human mind/agency that produced the words. What we want to engage with is those mental elements.
- As of 2025, LLM text does not have those elements behind it.
- Therefore LLM text categorically does not serve the role for communication that is served by real text.
- Therefore the norm should be that you don't share LLM text as if someone wrote it. And, it is inadvisable to read LLM text that someone else shares as though someone wrote it.
Introduction
One might think that text screens off thought. Suppose two people follow different thought processes, but then they produce and publish identical texts. Then you read those texts. How could it possibly matter what the thought processes were? All you interact with is the text, so logically, if the two texts are the same then their effects on you are the same.
But, a bit similarly to how high-level actions don’t screen off intent, text does not screen off thought. How you want to interpret and react to text, and how you want to interact with the person who published that text, depend on the process that produced the text. Indeed, "[...] it could be almost anything, depending on what chain of cause and effect lay behind my utterance of those words".
This is not purely a propositional epistemic matter. There is also the issue of testimony, narrowly: When you publicly assert a proposition, I want you to stake some reputation on that assertion, so that the public can track your reliability on various dimensions. And, beyond narrow testimony, there is a general sort of testimony—a general revealing of the "jewels" of your mental state, as it were, vulnerable and fertile; a "third-party standpoint" that opens up group thought. I want to know your belief-and-action generators. I want to ask followup questions and see your statements evolve over time as the result of actual thinking.
The rest of this essay will elaborate this point by listing several examples/subcases/illustrations. But the single main point I want to communicate, "on one foot", is this: We care centrally about the thought process behind words—the mental states of the mind and agency that produced the words. If you publish LLM-generated text as though it were written by someone, then you're making me interact with nothing.
(This is an expanded version of this comment.)
Elaborations
Communication is for hearing from minds
- LLM text is structurally, temporally, and socially flat, unlike human text.
- Structurally: there aren't live mental elements underlying the LLM text. So the specific thoughts in the specific text aren't revealing their underlying useful mental elements by the ways those elements refract through the specific thought.
- Temporally: there's no mind that is carrying out investigations.
- It won't correct itself, run experiments, mull over confusions and contradictions, gain new relevant information, slowly do algorithmically-rich search for relevant ideas, and so on. You can't watch the thought that was expressed in the text as it evolves over several texts, and you won't hear back about the thought as it progresses.
- The specific tensions within the thought are not communicating local-contextual demands from the specific thought back to the concepts that expressed the more-global contextual world in the background of the specific thought.
- Socially: You can't interrogate the thought, you can't enforce norms on the thinker, and there is no thinker who is sensitive to emergent group epistemic effects of its translations from thought to words. There is no thinker who has integrity, and there is no thinker with which to co-construct new suitable concepts and shared intentions/visions.
- This could have been ~~an email~~ a prompt.
- Why LLM it up? Just give me the prompt.
- When you publish something, I want you to be asserting "this is on some reasonable frontier of what I could write given the effort it would take and the importance of the topic, indicating what I believe to be true and good given the presumed shared context". It's not plausible that LLM text meets that definition.
- If the LLM text contains surprising stuff, and you didn't thoroughly investigate for yourself, then you don't know it's correct to a sufficient degree that you should post it. Just stop.
- If the LLM text contains surprising stuff, and you DID thoroughly investigate for yourself, then you obviously can write something much better and more interesting. Just stream-of-consciousness the most interesting stuff you learned / the most interesting ideas you have after investigating. I promise it will be more fun for everyone involved.
- If the LLM text does not contain surprising stuff, why do you think you should post it?
Communication is for hearing assertions
- We have to listen to each other's utterances as assertions.
- We have to defer to each other about many questions, which has pluses and minuses.
- Most statements we hear from each other are somewhere between kinda difficult and very difficult for us to verify independently for ourselves. This includes for example expert opinions, expert familiarity with obscure observations and third-hand testimony, personal stories, and personal introspection.
- It's valuable to get information from each other. But also that means we're vulnerable to other people deciding to lie, distort, deceive, mislead, filter evidence, frame, Russell conjugate, misemphasize, etc.
- When someone utters a propositional sentence, ze is not just making an utterance; ze is making an assertion. This involves a complex mental context for what "making a propositional assertion" even is—it involves the whole machinery of having words, concepts, propositions, predictive and manipulative bindings between concepts and sense organs and actuators and higher-order regularities, the general context of an agent trying to cope with the world and therefore struggling to have mental elements that help with coping, and so on.
- When ze asserts X, ze is saying "The terms that I've used in X mean roughly what you think they mean, as you've been using those terms; and if you try (maybe by asking me followup questions), then you can refine your understanding of those terms enough to grasp what I'm saying when I say X; X is relevant in our current shared context, e.g. helpful for some task we're trying to do or interesting on general grounds of curiosity or it's something you expressed wanting to know; X is roughly representative of my true views on the things X talks about; I believe X for good reason, which is to say that my belief in X comes from a process which one would reasonably expect to generally produce good and true statements, e.g. through updating on evidence and resolving contradictions, and this process will continue in the future if you want to interact with my assertion of X; my saying X is in accordance with a suitable group-epistemic stance; ...".
- In short, "this is a good thing for me to say right now".
- Which generally but not always implies that you believe it is true,
- generally but not always implies that you believe it is useful,
- generally but not always implies you believe that I will be able to process the assertion of X in a beneficial way,
- and so on.
- Because we have to listen to each other's utterances as assertions, it is demanded of us that, when we make utterances for others to listen to, we make those utterances be assertions.
- If you wouldn't slash someone's tires, you shouldn't tell them false things.
- If you wouldn't buy crypto on hype cycles, then you shouldn't share viral news. I learned this the hard way:
- I saw a random news article sharing the exciting, fascinating news that the Voynich manuscript has been decoded! Then my more sober and/or informed friend was bafflingly uninterested. Thus I learned that not only had the Voynich manuscript been decoded just that week, but also it had been decoded a month before, and two months before, and a dozen other times.
- Several times, people shared news like "AI just did X!" and it's basically always either BS, or mostly BS and kinda interesting but doesn't imply what the sharer said.
- I shared the recent report about lead in food supplements without checking for context (the context being that the lead levels are actually fine, despite the scary red graphs).
- In the introduction, I used the example of two identical texts. But in real life the texts aren't even identical.
- The choice of words, phrases, sentence structure, argument structure, connecting signposts, emphasis—all these things reveal how you're thinking of things, and transmit subtleties of the power of your mental gears. The high level pseudo-equivalence of "an LLM can't tell the difference" does not screen off the underlying world models and values! The actual words in LLM text are bad—e.g. frequent use of vague words which, like a stable-diffusion image, kinda make sense if your eyes are glazed over but are meaningless slop if you think about them more sharply.
- maybe you think that's a small difference. i think you're wrong, but also consider this... if it is small, then the total effect is small times 100 or 1000. i sometimes used to write in public without capitalizing words in the standard sentence-initial way. my reasoning was that if i could save a tiny bit on the cognitive load of chording with the shift key, then i could have the important thoughts more quickly and thoroughly and successfully, and that was more important than some very slight difference in reading experience. i still usually write like that in private communications, but generally in public i use capitalization. it makes it a bit easier to parse visually, e.g. to find the beginning of sentences, or know when you've reached the end of a sentence rather than seeing etc. and not knowing if a new sentence just started. that difference makes a difference if the text is read by 100 or 1000 people. are you seriously going to say that all the word choice and other little choices matter less than Doing This Shit? all text worth reading is bespoke, artisanal, one-shot, free-range, natural-grown, slow-dried, painstaking, occult, unpredictable, kaleidoscopic, steganographic—human. we should be exercising our linguistic skills.
- Writing makes you think of more stuff. You get practice thinking the thought more clearly and easily, and rearranging it so that others understand it accurately. At least my overwhelming experience is that writing always causes a bunch of new thoughts. Generating a video that depicts your AI lookalike exercising is not the same as you actually exercising, lol. Putting forth a topic in public without even doing this exercise regarding that topic is a kind of misdirection and decadent laziness, as if the public is supposed to go fill in the blanks of your farted-out notions. Verification is far from production, and you weren't even verifying.
- You can't make a text present propositions that are more true or good just by thinking about the text more and then keeping it the same no matter what. However, you can make a text more true or good just by thinking about it, if you would change the text if you thought of changes you should make. In practice if you do this, then you will change your LLM text a lot, because LLM text sucks. The more you change it, the less my objection applies, quantitatively.
- If you're asking a human about some even mildly specialized topic, like history of Spain in the 17th century or different crop rotation methods or ordinary differential equations, and there's no special reason for them to want to appear like they know what they're talking about, they'll generally just say "IDK". LLMs are much less like that. This is a big difference in practice, at least in the domains I've tried (reproductive biology). LLMs routinely give misleading / false / out-of-date / vague-but-deceptively-satiating summaries.
Assertions live in dialogue
- In order to make our utterances be assertions, we have to open them up to inquiry.
- LLM text is not open to inquiry.
- When you're making an assertion, we need you to be staking some of your reputation on the assertion.
- ("We" isn't a unified consensus group; but rather a set of other individuals, and some quasi-coherent subsets.)
- We're going to track whether your assertions are good and true. We might track separately for different domains and different modalities (e.g. if you prefaced by "this is just a guess but", or if you're in a silly jokey mood, and so on). We will credit you when you've been correct and when we are pressed for time (which is always). We will discount your testimony when you've been incorrect or poisonous. We will track this personally for you.
- If that sounds arcane, consider that you do it all the time. There are people you'd trust about math, other people you'd trust about wisely dealing with emotions, other people you'd trust about taking good pictures, and so on.
- You can't go back and say "oh I didn't mean for you to take this seriously", if that wasn't reasonably understood within the context. You can say "oops I made a mistake" or "yeah I happened to give a likelihood delta from the consensus probabilities that wasn't in the direction of what ended up being the case".
- But you are lying in your accounting books if you try to discount the seriousness of your assertions when you're proven incorrect. E.g. if you try to discount the seriousness by saying "oops I was just poasting LLM slop haha". It's not a serious way of communicating. It's like searching for academic papers by skimming the abstracts until you find an abstract that glosses the paper's claims in a vague way that's sorta consistent with what you want to assert, and then citing that paper. It's contentless, except for the anti-content of trying to appear contentful.
- We might want to cross-examine you, like in a courtroom. We want to clarify parts that are unclear. We want to test the coherence of the world-perspective you're representing. We want to coordinate your testimony with the testimony of others, and/or find contradictions with the testimony of others.
- We want to trace back chains of multiple steps of inference back to the root testimony.
- If David judges that Alice should be ostracized, on account of Carol saying that Alice is a liar and on account of Bob saying that Alice is a cheat, but Carol and Bob are each separately relying on testimony from Eve about Alice, then this is a fact we would like to track (a concrete sketch of this bookkeeping follows this list).
- We want to notice contradictions between different testimonies and then bring the original sources into contact with each other. Then they can debate; or clarify terms and reconcile; or share information and ideas and update; or be disproven; or reveal a true confusion / mystery / paradox; or be revealed as a liar. Even if one assertion in isolation can't be decided, we want to notice when one person contradicts others in several contexts (which may be the result of especially good behavior or especially bad behavior, depending).
- We want to avoid miasmatic pollution, i.e. unsourced claims in the water.
- Unsourced claims masquerade as consensus, and nudge the practical consensus, thus destroying the binding between the practical consensus and the epistemic consensus. Instead, we want claims made by people saying "yes I've seen this, I'm informing you by making this utterance where you'll take it as an assertion".
- We don't want people repeating mere summaries of consensus. This leads to a muffling of the gradient of understanding and usefulness. Think of an overcooked scientific review paper that cites everything and compresses nothing. It's all true and it's all useless. Also it isn't even all true, because if you aren't thinking then you aren't noticing other people's false testimony that you're repeating.
- LLM text is made of unsourced quasi-consensus glosses.
- The accused has some sort of moral right to face zer accuser, and to protection from hearsay, and to have accusers testify under penalty of perjury.
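To make that bookkeeping concrete, here is a minimal, purely illustrative sketch in Python of the David/Carol/Bob/Eve example above. The `relies_on` mapping and the `root_sources` helper are made up for this illustration, not drawn from any real system; the point is just that each assertion records what it relied on, so chains of inference can be traced back to root testimony.

```python
# Purely illustrative sketch: the names come from the example above;
# `relies_on` and `root_sources` are made up for this illustration.

# Map each assertion (speaker, claim) to the assertions it relies on.
# An empty list means the assertion is offered as firsthand (root) testimony.
relies_on = {
    ("Eve", "Alice is untrustworthy"): [],
    ("Carol", "Alice is a liar"): [("Eve", "Alice is untrustworthy")],
    ("Bob", "Alice is a cheat"): [("Eve", "Alice is untrustworthy")],
    ("David", "Alice should be ostracized"): [
        ("Carol", "Alice is a liar"),
        ("Bob", "Alice is a cheat"),
    ],
}

def root_sources(assertion, graph):
    """Trace an assertion back through its chain of reliance to root testimony."""
    sources = graph.get(assertion, [])
    if not sources:
        return {assertion}
    roots = set()
    for source in sources:
        roots |= root_sources(source, graph)
    return roots

# David's judgment looks doubly supported, but both chains bottom out in Eve:
print(root_sources(("David", "Alice should be ostracized"), relies_on))
# {('Eve', 'Alice is untrustworthy')}
```

Tracing David's judgment shows that its two apparently independent supports both bottom out in Eve's single assertion. None of this bookkeeping is available for LLM text, because there is no assertor at the root of the chain.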
- In order to make our utterances be useful assertions that participate in ongoing world co-creation with our listeners, we have to open up the generators of the assertions.
- LLM text tends to be less surprising—more correlated with what's already out there, by construction.
- This is the case in every way. That hides and muffles the shining-through of the human "author"'s internal mental state.
- We want the uncorrelatedness; the socially-local pockets of theoryweaving (groundswell babble—gathering information, hypothesizing ideas and propositions) and theorycrafting ("given enough eyeballs, all bugs are shallow"—testing predictions, resolving contradictions, retuning theories, selecting between theories).
- If you speak in LLM, we cannot see what you are thinking, how you are thinking it, how you came to think that way, what you're wanting, what possibilities we might be able to join with you about, and what procedures we could follow to usefully interoperate with you.
- I want you to generate the text you publish under your own power, by cranking the concepts as you have them in your head, so I can see those gears working, including their missing teeth and rough edges and the grit between them—and also the mechanical advantage that your specific arrangement has for the specific task you have been fiddling with for the past month or decade, because you have applied your full actual human general intelligence to some idiosyncratic problem and have created a kinda novel arrangement of kinda novel-shaped gears.
- I want to be able to ask you followup questions. I want to be able to ask you for examples, for definitions, for clarifications; I want to ask you for other possibilities you considered and discarded; I want to ask you what you're still confused/unsure about, what you're going to think about, what you're least and most confident in, where you think there's room for productive information gathering.
- Sometimes when people see something interesting and true, they struggle to express it clearly. I still want that text! Text that is literally incorrect but is the result of a human mind struggling to express something interesting / useful / novel / true, is still very useful, because I might be able to figure out what you meant, in combination with further information and thinking. LLM text throws all that stuff out.
- I want to figure out what some of your goals / visions are, so we can find shared intentions. This process is difficult and works through oblique anastomosis, not by making an explicit point that you typed into a prompt for the LLM to ensloppenate.
- Stop trying to trick me into thinking you know what the fuck you're talking about.
- Non-testimony doesn't have to be responded to, lest it be trolling—cheaply produced discourse-like utterances without a model behind them, aimed at jamming your attentional pathways and humiliating you by having you run around chasing ghosts.
- A sentence written by an LLM is said by no one, to no one, for no reason, with no agentic mental state behind it, with no assertor to participate in the ongoing world co-creation that assertions are usually supposed to be part of.