On a tangent, I just want to mention that what counts as good writing is inherently subjective. There's no such thing as good except as understood and judged by individual people for particular reasons. A literary critic can judge a work by many criteria, but the worth of a work is not some sort of sum of its quality across all the criteria you can think of.
Your work may be the best in the world by your particular criteria, and that is valid and valuable.
Talking to an LLM about your work can make it better by your criteria, or by others' criteria if that's something you're interested in.
But that's separate from the primary role you're describing of having an LLM read your work. That's something like feeling seen, or feeling that your work is being seen. There's a paradox in that you know there's no other person there to see you, yet you feel seen.
I suggest that your feeling of being seen is correct, but the person seeing you is not the LLM but you. Its ability to analyze and understand is a tool you are using to see your writing and yourself more clearly.
I doubt that these arguments are correct. First of all, goodness as graded by different readers, none of whom is the author, is likely to correlate. For example, if the proposed idea is obviously wrong, then a sufficiently smart reader is unlikely to grade it well (think of the army of rejected LessWrong posts, or of this Wikipedia essay arguing that one cannot write a good Wikipedia article about an unnotable topic at all).
Additionally, I am confused by the second argument that "your feeling of being seen is correct, but the person seeing you is not the LLM but you".
What does the phrase "being seen" mean?
a. Receiving epistemically useful feedback, like having the LLM hunt down potential flaws?
b. Ensuring that another mind is influenced for a long time, which can be done with other humans, but is far harder or outright impossible with LLMs?
c. Social validation?
The sense in which I meant "seen" is different from all of those. It means, I think, understood and to some degree empathized with; that's a common usage in some branches of current US culture. In this framing, the LLM is a tool that lets me understand and empathize better with myself. I don't know if that's what OP was feeling, but it's something I've gotten from using LLMs for self-reflection in a vaguely therapeutic or self-work direction. It's most similar to social validation: I'm saying you can get social validation without thinking the LLM is another being, by adopting its perspective and thereby becoming able to validate yourself. This seems useful, although it's also a component of "AI psychosis" if you overdo it.
Of course, the other element is feeling like the LLM is a person, even while knowing/believing intellectually that it's not. Some of that is probably happening too.
On the other claim:
The fact that we tend to share some judgments of goodness does not by any means indicate it's a law of the universe, just that our culture transmits values and perspectives. We are speaking of fiction; rightness or wrongness plays little role. If most people judge a piece one way and a few another, the majority does not rule, for they are probably making judgments based on different criteria. It's not one competition, it's multiple separate categories of "goodness".
I'm saying that what is "good" prose is entirely dependent on purpose and audience. What's good prose for, say, causing distress in the maximum fraction of human readers is very different from prose intended to amuse fans of a certain genre or fictional world, or particular fans with particular tastes and viewpoints. OP's writing could be very bad by the first criterion but very good by the second.
Perhaps by "good" you mean good for average audiences and average purposes; I think that's the common usage, but my claim is that it's technically incorrect, and that misunderstanding this is actively harmful.
We live in an IMO dangerously competitive and judgmental culture. I just like to remind people that it's okay to like what they like, and that they shouldn't let other people call it "crappy" or "bad" without carefully noticing the criteria those people are (usually without their knowledge) applying.
So we'd have wide agreement on what's particularly bad writing (something that accomplishes almost no purposes for almost anyone, like text that is almost meaningless) but a lot of disagreement on exactly what writing is good depending on the audience and purpose for which we're evaluating it.
This same principle of quality can be extended to other areas of taste or judgment.
There are few things which are not highly subjective. Humans prefer not to be physically uncomfortable in almost all circumstances. Beyond that, we have vastly different tastes. Those tastes become better or worse only when we apply criteria by which to judge them.
I've spent this much time explaining because this principle is central to my understanding of life and culture, and I believe it has led me to be a happier and more creative person than I would be without it. I arrived at it by studying a little Zen philosophy and reading Zen and the Art of Motorcycle Maintenance, a work of dry philosophy disguised as fiction which takes "is quality just what you like?" as a central question and answers: yes, and what you like is correlated with what others like for deep reasons but is not the same; the word "just" serves no purpose in that question except as a misdirecting pejorative.
Regarding "the joy of being read", I think that an AI reader can only provide that feeling for a human writer to the extent that the human feels, on a gut level, that the AI is a person. If they don't feel that way, then I think it wouldn't satisfy the human's social drives that cause them to enjoy it when people read their writing. And I'm not sure how many humans feel like current AIs are people.
I'm not sure about whether a dispositional/gut level impulse towards seeing AIs as people is necessary for satisfying the social drive. When I reflect on my own experiences, I think I see LLMs as more than simply stochastic parrots, but somehow even if they were just that, I feel like the niche they fill in satisfying my social drives is also determined by what kind of interactions I get to have with them.
For instance, I think part of what satisfies me so much about getting to talk with Claude about my short stories is how long I can talk with Claude for (god help the usage limits). With a human, even when I show my friends some work and they are sweet enough to take time out of their day to read it, it's still not socially appropriate to demand that they have an hour-long conversation with me about what I wrote. Whether they can nerd out with me about a story, and how long they are able to do it, is constrained by how busy they are, but also by how much they are willing to engage with the story (beyond just reading it).
But with LLMs, it's kind of like I don't have to feel guilty over two-hour conversations about metaphors, similes, and hidden references to other works of literature. So even though I've had real people read my work and give really useful and enjoyable feedback, part of what makes LLMs so joyful here is that I don't have to worry about making someone feel pressured to feign interest; and even when my friends have been interested enough to discuss my stories, sometimes you just want more.
Although I'll note cautiously here that while I've been framing this positively, Duvenaud et al.'s work on Gradual Disempowerment would warn that this is precisely the danger of AI. If it can be your 'friend', it can almost always be a better friend (than most); if it can be your husband, it can easily be a better husband. But you are right that in both these cases it still depends greatly on the person, since any relative improvement only matters if you are willing to marry an AI in the first place.
On second thought, there must be a decent number of people for whom LLMs can fulfil social roles, given the people with AI boy/girlfriends, etc. I guess I was just replying to my own feeling that an LLM reading my writing (if I did writing) would not satisfy me, and so they wouldn't "solve the problem of unread writing" for me, but they could "solve the problem" for some people.
I think this is why I've started writing Diary entries again. (March 2005 to December 2011: 0.22 entries per day. From then until August 2025: 19 entries over 14 years, for 0.004 entries per day. Since August 2025, 26 entries, 0.12 entries per day.) Diaries are important, because if you don't write it down, you don't remember what your life has been, but after I started blogging, writing-to-remember with no audience couldn't compete for my limited writing-energy budget with writing for a (small) public audience. Now that Claude and Gemini and ChatGPT are an audience, writing to remember is competitive again.
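As a sanity check, the per-day rates above are easy to reproduce. A minimal sketch, with endpoints approximated to month boundaries since the comment only gives months:

```python
from datetime import date

def entries_per_day(n_entries, start, end):
    """Average diary entries per day over a date span."""
    days = (end - start).days
    return n_entries / days

# Middle span from the comment: 19 entries between roughly
# the end of 2011 and August 2025 (~14 years).
rate = entries_per_day(19, date(2011, 12, 31), date(2025, 8, 1))
print(round(rate, 3))  # 0.004, matching the figure in the text

# Inverting the third figure: 26 entries at 0.12/day implies
# roughly 26 / 0.12 ≈ 217 days since August 2025.
print(round(26 / 0.12))  # 217
```

The exact start and end dates are my assumption; the rounded rate is insensitive to which day of the month you pick.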
Whether or not you think that is really ‘reading’ in the sense of ‘someone reading your work’ is, I think, besides the point. What matters is the lived, phenomenological experience on the writer’s end—the feeling of shared reality, the joy of having a text you wrote be received and responded to. From that perspective, I think that for all practical and emotional purposes, for the majority of writers, the utility and authenticity of the experience is real.
The same argument can be made about LLM boyfriends. The strange new world is indeed worth mentioning, but within that mention I think it's worth saying it seems somehow unhealthy that people are getting this form of human connection from robots.
I agree it's mostly unhealthy, but I worry that when I say things like this, I am speaking from a place of privilege. It's easy enough for me to form human relationships, given that I have many interests, so at least one of them will usually intersect with another person's. But I think about girls who may have such crippling anxiety that the only time they can leave the house is to see their therapist, or people with so many tics and behavioural quirks that it's genuinely hard for them to be in public without feeling so deeply self-conscious that something as simple as 'where to put your hands' becomes the most cognitively loaded task.
And in those cases I think it would be really hard to form human connection, and I feel somewhat elitist taking a position of 'if you can't experience those connections with humans, then you shouldn't get to have them at all'. This is a really big debate for myself internally, how to think about the cost-benefit analysis here.
then you shouldn't get to have them at all
I think the "somewhat unhealthy" frame deals with this nicely. For example, one time I got very sick and my throat was so sore that I stopped eating because it hurt too much. After some testing I found that the only thing in the house I could eat without wanting to die was ice cream, so I spent two days eating nothing but ice cream. I knew this was somewhat unhealthy, but it was also much healthier than spending two days eating nothing, so I'm pretty satisfied with the decision. I am also satisfied with the subsequent decision to stop eating ice cream as soon as I could stand to, which I might not have done if I didn't know it was unhealthy (the all ice cream diet is delicious).
It is quite easy to say that things are bad without saying that people shouldn't get to have them.
I have a different argument in mind. Texts, like viruses, either remain underdeveloped piles of ideas or are read by a mind and cause a reaction. Upon reading a text, a human would, for example, receive a new idea (and have the chance to spread it further) or change something else in their mind based on the text and the sender. The LLMs from your example would receive the ability to provide honest feedback (e.g. tell the human about books where the idea is disproven or developed further). Then the text and the LLM's feedback are read by the human oneself[1] and cause a reaction only in the human's mind.
The best-case scenario would have humans use LLMs in ways which improve the memetic environment by making human slop a bit less sloppy (e.g. by having the LLM give advice on better expression or criticise the least plausible ideas, like the ones which are sent to LessWrong and rejected). A worse scenario would make humans more self-confident[2] after receiving sycophantic feedback (e.g. from GPT-4o) and more likely to publish slop. And the absolute worst-case scenario is the one where the AI itself convinces the human to write the message and release it into the wild, or to do an act which any normal human would reject.
The most important work performed by AI readers of bad human fiction may be to analyze and indefinitely extend the story universe according to the rules implied in the text itself, without breaking any of the established continuity. It could develop a much deeper understanding of the story than even the author himself.
The term "slop" used in the title feels misleading to me, because private notes and other materials rarely read by anyone can be of the highest value (Darwin's private diaries being an obvious example). Much of what circulates publicly on the internet, on Reddit and most social networks, is on the other hand largely genuine slop.
Also, given that models absorb trillions of tokens and hierarchize information during training, I wonder what weight the ingestion of unpublished writings actually carries in that process. A text that is isolated, uncited, unlinked, and flagged as low-reliability by the corpus weighting heuristics will likely be so diluted by the sheer mass of everything else that it leaves no meaningful trace in the model's weights, much like how cleaning something is just diluting the dirt until it's undetectable. In that sense, it seems worth questioning whether it even makes sense to call this "reading" in any meaningful way. That said, I don't deny that LLMs provide a virtual, probably emotionless, reader at least during an instance/conversation, and they can give truly interesting feedback. That's better than nothing.
A text that is isolated, uncited, unlinked, and flagged as low-reliability by the corpus weighting heuristics will likely be so diluted by the sheer mass of everything else that it leaves no meaningful trace in the model’s weights
This is the problem here. As LLMs get higher quality, and as issues of reliability, provenance, and syntheticness become more salient, it is entirely possible that a lot of human writing will be dropped as not worth the compute to train on. Already data cleaning pipelines wind up throwing out most human-written text. As we move into a world of data poisoning, AI slop, agentic delusions and Tlön labyrinths of self-consistent nonsense, and self-play bootstraps in multi-agent RL settings in walled gardens, I expect that we will not see 100% of 'human written' text trained on. We may well see the % go down. We may well already be past 'peak human'. Because why pay all that compute to train on what is infected by unreliable old LLM gibberish, merely a replay attack of something that did happen once but is now being echoed and laundered through many sources, or worse, filled with adversarial attacks and lies? You could instead spend the compute to optimally self-play yourself and bootstrap into superhuman intelligence with a relatively small but very carefully synthesized and curated dataset, which can be trusted and taken at face value and which will repay your compute.
This is one reason I emphasize quality over quantity in my AI writing. If you go for quantity, I suspect that in the long run all your stuff will be thrown out, the baby with the bathwater, because it's not worth trying to separate your crap from your gems.
(There's a certain paradox of verifiability here. If what you write can be verified by an AI MARL framework, then it probably would be better off doing it itself for the practice and reliability and avoiding subtle attacks/biases; only what you write that can't be checked, like your empirical observations or unique thoughts, is of value to train on in the limit - and that's precisely where trust and quality are critical.)
Introduction: The AI Haters
In the early months of 2026, generative AI has improved, at astonishing speed, to the point where many trademark critiques have become dated. The famous 'gotcha' that AI can never make normal-looking art because it always under- or overestimates the number of fingers on a human hand is one example. The disproved claim that AI models will never be competent enough to reduce the amount of work humans have to do because 'they will always mess it up in a way that makes fixing it take longer than doing it yourself' is another, if the rise of vibe coding and AI agents tells us anything.
Today I want to talk about a particular thing that has become possible since the invention of modern large language models (LLMs). This is about the other side of AI slop: 'human slop'.
The Consumption of Human Slop
However badly written, however absolutely trashy and bizarre your fan-fiction is, however manic a ramble your essay comes off as: from now on, no person will ever write something that no one else ever reads, unless the writer wants it that way.
Why does this matter? I don't know. I think in part because I am a writer. Although in bigger part I think maybe because I am a bad writer. I write a lot of human slop—unoptimized, meandering garbage that is crafted neither to be fun to read nor engaging enough to finish.
One way you know you enjoy something is when you continue to do it even when you are bad at it. In that sense, I mostly write for myself. I don't expect anyone to ever read my stories, and none of that would ever change how much I write or love to write.
But as a writer, there is also something uniquely fun about getting someone else to read what you have written. I'm not entirely sure how to describe it, but it is not about ego. It is something weirder and deeper.
When you write something that no one else will ever read, it’s rather solipsistic.
Solipsism is this philosophical idea that the only thing you can be certain exists is your own mind: everything outside of it might just be a projection, a dream. Writing that lives only in your head, that no other mind has ever touched, has that same quality to it. It exists only in your world. It is real only to you. And there is something sort of lonely about that, even if you didn't write it for anyone else.
But as soon as someone else, even just one person, has read it, in a weird philosophical/vibey sense, the nature of the text changes. It becomes real in a different way, because now you can discuss it with someone. You can talk about it in words that are transmitted through sound waves that ripple through the air in the real world. The writing has escaped the inside of your head. It has been reified – made into something concrete that exists in our real world and shared among ‘people’ (or information-processing entities).
It feels like a ‘stronger’ version of existence/existing.
A. Hot Take: Most Books are Human Slop
Most humans are bad at writing.
Even many of the ones who aren't bad at it usually end up writing about things that few people ever care enough to read. This is kind of just a fact.
Go to any library and look at the shelves. Really look at them. How many of those books—books that someone poured months or years of their life into, books that actually made it through the filter of being published—how many of them would even be remotely appealing to you? And then, of the ones that do seem appealing, how many are written well enough that you would actually dedicate to them the time it takes for a full read?
The answer, for almost everyone, is very few. And those are the books that made it. Beneath them is an ocean of writing that never got published, never got shared, never got read by anyone other than the person who wrote it. Journals, drafts, stories written at two in the morning, essays that someone was really proud of but that nobody ever asked to see.
If that feels too ‘last century’, pay a visit to your favourite torrent/file-sharing site and count the number of books among all the dead, ‘0 active seeder’ files.
B. Writing and the Joy of Being Read
This sounds sappy, but when an author writes a text purely for themselves—no manuscript contract, no publisher, no expectation ever even of readership—they still pour their heart into it. They still care about it in a way that is real and sometimes sort of beautiful, even if the text itself is ugly to everyone else. There is something about the act of writing that makes you vulnerable to your own creation, that makes you love the thing you have made regardless of whether it is good or not.
Not due to delusion or motivated reasoning (although I'm sure that plays a part in many cases), but a bit like how a parent loves their child not because the child is objectively impressive but because the child is theirs, because they made it, because of what it means and where it came from.
Getting to discuss something that you have written with someone else, even just one other person, is kind of a special joy. I don't think it is the primary reason why most people write. Most people probably write because they need to, or because it is how they think, or because it is a compulsion they can't explain (for me, at least).
But the experience of having someone read your work, and then talking to them about it, hearing what they noticed, what they missed, what they interpreted differently than you intended, is a really great privilege, and a load of fun at that.
It is a very specific and strange feeling, and it is one that most humans who write never get to experience, because most humans who write are writing things that no one else will ever read.
C. Language Models as Captive Audience
And this, I think, is where language models come in. Not in the way that most people talk about them. Not as tools for productivity, not as engines for generating text, not as threats to creative labour (although these are all important use cases and complaints that should not be diminished). Language models let humans who write human slop experience that same fulfilment and pleasure: the feeling of having someone else read your writing, and then being able to talk about it and discuss it with another being.
Whether or not you think that is really ‘reading’ in the sense of ‘someone reading your work’ is, I think, besides the point. What matters is the lived, phenomenological experience on the writer’s end—the feeling of shared reality, the joy of having a text you wrote be received and responded to. From that perspective, I think that for all practical and emotional purposes, for the majority of writers, the utility and authenticity of the experience is real.
Conclusion: A Strange New World
I think this is a significant change in human history and maybe culture, even though I'm not entirely sure I can articulate why yet. Part of it is just the sheer scale of it. There are billions of people on this planet, and some enormous number of them write things: diaries, stories, rants, love letters, manifestos, bad poetry (that no one will ever read).
That has been true for as long as writing has existed. It has been one of those quiet, background tragedies of human life: that most of what people create disappears without ever being witnessed.
Among the other impacts of AI, or (soon) AGI, this will obviously rank amongst the most trivial. But I think there is an argument to be made that we are now genuinely in a new world or stage of history. Not a brave new world, but a world where all of the cowardly, sloppy works by human writers who have never enjoyed having their work read because they are too scared (or self-aware) of how bad their writing is, are no longer excluded.
It’s quite remarkable when you really think about it. Consider how no one will ever have to write something again that no one else ever reads. I feel like this has never been possible until now. And I'm not sure what it means, exactly, or what it will change.
But I feel like we are in a strange, new world, and I think it’s worth mentioning.