Epistemic status: something of a rant. Is not meant to make claims about the general capabilities (or lack thereof) of LLMs (beyond their prose), but observations about how society seems to use them excessively without being perfectly candid about it.
[Image: Nanobanana's take on the situation]
Over the past few months, it has become a bit of a running gag that every evening I inform my girlfriend of the multiple cases of LLM slop I encountered in the wild throughout the day[1]. And pretty much every day, I have several cases to report. These typically involve:
Exhibit 1: The all-time top-rated post on /r/humanize - if TwainGPT is so great, why didn't they use it for this post? Many other posts and comments on that subreddit read as equally LLM-generated.
Exhibit 2: A post on thatprivacyguy.com - not quite as "in your face", but it matches the typical "LLM article style" exactly, from "silently" to the many short sentences to "The file sits at". The rest of the article is no different in that regard.
Exhibit 3: https://browsergate.eu/ - well, what else would one expect from an "association of commercial LinkedIn users"
I'm using the term "slop" pretty loosely throughout this post and primarily mean LLM-written speech/text, most of which one may classify as "stylistic slop". E.g., at least two out of the four slop-style LessWrong posts I encountered seemed quite valuable and sincere, and the authors likely just used LLMs for the writing itself while still communicating primarily their original thoughts and ideas. So, when I speak of slop, I don't necessarily mean there's no value behind something, but rather that some people don't appear to care a lot about the words they communicate, and that LLMs' way of speaking is showing up absolutely everywhere on the internet, including in many places I wouldn't necessarily have expected it.
I don't think anyone here will be surprised that this happens a lot. But I'm still occasionally shocked by how frequent it is. For blog posts and articles, this seems somewhat expected, but I was more surprised to find it in personal messages, Slack messages, and online comment sections (isn't the overhead of using an LLM for that higher than the time it saves?). And in videos - creating a video involves many steps that an LLM won't notably speed up, so the effort-saving aspect is much less relevant there than when the medium is text.
To name a few examples of videos that appear to involve a lot of LLM speech[2]:
I don't want to get too deep into why I have a high credence that the scripts of these videos are at least partially LLM-generated, but not everyone here will be familiar with the unique tells of LLMs. It may also be the case that different people pick up very different patterns. And the most infamous example of how to recognize LLM text is the em dash — which you can't hear or see in a video. So, I'll just give a few examples of the types of patterns that are more and more ubiquitous due to today's frontier models being completely in love with them:
Asking Opus 4.6 to wittily explain the game PUBG. Its very first suggestion includes "parachute onto an island" and a reference to frying pans. Maybe that's just what's most salient about this game, or maybe that's what happens when you ask an LLM to be witty.
There are many, many more such patterns, and each of the videos listed above contains dozens of such cases. I'm not claiming they're entirely LLM-written, and some seem less LLM-like overall than others, but I am pretty certain that a substantial share of the words in these video scripts was originally produced by an LLM. That said, I don't think there's any way to prove that the examples I mentioned really are (partially) AI-generated, and I may be wrong about any individual case.[5]
What Does This Tell Us About the World?
Overall, there seem to be at least three possible explanations for the recent surge of LLM speech patterns on the internet:
1. People really are letting LLMs write much of what they publish, without disclosing it.
2. Humans themselves have picked up LLM speech patterns (the "sloppification of human brains").
3. "Nothing to see here": these patterns were always roughly this common, and people like me have only now started noticing them.
As you can imagine, explanation #1 seems most likely to me, even though I do acknowledge that "sloppification of human brains" is a real effect and I sometimes catch myself reaching for phrases that I probably picked up from LLMs[6]. However, I doubt that #2 can explain the recent omnipresence of LLM speech, for three reasons:
The "nothing to see here" explanation also seems unlikely to me overall, as the evidence that slop style really is everywhere seems pretty overwhelming. That said, I could imagine that I'm sometimes reacting too strongly, and that some of the cases of suspected slop I encounter really are false positives where I'm reading too much into a few stylistic coincidences. For instance, "And this is where it gets interesting"-type phrases may never have been that unusual, and I just now started paying attention to them.[7]
All that said, even if it's true that many people out there are casually presenting LLM speech as their own words, I don't want to make any claim about how much effort any of these creators have put into their pieces overall. It certainly reduces my trust in them doing thorough work, but that's merely a heuristic that may of course be wrong in any given case.
Why Are People Doing This?
Why would creators (and friends, and colleagues, and CEOs giving keynote speeches) rely so heavily on LLM-written text without disclosure? I haven't asked them, so I can only speculate: possible reasons range from laziness, to a lack of time and pressure from deadlines, to simply not seeing a problem with it and considering LLMs writing text for them a completely acceptable case of tool use. That last one is a point one can certainly make, even though I'd largely disagree, as I'll explain later.
Part of the reason is very likely also that many people underestimate how recognizable LLM language really is (unless you put real effort into prompting them out of it, which my impression says is very hard to do). And indeed, many people who use LLMs to write text that they publish at least take the one extra step of replacing em dashes with some other character to make it less obvious that their text is LLM-written. So many people do seem to prefer to hide that fact.
Of the people I've spoken to (a somewhat arbitrary sample of non-rationalists), more than half seemed at least aware of the "Not X — Y" pattern. And yet, a successful tech CEO and his team, as well as the people who made that SeaGate video, averaged about one instance of that widely known pattern per minute without realizing it makes their speech sound LLM-generated[8]. This makes me think that surprisingly many people really are oblivious to the fact that LLM writing is easy to recognize, and that when you use it, (some) people will be able to tell.
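To make that kind of per-minute tally concrete, here's a minimal sketch in Python. It's a toy illustration of my counting, not a serious detector: the pattern list and the sample text are made up for demonstration, and none of these patterns is proof of anything on its own.

```python
import re

EM_DASH = "\u2014"  # the em dash character itself

# Illustrative patterns only; real LLM tells are far more varied.
PATTERNS = {
    "not X, dash, Y": re.compile(
        rf"\bnot\b[^.!?{EM_DASH}]{{1,60}}{EM_DASH}", re.IGNORECASE),
    "em dash": re.compile(EM_DASH),
    "where it gets interesting": re.compile(
        r"\b(this|here)('s| is) where it gets\b", re.IGNORECASE),
}

def slop_rate(transcript: str, minutes: float) -> dict[str, float]:
    """Occurrences of each pattern per minute of estimated runtime."""
    return {name: len(rx.findall(transcript)) / minutes
            for name, rx in PATTERNS.items()}

sample = (f"It's not a game {EM_DASH} it's a survival story. "
          "And this is where it gets interesting.")
print(slop_rate(sample, minutes=0.5))  # each pattern: 2 hits per minute
```

Even something this crude makes the rate quite visible in a transcript, though a human judgment call is of course still needed for every hit.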
Why Do I Care?
There are a variety of reasons why this entire relatively new development seems less than ideal to me.
Honesty / Truth / Authenticity
First and foremost, it just seems dishonest when people sell LLM writing as their own words. Sure, there are many degrees here. Some people may invest a lot of cognitive work ahead of time and come up with well-thought-out lists of ideas/arguments/whatever, and then merely use an LLM to connect the dots and turn their ideas into flowing prose. And perhaps they then invest even more time to meticulously check that the LLM's output stays faithful to their original ideas. Others may use LLMs because they have a hard time phrasing something in a diplomatic, non-offensive way when they are angry or annoyed about someone or something. Still others may not feel comfortable writing in English, or whatever language they publish their work in[9]. I can certainly sympathize with such cases. But I'd be very surprised if these were the most common ones. E.g., in the videos I linked to, none of these caveats seem to apply.
In the past half year, two people I know have shared blog posts with me that "they wrote" but that, very clearly, were written by LLMs, from the em dashes to the typical section headings to all the slop patterns I described earlier in the post. Again, it's hard to say whether they invested any real effort into these posts at all, but based on how little time they seemingly spent writing or editing, that seems unlikely. I'm happy to read something a person has put actual effort into, but if it's not worth your time to write it, then it's not worth my time to read it.
Similarly, a friend and some work colleagues of mine have repeatedly used LLMs to write chat messages, or sometimes Google Doc comments, even in entirely informal one-on-one interactions. I see no issue with doing so when you flag it explicitly, like "Here's Claude's summary of my thoughts on the issue" or whatever, but often this was not the case, and then it seems pretty deceptive.
Correlated Communication
Many people are familiar with the anchoring effect: if you ask others to estimate some number but first present them with your own guess, this tends to systematically skew their estimates towards yours. One explanation for why this happens is that when people take a guess, they intuitively have some fuzzy range of plausible-seeming values in mind. When not anchored, they might do a good job of finding something close to the middle of that range. But when you anchor them, they may instead start at the anchor and then gradually move towards their own plausible range until they're satisfied, which leads to systematically skewed responses.
[Image: Depiction of anchoring. Instead of sampling without bias from your intuitively plausible range of some value, you unwittingly start from the anchor and move in one direction until the value seems plausible enough. (image generated with ChatGPT)]
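For intuition, here's a tiny simulation of the mechanism from the caption. This is a toy model of my own (all numbers are made up for illustration): each person holds a fuzzy "plausible range", an unanchored answer lands at the middle of that range, and an anchored answer starts at the anchor and stops at the nearest edge of the range.

```python
from statistics import fmean
import random

def simulate(anchor: float, n: int = 10_000, seed: int = 0):
    """Toy anchoring model: mean unanchored vs. anchored estimates."""
    rng = random.Random(seed)
    unanchored, anchored = [], []
    for _ in range(n):
        center = rng.gauss(100, 10)      # person's intuitive best guess
        half_width = rng.uniform(5, 20)  # fuzziness of their plausible range
        lo, hi = center - half_width, center + half_width
        unanchored.append(center)        # middle of the plausible range
        # Start at the anchor, move toward the range, stop once inside it:
        anchored.append(min(max(anchor, lo), hi))
    return fmean(unanchored), fmean(anchored)

u, a = simulate(anchor=150)
print(f"unanchored mean: {u:.1f}, anchored mean: {a:.1f}")
# The high anchor (150) systematically drags the average answer upward.
```

The point is not the exact numbers but the asymmetry: stopping at the edge of your plausible range, rather than sampling from its middle, biases every answer in the direction of the anchor.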
I'd argue that a similar thing happens in writing. Say you have some idea in your head that you want to communicate. When you write on your own, you try to find the words that best match that idea; you basically aim for the "middle" of the conceptual space that you want to describe. If, instead, you let an LLM write for you, then chances are it will describe something subtly different, or focus on different aspects, or hedge in different places than you would. But it's just close enough to what you had in mind that you give it your stamp of approval.
One issue with this approach is that it makes your message less precise. As a consumer of your writing, I likely care more about what you actually think than about what's merely close enough to what you think that you'd approve it. What's more, this can lead to highly correlated communication across many people, where, say, Claude's or ChatGPT's world model and propensities suddenly taint huge amounts of what gets shared on the internet. Of course, this already happens through the fact that many people talk to these LLMs and use them for research and reasoning purposes. But then also letting them choose the words that you project out into the world magnifies the effect even more.
Bad Signaling
When people do put a lot of effort into whatever they create but still let it look superficially like LLM slop, that's also not optimal: they're sending a broken signal, telling the world "this is slop" when in fact it isn't! So people like me will likely not engage with their work, even though it may be valuable, because the evidence we see suggests they took shortcuts and wanted to get something out quickly, likely at the expense of accuracy and quality.
Imagine a journalist friend of yours puts a huge amount of work into some investigative piece, but then publishes it with countless typos because they didn't bother to go that last little step of polishing it. I'd be a bit mad about them being so sloppy about one thing that then casts doubt on the entire rest of their work. Using LLM writing, to me, seems pretty similar.
Aesthetics
I can imagine that many people don't care much about this, but freedom of style and expression seems like a nice thing to me. I like it when people have their own quirks and patterns and occasionally do interesting things with the tools their language provides. But now it seems like the stylistic range of English in particular is progressively collapsing. Slop style is taking over all kinds of published writing, and few people seem to care or notice. People write articles or create videos that hundreds of thousands of people will read, and don't even invest an extra twenty minutes to get rid of the slop phrases or make it sound like their own voice. And then everything, everywhere, sounds more and more the same.
A Unique Point in Time
I acknowledge that this post may have a bit of a negative vibe. But on the flip side of all of the above, there is one positive to the situation: we're at a point in time where it's often unusually easy to tell which people you can ignore because they (very likely) take serious shortcuts in their thinking, judgment, and communication. At least if you agree with my take that selling undisclosed LLM writing as your own is a strong signal that someone's outputs are low-quality. Three things seem to be true at the same time today:
What Do We Do With This?
For those of you who haven't engaged much with what LLM speech sounds like, it may be worthwhile to do so - both to recognize when you're exposed to slop, and to avoid producing things yourself that sound like slop to others. When letting LLMs write for you, be aware that your text may contain many patterns that are invisible to you but obvious to others, and that may lead to some unfavorable judgments.
As JustisMills has recently put it in a related post:
I end up with two main takeaways about all of this.
First, as a general realization about the state of the world, the last few months taught me something similar to the Gell-Mann Amnesia Effect: I now see much more clearly than before how much of the media out there, and sometimes even supposedly personal messages, is partially or mostly LLM-written. It's probably on me that I didn't anticipate the extent of slop in the world earlier. But realizing first-hand just how many people take such shortcuts when they think others won't notice has left a mark.
And second, adding to the JustisMills post linked above, I'll end on an appeal to those who rely heavily on LLMs as writing partners. I've used LLMs for countless use cases before, and I'm not here to argue about their general capabilities (or any lack thereof). I let them write close to 100% of my code. I use them for brainstorming, some forms of fact-checking, general feedback on my writing, and more. And in the past, I have occasionally used them to aid my writing directly. But the more I noticed their extremely dominant speech patterns, the further I kept them away from the actual writing process. And I wish others did the same. I can only speak for myself here, but I, for one, want to hear your own words, as a direct and dense representation of your thoughts, and not any LLM's lossy, biased, and stylistically stale interpretation of them.
I'm also very fun at parties, I think.
Two of which were from earlier this year though, before the new LLM policy was announced.
Just as a test, I logged into X for the first time in months to look at the top of my (admittedly not very curated) feed. Ignoring the one-liners, 5 out of the first 5 longer tweets read like AI slop (1 of which was all lowercase - which could indicate either that the author just learned to write that way, or that they asked their LLM to do it, which really wouldn't surprise me), after which I stopped scrolling. Admittedly, Twitter in particular may actually incentivize people to write in that "punchline"-style way that LLMs love, so I assume the risk of false positives is higher here than elsewhere. Besides, even if it were the case that X is full of AI slop, I'd also be the first to argue that one shouldn't judge a tool by its average output; if I just followed the right people and blocked the countless slop producers out there, then I wouldn't have this experience. However, the point remains that slop (style) appears to be the default almost everywhere, and unless you've engaged with any given platform with intention and know what you're doing, slop is likely what you'll find.
OK, this one does look like obvious slop based on title and thumbnail alone. When it was recommended to me, I only clicked it because I already suspected it would make a good case for this post. So perhaps I've trained YouTube a bit to show me slop, after all? But then again, I dislike all videos that contain LLM speech, so I would hope that provides sufficient counter-incentive.
While there are AI detectors out there, and some seem to be quite reliable as far as fully AI-written texts and fully human-written texts go, I'm less convinced of their judgment on mixed content. And in the case of YouTube videos, we don't even have the original transcript with all its punctuation, but can only recreate an imperfect copy.
Like that very sentence. "real" is an adjective LLMs just love, and "reaching for phrases" is one of their favorite types of metaphors. Oops.
If someone thinks that the "nothing to see here" explanation is actually likely, I'd be happy to collaborate on some way to test this.
Or maybe they did realize, but just didn't care, or didn't think that would lead to any negative reactions? Seems a bit unlikely to me, but who knows.
Although I'd argue the way to go then would be to write in a language you're comfortable in, and then translate the text. This would avoid most LLM style slop, even if you use an LLM for the translation.
Naturally, I can't be sure if it really is "most". I can only detect what I can detect, and even for those cases, I can't be entirely sure. But if some people put enough effort into their text creation that their LLM slop is truly not detectable as such, then at least they've put effort into something. And then perhaps this also applies to other parts of their process. :)