So there’s been a lot of talk recently about AI slop taking over the internet. And I just wonder if it's going to end up changing the way we talk. And more importantly: should we (want to) change the way we talk?
Many places (websites, schools, forums, though strangely enough, not social media sites) have predictably begun pushing back. Part of this involves tools—ironically, sometimes other LLMs themselves—to detect LLM/AI-generated content so that it can be taken down as spam before it ever has the chance to poison the web further.
And (this is by no means an insightful or novel concern), but a part of me wonders if this is going to end up changing the way we talk. Already, there have been anecdotal reports of people trying their darnedest to swear off em-dashes in essays or blogposts for the rest of their lives. Others who write a little too eloquently, or who have internalized 'Wikipedia voice', have found themselves consciously editing their prose after the fact to make it look more stylized.
As for me: after recently being the victim of an academic misconduct allegation, in which a professor told me I had used ChatGPT to do all of my work, I am now a little too hesitant to write things that sound a bit too smart, or to use math in my papers that other people think I should probably be too dumb to understand.
And I guess I'm wondering what the end result of all this will be. A common critique of LLMs is that they all talk in the same 'voice'. Now, this isn't strictly true, but I will admit that individual models, at the very least, each seem to have a distinctive 'style' of writing that is at least somewhat recognizable. For example, GPT-4o is apparently infatuated with emojis and cannot for the life of her stop including one every 5 sentences. GPT-4.1 is a little better, although, like the rest of the GPT-4 family, it seems unable to write in anything besides bullet points and short paragraphs (possibly a post-training habit encouraged by OpenAI to save on inference costs by forcing shorter outputs).
But if we start changing the way we write out of a fear that we sound 'too much like LLMs' (the kind of thing that can get a post rejected on LessWrong, for instance), I wonder if we'll end up in a similar place, where over time we all just converge on some optimal 'most-human'-sounding style. And I'm sure it sounds sappy, but a part of me feels upset about that.
I don’t like the idea that LLMs are forcing—or at least incentivizing—us to change the ways we communicate. I don’t like the idea that we’re all collateral damage from the products of these giant frontier labs and are slowly shaping ourselves to become some kind of linguistic or stylistic hive mind where eventually we all sound the same because at least it means we don’t sound like the AIs.
To me, the worst part is that I don't know if this will even work. All of the frontier labs are presumably training on our data and posts right now as they build the next generation of models, so it seems quite likely that in the next 5 years or so, models will learn to copy whatever our new form of 'most-human' speech is: ditching em-dashes, inserting bizarre stylistic flourishes here and there to sound more human. The whole process of having a base task of next-token prediction simply means we'd be right back where we started.
Anecdotally, just within myself, I've started noticing that I don't… really care about typos anymore, or weird grammatical errors where I've missed a word or used the wrong tense. I even neurotically fight my own instincts for good grammar in order to keep those mistakes in there, because, at the very least, they show anyone reading that hey, she actually wrote this paper/article, not an LLM. And maybe we should just be thankful that, for now, making natural typos is still not one of the capabilities these frontier models have.
Anyways, I don't know how to feel about this. But that isn't nearly as bad a problem as not knowing what to do about it. A part of me still wants to cling rebelliously to writing the way I write, without second-guessing whether someone will take me less seriously because they think a bot wrote everything for me. But the other part of me worries that if changing the way we write is just something we have to do now to be taken seriously, or at least to make it past automated content filters, then it might be something I eventually cave on and start doing.
I'd like to hear your thoughts on this, if you have any, and what you all think we should do about it.