I think this essay has a reasonable point, but also goes a little too far. When you say things like
However, as it pertains to the writing process, the same advice to “develop your taste and judgment” demands the opposite outlook: do not use AI. When it feels hard to edit your writing, that’s what it feels like to exercise your mind and to excavate good ideas. While this belief is common, it’s rare for people to believe it as absolutely as I do: I allow zero AI prose or even prose suggestions AT ALL. [...]
Eventually, that intuition doesn’t just help you choose between options as you would with AI outputs, but you become drawn towards the beauty of the next words which haven’t even materialized yet; you become possessed by the prophetic spirit of prosody, your feel for how language flows when it sounds right.
But that never happens when you write with AI from the outset.
Sometimes I will write something, notice that what I've written sounds clumsy but be unable to figure out how to make it better, and then ask an LLM to suggest alternative phrasings. By your arguments, this would prevent me from getting any better at writing.
But that would imply that you also couldn't learn from having an editor or English teacher go through what you've written and suggest edits. It's not true that the most effective form of learning is to try thinking everything through yourself without resorting to any external feedback or assistance. Sometimes you simply get stuck or are able to recognize that something you've written is bad without being able to make it better, but are able to recognize an improvement when it is suggested to you.
In the past, for many of those sentences, I would just have concluded that I'm unhappy with this sentence but I'm not going to figure out anything better, so I'll just leave it in and move on. Maybe it would have been better for my learning if I'd just spent several days thinking about that very specific sentence, but... I'm not going to do that. People do learn from having a good example modeled for them as well. So once an AI has suggested an improvement, it's likely that in the future my brain will be able to spontaneously generate a similar solution by itself, without AI help.
You do of course have a point in that you need to put genuine effort into the writing in order to improve. If you just ask an AI to write everything for you, then yeah, you're not going to get better. But if you've spent a bit of time trying to make something work and then ask for suggestions for improvement, you can actually learn from that.
Sometimes I will write something, notice that what I've written sounds clumsy but be unable to figure out how to make it better, and then ask an LLM to suggest alternative phrasings. By your arguments, this would prevent me from getting any better at writing.
Yes, I think it would prevent you from improving, insofar as you had room to improve tackling that yourself.
But that would imply that you also couldn't learn from having an editor or English teacher go through what you've written and suggest edits.
[...]
People do learn from having a good example modeled for them as well. So once an AI has suggested an improvement, it's likely that in the future my brain will be able to spontaneously generate a similar solution by itself, without AI help.
I don't think that recognizing an improvement will make you much better. I think that an editor or English teacher can (1) make your piece better by overwriting parts of it (which doesn't improve your own skills) and (2) make you better by providing explanations/feedback of what's wrong (but not by overwriting your work).
Trying to learn through indirect lessons via their rewrites is probably similar to how much you'll improve at tennis by watching Roger Federer play. Which is, a little, and more than if you hadn't, but still far less than playing on the court yourself in that same time. The reason is that when you watch Federer play, it's easy to identify the output of everything he does right, but you get no visibility into the subjective experience -- what it's like inside the mind and body -- of playing in such a way that results in good output.
To further illustrate when it comes to writing, consider my mom's and my ESL experiences.[1]
I would argue there is still "generative effort" within editing (an AI's) work, because you still need to come up with reasons for what's wrong, and alternatives that make it better. It's magnitudes more "mental exercise" to articulate problems than to merely feel something is wrong. However, in the opposite direction, I think it's magnitudes less generative effort to articulate problems vs. to write for yourself and "create something from nothing". In my view, flexing the 'generative muscle' is what leads to improvement.
---
In the past, for many of those sentences, I would just have concluded that I'm unhappy with this sentence but I'm not going to figure out anything better, so I'll just leave it in and move on.
This is a related reflex I've been actively trying to fight in my own writing process. Often I would come to a sentence, be unable to oneshot it, then [leave a rough outline in brackets to return to later]. These days, I've been trying to push through, and just write something passable. Partly because leaving work to return to duplicates my workload,[2] and partly because I believe that discipline trains my 'generative muscle'.
First, me: I understand Russian but can't speak it; classic immigrant child experience.[3] My understanding does not convert into generative capability. It's a different subjective experience to grope and flail at trying to pluck words from an inaccessible aether, vs. skipping that and reviewing words in front of you. Like how a multiple choice test is easier to bullshit than free text questions.
Second, my mom: She reads plenty of English literature, and has a great sense for when it's written well vs. poorly. But she is aware that her writing is, say, typical for an immigrant, and also that AI is 'better' but 'soulless'. If all it took to improve was to notice deficiencies (or to know well-written examples), then couldn't she just edit her emails until they're good? But it's not that easy for her.
In his blog post Lessons from Peter Thiel, Joe Lonsdale writes:
6. Don’t waste time talking about what you plan to think about; instead, work through it immediately.
Intellectual laziness can easily sneak up on you. If you are sitting there talking about problems you plan to solve later, there’s a good chance you are being inefficient. Similarly, in GTD, you don’t put off tasks that only take a couple minutes. In many cases, you can outline and solve or at least clarify any decision or problem you’re confronted with in just a few minutes.
In his blog post Proto-World and the Language Instinct, conlanger Mark Rosenfelder writes:
It’s commonly thought that children absorb languages effortlessly; but language acquisition studies make it clear that this is not so (cf. François Grosjean’s Life with Two Languages, 1984). Language learning takes effort even for children, and children seek to minimize that effort by learning only those languages they absolutely must, in order to communicate. For instance, once children of immigrants discover that their parents speak the national language, they may speak it with them, and retain only passive knowledge of the parents’ native language. Full language ability is likely to continue only when there are people in the child’s life that cannot understand the national language.
I don't think that recognizing an improvement will make you much better. I think that an editor or English teacher can (1) make your piece better by overwriting parts of it (which doesn't improve your own skills) and (2) make you better by providing explanations/feedback of what's wrong (but not by overwriting your work).
I distinctly remember an English class in high school where we had to summarize an article, and our summary was to have a target length of 200 words or something like that. My first draft for the summary had something like 250 words, and I complained to our teacher that I just couldn't shorten it without losing essential content. Everything I had was necessary!
She looked at what I had and then struck out redundant words within a couple of my sentences, reducing their length by something like half.
I looked at what she'd done and went "oh". She didn't need to explain to me why she'd struck out those particular words: when I looked at the resulting sentences, it was obvious that they worked just as well without the removed bits. Any explanation would have been as redundant as the words themselves.
Having received that demonstration, I started looking for more redundant words in my summary and managed to bring it down to something like 190 words. That one example had been enough that I could now do something that I'd literally considered impossible only two minutes ago.
I have many memories of learning working like this across a variety of domains, from language to math to interacting with children. Of course, some things need to be explained, but often just providing enough demonstrations of the correct pattern is more effective than trying to explain it explicitly. Often these things have lots of implicit rules that even the person applying them wouldn't be able to articulate consciously, so if you tried to follow the explicit rationale, you'd get more confused than if you just trusted your brain to learn the pattern from demonstrations.
But of course sometimes it can be beneficial to get an explanation too. And... I find that at least Claude does usually also give me an explanation? Like, here's an excerpt from when I asked it to suggest improvements:
Me:
"If you are working on a task, you will want to know whether you are getting any closer to your goal. If you find that you are, then you are doing something right and can feel satisfied. If you are not, then you are probably doing something wrong and should change tactics.
In objective terms, waiting for a bus is not something you can do better or well. Time will pass at the same rate regardless of what you do.
But in subjective terms, you can make a difference. If you manage to think about something else and forget that you are waiting, it can make it feel like time is passing faster.
So the brain can apply some of the same machinery it uses to generate feedback signals for any other task. If I look at the time twice in rapid succession and see that it's 11 minutes to the bus on both occasions, the brain can interpret that as “the thing you did failed to bring you any closer to the goal” and feel disappointment.
If I manage to think about something else and hold on for longer, such that it's 8 minutes the next time I look at the clock, that can register as a great success! The thing that I did led to three whole minutes passing, a substantial improvement over zero!"
---
now "So the brain can apply some of the same machinery it uses to generate feedback signals for any other task" doesn't feel clear enough, or feels like it'd need a bit more explanation. suggestions for that part?
Claude:
You're right - that sentence is doing a lot of conceptual heavy lifting without enough setup. Here are a few ways to make it clearer:
Option 1 - More explicit about the mechanism: "So the brain can treat this like any other goal-directed activity, generating the same kind of feedback signals it would for a work project or exercise routine."
Option 2 - Break it into two sentences: "So the brain treats waiting like it's a task with measurable progress. It generates the same kind of feedback signals - satisfaction when you're 'succeeding,' disappointment when you're 'failing' - that it would for any goal you're working toward."
Option 3 - Lead with a concrete analogy: "It's like how your brain gives you a little hit of satisfaction when you check off items on a to-do list, or feels frustrated when you're stuck on a problem. The brain can apply this same reward-and-disappointment machinery to the 'task' of waiting."
I lean toward option 2 because it maintains your conversational tone while being more specific about what you mean by "feedback signals" and "machinery."
That's both offering suggestions for improvement and providing a rationale for them. (Though in this particular case, I ultimately chose not to use any of the suggested versions and just removed the sentence entirely.)
Yes, your comment is shifting my opinion; I concede that examples are useful for teaching 'binary classification tasks'. "Here's an example of how to identify redundant words/sentences which don't contribute to your point," or "here's an example of how you can write with more vivid language."
Ie, if you're new to the gym, you can follow your friend, copy their workout, and learn how to do those exercises. However, just copying them gives you very little insight into why: what made them choose this exercise, which muscle it's supposed to hit, next time when you come alone and the machine is taken, how to ask a stranger if you can work in a set, and a million other open-ended "project management" tasks. I see those as requiring executive function, and knowing those well is a more reliable indicator of whether you'll go to the gym consistently later & improve in the long run than knowledge of one exercise.
I think my unstated contention is that in writing, examples can show you how to put something more briefly, etc., but that doesn't impart any learning in your subjective executive function and critical thinking. If a good writer did a total rewrite of your paragraph, they may have changed it on so many layers of abstraction (word, sentence, paragraph etc.) at once that it's impossible to reverse-engineer "Which mental state would produce these changes in the first place, and how do I get that?"
For what it's worth, when I see your example:
"So the brain can apply some of the same machinery it uses to generate feedback signals for any other task" doesn't feel clear enough, or feels like it'd need a bit more explanation.
The way my mind parses it is:
I think that's clearer, obviously pending alignment with (1) what you actually meant, and (2) your target audience. But that only happens after taking effort to clarify what you're even trying to say, which is the 'platonic' writing process, and has lots of friction. If I just dropped the end result without my reasoning, it's not clear what one could learn from the example to repeat next time, besides "just make it better". But you do learn that by writing without assistance, clarifying your thoughts (or even asking AI to help you clarify what you mean).
And FWIW, I think this version is a lot better than what Claude came up with; I only reviewed those after. Which I think substantiates that if you did resort to Claude without taking the effort yourself, you would inherit AI barnacles that you don't actually mean to collect. The Claudisms can be too abstract to even articulate--though, did you catch how each example said "[it's] like [x]"? I wouldn't have, unless I looked closer. And that's the risk someone runs resorting to AI. "[it's] like [x]" is a construction that evidently leads to wordier explanations. But an editor telling you to avoid that construction will not inoculate you against the next Claudism sneaking in (and won't tell you in which cases that construction is appropriate).
Lastly, Claude is unlikely to ever suggest that you just remove the passage, which is what you ultimately did. That too is an example of what becomes clear when you think about what you're even trying to say, rather than asking AI to "convert" your idea into fully phrased thoughts.
By the way, I have no issue with using AI like a tool, e.g., asking it "term for a task you can't make progress in, as in, waiting for a bus couldn't be done any better or worse than the one way". But this only works well once you know what you really need, as opposed to asking for AI's composition from the outset.
Ie, if you're new to the gym, you can follow your friend, copy their workout, and learn how to do those exercises. However, just copying them gives you very little insight into why: what made them choose this exercise, which muscle it's supposed to hit, next time when you come alone and the machine is taken, how to ask a stranger if you can work in a set, and a million other open-ended "project management" tasks. I see those as requiring executive function, and knowing those well is a more reliable indicator of whether you'll go to the gym consistently later & improve in the long run than knowledge of one exercise.
I agree that just copying your friend's workout does not teach you how to do all of those other skills.
At the same time, especially if you are a complete beginner, it can still be better to start out from just copying it and not thinking about all that other stuff. Doing that workout sufficiently many times will give you a feel of what it's like to do that workout and how it usually ends up leaving you feeling, and the natural variation in the ways you end up doing it probably gives you some sense of what works better and what works worse, etc. Getting that initial experiential understanding will then make it easier for you to apply any information related to how to do many of the "project management" tasks.
I think my unstated contention is that in writing, examples can show you how to put something more briefly, etc., but that doesn't impart any learning in your subjective executive function and critical thinking. If a good writer did a total rewrite of your paragraph, they may have changed it on so many layers of abstraction (word, sentence, paragraph etc.) at once that it's impossible to reverse-engineer "Which mental state would produce these changes in the first place, and how do I get that?"
I read this as saying "if you have an editor whose skill level is far above yours and they completely rewrite a paragraph that you've written, you may recognize that it is better, but simply seeing that rewritten paragraph isn't sufficient to teach you how to write paragraphs that are equally good".
I agree with that. But I'd note that:
I read your original post as saying "using prose from AI will prevent you from improving at all". If your position is just "using prose from AI won't allow you to jump to the AI's skill level right away", then I agree, but that seems like a much less interesting claim.
The idea's expression is not clear because its content is not clear, not the other way around. [...] But that only happens after taking effort to clarify what you're even trying to say, which is the 'platonic' writing process, and has lots of friction.
That seems right to me. And I think the most useful part of Claude's feedback was its comment that "that sentence is doing a lot of conceptual heavy lifting without enough setup", and the fact that all of its suggestions were, as you say, pretty bad - those two together pointed me to the same thing that you're saying, which is that my content wasn't clear.
Sometimes when I give Claude a somewhat muddled idea that's pointing to something useful, it's able to find the idea I'm trying to express and crystallize it better than I managed. And sometimes I get results more like this, which indicate that it's not clear what the idea even is that I am struggling to express. Its doing so badly prompted me to think about "okay, I need to elaborate more on what exactly I mean... hmm, going into a longer explanation about how exactly reward mechanisms in the brain work would kind of distract from the overall narrative here... actually, do I need that sentence at all, what happens if I just cut it? ah, that works better."
In one sense, this might not have been the best example to use, since ultimately I didn't end up using any of Claude's suggestions. (I mostly picked it because I couldn't immediately remember any other specific example where I did end up using Claude's non-fiction prose directly, and claude.ai's functionality for searching old chats is bad, so I couldn't quickly find a more relevant example.) But on the other hand, I do feel like it supports my point of "making use of AI prose doesn't automatically make your writing worse"?
In that, when you say
Claude is unlikely to ever suggest that you just remove the passage, which is what you ultimately did. That too is an example of what becomes clear when you think about what you're even trying to say, rather than asking AI to "convert" your idea into fully phrased thoughts.
then it's true that I don't remember Claude ever suggesting this. But nonetheless, asking Claude for prose suggestions did lead to me removing the passage. Asking AI for prose suggestions is what caused me to think through what I was even trying to say. Even though it could also have offered me a suggestion that I would have thought through and then accepted.
So I think that too contradicts what I understood to be your post's original message of "never ask AI for prose suggestions or you will learn nothing". Yes, if you ask the AI for prose suggestions and just automatically put them in rather than doing any real thought, probably your learning will suffer. But if you ask the AI for prose suggestions and then think through whether they're actually any better and what its feedback suggests about what you've already written so far, sometimes realizing that actually all of its suggestions are so bad that you're better off trying an entirely different approach or deleting that bit... then I'd think that that facilitates learning.
At the same time, especially if you are a complete beginner, it can still be better to start out from just copying it and not thinking about all that other stuff. [...] Getting that initial experiential understanding will then make it easier for you to apply any information related to how to do many of the "project management" tasks.
Yeah, you're right -- I think the absolutism can pull the ladder up from beginners.
I'll say, I see the absolutism that I advocate for as "aspirational absolutism", like how the commandment "You shall not murder" should be understood as practically absolute (ie, people do not have license to set personal thresholds of which murder is okay, because that does not technically forbid setting a threshold of "this person inconveniences me"). But everyone also recognizes that there are edge cases when it's okay to kill -- e.g., war, self-defense. So, my essay is phrased strongly because I am trying to protect against the first kind of permissiveness, but I do recognize there's another class of AI use that's fine. The concern is people thinking they're doing the second, as a cover for doing the first.
I read your original post as saying "using prose from AI will prevent you from improving at all". If your position is just "using prose from AI won't allow you to jump to the AI's skill level right away", then I agree, but that seems like a much less interesting claim.
I do think for people who already write on the internet, as opposed to complete beginners, AI does prevent 90-95%+ of potential improvement, and pragmatically speaking my point is true. Partly because "jump to the AI's skill level right away" doesn't apply: I think AI's ability to "express ideas clearly" is often bad. Claude's suggestions for you in this case are my prototypical example, because:
Like feeling the rain on your skin, no one else can feel it for you.
This is a deliberate reference to the lyrics of Natasha Bedingfield's thematically-relevant song "Unwritten", right? (Seems much more likely than coincidence or cryptomnesia.) I can empathize with it feeling too cute not to use, but it seems like a bad (self-undermining) choice in the context of an essay about the importance of struggling to find original words?
You're correct, it is an allusion. In earlier drafts I did mention, [...] Bedingfield's song was written just after the dot com boom, but applies equally to AI writing on Substack dot com. It is a clearly joking reference, but I left it included specifically for readers who notice the connection and dig into the lyrics, which were originally figurative, but map closely to my point in that section when taken literally (emphasis my own):
Staring at the blank page before you
Open up the dirty window
Let the sun illuminate the words that you could not find
Reaching for something in the distance
So close you can almost taste it
Release your inhibitions
Feel the rain on your skin
No one else can feel it for you
I do recognize that including high-variance tactics means they won't always land with readers. It might be relevant to revive the rest of Orwell's quote from my epigraph:
[...] Phrases like a not unjustifiable assumption, leaves much to be desired, would serve no good purpose, a consideration which we should do well to bear in mind, are a continuous temptation, a packet of aspirins always at one’s elbow. Look back through this essay, and for certain you will find that I have again and again committed the very faults I am protesting against. [...]
If you want to write something genuinely original, I can see your point - but you declare this to be true of ALL creative writing, and I don't think you support that point at all.
I've got a stack of AI-written short stories. I've enjoyed reading them, and it's fun practice reading through them and editing them. I cannot imagine how I am possibly worse off, as an editor, for doing some practice rather than doing zero practice?
And, surely, being an editor is part of being a good writer - if you can't detect the flaws in a work, are you actually writing anything worth reading to begin with?
Your whole hypothesis seems to be that it's impossible to be an editor, that you cannot possibly detect flaws. But you also seem to insist that anything "AI flavored" is a flaw, regardless of how good it is. And quite frankly, I've seen AI write some really great lines. I cannot imagine how that line is less great, simply because of its authorship - the whole point of fiction is that it's enjoyable to read, and I'm enjoying it!
You also seem to assume I have some existing goal, some existing thought that I'm trying to express. But sometimes I just want a fun encounter for tonight's D&D game, or to brainstorm for a much more open-ended creative writing project. Why are pirates a fine concept if I come up with them, but somehow worse if ChatGPT is the one who suggested the idea?
Again, there is a human in this loop, editing and evaluating the ideas
But even if you still intend to splice and curate AI’s outputs, you need good taste and judgment. That only gets developed through writing
In short, this seems ridiculous to me. Are the best editors really all famous authors as well? Does a sports coach need to have been a star athlete?
And even if we accept it, I can (and indeed do) write plenty of material without any AI assistance. Why is my occasional use of AI in one context magically poisoning me in other domains?
Sometimes, there’s a nuanced path you need to follow, and AI will simply never thread that path.
Again, when there's a nuanced path to a specific goal, I might agree, but you dismiss a vast sea of other uses.
Believe me, I agree that hating on AI writing because it's AI is a mistake (the genetic fallacy) -- that is the thesis of my essay before this one, A Thoughtful Defense of AI Writing (and that perspective was controversial here on LessWrong).
That is mainly from a reader's perspective -- don't judge pirates by whether it was AI's idea, judge pirates by whether that was a good or bad inclusion in that part of your campaign.
Note, I think there's a difference between "I am setting out to write a story with AI" (as you describe, which I think is almost entirely harmless), vs. "I am trying to express my idea, but can't figure it out, so I will amputate part of myself to install whatever spills out of the robot" (which I see as atrophying your creative and critical thinking).
This essay argues from the writer's perspective (as Orwell argued about trite platitudes in his time) that you risk picking up bad habits which themselves disable the critical faculties you'd need to notice and get rid of them (by encouraging habitual lapses into laziness, and adding to the 'switching cost' of going back to free thinking).
---
Are the best editors really all famous authors as well?
The best editors are likely not famous authors, but are almost certainly good writers. On a tactical level, the alternatives/suggestions they offer would be of great writing quality. And on a strategic level, they need to know how to write the edits and feedback in a way which is clear and convincing to the author. If an editor can't write good edits, evidently they aren't one of the best editors. There is a reflexivity where the best knowledge work always requires good writing, because writing is just the primary medium to convey ideas.
It’s not that it weakens your point, but it’s that starting a sentence with “It’s not that …” triggers audiences to narrate your writing in “AI voice”. It disintegrates a reader’s brain because just the smell of AI is a noxious fume. Too often it’s a sign of other lurking deficiencies; there’s never just one cockroach in the kitchen.
oh god, I really really hate the self-illustration 😱 enough to think that it's brilliant? not sure yet...
People often let language do their thinking for them. For example, consider the phrase "this has happened before". A discussion of artificial intelligence leading to technological unemployment will devolve into "this has happened before" versus "this time is different".
But "this has happened before" muddles together two different ideas that I call *restrep* and *maytip*. *Restrep* is what happens at the start of a second game of chess. The board is cleared and the pieces set out in the standard starting position. Reset and repeat. The second game may start with exactly the same opening as the first game. One guesses that the player who lost the first game will be the first to depart from exact repetition and try something new. By contrast *maytip* is the story of pack horses being displaced by canals being displaced by railways being displaced by trucking. Since there is no reset, things are accumulating and we may reach a tipping point eventually, even if this time is *not* different.
If we had better language, we would talk of technological unemployment in terms of maytip and whether this time is really different. We would not confuse the strong evidence that the past provides in *restrep* with the weak evidence that the past provides in *maytip*.
LLMs are even more deeply trapped in language than we are. Language is all they have. This is sad because we humans could really do with some help in escaping the language traps that we fall into. Sometimes we build those traps ourselves. Think of how the abortion debate turned into pro-choice versus pro-life when word creation became a new front in the culture war. Other times we use traditional phrases such as "defensive alliance". Look at this attempt https://www.themotte.org/post/1043/splitting-defensive-alliance-into-chaining-alliance to split "defensive alliance" into "chaining alliance" and "isolating alliance". The comments spot the relevance to 2025 with the current war in Ukraine, and pursue the object level arguments. As humans we are quick to spot which political faction gains an advantage from existing language, and those who benefit resist any attempt to improve language so that we can be less wrong.
That leaves us with a dream about artificial intelligence, that it would have an inner voice, speaking a language of thought unrelated to the natural languages of humans. It would find our words ambiguous, and help us to go beyond them. It is currently a broken dream. LLMs understand the world through our existing vocabulary and suffer the limitations that those words bring.
You may already be aware, but part of Eliezer's Sequences (his post Disputing Definitions) covers how by disambiguating homonyms,[1] we can talk about our differing (or not) intuitions of what exists in objective reality, rather than disagreements about definitions.
But I haven't seen reason to believe that AI has this problem particularly badly, in that I think its 'vector space' is decent enough at understanding which sense of the 'same' word is being used given the surrounding context of other words. Or at least it would readily parse the ambiguous senses if asked, as opposed to a biased human who might entrench and "resist any attempt to improve language," as you mentioned.
Or I'd also add cases of the equivocation fallacy: "a word or phrase with multiple meanings is used ambiguously in an argument, shifting its meaning between premises or between a premise and the conclusion."
Eliezer's essay "Disputing Definitions" is didactic writing, but one can also read it as a lament. He even uses the word "mournful". He ends his essay, like I started my comment, with making up two new words, intending to head off what he calls the "Standard Dispute". His version is tongue-in-cheek. His words are "alberzle" and "bargulum" and there is a time machine.
His essay is excellent, but how does an essay from 2008 need updating for the LLM era? He is lamenting both what is in the training data and what is missing from it. People dispute definitions. They fail to invent new words to head off these disputes.
My claim is that many of our "Standard Disputes" have their origins in linguistic poverty. Enrich language with new words, targeting the ambiguities that we quarrel over, and the problems are solved at source. But turn to an LLM for help and it will help you to write fashionable prose. Since neologisms have never been in fashion (and are subject to mockery, see https://xkcd.com/483/), the LLM will not suggest any. Rather, it will guide you down the path of the "Standard Dispute", leading you away from low-hanging fruit.
For a whimsical speculation, imagine that the New York Times publishes a list of one hundred new words to enrich political discussion. Inventing new words becomes all the rage. In 2026 human authors who want to join in the craze will have to invent their own. In 2027 the linguistic patterns involved will be in the training data. In 2028 egglogs (egregious neologisms) are the hallmark of AI slop. In 2029 neologisms are banned and by 2030 we are back to disputing definitions, just like we did in 2025.
I agree with the main post. My narrow point on neologisms is all that I have to add.
I see your point -- at first I was thinking:
I don't think AI would have trouble differentiating between the senses of "sound" (using Eliezer's essay as an example)
But actually it seems like you're saying:
Suppose we live in a world before people recognized a distinction between sound (audio waves) and sound (aural sensation). In this world, AI trained on the corpus of human text would not spontaneously generate this distinction (one, it doesn't have the knowledge, and two, it's dissuaded from even conjecturing it, because neologisms are taboo). But we don't even need to 'suppose' this world exists -- we do actually live in it now, it just applies to concepts more nuanced than "sound".
I think neologisms are interesting because on one hand, it is annoying to see terms "astroturfed" (e.g., sonder),[1] or to see terms with an insane mismatch between their sound and meaning (e.g., "grok", which people use to mean "to profoundly understand", yet which sounds more like a clunky word for a caveman's lack of understanding. Its "etymology" is quite fitting (it's supposed to be unrelatable),[2] but it's a shame the term caught on).
On the other hand, I think much of the pursuit of knowledge is building towards finer and finer distinctions in our experience of reality. This necessitates new words.
For whatever reason, some morphologies seem more tasteful than others, such as 'common extensions' (e.g., ChatGPT -> ChatGPTism), or 'combining neoclassical compounds' (e.g., xeno- + -cide = xenocide, from Ender's Game), or even just 'adding standard-word qualifiers' (e.g., your example of splitting "defensive alliance" into "chaining alliance" and "isolating alliance"). I think most of the people who find success in coining terms probably do it in these more intuitive ways, rather than with purely 'random' morphologies -- here's an excerpt from Nabeel Qureshi's post, Reflections on Palantir:
One of my favorite insights from Tyler Cowen’s book ‘Talent’ is that the most talented people tend to develop their own vocabularies and memes, and these serve as entry points to a whole intellectual world constructed by that person. Tyler himself is of course a great example of this. Any MR reader can name 10+ Tylerisms instantly - ‘model this’, ‘context is that which is scarce’, ‘solve for the equilibrium’, ‘the great stagnation’ are all examples. You can find others who are great at this. Thiel is one. Elon is another (“multiplanetary species”, “preserving the light of consciousness”, etc. are all memes). Trump, Yudkowsky, gwern, SSC, Paul Graham, all of them regularly coin memes. It turns out that this is a good proxy for impact.
From the Dictionary of Obscure Sorrows, whose whole project is to coin new terms for phenomena which don't yet have names.
Robert A. Heinlein originally coined the term grok in his 1961 novel Stranger in a Strange Land as a Martian word that could not be defined in Earthling terms, but can be associated with various literal meanings such as "water", "to drink", "to relate", "life", or "to live", and had a much more profound figurative meaning that is hard for terrestrial culture to understand because of its assumption of a singular reality.
I remarked to a friend a while ago that, just as people today have far fewer friends on average than they did several decades ago (due in part to social media), AI could result in a world where people have far fewer original thoughts on average in the future. Do you agree with this as a related thesis to what you are arguing? If so, do you see this as a somewhat hopeless battle? Just deciding not to use AI writing at all is all well and good to say, but as it gets better and more common, you may find yourself at a severe disadvantage, at least in terms of quantity. This would then create obvious incentives to use it, producing a society-level pressure to use a tool that makes you unable to think.
Yes, I do think there will be fewer people with original thoughts. Mostly because original thoughts come from immersing yourself deeply in a topic, and writing about topics is one of our main ways to do that.
Though I don't think it's a hopeless battle. If you're bought into the concept that writing is thinking, then Paul Graham makes the following case for continuing to write by yourself in his essay Writes and Write-Nots:
So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots. I know which half I want to be in, and I bet you do too.
This situation is not unprecedented. In preindustrial times most people's jobs made them strong. Now if you want to be strong, you work out. So there are still strong people, but only those who choose to be.
It will be the same with writing. There will still be smart people, but only those who choose to be.
I think there aren't many fields where quantity of output will give you some killer advantage compared to quality of output. As decent AI writing becomes more common, I think it will just raise the quality benchmark, and good thinking beyond that point will be highly valued. My friend Logan makes this point in his essay about thoughtfulness, This was meant for you (LW), as it pertains to business sales.
Sales is, by its very nature, a rivalrous domain. It is a game of limited attention, finite budgets, and constant competition. Multiple vendors want the same customer.
In these environments, the signal that breaks through is rarely the loudest. It is the most precise. It is the one that says: this was meant for you, and no one else. It is a gesture that cannot be mistaken for automation or routine. That is what makes sales so interesting in the context of thoughtfulness. It is a space where commoditized effort is common, and where genuine, specific sacrifice becomes all the more powerful.
[...]
AI SDRs (or any other technology) will get arbitrarily good at solving whatever they can; always, tech will get competitors up to a frontier, and what will push a customer over the edge is differentiation beyond that established frontier.
In the short run, AI SDRs will work as dishonest signals of units of work, but people will wizen up eventually. They are a leverage multiplier.
So even if some AI writing will be required to automate labor, you will always need your own brain to know how to use it better than others.
But if thought corrupts language, language can also corrupt thought. A bad usage can spread by tradition and imitation, even among people who should and do know better. The debased language that I have been discussing is in some ways very convenient. Phrases like a not unjustifiable assumption, leaves much to be desired, would serve no good purpose, a consideration which we should do well to bear in mind, are a continuous temptation, a packet of aspirins always at one’s elbow.
Before ChatGPTisms, sloppy writing was signposted by Orwellisms. There’s a political kind (to euphemize “firebombing a village” vaguely as “neutralize unreliable elements”), but I’m interested in the mundane kind: lazy phrases you default to when you can’t articulate exactly what you mean.
For years, I visualized this as: your idea is shaped like a nuanced path, but without finding the exact words to express it, you settle for tracing the worn groove of a common phrase that is roughly, but not exactly, the same shape you intended. One time doesn’t hurt, but enough worn grooves stacked end-to-end and you will land nowhere near where your original thoughts may have taken you.
I still don’t have an elegant rendition of this idea, but now I must also resist ChatGPT’s beckoning finger: “please write an aphoristic version of this concept: your idea shaped like a nuanced path…”
But you have to resist.
Orwell’s essay was published just after World War II, but applies equally to AI writing on the World Wide Web. Lazy writing in his time was vague cliches, and in ours is outsourcing work to a robot. But in both, it leads to lazy thinking, because writing is thinking:
Paper and pencil and “expressing ideas” is merely what writing looks like. But words on a page are just the visible artifacts of an invisible process. Writing words and choosing good ones is a process we found that propels hard thinking, and it’s that underlying thinking that is what writing really is.
In my previous essay, I defended AI writing as it pertains to reading it. You should develop your taste and judgment for good writing without resorting to lazy proxies like ‘Yuck, this is AI.’
However, as it pertains to the writing process, the same advice to “develop your taste and judgment” demands the opposite outlook: do not use AI. When it feels hard to edit your writing, that’s what it feels like to exercise your mind and to excavate good ideas.[1] While this belief is common, it’s rare for people to believe it as absolutely as I do: I allow zero AI prose or even prose suggestions AT ALL.
To be clear, I’m not referring to functional writing like work emails or language translation. I mean creative writing when you’re wrestling with ideas you’re trying to express. I’m also not referring to using it like a tool: I use it like a magic thesaurus and sometimes ask for its feedback in conversation. I’m just opposed to AI composing your words; “write a hook for a post about x”, or “edit y to be shorter.”
The problem with both vague cliches and outsourcing your work is it feels like you’re merely changing the expression of your thoughts, rewriting them. But in reality, you’ve surrendered the content of your thoughts, overwriting them. Cliched and AI prose both entail slipping into the orbit of some common diction which enough other people also gravitated towards. You end up professing socialized beliefs, while identifying as an independent thinker. ChatGPTisms in your work signpost the exact spots where AI hijacked your mind.
Nope, doesn’t work. People think they’re very subtle when they rephrase all the ChatGPTisms. Except, they never catch all of them, because most are too abstract to explicitly grasp. Signs like em dashes, “delve”, and “not but” get mentioned most because they’re the most legible; word-level artifacts.
But AI patterns happen at all levels of granularity; across word, phrase, sentence, paragraph, and more. Its cadence of speech is just stilted. So a “search and destroy” approach to ChatGPTisms will lead you on a hunt for ghosts. To “edit away” the signs you’d have to change everything, which is equivalent to writing it yourself in the first place. That’s as true for eviscerating signs of AI writing as it is for fixing any incompetent writing:
Marking a good essay? That's really easy. Check. A. You did everything right. Marking a bad essay? Oh my God—the words are wrong, the phrases are wrong, the sentences are wrong, they're not ordered right in the paragraphs, the paragraphs aren't coherent, and the whole thing makes no sense. So, trying to tell the person what they did wrong? It's like well, you did everything wrong. Everything about this essay is wrong.
By letting ChatGPT compose even a short phrase for you, you’re inviting generic AI patterns that you can’t detect, which slowly infect your writing until it sounds unnatural and you can’t explain why.
Well, depending on the pattern, someone else could maybe detect it. But you’ll let through all the patterns beyond your power level to discern. In effect, people who are smarter or have a better intuition for AI writing than you will know you’re a fraud, but you won’t know which telltale sign gave it away.
It’s tempting to think that your usage of “not-but” is backed with clear substance that readers will recognize, or that you’ve always used em dashes, so you won’t let AI witch hunts wrest your speech from you.
I’ve seen real human experts apparently using AI to aid their copywriting, but not generate their ideas.[2] And while in theory you can parse an expert perspective from its bed of em dashes, in practice most people won’t extend you that leniency.
It’s not that it weakens your point, but it’s that starting a sentence with “It’s not that …” triggers audiences to narrate your writing in “AI voice”. It disintegrates a reader’s brain because just the smell of AI is a noxious fume. Too often it’s a sign of other lurking deficiencies; there’s never just one cockroach in the kitchen.
With writing, the medium is the message: polished language reflects polished thoughts, slovenly language reflects slovenly thoughts, and AI language reflects AI “thoughts”.
So while you should not invite AI into your writing, that doesn’t go far enough. You should evict any ChatGPTisms you find, even if “that’s just how I write.” Em dashes and “not-but” are extremely versatile and effective linguistic technologies. But if you want an audience to listen, you have to adapt your style.
One problem is, although AI is good at mimicking genres, it still retains its other language patterns at higher levels of abstraction. The chameleon isn’t literally invisible after changing color. But the bigger problem is, why would you want to inherit any premade “style” in the first place?[3] This strikes me as pitifully stagnant.
Suppose you want to write in a “LinkedIn”-style tone to make your post sound professional. Well, Warren Buffett writes his own shareholder letters. Jeff Bezos writes his own letters. Howard Marks writes his own memos. Peter Thiel and Paul Graham and all these serious people write their own words.
The most “businessy” thing you can do, evidently, is write clear language in your own style. But the clear style is a consequence of clear thinking; knowing exactly what you want to say. And the clear thinking is a consequence of writing for yourself! Because you explore each dead end until only the critical path is left.
Alternatively, to sound “businessy”, there’s a sequence of buttons on Google Docs that will convert your writing into a “formal” style:
Here’s the plain text. Feel free to skip it once you’re sufficiently nauseated.
A fundamental challenge arises from artificial intelligence's proficiency in mimicking genres while simultaneously retaining its inherent linguistic patterns at higher levels of abstraction. The metaphor of a chameleon, though illustrative, falls short; the creature is not literally invisible after altering its coloration. However, a more significant concern emerges: why would one willingly adopt a pre-existing "style" in the first instance? This inclination strikes one as lamentably unoriginal.
Consider the desire to adopt a "LinkedIn"-esque tone to convey professionalism in a written post. Yet, prominent figures such as Warren Buffett, Jeff Bezos, Howard Marks, Peter Thiel, and Paul Graham—indeed, all individuals of serious intellectual pursuit—personally compose their own communications.
Evidently, the most "business-oriented" approach is to articulate ideas with clarity and in one's distinctive voice. This clarity of style is a direct consequence of lucid thought, stemming from a precise understanding of one's intended message. Furthermore, clear thinking itself is fostered through the act of writing for oneself, as it necessitates the exploration of every tangential path until only the essential argument remains.
That text is a grotesque offence against the English language. And soon it’ll be on tap inside your word processor! The feature frankly borders on evil. Obviously, it exists because people want to use AI this way. Their writing is bad and needs AI’s help, but using AI prevents them from getting better. So they’re stuck.
You can get unstuck, you just need to practice writing outside of strictly when you need to. Write essays for fun.
But even if you still intend to splice and curate AI’s outputs, you need good taste and judgment. That only gets developed through writing; a crucible of wading through endless micro-possibilities and choosing one over another, over and over.[4] It’s how you earn a tacit, visceral instinct of “this idea feels like it should take 4 lines, not 5,” “if I restructure this part here it’ll open a natural spot for a joke over there,” “instead of a general description, this would work better as a vivid sketch of one example.”
Eventually, that intuition doesn’t just help you choose between options as you would with AI outputs, but you become drawn towards the beauty of the next words which haven’t even materialized yet; you become possessed by the prophetic spirit of prosody, your feel for how language flows when it sounds right.
But that never happens when you write with AI from the outset.
Like feeling the rain on your skin, no one else can feel it for you. Least of all an AI which can’t be pulled forward by the feeling of anticipation, because it can’t feel anything at all.
Outsourcing to AI jams any seed of insight you had into the predetermined shape of the training data, just like the course of a river is predetermined by its topography. Sometimes, there’s a nuanced path you need to follow, and AI will simply never thread that path.
For this reason, you also can’t feed a bullet point outline of “your ideas” to ChatGPT and ask it to “convert” it into full sentences. If you processed those ideas yourself, you’d almost always end with different, better ideas than you started with. The abbreviated form of bullet points seems to imply condensed work representing more substance, but really it obscures shallow work, yet to be filtered and developed through contact with reality.
For example, see this article about the Law of Armed Conflict by John Spencer, a man credentialled as chair of urban warfare studies at the Modern War Institute (MWI) at West Point (the most prestigious military academy in the US). He does cite relevant historical substantiation, and his points sound like those you’d expect from someone in his position. But his writing is a museum of AI bloopers, such as “The law of war is not a loophole in morality—it is morality under fire.” and “These aren’t just bad arguments. They’re dangerously shallow—and they should be rejected outright.”
Copying styles mindlessly is how you pick up bad habits. In her article, The Average Fourth Grader Is a Better Poet Than You (and Me Too), Hannah Gamble mentions how 3rd-6th grade poets were far better than middle schoolers and high schoolers, probably because the younger students had no inbuilt expectations for how poetry “should” sound.
Wouldn't this "don’t outsource your thinking to any crutch" reasoning apply to any technology? Like the calculator? Yes actually, if you want to think deeply about math, you should avoid calculators when you can. At advanced levels, you won’t even have the intuition of when and how to use the calculator if you don’t work through enough problems by yourself to train that instinct.