But if thought corrupts language, language can also corrupt thought. A bad usage can spread by tradition and imitation, even among people who should and do know better. The debased language that I have been discussing is in some ways very convenient. Phrases like a not unjustifiable assumption, leaves much to be desired, would serve no good purpose, a consideration which we should do well to bear in mind, are a continuous temptation, a packet of aspirins always at one’s elbow.
Before ChatGPTisms, sloppy writing was signposted by Orwellisms. There’s a political kind (to euphemize “firebombing a village” vaguely as “neutralize unreliable elements”), but I’m interested in the mundane kind: lazy phrases you default to when you can’t articulate exactly what you mean.
For years, I visualized this as: your idea is shaped like a nuanced path, but without finding the exact words to express it, you settle for tracing the worn groove of a common phrase that is roughly, but not exactly, the same shape you intended. One time doesn’t hurt, but stack enough worn grooves end-to-end and you’ll land nowhere near where your original thoughts would have taken you.
I still don’t have an elegant rendition of this idea, but now I must also resist ChatGPT’s beckoning finger: “please write an aphoristic version of this concept: your idea shaped like a nuanced path…”
But you have to resist.
Orwell’s essay was published just after World War II, but it applies equally to AI writing on the World Wide Web. Lazy writing in his time meant vague cliches; in ours, it means outsourcing the work to a robot. But both lead to lazy thinking, because writing is thinking:
Paper and pencil and “expressing ideas” are merely what writing looks like. But words on a page are just the visible artifacts of an invisible process. Writing words and choosing good ones is the process that propels hard thinking, and that underlying thinking is what writing really is.
In my previous essay, I defended AI writing as it pertains to reading it. You should develop your taste and judgment for good writing without resorting to lazy proxies like ‘Yuck, this is AI.’
However, as it pertains to the writing process, the same advice to “develop your taste and judgment” demands the opposite outlook: do not use AI. When it feels hard to edit your writing, that’s what it feels like to exercise your mind and to excavate good ideas.[1] While this belief is common, it’s rare for people to believe it as absolutely as I do: I allow zero AI prose or even prose suggestions AT ALL.
To be clear, I’m not referring to functional writing like work emails or language translation. I mean creative writing when you’re wrestling with ideas you’re trying to express. I’m also not referring to using it like a tool: I use it like a magic thesaurus and sometimes ask for its feedback in conversation. I’m just opposed to AI composing your words; “write a hook for a post about x”, or “edit y to be shorter.”
The problem with both vague cliches and outsourcing your work is that it feels like you’re merely changing the expression of your thoughts, rewriting them. But in reality, you’ve surrendered the content of your thoughts, overwriting them. Cliched and AI prose both entail slipping into the orbit of some common diction that enough other people have also gravitated towards. You end up professing socialized beliefs while identifying as an independent thinker. ChatGPTisms in your work signpost the exact spots where AI hijacked your mind.
Nope, doesn’t work. People think they’re very subtle when they rephrase all the ChatGPTisms. Except they never catch all of them, because most are too abstract to explicitly grasp. Signs like em dashes, “delve”, and “not but” get mentioned most because they’re the most legible: word-level artifacts.
But AI patterns happen at all levels of granularity; across word, phrase, sentence, paragraph, and more. Its cadence of speech is just stilted. So a “search and destroy” approach to ChatGPTisms will lead you on a hunt for ghosts. To “edit away” the signs you’d have to change everything, which is equivalent to writing it yourself in the first place. That’s as true for eviscerating signs of AI writing as it is for fixing any incompetent writing:
Marking a good essay? That's really easy. Check. A. You did everything right. Marking a bad essay? Oh my God—the words are wrong, the phrases are wrong, the sentences are wrong, they're not ordered right in the paragraphs, the paragraphs aren't coherent, and the whole thing makes no sense. So, trying to tell the person what they did wrong? It's like well, you did everything wrong. Everything about this essay is wrong.
By letting ChatGPT compose even a short phrase for you, you’re inviting generic AI patterns that you can’t detect, which slowly infect your writing until it sounds unnatural and you can’t explain why.
Well, depending on the pattern, someone else could maybe detect it. But you’ll let through all the patterns beyond your power level to discern. In effect, people who are smarter or have a better intuition for AI writing than you will know you’re a fraud, but you won’t know which telltale sign gave it away.
It’s tempting to think that your usage of “not-but” is backed with clear substance that readers will recognize, or that you’ve always used em dashes, so you won’t let AI witch hunts wrest your speech from you.
I’ve seen real human experts apparently using AI to aid their copywriting, but not generate their ideas.[2] And while in theory you can parse an expert perspective from its bed of em dashes, in practice most people won’t extend you that leniency.
It’s not that it weakens your point, but it’s that starting a sentence with “It’s not that …” triggers audiences to narrate your writing in “AI voice”. It disintegrates a reader’s brain because just the smell of AI is a noxious fume. Too often it’s a sign of other lurking deficiencies; there’s never just one cockroach in the kitchen.
With writing, the medium is the message: polished language reflects polished thoughts, slovenly language reflects slovenly thoughts, and AI language reflects AI “thoughts”.
So while you should not invite AI into your writing, that doesn’t go far enough. You should evict any ChatGPTisms you find, even if “that’s just how I write.” Em dashes and “not-but” are extremely versatile and effective linguistic technologies. But if you want an audience to listen, you have to adapt your style.
One problem is, although AI is good at mimicking genres, it still retains its other language patterns at higher levels of abstraction. The chameleon isn’t literally invisible after changing color. But the bigger problem is, why would you want to inherit any premade “style” in the first place?[3] This strikes me as pitifully stagnant.
Suppose you want to write in a “LinkedIn”-style tone to make your post sound professional. Well, Warren Buffett writes his own shareholder letters. Jeff Bezos writes his own letters. Howard Marks writes his own memos. Peter Thiel and Paul Graham and all these serious people write their own words.
The most “businessy” thing you can do, evidently, is write clear language in your own style. But the clear style is a consequence of clear thinking: knowing exactly what you want to say. And the clear thinking is a consequence of writing for yourself! Because you explore each dead end until only the critical path is left.
Alternatively, to sound “businessy” there’s a sequence of buttons on Google Docs that will convert your writing into a “formal” style:
Here’s the plain text. Feel free to skip it once you’re sufficiently nauseated.
A fundamental challenge arises from artificial intelligence's proficiency in mimicking genres while simultaneously retaining its inherent linguistic patterns at higher levels of abstraction. The metaphor of a chameleon, though illustrative, falls short; the creature is not literally invisible after altering its coloration. However, a more significant concern emerges: why would one willingly adopt a pre-existing "style" in the first instance? This inclination strikes one as lamentably unoriginal.
Consider the desire to adopt a "LinkedIn"-esque tone to convey professionalism in a written post. Yet, prominent figures such as Warren Buffett, Jeff Bezos, Howard Marks, Peter Thiel, and Paul Graham—indeed, all individuals of serious intellectual pursuit—personally compose their own communications.
Evidently, the most "business-oriented" approach is to articulate ideas with clarity and in one's distinctive voice. This clarity of style is a direct consequence of lucid thought, stemming from a precise understanding of one's intended message. Furthermore, clear thinking itself is fostered through the act of writing for oneself, as it necessitates the exploration of every tangential path until only the essential argument remains.
That text is a grotesque offence against the English language. And soon it’ll be on tap inside your word processor! The feature frankly borders on evil. Obviously, it exists because people want to use AI this way. Their writing is bad and needs AI’s help, but using AI prevents them from getting better. So they’re stuck.
You can get unstuck; you just need to practice writing beyond the moments when you strictly need to. Write essays for fun.
But even if you still intend to splice and curate AI’s outputs, you need good taste and judgment. That only gets developed through writing: a crucible of wading through endless micro-possibilities and choosing one over another, over and over.[4] It’s how you earn a tacit, visceral instinct of “this idea feels like it should take 4 lines, not 5,” “if I restructure this part here it’ll open a natural spot for a joke over there,” “instead of a general description, this would work better as a vivid sketch of one example.”
Eventually, that intuition doesn’t just help you choose between options as you would with AI outputs, but you become drawn towards the beauty of the next words which haven’t even materialized yet; you become possessed by the prophetic spirit of prosody, your feel for how language flows when it sounds right.
But that never happens when you write with AI from the outset.
Like feeling the rain on your skin, no one else can feel it for you. Least of all an AI which can’t be pulled forward by the feeling of anticipation, because it can’t feel anything at all.
Outsourcing to AI jams any seed of insight you had into the predetermined shape of the training data, just like the course of a river is predetermined by its topography. Sometimes, there’s a nuanced path you need to follow, and AI will simply never thread that path.
For this reason, you also can’t feed a bullet-point outline of “your ideas” to ChatGPT and ask it to “convert” it into full sentences. If you processed those ideas yourself, you’d almost always end up with different, better ideas than you started with. The abbreviated form of bullet points seems to imply condensed work representing more substance, but really it obscures shallow work, yet to be filtered and developed through contact with reality.
For example, see this article about the Law of Armed Conflict by John Spencer, a man credentialled as chair of urban warfare studies at the Modern War Institute (MWI) at West Point (the most prestigious military academy in the US). He does cite relevant historical substantiation, and his points sound like those of someone in his position. But his writing is a museum of AI bloopers, such as “The law of war is not a loophole in morality—it is morality under fire.” and “These aren’t just bad arguments. They’re dangerously shallow—and they should be rejected outright.”
Copying styles mindlessly is how you pick up bad habits. In her article “The Average Fourth Grader Is a Better Poet Than You (and Me Too),” Hannah Gamble mentions how 3rd-6th grade poets were far better than middle schoolers and high schoolers, probably because the younger students had no inbuilt expectations for how poetry “should” sound.
Wouldn’t this “don’t outsource your thinking to any crutch” reasoning apply to any technology? Like the calculator? Yes, actually: if you want to think deeply about math, you should avoid calculators when you can. At advanced levels, you won’t even have the intuition for when and how to use the calculator if you don’t work through enough problems by yourself to train that instinct.