A Thoughtful Defense of AI Writing

by Michael Samoilov
16th Sep 2025
Linkpost from agenticconjectures.substack.com
4 min read
These days, it's status-boosting to notice when something was written by AI. Look at all those em dashes. Yep. Oh wow, “it’s not x, but y”; such a dead giveaway. I am so keen and discerning.

Shortform and longform content goes viral for flagging the signs of AI writing. Or rather, sins, because everyone is eager to oust the witches, paranoid about accusations of lowbrow plagiarism, and self-righteous in deriding the verbal pollution.

But frankly, if a post is engaging or insightful, I don’t care if it was written by AI. Of course, I do care if it’s sloppy. But that’s just as evident when a human writes it au naturel.

To discredit writing because you suspect it’s AI is itself a sloppy heuristic—you should discredit writing because it’s bad. AI writing often is bad, but also often is not—you only ever notice when it’s done lazily. But blanket-discarding it has many problems.

First, the false positives. After two gentle em dashes, many people will quit reading and start hunting like the Terminator. Once you suspect AI’s invisible hand, you’ll start sensing it where it’s not. Now anything the piece could have taught you is tarnished.

Second, even if it was AI, constructions like “it’s not x, but y” (negative parallelisms) are idiomatic in English because insights always take the form of swapping misconceptions for the truth. E.g., these two topics are not disparate, but actually related; or an assumption I took for granted is not hidden, but actually legible; or is not static, but actually variable. So it’s unsurprising that this structure gets rewarded by AI models.

Take a writing sample from the article, 13 Signs You Used ChatGPT To Write That. It’s meant to show how “low-IQ” the models are for blindly parroting negative parallelisms:

Falling in love isn’t just about romance. It’s about discovering new parts of yourself.

Yep, what a low-IQ—wait. That really is insightful!

That the pleasure of falling in love seems to stem externally from your relationship with your lover (you’re charmed by how attractive they are), but really, it stems internally from the new identities you inhabit (you enjoy becoming someone who flirts, who yearns, who feels alive).[1]

Which is to say, many AI writing tropes are human writing tropes, which often exist because we like them.

Lastly, it’s a mistake to lock in a disdain for AI writing now, while many of the faults are transitory: byproducts of nascent tools.

Currently, you tell ChatGPT what you want, then take what it gives. If you’re prudent, you’ll edit that. If you’re advanced, you might adjust the prompt and retry, throw on some custom instructions, or use handcrafted prompts sourced online. And if you’re really clever, you’ll ask the AI to write the prompt to ask itself. Yet after all that, the writing quality is almost always stilted anyway!

Quality in any domain emerges from evolution: iterative generation and selection. But given our current prompt-and-response interfaces, it’s clear we presently live in the impending past. Chatbots weren’t designed for high-quality content: they’re alright at generation (but can be incorrigibly hard to steer), and horrible at enabling selection (there’s too much friction to (a) keep asking for (b) plenty of (c) tailored outputs to choose from).

One vision of the future is tree interfaces. In 2021 (a year before ChatGPT existed), a pseudonymous AI researcher named Janus designed Loom, software to explore branching paths of AI outputs.[2] It works by generating as little as one sentence at a time, yet up to a dozen or more alternatives at once. Choose one and repeat, or backtrack and trace another path.

The generation bottleneck is paradoxically solved by selection: don’t describe what you want; instead, prune a context window from which what you want will naturally flow.[3] You constrain possible futures only to those which plausibly follow from a high-quality past.

And the selection bottleneck is solved by generation: don’t try one-shotting a multi-paragraph response; instead, get plenty of options per sentence. You produce a greater density of decision points to exert your judgment in, each with more material for your judgment to express itself through.
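To make the loop concrete, here is a minimal sketch of a Loom-style tree in Python. It is illustrative only, not Janus’s implementation: the `generate_continuations` stub stands in for whatever language-model call you would actually use, and all the names here are mine.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    """One point in the branching tree: the accumulated text plus its alternatives."""
    text: str                                # the context window so far
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)

def generate_continuations(context: str, n: int = 8) -> list[str]:
    """Stand-in for a language-model call that samples n short continuations.
    A real implementation would query an LLM with `context` as its prompt."""
    return [f" …continuation {i} of {context[-20:]!r}" for i in range(n)]

def expand(node: Node, n: int = 8) -> list[Node]:
    """Generation step: attach n candidate continuations as children."""
    node.children = [
        Node(node.text + c, parent=node) for c in generate_continuations(node.text, n)
    ]
    return node.children

def choose(node: Node, index: int) -> Node:
    """Selection step: commit to one branch; siblings stay around for backtracking."""
    return node.children[index]

# Usage: alternate generation and selection, one short span of text at a time.
root = Node("It was a dark and stormy night.")
current = root
for _ in range(3):
    options = expand(current, n=8)
    current = choose(current, random.randrange(len(options)))  # a human would pick by taste
# Backtracking is just walking up to a parent and choosing a different child.
print(current.text)
```

The design point is that the tree keeps every rejected sibling around, so backtracking costs nothing: curation becomes a walk through the tree rather than a fresh prompt each time.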

AI becomes leverage on your judgment.

The future appears to be a world where creation cedes ground to curation. In their 2021 post Quantifying Curation, Janus finds:

It is an interesting and unintuitive property of large language models that their stochastic completions to the same prompt can vary from nonsense to super-human – we might instead expect an AI of infra-human capability to consistently produce infra-human content, the way a person with a weak understanding of a topic is unlikely to say something “accidentally” indistinguishable from an expert. But the learning curves of language models have very different properties than that of humans.

AI writing will gain usage because it has extraordinary capacity for good writing. But that only happens when you apply enough evolutionary selection pressure by exhibiting good taste.

However, an unwillingness to engage with AI writing is not an expression of your taste, but an insecurity with it. If the writing is good, the refusal is a guardrail against accidentally “falling for” or enjoying AI: you’re scared to think for yourself. If the writing is bad, then rejecting it for being AI is a shallow proxy that absolves you from articulating what is bad about it.

If anything, you should sublimate a disdain for AI into a disdain for poor writing generally, raising your standards. If you ever think, “This writing sounds bad. Is it AI?”, don’t even wait to investigate the answer; just drop the piece. That is principled. Compare the reverse: “This sounds like AI. Is this writing bad?” That is paranoia.

There’s nothing to fear once you realize that AI is just a force multiplier on your judgment, and that if someone’s AI writing looks bad, it’s not a reflection of AI’s abilities, but of the author’s low effort. AI is a tool; a human still has to use it well.


After this entire tractate, you may have been expecting the inevitable twist of the genre: And guess what? This post was itself… written by AI 🫳🎤.

But the amount of AI writing in this post is… zero. Not for title ideation, not for promotional copywriting, not for a single phrase, or even a fragment of one, anywhere in the piece.

I handle it like poison.

Reading AI writing is completely different from writing with AI.

  1. ^

     Esther Perel, famed relationship therapist, has a related insight when she explains Why Happy People Cheat: “Sometimes when we seek the gaze of another, it’s not our partner we are turning away from, but the person we have become. We are not looking for another lover so much as another version of ourselves.” Similarly, Ye (formerly Kanye West), famed relationship therapist-needer, observed in his song Broken Road: “Love is a version of bein' a virgin again”. So AI landing on this point is rivalling our expert-level benchmarks on the human condition.

  2. ^

     Video demo of Loom.

  3. ^

     This also applies to humans. Rick Rubin opens his book The Creative Act with a quote by the painter Robert Henri: “The object isn’t to make art, / it’s to be in that wonderful state / which makes art inevitable.” The same happens when you try to invent a joke on the spot: AI can’t do it well, but nor could you. Yet both can make jokes all the time when immersed in some context, passively scanning for connections or subversions.