Preface
Summary: This post argues that resistance to AI-assisted writing is driven primarily by neurobiological uncertainty-aversion (amygdala) and ego-protection, not epistemic rigour. I frame this through "waterfall vs. agile epistemology" - the claim that early publication and iterative feedback produce better thinking than prolonged isolated development.
Key moves:
The Royce Irony: Winston Royce's 1970 paper warned against the waterfall model; Figure 10 looks like Agile. The industry read the diagram and stopped reading.
The perturbation window: releasing ideas early gives you (time minus one week) of feedback; waterfall gives you zero.
The metacognitive gap: AI doesn't remove the need for thinking; it removes the alibi for avoiding feedback.
This piece was written in ~2 hours with LLM assistance, as a demonstration of the core argument. Sections IX-X address whether that's insight or rationalisation.
The Almonds in the Tower
On the neurobiological origins of the ivory tower, the most expensive misreading in software history, & how the struggle was never the value - A thesis about feedback, ego, & the courage to be wrong.
Bjørn Flindt Temte
Jan 09, 2026
Section I: A Confession of Speed
This article was written in approximately two hours, with substantial assistance from a large language model. I mention this now—before you’ve had time to form an opinion about whether it’s any good—because the disclosure itself is part of the argument.
Notice what just happened in your reading. Something shifted. Perhaps a small recalibration of attention, a subtle adjustment of the evidentiary standards you were preparing to apply. Or perhaps not—perhaps you’re the kind of reader who genuinely doesn’t care how long something took to produce, only whether it rewards the time you spend with it. But even that indifference, if it’s real, is interesting. It suggests you’ve already answered a question that most people haven’t consciously asked.
The question is this: what, exactly, do we think we’re measuring when we measure the labor behind a piece of writing?
I could have not told you. I could have let this piece pass as “normally” authored—whatever that means now—and you would have evaluated it on its merits, or its apparent merits, without the contaminating variable of knowing how it was made. That I chose to lead with the confession is itself a kind of wager. A bet that the disclosure will cost me less credibility than the argument will earn back. Or perhaps a bet that you, specifically, are tired of the pretense—tired of the unspoken assumption that the conditions of production are somehow prior to the quality of the product.
We’ll see.
What I can tell you is that the thinking behind this piece took considerably longer than two hours. Weeks, arguably. Months, if you count the slow accumulation of observations that eventually crystallized into a thesis worth testing. The writing—the conversion of those thoughts into sentences, the sequencing, the iterative refinement of phrasing—that’s what happened fast. And whether that distinction matters is, in a sense, the entire subject of what follows.
So here we are. You, reading something that was made quickly. Me, watching to see if that fact will determine how you receive it—or whether you can hold it lightly enough to let the ideas speak for themselves.
I suspect your answer will reveal more about you than about me.
Section II: The Waterfall Cathedral
There is an image that lives in our collective imagination of what serious intellectual work looks like. You know it without being told: the scholar in the tower, surrounded by books and silence, laboring for years—decades, sometimes—on a single work of such depth and rigor that when it finally emerges, it lands like a stone tablet carried down from a mountain. Complete. Authoritative. Earned.
This image is not neutral. It carries with it an entire epistemology, a theory of how knowledge is best produced and validated. The theory goes something like this: important ideas require sustained, uninterrupted contemplation. The thinker must immerse themselves fully, must read everything relevant, must turn the problem over and over in solitude until they have seen it from every angle. Only then—only when the work is finished—should it be released into the world for judgment.
If you’ve spent any time in software development, you’ll recognize this pattern. It has a name: waterfall. Specification flows into design, design into implementation, implementation into testing, testing into deployment. Each phase completes before the next begins. The assumption is that sufficient foresight at the start—enough planning, enough expertise, enough isolated concentration—will yield a better product than messy, iterative engagement with reality.
The appeal is obvious. Waterfall feels serious. It respects the difficulty of the problem. It doesn’t rush. It suggests that the person doing the work has the discipline to resist premature exposure, the confidence to trust their own judgment, the integrity to refuse shortcuts. There’s something almost monastic about it—a willingness to withdraw from the noise of the world in service of something pure.
And buried in this appeal is an assumption so deep it rarely surfaces for examination: that duration correlates with depth. That a book which took ten years to write must contain more thinking than one written in a year. That the scholar who emerges from two decades of solitary labor has, by virtue of that labor, produced something more valuable than someone who worked faster and shared earlier.
Does it, though?
The question sounds almost impertinent. Of course longer is better—isn’t that obvious? Doesn’t more time mean more revision, more nuance, more opportunities to catch errors and deepen arguments?
Perhaps. But notice what the waterfall model also guarantees: zero feedback until launch. The scholar in the tower is not just concentrating; they are isolated. Their ideas are developing in a closed system, metabolizing only the inputs they themselves have chosen, refracted only through their own interpretive lenses. Whatever blind spots they carry into the tower, they carry out again—refined, perhaps, but not corrected.
The waterfall cathedral is beautiful. Soaring. Impressive in its commitment to purity of process. But it is also, by design, a structure that cannot learn from the world until it is too late to change.
And the question this raises—the uncomfortable question that the image of the solitary scholar is designed to deflect—is whether that isolation is a feature or a failure mode dressed in the robes of virtue.
Section III: The Perturbation Window
Here is a different way to think about the production of ideas: not as architecture, but as ecology.
In an ecology, nothing exists in isolation. Every organism is shaped by what it encounters—predators, symbionts, competitors, the chemical composition of the soil. Evolution doesn’t happen in a vacuum; it happens through interaction, through the constant pressure of an environment that tests, selects, and recombines. The fittest ideas, like the fittest organisms, are not the ones that developed longest in protected conditions. They’re the ones that survived contact with reality.
Now consider what happens when you release an idea early. Not finished—rough, possibly wrong, certainly incomplete. A first draft. A probe.
The probe enters the world and immediately begins to generate responses. Some people disagree; they articulate objections you hadn’t considered. Some people agree but extend the argument in directions you didn’t anticipate. Some people misunderstand in ways that reveal ambiguities in your own thinking—places where you thought you were clear but weren’t. And some people—this is the crucial part—take your half-formed idea and combine it with their half-formed ideas, producing something neither of you could have generated alone.
This is the perturbation window. The span of time during which your idea is out there, interacting, evolving, being stress-tested by minds that are not yours.
Let’s do the arithmetic. Say you have an idea that could, under the waterfall model, be developed into a book over ten years of solitary refinement. At the end of that decade, you release it. The perturbation window opens—but you’re already done. Whatever feedback you receive now is too late to incorporate; it can only inform your next project, if you have the energy and humility to begin again.
Alternatively: you spend one week producing a rough version. An article, perhaps. A sketch of the argument, incomplete but coherent enough to engage with. You publish it. Now you have a perturbation window of ten years minus one week—essentially the same duration, but active. The feedback you receive in month two can reshape your thinking for the remaining nine years and ten months. The objections raised in year one can be addressed, incorporated, or used to identify which parts of your framework are load-bearing and which were scaffolding you can discard.
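If it helps to see the comparison laid out, here is a minimal sketch in Python (the numbers are purely illustrative assumptions, nothing more) of how much usable feedback time each model leaves open:

```python
# Minimal sketch of the perturbation-window arithmetic.
# All numbers are illustrative assumptions, not data.

HORIZON_WEEKS = 10 * 52                # total time you intend to spend with the idea
WEEKS_TO_PUBLISH_WATERFALL = 10 * 52   # waterfall: publish only when everything is done
WEEKS_TO_PUBLISH_AGILE = 1             # agile: publish a rough probe after one week

def perturbation_window(horizon_weeks: int, weeks_before_publication: int) -> int:
    """Weeks during which feedback can still change the work."""
    return max(horizon_weeks - weeks_before_publication, 0)

print("waterfall:", perturbation_window(HORIZON_WEEKS, WEEKS_TO_PUBLISH_WATERFALL), "weeks of usable feedback")
print("agile:    ", perturbation_window(HORIZON_WEEKS, WEEKS_TO_PUBLISH_AGILE), "weeks of usable feedback")
# waterfall: 0 weeks of usable feedback
# agile:     519 weeks of usable feedback
```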
The math is almost embarrassingly simple. And yet.
And yet the waterfall model persists, not because anyone has done this calculation and concluded that zero feedback is optimal, but because the calculation isn’t what’s driving the choice. Something else is. Something that prefers the closed system, the protected development, the revelation rather than the conversation.
We’ll get to what that something is. But first, notice what the agile alternative actually requires. It requires publishing things that aren’t done. It requires being wrong in public. It requires treating your own ideas not as monuments to be unveiled but as organisms to be released into an environment that will change them—and you—in ways you cannot fully predict or control.
The perturbation window is not a comfort. It’s an exposure. And that, I suspect, is why so few people willingly open it.
Section IV: The Two Little Almonds
Deep in your brain, one nestled within each temporal lobe, sit two almond-shaped structures that have opinions about everything you do. The amygdalae—your threat-detection system, your uncertainty-aversion engine, the part of you that was making decisions long before the prefrontal cortex showed up with its fancy “reasoning” and “long-term planning.”
The almonds don’t deliberate. They don’t weigh evidence. They react—fast, pre-verbal, with the authority of several hundred million years of evolutionary refinement. And what they react to, more than almost anything else, is uncertainty. The unknown. The uncontrolled. The space where you cannot predict what happens next.
This is not a flaw. For most of evolutionary history, uncertainty meant danger. The rustle in the grass that might be wind or might be predator. The stranger approaching the camp who might be trader or might be raider. The almonds kept your ancestors alive by erring on the side of caution, by flooding the body with cortisol at the first hint of ambiguity, by screaming wait, don’t, not yet, we don’t know enough.
The problem is that the almonds cannot distinguish between a leopard in the tall grass and a comments section on the internet.
Consider what it means to publish something unfinished. You are releasing an incomplete version of your thinking into an environment you cannot control, to be evaluated by people whose reactions you cannot predict, who may find flaws you cannot yet see. You are, in the almonds’ calculus, walking into tall grass at dusk and hoping for the best.
Better to stay in the tower a little longer, the almonds whisper. Better to check it one more time. You don’t know what’s out there. You don’t know what they’ll say. Ignorance is bliss—and certainty, even the false certainty of an untested idea, is safer than exposure.
This is why the waterfall model is so sticky. Not because anyone has rationally concluded that zero feedback is optimal. Not because the evidence supports isolated development. But because waterfall feels safe in a way that agile does not. The closed system protects you from the particular kind of uncertainty that the almonds find most threatening: the judgment of others, arriving at a time and in a form you cannot anticipate or prepare for.
And here is the irony—the first of several, each more layered than the last.
The image of the scholar in the ivory tower is perhaps our clearest cultural icon of the evolved human. The mind transcending the body. Thought purified of animal impulse. Reason ascending above instinct. We venerate this image precisely because it seems to represent everything that separates us from the beasts—the capacity for sustained, abstract contemplation uncorrupted by base urges.
And yet.
That scholar is in the tower because they are listening to their lizard brain. The whole edifice of contemplative isolation—the years of solitary refinement, the reluctance to share until the work is “ready,” the preference for revelation over conversation—is, at its root, a fear response. The almonds, dressed in academic robes, whispering that the world outside is dangerous and the work inside is never quite safe enough to release.
The tower isn’t a triumph over instinct. It’s a monument to it.
This doesn’t make the fear illegitimate. The world outside does contain predators of a kind—people who will misread you, misrepresent you, use your half-formed thoughts against you. The comments section really is a dangerous place. But the question isn’t whether the fear is understandable; it’s whether the fear is serving you. Whether the protective withdrawal that feels like rigor is actually producing better work, or merely protecting you from the discomfort of discovering, in real time, what you got wrong.
The almonds don’t care about that distinction. They care about keeping you safe. And safety, to them, looks like staying in the tower until you’re sure.
The problem is that you’re never sure. You’re just more invested.
Section V: The Royce Irony
In 1970, a computer scientist named Winston Royce published a paper titled “Managing the Development of Large Software Systems.” On page two, he presented a diagram—Figure 2—that would become one of the most influential images in the history of technology management.
You know this diagram, even if you’ve never seen the original. Seven boxes descending like a staircase: SYSTEM REQUIREMENTS → SOFTWARE REQUIREMENTS → ANALYSIS → PROGRAM DESIGN → CODING → TESTING → OPERATIONS. Each phase flowing into the next. The waterfall.
The image is clean. Sequential. It makes a kind of intuitive sense that feels almost inevitable—of course you define requirements before you design, of course you design before you code. The boxes are orderly. The arrows point one direction. A manager could look at this and feel that the chaos of software development had been tamed into something logical, predictable, controllable.
What that manager almost certainly did not do was read the next sentence.
Directly beneath Figure 2, Royce wrote:
“I believe in this concept, but the implementation described above is risky and invites failure.”
The diagram was not a recommendation. It was a warning.
The final sentence on that very page is equally unambiguous:
“The remainder of this discussion presents five additional features that must be added to this basic approach to eliminate most of the development risks.”
Must. Not “could helpfully include.” Not “might consider.” Must.
The rest of the paper—the part that history forgot—describes those five features. And when you see what Royce actually recommended, the scale of the misreading becomes almost physically painful.
Figure 10, his summary diagram, looks nothing like the clean staircase of Figure 2. It is dense, looping, recursive. Preliminary designs feed back into requirements. Analysis cycles back into software requirements. There are parallel documentation streams, customer involvement checkpoints, iterative refinement loops at every stage. The arrows don’t just descend; they return, reconnect, spiral. It looks less like a waterfall and more like the nervous system of an organism that learns from contact with reality.
It looks, in other words, remarkably like what we now call Agile—three decades before the Agile Manifesto was written.
Now place Figure 10 beside the modern “canonical” waterfall model—the one that appears in project management courses, in boardroom presentations, in the shared imagination of anyone who has tried to organize complex work. Pastel boxes. Gentle descent. Perhaps a “Maintenance” phase added at the bottom, as though learning only happens after everything else is finished. Clean. Simplified. Wrong.
The gap between these two images is perhaps the most expensive reading comprehension failure in post-war industrial history. Billions of dollars in failed projects. Decades of frustration. Entire careers built on defending a methodology that its own inventor explicitly labeled as an illustration of what not to do.
How does this happen?
The almonds have something to do with it. Figure 2 is clean; Figure 10 is complicated. The warning diagram fits on a slide; the actual recommendation requires study, interpretation, comfort with ambiguity. And when you are a manager under pressure, when your own uncertainty-aversion systems are screaming for something solid to hold onto, when the executives above you want confidence and the engineers below you want clarity—well.
The simple image wins. The nuance gets lost. And the almonds, once again, get what they wanted: the feeling of certainty, even at the cost of the thing itself.
Royce tried to communicate complexity. The world took his simplest image and stopped reading. And now we invoke his name to justify exactly the process he warned us against—while his actual model, sitting eight pages later in the same paper, gathers dust.
If you want to understand why the waterfall model persists despite its failures—and why the same pattern might be repeating right now, in different domains, with different diagrams—you could do worse than to sit with this irony for a moment.
Complexity was offered. Simplicity was taken.
The almonds always get what they want.
Section VI: Ego as Architect
The amygdalae explain the fear. But fear alone doesn’t account for the tenacity of the tower—the way certain minds will defend their isolation long past the point where it serves them, long past the point where they themselves can see it isn’t working.
For that, we need to talk about identity.
There is a particular kind of person—you may be one, I am certainly one—for whom intellectual output is not merely something they do but something they are. The quality of their ideas is load-bearing for their self-concept. To be wrong is not simply to have made an error; it is to be revealed as lesser than they believed themselves to be. The stakes of any given argument are not just epistemic but existential.
This is, to be clear, not a comfortable way to live. It creates a constant background hum of anxiety, a vigilance about being caught out, a sensitivity to criticism that can border on the allergic. But it also—and here’s the trap—produces genuine rigor, at least some of the time. The person who cannot tolerate being wrong often works very hard to avoid being wrong. They check their sources. They anticipate objections. They refine their arguments until the surface is smooth, until the gaps are hidden, until the thing looks finished.
The problem is that this same drive makes feedback feel like assault.
If your self-worth is structurally dependent on the quality of your thinking, then every critique—even the helpful kind, especially the helpful kind—arrives as evidence against your identity. The commenter who says “have you considered...” is not offering a gift; they are exposing a gap, a place where your thinking failed, a crack in the edifice you’ve built to convince yourself you’re the kind of person who doesn’t have cracks.
And so the tower becomes a fortress. The extended timeline becomes a moat. The years of solitary refinement become—let us be honest with ourselves—years of avoiding the verdict.
Ryan Holiday wrote a book called Ego Is the Enemy, and the title is doing most of the work I need it to do here. But I want to push on a specific aspect: the way ego distorts the economics of feedback.
From a pure information-theory perspective, early feedback is almost always more valuable than late feedback. Errors caught in draft one cost less to fix than errors caught in draft ten. Blind spots identified before publication can be addressed; blind spots identified after become permanent features of the public record. The rational actor would want feedback as early as possible, would seek it out, would treat every “have you considered” as free consulting.
But the ego is not a rational actor. The ego has its own economy, and in that economy, the currency is self-regard. Early feedback—feedback on unfinished work, feedback that exposes the gaps before you’ve had time to fill them—is expensive. It costs self-image. It costs the pleasant illusion that you were going to get it right on your own. It costs the narrative that you are the kind of person who doesn’t need help.
And so the ego invents reasons to delay. It’s not ready yet. I need to think about it more. I don’t want to waste people’s time with something half-baked. These sound like conscientiousness. They feel like rigor. But often—not always, but often—they are the ego negotiating for more time in the fortress, more distance from the verdict, more opportunity to refine the surface until the gaps are invisible.
The cruelest irony is that this pattern is most pronounced in precisely the people who have the most to offer. The mediocre thinker has less identity invested in being a good thinker; they can receive feedback with relative equanimity because their self-concept doesn’t depend on being right. But the genuinely brilliant mind—the one whose contributions might actually matter—often has the most fragile relationship with critique. Their brilliance has become structural. They cannot afford to be wrong in the way that wrongness demands to be afforded: openly, repeatedly, as a regular cost of doing intellectual business.
This is where Saruman enters.
In Tolkien’s legendarium, Saruman is not a villain because he lacks intelligence. He is the most intelligent—the most learned, the most far-seeing, the head of his order. His fall comes precisely because he cannot tolerate counsel from those he considers lesser. He cannot accept that the Hobbits might see something he missed, that Gandalf’s wandering might yield insights his study did not, that wisdom might arrive from unexpected directions. His tower—Orthanc, literally “cunning mind”—becomes his prison not because anyone locks him in but because his ego cannot survive leaving it.
The tower, in this light, is not a place of contemplation. It is a bunker against the possibility of being wrong in front of others.
And the ten years of solitary refinement? The insistence that the work isn’t ready, that it needs more time, that premature exposure would somehow contaminate its purity?
Sometimes that’s genuine incubation. Sometimes ideas really do need time to develop, and premature exposure really would distort them.
But sometimes—more often than we’d like to admit—it’s Saruman in Orthanc, convinced that more study will yield what contact with reality threatens to reveal: that we are not quite who we thought we were.
The question, as always, is how to tell the difference from inside.
Section VII: The Protestant Keyboard
We have, at this point, two interlocking explanations for the persistence of the tower: the amygdalae, which fear uncertainty, and the ego, which fears exposure. But neither of these feels quite respectable enough to say out loud. No one defends their ten-year timeline by admitting their lizard brain is scared or their self-concept is fragile. They need a story—a frame that transforms neurological self-protection and psychological defensiveness into something that sounds like virtue.
Enter the Protestant work ethic.
I am painting with a broad brush here, and I know it. Max Weber’s thesis about Protestantism and capitalism is contested, nuanced, more complicated than the version that has seeped into popular consciousness. But the popular version is what matters for our purposes, because the popular version is what we’ve internalized: the idea that hard work is morally redemptive. That struggle sanctifies. That the difficulty of a task is, in some sense, part of its value—not merely instrumental but intrinsic.
This is a powerful frame. It has built nations, fueled industrial revolutions, shaped the psychic landscape of entire civilizations. And it has, somewhere along the way, colonized our assumptions about creative and intellectual work.
The writer who struggles for years in obscurity is noble. The artist who suffers for their craft is authentic. The scholar who labors in solitude, emerging only when the work is “ready,” has earned their authority in a way that someone who worked faster somehow hasn’t. There is a moral weight to duration, a virtue in difficulty, a suspicion of ease.
Notice how convenient this is.
If suffering is noble, then the fear that keeps you in the tower becomes “rigor.”
If duration signals depth, then the ego-protection that prevents early exposure becomes “thoroughness.”
If the struggle itself has value, then you don’t have to examine whether the struggle is producing value—it already is value, by definition, regardless of outcome.
The Protestant work ethic, in this light, isn’t a cause of the waterfall pattern. It’s a cover story. A way of making neurobiological fear and psychological fragility look like moral seriousness. The almonds get to hide behind the language of discipline. The ego gets to hide behind the language of standards. And no one has to admit that they stayed in the tower because they were scared of what they’d find outside.
Here’s what I find almost funny about this: the moral valorization of struggle isn’t even consistent with the capitalism it supposedly supports.
Pure market logic doesn’t care how hard you worked. It cares what you produced. If two people create equivalent value—identical products, identical insights, identical contributions—the one who did it faster and with less effort is, by market logic, more efficient, more productive, more worthy of reward. The suffering adds nothing. The labor is instrumental, not terminal. To privilege the harder path when an easier path yields the same destination is, by strictly capitalist logic, a kind of irrationality.
But we’re not strict capitalists, are we? We’re creatures who inherited a story about the moral weight of effort, and we’ve applied that story to domains where it doesn’t quite fit.
Consider the keyboard.
There is a vague sense—rarely articulated but widely held—that the physical act of writing is part of the value of writing. The hours at the desk. The fingers on keys. The slow accumulation of words through sustained bodily effort. This feels real in a way that faster methods do not. It feels like work, which means it feels like worth.
But attend, for a moment, to the actual economics of typing.
Every keystroke has a price—electrical energy to power the machine, metabolic energy to move the finger, mechanical wear on the keyboard itself. There is friction, entropy, heat. The physical act of transcription is not neutral; it has costs. Resources are consumed. Order degrades into disorder. The universe moves incrementally closer to heat death every time you press the spacebar.
These costs are only justified if they produce something that outweighs them. And what is that something? Not the keystrokes themselves. Not the hours logged. Not the suffering endured. The value—the only value—is the thinking that the words encode.
But here’s the thing about thinking: it doesn’t require keyboards.
Thinking happens in the shower, on walks, in the middle of the night when you should be sleeping. It happens during conversations, while reading, while staring out windows. The moments of genuine cognitive work—the actual generation and refinement of ideas—are largely decoupled from the act of transcription. You don’t think because you’re typing; you type to capture what you’ve already thought, or to discover what you’re in the process of thinking. The keyboard is a recording device, not a generative one.
Which means: if the same thinking can be encoded with fewer keystrokes, less time, less wear-and-tear on bodies and machines, the value of the thought is unchanged. What decreases is only the cost of capturing it.
This is, roughly speaking, what large language models offer. Not a replacement for thinking—thinking still has to happen somewhere—but a more efficient transcription layer. A way to convert cognitive work into written form with less friction, less time, less of the mechanical labor that we’ve been trained to mistake for the work itself.
And the resistance to this is fierce. Visceral. Moralized.
Because if the keystrokes aren’t the value—if the struggle isn’t the point—then what have we been venerating all this time? What was all that suffering for?
The answer, I suspect, is that the suffering was for the almonds and the ego. It was a way of avoiding the exposure that faster methods would have enabled. It was a way of staying in the tower, calling it rigor, and believing our own cover story.
The Protestant keyboard wasn’t sacred. It was a hiding place.
Section VIII: The Metacognitive Gap
Now we arrive at the objection that has been waiting patiently in the wings, and it deserves to be stated in its strongest form: most AI-generated writing is garbage.
Not all of it. But most. The internet is already drowning in LLM-produced slurry—text that is fluent, grammatical, superficially coherent, and utterly empty. Blog posts that say nothing. Articles that circle their subject without ever landing. Content that exists not because someone had something to say but because someone needed to fill a content calendar, hit a word count, produce output in a way that could be measured and reported.
This is real. It is a genuine problem. And if the argument I’ve been building seems to ignore it, seems to wave away the tsunami of AI-generated mediocrity in favor of some idealized “agile epistemology,” then the argument deserves to fail.
So let me be precise about what I think is happening.
The failure mode of AI-assisted writing is not that AI produces bad text. AI produces text that is, in a sense, too adequate—fluent enough to pass, coherent enough to seem finished, polished enough to trigger the sense that the task is complete. And that adequacy is the trap.
There’s a concept in cognitive science that goes by various names: System 1 versus System 2, fast thinking versus slow thinking, automatic versus controlled processing. The details differ across frameworks, but the core observation is consistent: we have a mode of cognition that is quick, effortless, and pattern-matching, and we have a mode that is slow, effortful, and genuinely analytical. Most of the time, System 1 handles things. System 2 is expensive to run, so we only spin it up when System 1 flags something as requiring attention.
Now consider what happens when you prompt an LLM and receive a response.
System 1 looks at the output. The grammar is correct. The structure is logical. The length is appropriate. Every surface-level cue says this is a finished piece of writing. And System 1, pattern-matching as it does, concludes: task complete. Move on.
The problem is that System 1 cannot evaluate depth. It can recognize fluency but not rigor. It can pattern-match against the form of good writing without detecting the presence or absence of genuine thought. And so the LLM output passes inspection not because it’s good but because it’s good enough to fool the part of you that isn’t really paying attention.
This is what I mean by the metacognitive gap.
Metacognition is, roughly, thinking about thinking. It’s the capacity to monitor your own cognitive processes, to notice when you’re confused, to detect when an argument is skating over a gap rather than bridging it. It’s what allows you to read your own writing and feel the places where the logic doesn’t quite hold, where you’ve asserted something you haven’t earned, where the words are doing the work that the ideas should be doing.
Most people, when using AI, do not engage this capacity. The output arrives; it looks like writing; they ship it. The metacognitive layer that would catch the emptiness—that would notice the absence of genuine thought beneath the fluent surface—never activates. System 1 handles the whole transaction, and System 2 never gets invoked.
The result is the slurry. The content-calendar filler. The “AI-generated” pejorative that has already become shorthand for text that no one actually thought about.
But—and here is the pivot—this is not an indictment of the tool. It’s an observation about the user.
The great writers we venerate, the ones who spent years in their towers producing work of genuine depth—what were they actually doing all that time? They were not, primarily, pressing keys. They were thinking. They were engaging the slow, expensive, effortful process of genuine cognition. They were reading their own drafts and feeling the gaps. They were sitting with discomfort rather than resolving it prematurely. They were running System 2 continuously, for years, refusing to let System 1 declare victory just because words existed on a page.
The keyboard was never the source of that rigor. The time was never the source. The source was the sustained, uncomfortable, metabolically expensive act of actually thinking—and refusing to stop when the surface looked smooth.
AI doesn’t eliminate this requirement. If anything, it intensifies it.
When the transcription layer was slow—when producing a draft took months of physical labor—there was a built-in forcing function for reflection. You had time to think while you typed. The very difficulty of production created space for cognition. The struggle, while not itself valuable, created conditions in which value could emerge.
Now that the transcription layer is fast, that forcing function is gone. You can produce a draft in minutes. Which means: if you’re going to think, you have to choose to think. You have to deliberately invoke the metacognitive layer that the old process invoked accidentally. You have to resist the System 1 verdict of “done” and ask, actively, is there actually anything here?
This is harder than it sounds. The almonds don’t like it—uncertainty is uncomfortable. The ego doesn’t like it—what if you discover your idea was shallower than you thought? The Protestant-keyboard mythology doesn’t help—it trained us to locate value in the wrong place, so we don’t know where to look for it now.
But the people who figure this out—who learn to use AI as a transcription layer while maintaining genuine cognitive engagement—will produce more, iterate faster, and refine their thinking in ways that the tower model never allowed. They’ll have the perturbation window. They’ll have the feedback. They’ll have the collision with reality that the waterfall model forecloses.
The metacognitive gap is real, and most people are falling into it. That is not a reason to reject the tool. It is a reason to develop the capacity that the tool requires.
The struggle was never about the typing. It was always about the thinking.
The thinking is still required. It’s just no longer disguised as something else.
Section IX: The Self-Devouring Serpent
You have, at this point, read approximately four thousand words about why early publication and iterative refinement are superior to prolonged isolation and deferred feedback.
You already know what comes next.
This piece was written in a single sitting—a long one, but single—with substantial AI assistance. It is not finished in the way a book would be finished. It has not been reviewed by colleagues, pressure-tested by critics, refined across multiple drafts over multiple years. It contains, almost certainly, errors I cannot see, gaps I have skated over, assumptions I have not examined closely enough. If I were to read it again in six months, I would wince at parts of it. If I were to read it in five years, I might disown sections entirely.
I am publishing it anyway.
This is not a confession. It is—or I am trying to make it—a demonstration.
The waterfall approach to this argument would have been to spend a year or two developing it into a proper book. I would have read more Weber, tracked down primary sources on Royce, consulted with neuroscientists about the amygdala’s actual role in uncertainty processing, built a more rigorous framework, hedged more carefully, anticipated more objections. The result would have been more defensible. More complete. More difficult to dismiss.
It would also have taken two years to reach anyone. Two years during which the ideas could not be challenged, refined, extended, or demolished. Two years of development in a closed system, metabolizing only what I chose to feed it, refracted only through my own limitations.
Instead, you’re reading this now. Today. While the ideas are still rough, still provisional, still wrong in ways I cannot yet identify.
And here’s what that means: you are not a passive recipient of a finished argument. You are part of the perturbation window. Your objections—if you have them—are exactly what the waterfall model would have denied me. Your extensions, your refinements, your “yes, but have you considered...” are the feedback that turns a static artifact into an evolving understanding.
The piece is eating its own tail. The method is the message is the test.
If I’m wrong about agile epistemology, someone will tell me—sooner, and in time to matter. If the amygdala framing is neuroscientifically naive, an actual neuroscientist might stumble across this and correct me. If the Royce interpretation is historically contested in ways I don’t know about, a software historian might surface that context. The perturbation window is open. The closed system has been breached.
And yes—my almonds are uncomfortable with this. The uncertainty about how this will be received is genuinely unpleasant. My ego is aware that the gaps I cannot see are visible to others, that the confidence of these sentences is not matched by the solidity of the ground beneath them. The part of me that wants to be seen as rigorous, as careful, as the kind of person who doesn’t publish half-baked arguments—that part is lodging formal objections.
I am overruling it. Not because the objections are wrong—they might not be—but because the alternative is the tower. And I’ve been in the tower. I know what it feels like to refine an idea for years in isolation, convinced that more time would make it better, only to discover upon release that I had been wrong in ways that a single outside reader could have caught in the first month.
The tower felt like rigor. It was actually ego preservation with good lighting.
So here we are. An argument about early exposure, exposed early. A case for iteration, in its first iteration. A thesis that cannot be proven from inside the skull that generated it, offered to skulls that might see what this one cannot.
The ouroboros swallows its tail.
Now we find out if there’s anything in its stomach.
Section X: The Provisional Non-Conclusion
I want to end by naming the thing I’ve been circling.
There is a version of this argument that is pure rationalization. A sophisticated, multi-layered justification for impatience, dressed in the language of epistemology and neuroscience. In this version, I am not a person who has genuinely discovered something about the economics of intellectual production; I am a person who wants to work faster, publish sooner, and avoid the hard labor of real refinement—and who has constructed an elaborate framework to make that avoidance look like insight.
I cannot rule this out from inside my own skull.
The almonds work in all directions. They are perfectly capable of generating sophisticated intellectual frameworks to protect me from feedback that says slow down. The ego, too, is flexible; it can attach itself to being the kind of person who publishes provocative arguments just as easily as it can attach to being the kind of person who labors in towers. The machinery of self-deception is not constrained by the content of the deception. It can wear any costume, including this one.
So maybe this entire piece is a symptom of the thing it claims to diagnose. Maybe the speed of its production is not evidence that speed can coexist with rigor but evidence that I have abandoned rigor and called the abandonment a methodology. Maybe five years from now I will look back at this and see only a person who wanted to believe that shortcuts were wisdom.
I don’t know. Genuinely. The uncertainty is not rhetorical.
But here’s what I do know: the waterfall alternative—the version where I sit with this argument for two years, refining it in isolation until I’m sure it’s not self-serving—would not resolve this uncertainty. It would only delay the confrontation. I would emerge from the tower with a more polished artifact, but the question of whether the core insight is genuine or rationalized would remain unanswered, because that question cannot be answered from inside. It requires contact with minds that are not mine. It requires the perturbation window that I have been arguing for across ten sections and four thousand words.
By publishing this now—rough, provisional, possibly wrong—I am subjecting the argument to exactly the test it recommends. If it’s a rationalization, someone will see the gaps. If the almonds have blinded me, the feedback will arrive in time to matter. If I have built an elaborate justification for my own impatience, you can tell me.
That’s not a rhetorical flourish. It’s an actual request.
The tower approach would be to refine this until I felt certain. But certainty generated in isolation is precisely the failure mode I’ve been describing. The feeling of confidence that comes from extended solitary work is not the same as the robustness that comes from surviving contact with reality. And I would rather discover I’m wrong in public, in time to update, than preserve the pleasant illusion of rightness in private.
So here, at the end, are the tensions I cannot resolve:
There is a genuine question about whether some ideas require marination—not as ego protection but as actual cognitive incubation. Not everything can be iterated in public. Some thoughts need time to unfold in ways that feedback might distort rather than refine. I have not adequately addressed where the boundary lies between productive iteration and premature exposure. Maybe this piece itself crosses that line.
There is a genuine question about feedback quality. Not all perturbation is useful. Some of it is noise—other people’s almonds, other people’s egos, reactions shaped by misunderstanding or motivated reasoning or simple bad faith. The iterative model assumes that feedback, on balance, improves thinking. That assumption might be wrong. The tower might sometimes be a legitimate defense against distortion, not just a hiding place.
There is a genuine question about whether I am simply impatient. Whether the “golden window” framing—the urgency to produce while the tools are good and cheap—is insight or impulse. Whether I would believe any of this if the process were still slow.
I don’t have clean answers to these questions. I’m not sure clean answers exist.
What I have is a bet. A wager that the discomfort of early exposure is more productive than the comfort of extended isolation. That whatever I’ve gotten wrong here, I will learn about faster than I would have in the tower. That the perturbation window, now open, will do what perturbation windows do: test, stress, refine, or demolish.
The argument is out of my hands now. It belongs to the collision.
I don’t know if I’ve escaped my own tower in writing this. But at least now you can look up and tell me if you see any walls.
This piece was written on the 9th of January 2026, in approximately two hours, with AI assistance. If you’ve found the gaps I couldn’t see, I’d like to know. The perturbation window is open.
A note on AI-assistance and this post's relevance to LW policy:
This article was co-written with an LLM in approximately two hours. I'm disclosing this at the top, per site policy, but also because the disclosure is load-bearing: the piece argues that the resistance to AI-assisted writing is driven more by amygdala-level uncertainty aversion and ego-protection than by epistemic rigour.
The thesis is that "waterfall epistemology"—developing ideas in isolation until they're "complete"—is inferior to early publication and iterative refinement. The AI-assistance and rapid publication aren't incidental; they're the demonstration.
I'm genuinely uncertain whether this is insight or rationalisation. Sections IX-X address this directly. I'm submitting it because, per the argument, the only way to resolve that uncertainty is external feedback—and LessWrong is where I'd most like that feedback to come from.
If you reject it, I'd be curious whether the rejection is policy-based or object-level disagreement. Both would be informative.
Preface
Summary: This post argues that resistance to AI-assisted writing is driven primarily by neurobiological uncertainty-aversion (amygdala) and ego-protection, not epistemic rigour. I frame this through "waterfall vs. agile epistemology" - the claim that early publication and iterative feedback produces better thinking than prolonged isolated development.
Key moves:
This piece was written in ~2 hours with LLM assistance, as a demonstration of the core argument. Sections IX-X address whether that's insight or rationalisation.
The Almonds in the Tower
On the neurobiological origins of the ivory tower, the most expensive misreading in software history, & how the struggle was never the value - A thesis about feedback, ego, & the courage to be wrong.
Bjørn Flindt Temte
Jan 09, 2026
Section I: A Confession of Speed
This article was written in approximately two hours, with substantial assistance from a large language model. I mention this now—before you’ve had time to form an opinion about whether it’s any good—because the disclosure itself is part of the argument.
Notice what just happened in your reading. Something shifted. Perhaps a small recalibration of attention, a subtle adjustment of the evidentiary standards you were preparing to apply. Or perhaps not—perhaps you’re the kind of reader who genuinely doesn’t care how long something took to produce, only whether it rewards the time you spend with it. But even that indifference, if it’s real, is interesting. It suggests you’ve already answered a question that most people haven’t consciously asked.
The question is this: what, exactly, do we think we’re measuring when we measure the labor behind a piece of writing?
I could have not told you. I could have let this piece pass as “normally” authored—whatever that means now—and you would have evaluated it on its merits, or its apparent merits, without the contaminating variable of knowing how it was made. That I chose to lead with the confession is itself a kind of wager. A bet that the disclosure will cost me less credibility than the argument will earn back. Or perhaps a bet that you, specifically, are tired of the pretense—tired of the unspoken assumption that the conditions of production are somehow prior to the quality of the product.
We’ll see.
What I can tell you is that the thinking behind this piece took considerably longer than two hours. Weeks, arguably. Months, if you count the slow accumulation of observations that eventually crystallized into a thesis worth testing. The writing—the conversion of those thoughts into sentences, the sequencing, the iterative refinement of phrasing—that’s what happened fast. And whether that distinction matters is, in a sense, the entire subject of what follows.
So here we are. You, reading something that was made quickly. Me, watching to see if that fact will determine how you receive it—or whether you can hold it lightly enough to let the ideas speak for themselves.
I suspect your answer will reveal more about you than about me.
Section II: The Waterfall Cathedral
There is an image that lives in our collective imagination of what serious intellectual work looks like. You know it without being told: the scholar in the tower, surrounded by books and silence, laboring for years—decades, sometimes—on a single work of such depth and rigor that when it finally emerges, it lands like a stone tablet carried down from a mountain. Complete. Authoritative. Earned.
This image is not neutral. It carries with it an entire epistemology, a theory of how knowledge is best produced and validated. The theory goes something like this: important ideas require sustained, uninterrupted contemplation. The thinker must immerse themselves fully, must read everything relevant, must turn the problem over and over in solitude until they have seen it from every angle. Only then—only when the work is finished—should it be released into the world for judgment.
If you’ve spent any time in software development, you’ll recognize this pattern. It has a name: waterfall. Specification flows into design, design into implementation, implementation into testing, testing into deployment. Each phase completes before the next begins. The assumption is that sufficient foresight at the start—enough planning, enough expertise, enough isolated concentration—will yield a better product than messy, iterative engagement with reality.
The appeal is obvious. Waterfall feels serious. It respects the difficulty of the problem. It doesn’t rush. It suggests that the person doing the work has the discipline to resist premature exposure, the confidence to trust their own judgment, the integrity to refuse shortcuts. There’s something almost monastic about it—a willingness to withdraw from the noise of the world in service of something pure.
And buried in this appeal is an assumption so deep it rarely surfaces for examination: that duration correlates with depth. That a book which took ten years to write must contain more thinking than one written in a year. That the scholar who emerges from two decades of solitary labor has, by virtue of that labor, produced something more valuable than someone who worked faster and shared earlier.
Does it, though?
The question sounds almost impertinent. Of course longer is better—isn’t that obvious? Doesn’t more time mean more revision, more nuance, more opportunities to catch errors and deepen arguments?
Perhaps. But notice what the waterfall model also guarantees: zero feedback until launch. The scholar in the tower is not just concentrating; they are isolated. Their ideas are developing in a closed system, metabolizing only the inputs they themselves have chosen, refracted only through their own interpretive lenses. Whatever blind spots they carry into the tower, they carry out again—refined, perhaps, but not corrected.
The waterfall cathedral is beautiful. Soaring. Impressive in its commitment to purity of process. But it is also, by design, a structure that cannot learn from the world until it is too late to change.
And the question this raises—the uncomfortable question that the image of the solitary scholar is designed to deflect—is whether that isolation is a feature or a failure mode dressed in the robes of virtue.
Section III: The Perturbation Window
Here is a different way to think about the production of ideas: not as architecture, but as ecology.
In an ecology, nothing exists in isolation. Every organism is shaped by what it encounters—predators, symbionts, competitors, the chemical composition of the soil. Evolution doesn’t happen in a vacuum; it happens through interaction, through the constant pressure of an environment that tests, selects, and recombines. The fittest ideas, like the fittest organisms, are not the ones that developed longest in protected conditions. They’re the ones that survived contact with reality.
Now consider what happens when you release an idea early. Not finished—rough, possibly wrong, certainly incomplete. A first draft. A probe.
The probe enters the world and immediately begins to generate responses. Some people disagree; they articulate objections you hadn’t considered. Some people agree but extend the argument in directions you didn’t anticipate. Some people misunderstand in ways that reveal ambiguities in your own thinking—places where you thought you were clear but weren’t. And some people—this is the crucial part—take your half-formed idea and combine it with their half-formed ideas, producing something neither of you could have generated alone.
This is the perturbation window. The span of time during which your idea is out there, interacting, evolving, being stress-tested by minds that are not yours.
Let’s do the arithmetic. Say you have an idea that could, under the waterfall model, be developed into a book over ten years of solitary refinement. At the end of that decade, you release it. The perturbation window opens—but you’re already done. Whatever feedback you receive now is too late to incorporate; it can only inform your next project, if you have the energy and humility to begin again.
Alternatively: you spend one week producing a rough version. An article, perhaps. A sketch of the argument, incomplete but coherent enough to engage with. You publish it. Now you have a perturbation window of ten years minus one week—essentially the same duration, but active. The feedback you receive in month two can reshape your thinking for the remaining nine years and ten months. The objections raised in year one can be addressed, incorporated, or used to identify which parts of your framework are load-bearing and which were scaffolding you can discard.
The math is almost embarrassingly simple. And yet.
And yet the waterfall model persists, not because anyone has done this calculation and concluded that zero feedback is optimal, but because the calculation isn’t what’s driving the choice. Something else is. Something that prefers the closed system, the protected development, the revelation rather than the conversation.
We’ll get to what that something is. But first, notice what the agile alternative actually requires. It requires publishing things that aren’t done. It requires being wrong in public. It requires treating your own ideas not as monuments to be unveiled but as organisms to be released into an environment that will change them—and you—in ways you cannot fully predict or control.
The perturbation window is not a comfort. It’s an exposure. And that, I suspect, is why so few people willingly open it.
Section IV: The Two Little Almonds
Deep in your brain, nestled on either side of your temporal lobes, sit two almond-shaped structures that have opinions about everything you do. The amygdalae—your threat-detection system, your uncertainty-aversion engine, the part of you that was making decisions long before the prefrontal cortex showed up with its fancy “reasoning” and “long-term planning.”
The almonds don’t deliberate. They don’t weigh evidence. They react—fast, pre-verbal, with the authority of several hundred million years of evolutionary refinement. And what they react to, more than almost anything else, is uncertainty. The unknown. The uncontrolled. The space where you cannot predict what happens next.
This is not a flaw. For most of evolutionary history, uncertainty meant danger. The rustle in the grass that might be wind or might be predator. The stranger approaching the camp who might be trader or might be raider. The almonds kept your ancestors alive by erring on the side of caution, by flooding the body with cortisol at the first hint of ambiguity, by screaming wait, don’t, not yet, we don’t know enough.
The problem is that the almonds cannot distinguish between a leopard in the tall grass and a comments section on the internet.
Consider what it means to publish something unfinished. You are releasing an incomplete version of your thinking into an environment you cannot control, to be evaluated by people whose reactions you cannot predict, who may find flaws you cannot yet see. You are, in the almonds’ calculus, walking into tall grass at dusk and hoping for the best.
Better to stay in the tower a little longer, the almonds whisper. Better to check it one more time. You don’t know what’s out there. You don’t know what they’ll say. Ignorance is bliss—and certainty, even the false certainty of an untested idea, is safer than exposure.
This is why the waterfall model is so sticky. Not because anyone has rationally concluded that zero feedback is optimal. Not because the evidence supports isolated development. But because waterfall feels safe in a way that agile does not. The closed system protects you from the particular kind of uncertainty that the almonds find most threatening: the judgment of others, arriving at a time and in a form you cannot anticipate or prepare for.
And here is the irony—the first of several, each more layered than the last.
The image of the scholar in the ivory tower is perhaps our clearest cultural icon of the evolved human. The mind transcending the body. Thought purified of animal impulse. Reason ascending above instinct. We venerate this image precisely because it seems to represent everything that separates us from the beasts—the capacity for sustained, abstract contemplation uncorrupted by base urges.
And yet.
That scholar is in the tower because they are listening to their lizard brain. The whole edifice of contemplative isolation—the years of solitary refinement, the reluctance to share until the work is “ready,” the preference for revelation over conversation—is, at its root, a fear response. The almonds, dressed in academic robes, whispering that the world outside is dangerous and the work inside is never quite safe enough to release.
The tower isn’t a triumph over instinct. It’s a monument to it.
This doesn’t make the fear illegitimate. The world outside does contain predators of a kind—people who will misread you, misrepresent you, use your half-formed thoughts against you. The comments section really is a dangerous place. But the question isn’t whether the fear is understandable; it’s whether the fear is serving you. Whether the protective withdrawal that feels like rigor is actually producing better work, or merely protecting you from the discomfort of discovering, in real time, what you got wrong.
The almonds don’t care about that distinction. They care about keeping you safe. And safety, to them, looks like staying in the tower until you’re sure.
The problem is that you’re never sure. You’re just more invested.
Section V: The Royce Irony
In 1970, a computer scientist named Winston Royce published a paper titled “Managing the Development of Large Software Systems.” On page two, he presented a diagram—Figure 2—that would become one of the most influential images in the history of technology management.
You know this diagram, even if you’ve never seen the original. Seven boxes descending like a staircase: SYSTEM REQUIREMENTS → SOFTWARE REQUIREMENTS → ANALYSIS → PROGRAM DESIGN → CODING → TESTING → OPERATIONS. Each phase flowing into the next. The waterfall.
The image is clean. Sequential. It makes a kind of intuitive sense that feels almost inevitable—of course you define requirements before you design, of course you design before you code. The boxes are orderly. The arrows point one direction. A manager could look at this and feel that the chaos of software development had been tamed into something logical, predictable, controllable.
What that manager almost certainly did not do was read the next sentence.
Directly beneath Figure 2, Royce wrote that he believed in this concept, “but the implementation described above is risky and invites failure.”
The diagram was not a recommendation. It was a warning.
The final sentence on that very page is equally unambiguous: five additional features, Royce wrote, must be added to this basic approach to eliminate most of the development risks.
Must. Not “could helpfully include.” Not “might consider.” Must.
The rest of the paper—the part that history forgot—describes those five features. And when you see what Royce actually recommended, the scale of the misreading becomes almost physically painful.
Figure 10, his summary diagram, looks nothing like the clean staircase of Figure 2. It is dense, looping, recursive. Preliminary designs feed back into requirements. Analysis cycles back into software requirements. There are parallel documentation streams, customer involvement checkpoints, iterative refinement loops at every stage. The arrows don’t just descend; they return, reconnect, spiral. It looks less like a waterfall and more like the nervous system of an organism that learns from contact with reality.
It looks, in other words, remarkably like what we now call Agile—three decades before the Agile Manifesto was written.
Now place Figure 10 beside the modern “canonical” waterfall model—the one that appears in project management courses, in boardroom presentations, in the shared imagination of anyone who has tried to organize complex work. Pastel boxes. Gentle descent. Perhaps a “Maintenance” phase added at the bottom, as though learning only happens after everything else is finished. Clean. Simplified. Wrong.
The gap between these two images is perhaps the most expensive reading comprehension failure in post-war industrial history. Billions of dollars in failed projects. Decades of frustration. Entire careers built on defending a methodology that its own inventor explicitly labeled as an illustration of what not to do.
How does this happen?
The almonds have something to do with it. Figure 2 is clean; Figure 10 is complicated. The warning diagram fits on a slide; the actual recommendation requires study, interpretation, comfort with ambiguity. And when you are a manager under pressure, when your own uncertainty-aversion systems are screaming for something solid to hold onto, when the executives above you want confidence and the engineers below you want clarity—well.
The simple image wins. The nuance gets lost. And the almonds, once again, get what they wanted: the feeling of certainty, even at the cost of the thing itself.
Royce tried to communicate complexity. The world took his simplest image and stopped reading. And now we invoke his name to justify exactly the process he warned us against—while his actual model, sitting eight pages later in the same paper, gathers dust.
If you want to understand why the waterfall model persists despite its failures—and why the same pattern might be repeating right now, in different domains, with different diagrams—you could do worse than to sit with this irony for a moment.
Complexity was offered. Simplicity was taken.
The almonds always get what they want.
Section VI: Ego as Architect
The amygdalae explain the fear. But fear alone doesn’t account for the tenacity of the tower—the way certain minds will defend their isolation long past the point where it serves them, long past the point where they themselves can see it isn’t working.
For that, we need to talk about identity.
There is a particular kind of person—you may be one, I am certainly one—for whom intellectual output is not merely something they do but something they are. The quality of their ideas is load-bearing for their self-concept. To be wrong is not simply to have made an error; it is to be revealed as lesser than they believed themselves to be. The stakes of any given argument are not just epistemic but existential.
This is, to be clear, not a comfortable way to live. It creates a constant background hum of anxiety, a vigilance about being caught out, a sensitivity to criticism that can border on the allergic. But it also—and here’s the trap—produces genuine rigor, at least some of the time. The person who cannot tolerate being wrong often works very hard to avoid being wrong. They check their sources. They anticipate objections. They refine their arguments until the surface is smooth, until the gaps are hidden, until the thing looks finished.
The problem is that this same drive makes feedback feel like assault.
If your self-worth is structurally dependent on the quality of your thinking, then every critique—even the helpful kind, especially the helpful kind—arrives as evidence against your identity. The commenter who says “have you considered...” is not offering a gift; they are exposing a gap, a place where your thinking failed, a crack in the edifice you’ve built to convince yourself you’re the kind of person who doesn’t have cracks.
And so the tower becomes a fortress. The extended timeline becomes a moat. The years of solitary refinement become—let us be honest with ourselves—years of avoiding the verdict.
Ryan Holiday wrote a book called Ego Is the Enemy, and the title is doing most of the work I need it to do here. But I want to push on a specific aspect: the way ego distorts the economics of feedback.
From a pure information-theory perspective, early feedback is almost always more valuable than late feedback. Errors caught in draft one cost less to fix than errors caught in draft ten. Blind spots identified before publication can be addressed; blind spots identified after become permanent features of the public record. The rational actor would want feedback as early as possible, would seek it out, would treat every “have you considered” as free consulting.
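A minimal sketch of that cost asymmetry, under my own assumption (not data) that fixing an error means reworking every draft built on top of it:

```python
# Toy cost-of-change model. An error found in draft d has been built on by
# (d - 1) later drafts; assume each layer of dependent work multiplies the
# rework cost. BASE_COST and GROWTH are illustrative assumptions.

BASE_COST = 1.0   # effort to fix the error the moment it is made
GROWTH = 1.5      # assumed rework multiplier per dependent draft

def cost_to_fix(found_in_draft: int) -> float:
    """Effort to fix an error introduced in draft one but found later."""
    dependent_drafts = found_in_draft - 1
    return BASE_COST * GROWTH ** dependent_drafts

print(round(cost_to_fix(1), 1))   # 1.0  -- caught in draft one
print(round(cost_to_fix(10), 1))  # 38.4 -- caught in draft ten
```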
But the ego is not a rational actor. The ego has its own economy, and in that economy, the currency is self-regard. Early feedback—feedback on unfinished work, feedback that exposes the gaps before you’ve had time to fill them—is expensive. It costs self-image. It costs the pleasant illusion that you were going to get it right on your own. It costs the narrative that you are the kind of person who doesn’t need help.
And so the ego invents reasons to delay. It’s not ready yet. I need to think about it more. I don’t want to waste people’s time with something half-baked. These sound like conscientiousness. They feel like rigor. But often—not always, but often—they are the ego negotiating for more time in the fortress, more distance from the verdict, more opportunity to refine the surface until the gaps are invisible.
The cruelest irony is that this pattern is most pronounced in precisely the people who have the most to offer. The mediocre thinker has less identity invested in being a good thinker; they can receive feedback with relative equanimity because their self-concept doesn’t depend on being right. But the genuinely brilliant mind—the one whose contributions might actually matter—often has the most fragile relationship with critique. Their brilliance has become structural. They cannot afford to be wrong in the way that wrongness demands to be afforded: openly, repeatedly, as a regular cost of doing intellectual business.
This is where Saruman enters.
In Tolkien’s legendarium, Saruman is not a villain because he lacks intelligence. He is the most intelligent—the most learned, the most far-seeing, the head of his order. His fall comes precisely because he cannot tolerate counsel from those he considers lesser. He cannot accept that the Hobbits might see something he missed, that Gandalf’s wandering might yield insights his study did not, that wisdom might arrive from unexpected directions. His tower—Orthanc, literally “cunning mind”—becomes his prison not because anyone locks him in but because his ego cannot survive leaving it.
The tower, in this light, is not a place of contemplation. It is a bunker against the possibility of being wrong in front of others.
And the ten years of solitary refinement? The insistence that the work isn’t ready, that it needs more time, that premature exposure would somehow contaminate its purity?
Sometimes that’s genuine incubation. Sometimes ideas really do need time to develop, and premature exposure really would distort them.
But sometimes—more often than we’d like to admit—it’s Saruman in Orthanc, convinced that more study will yield what contact with reality threatens to reveal: that we are not quite who we thought we were.
The question, as always, is how to tell the difference from inside.
Section VII: The Protestant Keyboard
We have, at this point, two interlocking explanations for the persistence of the tower: the amygdalae, which fear uncertainty, and the ego, which fears exposure. But neither of these feels quite respectable enough to say out loud. No one defends their ten-year timeline by admitting their lizard brain is scared or their self-concept is fragile. They need a story—a frame that transforms neurological self-protection and psychological defensiveness into something that sounds like virtue.
Enter the Protestant work ethic.
I am painting with a broad brush here, and I know it. Max Weber’s thesis about Protestantism and capitalism is contested, nuanced, more complicated than the version that has seeped into popular consciousness. But the popular version is what matters for our purposes, because the popular version is what we’ve internalized: the idea that hard work is morally redemptive. That struggle sanctifies. That the difficulty of a task is, in some sense, part of its value—not merely instrumental but intrinsic.
This is a powerful frame. It has built nations, fueled industrial revolutions, shaped the psychic landscape of entire civilizations. And it has, somewhere along the way, colonized our assumptions about creative and intellectual work.
The writer who struggles for years in obscurity is noble. The artist who suffers for their craft is authentic. The scholar who labors in solitude, emerging only when the work is “ready,” has earned their authority in a way that someone who worked faster somehow hasn’t. There is a moral weight to duration, a virtue in difficulty, a suspicion of ease.
Notice how convenient this is.
If suffering is noble, then the fear that keeps you in the tower becomes “rigor.”
If duration signals depth, then the ego-protection that prevents early exposure becomes “thoroughness.”
If the struggle itself has value, then you don’t have to examine whether the struggle is producing value—it already is value, by definition, regardless of outcome.
The Protestant work ethic, in this light, isn’t a cause of the waterfall pattern. It’s a cover story. A way of making neurobiological fear and psychological fragility look like moral seriousness. The almonds get to hide behind the language of discipline. The ego gets to hide behind the language of standards. And no one has to admit that they stayed in the tower because they were scared of what they’d find outside.
Here’s what I find almost funny about this: the moral valorization of struggle isn’t even consistent with the capitalism it supposedly supports.
Pure market logic doesn’t care how hard you worked. It cares what you produced. If two people create equivalent value—identical products, identical insights, identical contributions—the one who did it faster and with less effort is, by market logic, more efficient, more productive, more worthy of reward. The suffering adds nothing. The labor is instrumental, not terminal. To privilege the harder path when an easier path yields the same destination is, by strictly capitalist logic, a kind of irrationality.
But we’re not strict capitalists, are we? We’re creatures who inherited a story about the moral weight of effort, and we’ve applied that story to domains where it doesn’t quite fit.
Consider the keyboard.
There is a vague sense—rarely articulated but widely held—that the physical act of writing is part of the value of writing. The hours at the desk. The fingers on keys. The slow accumulation of words through sustained bodily effort. This feels real in a way that faster methods do not. It feels like work, which means it feels like worth.
But attend, for a moment, to the actual economics of typing.
Every keystroke requires energy—electrical energy to power the machine, metabolic energy to move the finger, mechanical stress on the keyboard itself. There is wear, entropy, heat. The physical act of transcription is not merely neutral; it has costs. Resources are consumed. Order degrades into disorder. The universe moves incrementally closer to heat death every time you press the spacebar.
These costs are only justified if they produce something that outweighs them. And what is that something? Not the keystrokes themselves. Not the hours logged. Not the suffering endured. The value—the only value—is the thinking that the words encode.
But here’s the thing about thinking: it doesn’t require keyboards.
Thinking happens in the shower, on walks, in the middle of the night when you should be sleeping. It happens during conversations, while reading, while staring out windows. The moments of genuine cognitive work—the actual generation and refinement of ideas—are largely decoupled from the act of transcription. You don’t think because you’re typing; you type to capture what you’ve already thought, or to discover what you’re in the process of thinking. The keyboard is a recording device, not a generative one.
Which means: if the same thinking can be encoded with fewer keystrokes, less time, less wear-and-tear on bodies and machines, the value of the thought is unchanged. What decreases is only the cost of capturing it.
This is, roughly speaking, what large language models offer. Not a replacement for thinking—thinking still has to happen somewhere—but a more efficient transcription layer. A way to convert cognitive work into written form with less friction, less time, less of the mechanical labor that we’ve been trained to mistake for the work itself.
And the resistance to this is fierce. Visceral. Moralized.
Because if the keystrokes aren’t the value—if the struggle isn’t the point—then what have we been venerating all this time? What was all that suffering for?
The answer, I suspect, is that the suffering was for the almonds and the ego. It was a way of avoiding the exposure that faster methods would have enabled. It was a way of staying in the tower, calling it rigor, and believing our own cover story.
The Protestant keyboard wasn’t sacred. It was a hiding place.
Section VIII: The Metacognitive Gap
Now we arrive at the objection that has been waiting patiently in the wings, and it deserves to be stated in its strongest form: most AI-generated writing is garbage.
Not all of it. But most. The internet is already drowning in LLM-produced slurry—text that is fluent, grammatical, superficially coherent, and utterly empty. Blog posts that say nothing. Articles that circle their subject without ever landing. Content that exists not because someone had something to say but because someone needed to fill a content calendar, hit a word count, produce output in a way that could be measured and reported.
This is real. It is a genuine problem. And if the argument I’ve been building seems to ignore it, seems to wave away the tsunami of AI-generated mediocrity in favor of some idealized “agile epistemology,” then the argument deserves to fail.
So let me be precise about what I think is happening.
The failure mode of AI-assisted writing is not that AI produces bad text. AI produces text that is, in a sense, too adequate—fluent enough to pass, coherent enough to seem finished, polished enough to trigger the sense that the task is complete. And that adequacy is the trap.
There’s a concept in cognitive science that goes by various names: System 1 versus System 2, fast thinking versus slow thinking, automatic versus controlled processing. The details differ across frameworks, but the core observation is consistent: we have a mode of cognition that is quick, effortless, and pattern-matching, and we have a mode that is slow, effortful, and genuinely analytical. Most of the time, System 1 handles things. System 2 is expensive to run, so we only spin it up when System 1 flags something as requiring attention.
Now consider what happens when you prompt an LLM and receive a response.
System 1 looks at the output. The grammar is correct. The structure is logical. The length is appropriate. Every surface-level cue says this is a finished piece of writing. And System 1, pattern-matching as it does, concludes: task complete. Move on.
The problem is that System 1 cannot evaluate depth. It can recognize fluency but not rigor. It can pattern-match against the form of good writing without detecting the presence or absence of genuine thought. And so the LLM output passes inspection not because it’s good but because it’s good enough to fool the part of you that isn’t really paying attention.
This is what I mean by the metacognitive gap.
Metacognition is, roughly, thinking about thinking. It’s the capacity to monitor your own cognitive processes, to notice when you’re confused, to detect when an argument is skating over a gap rather than bridging it. It’s what allows you to read your own writing and feel the places where the logic doesn’t quite hold, where you’ve asserted something you haven’t earned, where the words are doing the work that the ideas should be doing.
Most people, when using AI, do not engage this capacity. The output arrives; it looks like writing; they ship it. The metacognitive layer that would catch the emptiness—that would notice the absence of genuine thought beneath the fluent surface—never activates. System 1 handles the whole transaction, and System 2 never gets invoked.
The result is the slurry. The content-calendar filler. The “AI-generated” pejorative that has already become shorthand for text that no one actually thought about.
But—and here is the pivot—this is not an indictment of the tool. It’s an observation about the user.
The great writers we venerate, the ones who spent years in their towers producing work of genuine depth—what were they actually doing all that time? They were not, primarily, pressing keys. They were thinking. They were engaging the slow, expensive, effortful process of genuine cognition. They were reading their own drafts and feeling the gaps. They were sitting with discomfort rather than resolving it prematurely. They were running System 2 continuously, for years, refusing to let System 1 declare victory just because words existed on a page.
The keyboard was never the source of that rigor. The time was never the source. The source was the sustained, uncomfortable, metabolically expensive act of actually thinking—and refusing to stop when the surface looked smooth.
AI doesn’t eliminate this requirement. If anything, it intensifies it.
When the transcription layer was slow—when producing a draft took months of physical labor—there was a built-in forcing function for reflection. You had time to think while you typed. The very difficulty of production created space for cognition. The struggle, while not itself valuable, created conditions in which value could emerge.
Now that the transcription layer is fast, that forcing function is gone. You can produce a draft in minutes. Which means: if you’re going to think, you have to choose to think. You have to deliberately invoke the metacognitive layer that the old process invoked accidentally. You have to resist the System 1 verdict of “done” and ask, actively, is there actually anything here?
This is harder than it sounds. The almonds don’t like it—uncertainty is uncomfortable. The ego doesn’t like it—what if you discover your idea was shallower than you thought? The Protestant-keyboard mythology doesn’t help—it trained us to locate value in the wrong place, so we don’t know where to look for it now.
But the people who figure this out—who learn to use AI as a transcription layer while maintaining genuine cognitive engagement—will produce more, iterate faster, and refine their thinking in ways that the tower model never allowed. They’ll have the perturbation window. They’ll have the feedback. They’ll have the collision with reality that the waterfall model forecloses.
The metacognitive gap is real, and most people are falling into it. That is not a reason to reject the tool. It is a reason to develop the capacity that the tool requires.
The struggle was never about the typing. It was always about the thinking.
The thinking is still required. It’s just no longer disguised as something else.
Section IX: The Self-Devouring Serpent
You have, at this point, read approximately four thousand words about why early publication and iterative refinement are superior to prolonged isolation and deferred feedback.
You already know what comes next.
This piece was written in a single sitting—a long one, but single—with substantial AI assistance. It is not finished in the way a book would be finished. It has not been reviewed by colleagues, pressure-tested by critics, refined across multiple drafts over multiple years. It contains, almost certainly, errors I cannot see, gaps I have skated over, assumptions I have not examined closely enough. If I were to read it again in six months, I would wince at parts of it. If I were to read it in five years, I might disown sections entirely.
I am publishing it anyway.
This is not a confession. It is—or I am trying to make it—a demonstration.
The waterfall approach to this argument would have been to spend a year or two developing it into a proper book. I would have read more Weber, tracked down primary sources on Royce, consulted with neuroscientists about the amygdala’s actual role in uncertainty processing, built a more rigorous framework, hedged more carefully, anticipated more objections. The result would have been more defensible. More complete. More difficult to dismiss.
It would also have taken two years to reach anyone. Two years during which the ideas could not be challenged, refined, extended, or demolished. Two years of development in a closed system, metabolizing only what I chose to feed it, refracted only through my own limitations.
Instead, you’re reading this now. Today. While the ideas are still rough, still provisional, still wrong in ways I cannot yet identify.
And here’s what that means: you are not a passive recipient of a finished argument. You are part of the perturbation window. Your objections—if you have them—are exactly what the waterfall model would have denied me. Your extensions, your refinements, your “yes, but have you considered...” are the feedback that turns a static artifact into an evolving understanding.
The piece is eating its own tail. The method is the message is the test.
If I’m wrong about agile epistemology, someone will tell me—sooner, and in time to matter. If the amygdala framing is neuroscientifically naive, an actual neuroscientist might stumble across this and correct me. If the Royce interpretation is historically contested in ways I don’t know about, a software historian might surface that context. The perturbation window is open. The closed system has been breached.
And yes—my almonds are uncomfortable with this. The uncertainty about how this will be received is genuinely unpleasant. My ego is aware that the gaps I cannot see are visible to others, that the confidence of these sentences is not matched by the solidity of the ground beneath them. The part of me that wants to be seen as rigorous, as careful, as the kind of person who doesn’t publish half-baked arguments—that part is lodging formal objections.
I am overruling it. Not because the objections are wrong—they might not be—but because the alternative is the tower. And I’ve been in the tower. I know what it feels like to refine an idea for years in isolation, convinced that more time would make it better, only to discover upon release that I had been wrong in ways that a single outside reader could have caught in the first month.
The tower felt like rigor. It was actually ego preservation with good lighting.
So here we are. An argument about early exposure, exposed early. A case for iteration, in its first iteration. A thesis that cannot be proven from inside the skull that generated it, offered to skulls that might see what this one cannot.
The ouroboros swallows its tail.
Now we find out if there’s anything in its stomach.
Section X: The Provisional Non-Conclusion
I want to end by naming the thing I’ve been circling.
There is a version of this argument that is pure rationalization. A sophisticated, multi-layered justification for impatience, dressed in the language of epistemology and neuroscience. In this version, I am not a person who has genuinely discovered something about the economics of intellectual production; I am a person who wants to work faster, publish sooner, and avoid the hard labor of real refinement—and who has constructed an elaborate framework to make that avoidance look like insight.
I cannot rule this out from inside my own skull.
The almonds work in all directions. They are perfectly capable of generating sophisticated intellectual frameworks to protect me from feedback that says slow down. The ego, too, is flexible; it can attach itself to being the kind of person who publishes provocative arguments just as easily as it can attach to being the kind of person who labors in towers. The machinery of self-deception is not constrained by the content of the deception. It can wear any costume, including this one.
So maybe this entire piece is a symptom of the thing it claims to diagnose. Maybe the speed of its production is not evidence that speed can coexist with rigor but evidence that I have abandoned rigor and called the abandonment a methodology. Maybe five years from now I will look back at this and see only a person who wanted to believe that shortcuts were wisdom.
I don’t know. Genuinely. The uncertainty is not rhetorical.
But here’s what I do know: the waterfall alternative—the version where I sit with this argument for two years, refining it in isolation until I’m sure it’s not self-serving—would not resolve this uncertainty. It would only delay the confrontation. I would emerge from the tower with a more polished artifact, but the question of whether the core insight is genuine or rationalized would remain unanswered, because that question cannot be answered from inside. It requires contact with minds that are not mine. It requires the perturbation window that I have been arguing for across ten sections and four thousand words.
By publishing this now—rough, provisional, possibly wrong—I am subjecting the argument to exactly the test it recommends. If it’s a rationalization, someone will see the gaps. If the almonds have blinded me, the feedback will arrive in time to matter. If I have built an elaborate justification for my own impatience, you can tell me.
That’s not a rhetorical flourish. It’s an actual request.
The tower approach would be to refine this until I felt certain. But certainty generated in isolation is precisely the failure mode I’ve been describing. The feeling of confidence that comes from extended solitary work is not the same as the robustness that comes from surviving contact with reality. And I would rather discover I’m wrong in public, in time to update, than preserve the pleasant illusion of rightness in private.
So here, at the end, are the tensions I cannot resolve:
There is a genuine question about whether some ideas require marination—not as ego protection but as actual cognitive incubation. Not everything can be iterated in public. Some thoughts need time to unfold in ways that feedback might distort rather than refine. I have not adequately addressed where the boundary lies between productive iteration and premature exposure. Maybe this piece itself crosses that line.
There is a genuine question about feedback quality. Not all perturbation is useful. Some of it is noise—other people’s almonds, other people’s egos, reactions shaped by misunderstanding or motivated reasoning or simple bad faith. The iterative model assumes that feedback, on balance, improves thinking. That assumption might be wrong. The tower might sometimes be a legitimate defense against distortion, not just a hiding place.
There is a genuine question about whether I am simply impatient. Whether the “golden window” framing—the urgency to produce while the tools are good and cheap—is insight or impulse. Whether I would believe any of this if the process were still slow.
I don’t have clean answers to these questions. I’m not sure clean answers exist.
What I have is a bet. A wager that the discomfort of early exposure is more productive than the comfort of extended isolation. That whatever I’ve gotten wrong here, I will learn about faster than I would have in the tower. That the perturbation window, now open, will do what perturbation windows do: test, stress, refine, or demolish.
The argument is out of my hands now. It belongs to the collision.
I don’t know if I’ve escaped my own tower in writing this. But at least now you can look up and tell me if you see any walls.
This piece was written on the 9th of January 2026, in approximately two hours, with AI assistance. If you’ve found the gaps I couldn’t see, I’d like to know. The perturbation window is open.
A note on AI assistance and this post's relevance to LW policy: