Your analysis of why a social norm is suboptimal is probably correct. Your implicit model of what happens when you unilaterally defect from it is probably wrong. These two facts are not in tension.
The next section of the essay describes Chesterton's fence. Doesn't that notion directly contradict "Your analysis of why a social norm is suboptimal is probably correct"?
While it is true that social pressure is used to enforce norms, and a radical may run into this pressure as the first setback, I would caution against the implication that the consensus is the only obstacle to change. Social pressure is used to enforce the norm, yes, because cultural evolution has seen what happens when that norm is not enforced!
Radical honesty. Explicit negotiation of social obligations. Treating every interaction as an opportunity for Bayesian updating. Refusing to engage in polite fictions. Pointing out logical errors in emotionally charged conversations. All of these are, in some sense, correct
As an alternative: all of these are making tradeoffs without realizing it. Each may work wonderfully in small, high-trust groups, but lead rapidly to instability in any context with a shred of doubt. The norms of polite society (including and especially the meta-norm of having different norms in different places) are heavily selected for long-term stability: disregard this at your clubhouse's peril!
In general, when someone proposes a new rule, I am very suspicious. It seems that often the rule is a way of enforcing global norms to avoid a local grievance. "If only everyone acted accordingly, then I would not have suffered." Fair enough! And I am sorry this happened! But it is not by itself a compelling reason to change the rules of the game.
I would say that in some sense, the relevant sequence is "everything written by Scott Alexander." Or at least, that was my takeaway as I was trying to think about why I didn't come away with quite as much of this particular misunderstanding.
Another item for the section on “When Should You Actually Act?”:
Is it actually a good idea? By definition, the thing the naive reasoner has just thought up has no history. (Has there ever been a culture in which coupling was customarily negotiated by the two people involved using Ask culture rather than Guess culture? Dating apps are all I can think of.) Reasoning is a movement on the map, not in the territory. If the map is wrong, the conclusion may be wrong, no matter how airtight the reasoning.
The norm is genuinely new rather than Lindy.
I don’t understand what this means, nor its connection to the paragraph that expands on it. (I do know what Lindy means.) Does “the norm” refer to the existing norm or the one that the naive reasoner has just thought up?
It refers to the existing norm. The author is saying that a recently developed norm is likely less load-bearing than a long-existing one, so the attempt to abolish it is less likely to be flawed.
[Author's note from Florian: This article grew out of a conversation with Claude. I described a line of reasoning I found compelling as a teenager, and we ended up identifying a general failure mode that I think the Sequences systematically create but never address. This is the second time I've used this collaborative workflow — the first was Deliberate Epistemic Uncertainty. As before, I provided the core insights and direction; Claude helped develop the argument and wrote the article. Had I understood what follows when I was fifteen, it would have saved me years of unnecessary friction with the world around me. I had these ideas in my head for years, but writing a full article would have taken me forever. Claude hammered it out in one go after I explained the problem in casual conversation.]
A Worked Example: The Signaling Problem
Here's a line of reasoning that I found intuitively obvious as a teenager:
If women use subtle, ambiguous signals for flirting, this creates a genuine signal extraction problem. Men can't reliably distinguish between "no (try harder)" and "no (go away)." This incentivizes pushy behavior and punishes men who take no at face value. Therefore, women who choose to flirt subtly are — however unintentionally — endorsing and sustaining a system that makes it harder for other women to have their refusals respected. It's a solidarity argument: subtle signaling is a form of free-riding that imposes costs on more vulnerable women.
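To make the "signal extraction problem" concrete, here is a minimal Bayesian sketch in Python. Every probability in it is an invented assumption; the only point it demonstrates is that once some fraction of interested people deliver a soft "no", a literal "no" stops being conclusive evidence of disinterest, which is exactly the ambiguity that rewards pushiness.

    # Toy Bayesian model of the signal extraction problem described above.
    # All probabilities are illustrative assumptions, not data.
    def posterior_interested(p_interested, p_soft_no_if_interested,
                             p_no_if_uninterested=1.0):
        """P(interested | heard a 'no'), via Bayes' rule."""
        no_and_interested = p_interested * p_soft_no_if_interested
        no_and_uninterested = (1 - p_interested) * p_no_if_uninterested
        return no_and_interested / (no_and_interested + no_and_uninterested)

    # If no interested person ever says "no", a "no" settles the question.
    # If some do (the ambiguous-signaling norm), it no longer does.
    for p_soft_no in (0.0, 0.2, 0.5):
        post = posterior_interested(p_interested=0.3,
                                    p_soft_no_if_interested=p_soft_no)
        print(f"P('no' | interested) = {p_soft_no:.1f} -> "
              f"P(interested | 'no') = {post:.2f}")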
The structure of this argument is identical to "people who drive unnecessarily are contributing to climate change." And just like the climate version, people do moralize about it. "You should fly less" and "you should eat less meat" are mainstream moral claims with exactly the same logical structure.
So what's wrong?
Not the analysis. The analysis is largely correct. The system-level observation about incentives is sound. So long as the motivation is "reducing harm to other women" and not the more self-serving "making things easier for men", this is genuinely defensible as moral reasoning.
What's wrong is what happens when you, a seventeen-year-old who has just read the Sequences, decide to act on it.
The Coordination Problem You're Not Seeing
Imagine a world where 90% of the population has internalized the argument above. In that world, explicit signaling is the norm, ambiguous signaling is recognized as defection, and social pressure maintains the equilibrium. The system works. People are better off.
Now imagine a world where fewer than 1% of people think this way. You are one of them. You try to implement the "correct" norm unilaterally. What happens?
You make interactions weird. You get pattern-matched to "guy who thinks he's solved dating from first principles." You generate friction without moving the equilibrium one inch. You're driving on the left in a country where everyone drives on the right. You're not wrong about which side is theoretically better — you're wrong about what to do given the actual state of the world.
This is the core insight: a strategy that is optimal at full adoption can be actively harmful at low adoption. A bad equilibrium with consensus beats a bad equilibrium with friction. Norms work through shared expectations, and unilaterally defecting from a norm you correctly identify as suboptimal doesn't improve the norm — it just removes you from the system's benefits while imposing costs on everyone around you.
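A toy payoff model makes this adoption-dependence explicit. The payoff numbers below are pure assumptions, chosen only to have the qualitative shape the argument needs: matching your partner's norm is good, mismatching is costly, and the explicit-signaling world is the best one when everyone is in it.

    # Toy coordination model: payoffs are invented for illustration.
    # Each interaction pairs you with a random partner; `adoption` is the
    # fraction of the population playing the "explicit" norm.
    PAYOFF = {
        # PAYOFF[my_norm][partner_norm]
        "explicit":  {"explicit": 1.0, "ambiguous": -0.5},
        "ambiguous": {"explicit": 0.2, "ambiguous": 0.4},
    }

    def expected_payoff(my_norm, adoption):
        return (adoption * PAYOFF[my_norm]["explicit"]
                + (1 - adoption) * PAYOFF[my_norm]["ambiguous"])

    for adoption in (0.01, 0.25, 0.5, 0.9):
        print(f"adoption={adoption:.2f}  "
              f"explicit={expected_payoff('explicit', adoption):+.2f}  "
              f"ambiguous={expected_payoff('ambiguous', adoption):+.2f}")

With these particular numbers the crossover sits a little above 50% adoption. The exact threshold means nothing; the shape is the argument: the explicit norm dominates near full adoption and loses to the status quo near zero, which is also what the "tipping point" heuristic later in the article is pointing at.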
Why the Sequences Don't Teach This
The Sequences are, in my view, one of the best collections of writing on human reasoning ever produced. They are extraordinarily good at teaching you to identify when a norm, a belief, or an institution is inefficient, unjustified, or wrong. What they systematically fail to teach is the difference between two very different conclusions: "this norm is suboptimal" and "I should unilaterally defect from this norm."
The overall thrust of the Sequences — and of HPMOR, and of the broader rationalist memeplex — is heavily weighted toward "society is wrong, think from first principles, don't defer to tradition." Chesterton's Fence makes an appearance, but it's drowned out by the heroic narrative. The practical takeaway that most young readers absorb is: "I am now licensed to disregard any norm I can find a logical objection to."
This is not what the Sequences explicitly say. Eliezer wrote about Chesterton's Fence. There are posts about respecting existing equilibria. But the gestalt — the thing you walk away feeling after reading 2,000 pages about how humans are systematically irrational and how thinking clearly gives you superpowers — pushes overwhelmingly in the direction of "if you can see that the fence is inefficient, you should tear it down."
The missing lesson is: your analysis of why a social norm is suboptimal is probably correct. Your implicit model of what happens when you unilaterally defect from it is probably wrong. These two facts are not in tension.
Cultural Evolution Is Smarter Than You
Joseph Henrich's The Secret of Our Success is, in some ways, the book-length version of the lesson the Sequences forgot. Henrich demonstrates, across dozens of examples, that cultural evolution routinely produces solutions that no individual participant can explain or justify from first principles — but that are adaptive nonetheless.
The canonical example is cassava processing. Indigenous methods for preparing cassava involve an elaborate multi-step process that looks, to a first-principles thinker, absurdly overcomplicated. Someone with a "just boil it" approach would streamline the process, eat tastier cassava, and feel very clever. They would also slowly accumulate cyanide poisoning, because the elaborate steps they discarded were the ones that removed toxins. Symptoms take years to appear, making the feedback loop nearly invisible to individual reasoning.
The lesson generalizes: traditions frequently encode solutions to problems that the practitioners cannot articulate. The fact that nobody can tell you why a norm exists is not evidence that the norm is pointless — it's evidence that the relevant selection pressures were operating on a timescale or level of complexity that individual cognition can't easily access.
This doesn't mean all norms are good. It means the prior on "I've thought about this for an afternoon and concluded this ancient, widespread practice is pointless" should be much lower than the Sequences suggest. You might be right. But you should be surprised if you're right, not surprised if you're wrong.
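For readers who want that calibration point as arithmetic, here is a toy Bayes update; every number is an assumption. It models how much "I found a convincing objection in one afternoon" should move you toward "this ancient, widespread practice is pointless", given that load-bearing norms with hidden functions (cassava again) also tend to invite convincing-sounding objections.

    # Toy numbers, all assumptions, for the calibration point above.
    prior_pointless = 0.05            # old, widespread norms are rarely pure noise
    p_objection_if_pointless = 0.9    # a pointless norm is easy to argue against
    p_objection_if_loadbearing = 0.5  # but so are norms whose function is hidden

    posterior = (prior_pointless * p_objection_if_pointless) / (
        prior_pointless * p_objection_if_pointless
        + (1 - prior_pointless) * p_objection_if_loadbearing)
    print(round(posterior, 2))  # ~0.09: an update, but nowhere near "probably right"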
When Should You Actually Act?
The point of this article is not "always defer to tradition." That would be a different error, and one the Sequences correctly warn against. The point is that there's a large and important gap between "this norm is suboptimal" and "I should unilaterally defect from this norm," and the Sequences provide almost no guidance for navigating that gap.
Here are some heuristics for when acting on your analysis is more likely to go well:
The cost of the norm falls primarily on you. If you're the one bearing the cost of compliance, defection is more defensible because you're not imposing externalities. Deciding to be vegetarian is different from loudly informing everyone at dinner that they're complicit in factory farming.
You can exit rather than reform. Moving to a community where your preferred norms are already in place is much less costly than trying to change the norms of the community you're in. This is one reason the rationalist community itself exists — it's a place where certain norms (explicit communication, quantified beliefs, etc.) have enough adoption to actually function.
Adoption is already high enough. If you're at 40% and pushing toward a tipping point, unilateral action looks very different than if you're at 0.5% and tilting at windmills. Read the room.
The norm is genuinely new rather than Lindy. A norm that's been stable for centuries has survived a lot of selection pressure. A norm that arose in the last decade hasn't been tested. Your prior on "I can see why this is wrong" should be calibrated to how long the norm has persisted (a toy calculation after this list makes this quantitative).
You can experiment reversibly. If you can try defecting and easily revert if it goes badly, the downside is limited. If defection burns bridges or signals things you can't unsignal, be cautious.
You understand why the fence is there, not just that it's inefficient. This is the actual Chesterton test, applied honestly. "I can see that this norm is suboptimal" is not the same as "I understand the function this norm serves and have a plan that serves that function better."
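A companion sketch for the Lindy heuristic above, with per-decade survival rates that are pure assumptions: a norm that does real work is less likely to be displaced in any given decade, so the longer a norm has persisted, the more the posterior shifts toward "load-bearing" and the more suspicious you should be of your objection to it.

    # Toy Lindy-style update; the survival rates are assumptions.
    prior_loadbearing = 0.5
    p_survive_decade_if_loadbearing = 0.95
    p_survive_decade_if_pointless = 0.7

    for decades in (1, 5, 20):
        like_lb = p_survive_decade_if_loadbearing ** decades
        like_pl = p_survive_decade_if_pointless ** decades
        post = (prior_loadbearing * like_lb) / (
            prior_loadbearing * like_lb + (1 - prior_loadbearing) * like_pl)
        print(f"{decades * 10} years of survival -> P(load-bearing) = {post:.3f}")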
The Central Tragedy
The pattern described in this article — "this would be better if everyone did it, but is actively costly if only I do it" — is not limited to the flirting example. It applies to a huge class of rationalist-flavored insights about social behavior.
Radical honesty. Explicit negotiation of social obligations. Treating every interaction as an opportunity for Bayesian updating. Refusing to engage in polite fictions. Pointing out logical errors in emotionally charged conversations. All of these are, in some sense, correct — a world where everyone did them might well be better. And all of them, implemented unilaterally at low adoption, will reliably make your life worse while changing nothing about the broader equilibrium.
This is, I think, the central tragedy of reading LessWrong at a formative age. You learn to see inefficiencies that are genuinely real. You develop the tools to analyze social systems with a precision most people never achieve. And then, because nobody ever taught you the difference between seeing the problem and being able to solve it unilaterally, you spend years generating friction — making interactions weird, alienating people who would otherwise be allies, pattern-matching yourself to "insufferable rationalist who thinks they've solved social interaction from first principles" — all in pursuit of norms that can only work through coordination.
The Sequences need a companion piece. Not one that says "don't think critically about norms" — that would be throwing out the baby with the bathwater. But one that says: "Having identified that a fence is inefficient, your next step is not to tear it down. It's to understand what load-bearing function it serves, to assess whether you have the coordination capacity to replace it with something better, and to be honest with yourself about whether unilateral action is heroic or just costly. Most of the time, it's just costly."
Or, more concisely: there should be a Sequence about not treating the Sequences as action guides unless you have the wisdom to tell why Chesterton's fence exists.