"This doesn’t work. People see through it. It comes across as either dishonest or oblivious, and both make you look bad"
I wouldn't be so quick to say that this doesn't work. If your goal is to get people to stop attacking you, it usually won't; but not admitting you have an agenda seems to be precisely what most conflict theorists would do in these circumstances, because there are often bystanders who will accept the justification.
The mental frame I've found myself getting into when interacting with conflict practitioners is treating them not as defecting against me, but against everyone. Even if I agree with someone's position, I will give a pretty scathing reply or employ the dark arts even harder to make them look stupid. Why? Well, in adversarial games, there is nothing to gain from honesty, so the only reason they are using words at all is to manipulate people. The only way to have an honest discussion on the topic is to get them to stop playing adversarial games around it. If I guess they're just too stupid to realize what they're doing (as in, they sound like they're just repeating memes instead of carefully picking their words), I think it's a better idea to break them out of the adversarial game by forcing them to actually think (by being very precise in pointing out the issue with their rhetoric). But if they know what they're doing? Better to take them out of the game.
My guess is that most people in America's political climate are conflict practitioners. I do not know if this was the case 50 years ago, but I suspect it was not; it seems like an inherent issue with democracy: infecting the populace with adversarial memes is a great way to get elected, so over time (and especially with the internet) the populace grows more and more disease-ridden. If everyone were smarter, they would be more immune to banal manipulation, but I think the memes would also just be more adversarially selected.
I was hoping that this post would be something of a Defense against the Dark Arts post, but it doesn't seem to be:
[Dawkins saying "I do not intend to disparage trans people…"]
This doesn’t work. People see through it. It comes across as either dishonest or oblivious, and both make you look bad. If you want to genuinely engage in mistake theory discourse, do it in spaces where that’s actually the norm—academic seminars, private discussions, carefully-gated intellectual communities. Don’t perform neutrality in conflict arenas while making moves that clearly serve specific interests.
Suppose he actually wants to take the other side down a peg.
It seems to me that the only people who object to him playing power games are postmodernists, who think, by and large, that all communication is power games.
What is there to lose by playing power games with people who think that roughly all communication is power games?
Well for one thing, it can get you dragged into negative-sum power games. Like with gambling, if you don't know what your edge is, you're better off not playing. If there isn't a clear way that getting into a Twitter argument about transgenderism helps you accomplish your goals, then in the typical case you waste your time, and you take on a tail risk of ruining your reputation. It's foolish to take that risk if it doesn't come with enough potential upside to make it +EV.
If you're already stuck in such a power game, then by all means play it as such, but don't go seeking them out.
I wonder if we need someone to distill and ossify postmodernism into a form that rationalists can process if we are going to tackle the problems postmodernism is meant to solve. A blueprint would be the way that FDT plus the prisoner's dilemma ossifies Sartre's Existentialism Is a Humanism, at some terrible cost to nuance and beauty, but the core is there.
My suspicion of what happened, at a really high level, is that fundamentally one of the driving challenges of postmodernism is to actually understand rape, in the sense that rationalism is supposed to respect: being able to predict outcomes, making the map fit the territory, etc. EY is sufficiently naive of postmodernism that the depictions of rape and rape threats in Three Worlds Collide and HPMOR basically filtered out anyone with a basic grasp of postmodernism from the community. There's an analogous phenomenon where, when postmodernist writers depict quantum physics, they do a bad enough job that it puts off people with a basic grasp of physics from participating in postmodernism. It's epistemically nasty too: this comment is frankly low quality, but if I understood postmodernism well enough to be confident in this comment, I suspect I would have been sufficiently put off by the Draco-threatens-to-rape-Luna subplot in HPMOR to have never actually engaged with rationalism.
And what about Cynical Theories and its understanding of postmodernism? Its authors claim that a major branch of postmodernism, especially the one related to sociology, began to produce conflict-theoretic slop.
I suspect that postmodernism could be meant to study the potential to lock in oppressive memes (like the meme that Black people are dangerous or mentally less capable than White people) and ways to free mankind from them. What I don't quite buy is claims like "memes propagating among White people harm Black people's ability to succeed." What if they instead have the effect of making some Black people more determined to disprove these memes?
There might also exist things like crab bucket mentality or other meme complexes which tend to infect people and to change the behavior of the infected towards keeping them powerless. Or memes infecting weak-willed people and making them less likely to become strong-willed, but more likely to make the meme seem plausible and infect others.
However, in practice postmodernists try to stop propagation of the memes viewed as harmful even if they reflect some ground truth. More examples of such behavior can be found, for example, in the book that I mentioned.
In 2021, Richard Dawkins tweeted:
The fallout was immediate. The American Humanist Association revoked an award they’d given him 25 years earlier. A significant controversy erupted, splitting roughly into two camps.
One camp defended Dawkins. They saw him raising a legitimate question about logical consistency. If we accept self-identification for gender, what’s the principled distinction with race? This seemed like straightforward philosophical inquiry - the kind of question that deserves engagement rather than punishment.
The other camp saw the tweet as harmful. To them, Dawkins was lending intellectual credibility to attacks on trans people. The “just asking questions” frame was either naive or disingenuous. Whether he intended harm or not, the effect was to legitimize transphobia.
These camps weren’t arguing about the answer to Dawkins’ question. They were operating in different epistemologies - incompatible frameworks for what makes a good argument and how discourse works.
Why Read This Post
If you’ve ever been confused about why:
...then this post might help. I’m going to explain two fundamentally incompatible ways of evaluating truth and discourse, show you when each applies, and help you recognize which game you’re actually playing. This is a long post, but if you want to understand why public discourse works the way it does - and how to navigate it without being naive - it’s worth the read.
In Scott Alexander’s terminology, these map roughly to mistake theory versus conflict theory. The first cluster tends toward high decoupling, analytical thinking, STEM epistemology. The second cluster - which overlaps significantly with postmodern thought - tends toward low decoupling, context-awareness, and recognition of power dynamics. The camps aren’t perfectly uniform, but the pattern is consistent enough to be useful.
This post is examining these frameworks - how they work, when each applies, and why they’re incompatible. My main goal is explaining conflict theory epistemology to mistake theorists in analytical terms, since most explanations of postmodern or conflict-oriented thinking are deliberately dense and esoteric. But I think this could also be valuable for people who have good intuitions about conflict theory but want those intuitions spelled out more systematically.
Mistake Theory: Truth-Based Epistemology
When a mistake theorist encounters a statement, they ask: “Does this correspond to reality?”
This is the map-territory distinction. Good maps reflect the territory accurately. Bad maps don’t. The goal of discourse is building better maps - more accurate models of the world. Truth is correspondence between map and territory.
The framework gets its name from a core assumption: when things go wrong, it’s usually due to mistakes rather than malice. People disagree because someone has bad information, faulty reasoning, or incomplete models. Discourse exists to repair these mistakes. You examine evidence, identify errors, update beliefs. The process is collaborative - everyone benefits from more accurate maps.
The mistake theory framework assumes you can evaluate statements by checking them against reality - both in describing what exists now and predicting what happens next. Someone makes a claim, you gather evidence, you assess correspondence, you test predictions. The statement is either more true or more false, and this can be determined through investigation. The power of good maps comes precisely from their predictive capacity.
This epistemology is dominant among STEM types, rationalists, and analytical thinkers generally. Scott Alexander’s original mistake theory vs conflict theory post maps out much of this territory. The rationalist community has developed sophisticated frameworks around this - Yudkowsky’s Sequences on rationality, ideas about Kolmogorov complexity and information theory, Bayesian updating, prediction markets. It’s a rich intellectual tradition with real depth.
And it works. Science advances. Engineering produces results. Systems get built. The reason STEM fields can construct increasingly complex systems - from semiconductors to satellites to the internet - is because accurate maps are prerequisite for building on previous work. When your foundations are off, errors compound. You can’t build complex systems without correspondence to reality. The commitment to truth as correspondence isn’t arbitrary - it’s load-bearing for the entire enterprise.
But there’s an interesting jump that happens in rationalist thinking. Yudkowsky argues that “rationality is winning”1 - your beliefs should help you achieve your goals. But then he immediately pivots to: therefore rationality is about building accurate maps, because accurate maps help you win.
This leap deserves scrutiny. Building accurate maps helps you win in certain games. But there are other games where controlling what gets mapped matters more than mapping accuracy. I’m not saying mistake theory is wrong - I’m pointing out there’s a gap between “rationality is winning” and “rationality is effective map-building.” Keep this gap in mind. It’s where conflict theory lives.
Conflict Theory: Impact-Based Epistemology
When a conflict theorist encounters a statement, they ask: “What is this statement meant to achieve in the world?”
This represents a fundamental shift from evaluating truth to evaluating effect. In mistake theory, correspondence to reality is primary - does this map match the territory? In conflict theory, the causal impact is primary - what does making this statement do in the world?
To understand why, we need to recognize two key insights about how language actually works:
First Insight: Statements Are Interventions
Think about humans as agents. In mistake theory, a good claim is one that helps an agent build accurate maps - representations that correspond to reality. The map describes the territory so you can navigate it better.
But statements don’t just describe reality - they’re also inputs that change reality. A model isn’t just meant to serve as a map; it’s meant to change behavior, and through changing behavior, it changes the world itself.
This is a cybernetic framing. When you make a statement, you’re not just transmitting information - you’re intervening in a system. The statement enters as input, changes how agents behave, and thereby changes outcomes.
Hyperstition: When Predictions Create Reality
This is easiest to see in reflexive situations where predictions can become self-fulfilling. Consider George Soros’s famous bet against the British pound in 1992. Soros believed the pound was overvalued and bet massively against it. But the act of a major investor publicly betting against a currency signals to other market participants that it’s vulnerable. This triggers more selling, which drives down the price, which validates the original prediction. The prediction didn’t just describe reality - it created the reality it predicted.
Or take the way hype can bootstrap a startup into existence. When venture capitalists and media declare a young company “the next big thing,” that perception becomes a resource of its own. Investors pour in money, top engineers quit stable jobs to join, customers assume legitimacy, and journalists amplify the story further. The company suddenly is the juggernaut everyone said it was.
These are particularly clear examples, but conflict theorists argue that this pattern is far more pervasive than we typically recognize. Most discourse operates in domains where statements shape reality rather than just describing it, even when the causal chain is less obvious.
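To make the reflexive loop concrete, here is a minimal toy simulation in the spirit of the currency example. Everything in it is invented for illustration: the parameters, the update rule, and the function name are assumptions, not a model of real markets.

```python
# Toy sketch of a self-fulfilling prediction: a public bet against a currency
# lowers other traders' confidence, which causes selling, which lowers the
# price, which lowers confidence further. All numbers are invented.

def simulate(public_bet_against: bool, steps: int = 10) -> float:
    price = 100.0
    confidence = 0.95                  # other traders' belief that the peg holds
    if public_bet_against:
        confidence -= 0.35             # the visible bet itself shakes confidence

    for _ in range(steps):
        selling_pressure = (1.0 - confidence) * 10.0
        price -= selling_pressure
        # Falling prices erode confidence further: the reflexive feedback loop.
        confidence = max(0.0, confidence - 0.02 * selling_pressure)
    return price

print(simulate(public_bet_against=False))  # mild drift downward
print(simulate(public_bet_against=True))   # the announced bet cascades into a rout
```

The only point of the sketch is the structure: the prediction enters the system as an input, and the system then moves toward the predicted state.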
The Subject-Sentiment Model
Here’s a framework used frequently in media analysis: instead of measuring whether a statement is true, examine what it’s meant to achieve by looking at the subject it highlights and the sentiment it creates.
Consider this headline: “Experts SLAM Napoleon: catastrophic tactical blunder cost French Empire the Battle of Waterloo.”
What information does this convey? On the surface, it’s about Napoleon’s military decision-making. But compress it to its essentials using subject-sentiment analysis: Napoleon + negative.
That’s the real message. The article signals: “High-status experts think negatively about Napoleon.” Whether Napoleon actually made a tactical error, whether that error was decisive, whether the analysis is sound - these questions matter less than the basic frame being established. The headline is coalition-building. It’s saying “the people who know things criticize this guy.”
You can do this with almost any news headline:
· “Economists warn CEO’s risky bet threatens company stability” = CEO + negative
· “Scientists praise breakthrough treatment” = treatment + positive
· “Critics slam politician’s controversial remarks” = politician + negative
The subject-sentiment compression often reveals more about the article’s purpose than its actual content. A publication attacking Napoleon isn’t really trying to inform you about 19th-century military tactics. They’re positioning Napoleon negatively, probably because he represents something they oppose - maybe centralized power, maybe French nationalism, maybe they just support his political rivals.
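To make the compression concrete, here is a minimal sketch. It is not a real NLP pipeline; the cue lists and the compress function are made up for illustration.

```python
# Toy subject-sentiment compression: discard the factual content of a headline
# and keep only (subject, sentiment). Cue lists are purely illustrative.

NEGATIVE_CUES = {"slam", "slams", "warn", "warns", "risky", "controversial",
                 "catastrophic", "blunder", "threatens"}
POSITIVE_CUES = {"praise", "praises", "breakthrough", "hails"}

def compress(headline: str, subject: str) -> tuple[str, str]:
    words = {w.strip(".,:'’").lower() for w in headline.split()}
    if words & NEGATIVE_CUES:
        return (subject, "negative")
    if words & POSITIVE_CUES:
        return (subject, "positive")
    return (subject, "neutral")

examples = [
    ("Economists warn CEO's risky bet threatens company stability", "CEO"),
    ("Scientists praise breakthrough treatment", "treatment"),
    ("Critics slam politician's controversial remarks", "politician"),
]
for headline, subject in examples:
    print(compress(headline, subject))
# ('CEO', 'negative'), ('treatment', 'positive'), ('politician', 'negative')
```

Once a headline is reduced this way, the factual details contribute almost nothing to the compressed message; the frame is the payload.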
Selective Presentation of True Facts
This mechanism operates through choosing which facts get attention. You always have facts available that support different narratives.
This is a somewhat dated example but consider Dan Bilzerian, the Instagram personality known for posting photos of his lavish lifestyle. He hit every status checkbox that typically generates social approval in the Instagram world - wealth, expensive cars, private jets, attractive women, exotic locations. By conventional metrics of success that dominate Instagram, he was winning comprehensively.
This triggered jealousy. People wanted to attack him but couldn’t do so directly - he clearly had the wealth, the lifestyle, the status symbols. So they found the one thing they could criticize: his leg proportions. “Skinny legs” became a meme, a way to cut him down when no other angle worked.
Were the observations factually accurate? Somewhat. But that’s not the point. People weren’t conducting objective anatomical analysis - they were searching for any remotely true fact they could weaponize.
This pattern appears everywhere:
Want to make someone feel unsafe? Present accurate crime statistics from their neighborhood while omitting context about overall trends.
Want to undermine confidence in a political candidate? Focus coverage on their gaffes, awkward moments, and controversial statements while ignoring policy substance and track record.
Want to shift public opinion on a war? Show images of suffering civilians if you oppose it, or highlight acts of enemy brutality if you support it.
Speech Creates Attention, Attention Shapes Reality
The mistake theorist objects: “But you’re just cherry-picking! If you presented all the relevant facts, people would have accurate beliefs.” This misses the point. Comprehensive presentation of all relevant facts is impossible. Attention is finite. Someone must choose what gets emphasized. That choice is inherently political because it shapes outcomes.
From a conflict theory perspective, speech doesn’t just describe the world - it creates attention toward specific parts of the world. And where attention goes, cognition follows. How people feel about themselves, about others, about institutions and policies - all of this can be shifted through selective presentation of entirely accurate information.
The question “is it true?” becomes secondary to “what does highlighting this truth achieve?” A statement can be factually correct and still function as an attack, a manipulation, or a tool of control. Truth and effect are separate dimensions, and conflict theory prioritizes effect.
Attention Is Zero-Sum
There’s an invisible war happening constantly: the competition for attention. Unlike information, which can be duplicated infinitely, attention is fundamentally scarce. You can’t increase the total amount of attention available - you can only redirect it.
This creates a crucial implication from the conflict theory perspective: there is no neutral speech. Even if a fact is completely accurate, the choice to direct attention toward it is never neutral. You’re always pulling focus toward something and away from something else.
The Dimensions of Impact: A Framework
Consider two dimensions along which we can evaluate statements:
Accuracy: Does this correspond to reality?
Impact: What does this make salient, and what effect does it have?
From a mistake theory perspective, the accuracy dimension is paramount. This makes sense in fields like nuclear engineering or software development where accuracy is load-bearing. If you are 50% off on a reactor parameter, the system fails. If you have a single logic error in your code, the program crashes. You need strict correspondence to reality because reality provides harsh, immediate feedback.
But social and political discourse works differently. It isn’t trying to build a machine that reality will test rigorously. It is trying to coordinate tribes.
Here is a concrete example. Suppose you claim a specific demographic has 1.5% sociopaths when the true number is 1%. To a mistake theorist, this is a significant error that needs correcting. But for a conflict theorist, the rhetorical effect is identical. You have successfully directed attention toward that group as a threat. The inaccuracy doesn’t prevent the statement from doing its job.
This implies that the gap between “accurate statistics” and “somewhat misleading statistics” is often smaller than the gap between “discussing this topic” and “not discussing it.”
If your goal is to create concern about immigration, presenting statistics that are 80% accurate versus 100% accurate might not change the outcome much. Both direct attention toward immigration as a potential threat. Both trigger concern and resource allocation. The impact dimension dominates the accuracy dimension. A slightly wrong fact that redirects focus can have more effect than a perfectly correct fact that addresses the wrong question.
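A toy illustration of that claim, under the assumed (invented) model that in low-feedback social domains the audience responds mostly to whether a topic is raised at all, and only weakly to the exact figure:

```python
# Invented numbers: the point is only the shape of the response, not its scale.
from typing import Optional

def audience_concern(claimed_rate_pct: Optional[float]) -> float:
    """Concern generated by a claim about some group in a domain with weak
    reality-feedback. claimed_rate_pct=None means the topic is never raised."""
    if claimed_rate_pct is None:
        return 0.0                            # not discussed: no attention at all
    return 50.0 + 5.0 * claimed_rate_pct      # salience dominates; the figure barely matters

print(audience_concern(None))   # 0.0   - topic never enters attention
print(audience_concern(1.0))    # 55.0  - accurate statistic
print(audience_concern(1.5))    # 57.5  - inflated statistic, nearly the same effect
```

In this toy model the accuracy axis moves the outcome by a few points, while the attention axis moves it from zero to fifty. That is the sense in which impact dominates.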
This dominance of impact over accuracy explains why even hard sciences can get dragged into the conflict frame. When Copernicus proposed heliocentrism, conflict theorists didn’t care about the astronomical accuracy. They cared about the impact. It undermined Church authority. The statement was about stars, but the effect was political. Similarly, evolution became politically charged despite being biological science because it challenged religious narratives.
In these domains, you aren’t evaluated on whether your map matches the territory. You are evaluated on what your map does to the territory.
Second Insight: Categories Are Not Neutral
Here’s the core epistemological problem: you build maps using tools that are already biased, so even “accurate” maps are politically loaded.
A mistake theorist thinks they can avoid bias by carefully checking correspondence to reality. But before you check anything, you’ve already made political choices about which categories to use, what distinctions matter, what to pay attention to. The mapping tools themselves shape what you see.
And it’s worse than that - by using these categories, you actively reinforce existing structures. This isn’t just historical residue. When you describe someone as a “citizen,” use terms like “brave” versus “cowardly,” or employ racial/class/gender categories, you’re not just describing reality. You’re validating these as the important distinctions, making them more real through use. Language is reflexive - the map changes the territory.
Loaded Language: The Category Game
The mechanism works like this: create categories with negative valence, populate them with clearly bad things, then label things you oppose with that category to transfer the negative association.
Think of “cancer” as a category. Everyone agrees cancer is bad. Now you want to argue that social media is harmful. You call it “the cancer of modern society.” The category does the work - you’re not making a detailed argument about mechanisms, you’re transferring the negative association from cancer to social media.
The term “conspiracy theory” works this way. The phrase itself carries low status - it signals “not worth taking seriously.” As soon as you describe something as a conspiracy theory, you’ve put it in a dismissive category. Someone can shut you down just by saying, “That’s an interesting conspiracy theory you have there.”
But some conspiracy theories are true.
Imagine if astronomy and astrology shared the same word. Every time an astronomer proposed a theory, someone could mock them: “Oh, that’s nice - a real Libra-type theory you’ve got there.” The lack of distinction would make it impossible to discuss legitimate astronomical claims without being tarred by association.
That’s the situation with conspiracy theories. We lack a neutral descriptor for “sensible theories involving conspiracies.” The language itself stacks the deck. By having only one category that conflates legitimate and illegitimate theories, the term “conspiracy theory” functions as a weapon to dismiss ideas regardless of their merit.
Categories That Exist (and Don’t)
Beyond loaded language, there’s a deeper issue: some categories exist in our vocabulary and others don’t. Which categories exist reflects what powerful structures need - not only through conspiracy, but organically.
Think of it like elephants walking through a forest leaving footprints. The elephants aren’t conspiring to create a particular pattern - they’re just moving through the world doing elephant things. But their footprints shape the terrain. Language works similarly. Powerful structures leave traces just by existing and operating. But more than that, they actively develop and promote categories they need for coordination.
Consider religious moral terminology. Many languages developed under religious influence contain extensive vocabulary for moral transgression - different words for types of sin, degrees of spiritual impurity, categories of blasphemy, distinctions between venial and mortal offenses. Just by learning this language, you absorb that moral evaluation along religious axes is central. The density of categories in this domain tells you what mattered to that culture.
When you are asking the question “Is X a Sin?”, you are already empowering the religious framework, just by using the word sin.
Consider military terminology. We have precise distinctions for combat-relevant qualities. When you call someone “brave” or “cowardly,” you’re not just describing them. You’re reinforcing that this axis of evaluation is important, that this is a meaningful way to judge people. You’re making bravery more real as a social fact. The categories exist because military structures needed them for coordination - distinguishing who will hold the line from who might flee matters enormously in combat.
Meanwhile, other qualities lack simple category markers. Someone who excels at organizing gatherings - not just social skills, but genuine administrative talent for creating excellent communal experiences - doesn’t have a prestigious single-word character trait in English. Maybe in a post-scarcity society focused on leisure and community, such a quality would have its own term, its own honor code, its own set of distinctions. But we don’t have that because our current structures don’t particularly need it for coordination.
The presence and density of categories reveals what matters to existing systems. And by using those categories, you’re not passively describing - you’re actively reinforcing. You’re making certain distinctions seem natural, certain questions seem important.
We have rich vocabularies for divisions that serve existing power: race, gender, class, citizenship. But we lack common terms for divisions that might matter under different arrangements. People who consistently cooperate versus people who defect in collective action problems don’t have simple category markers that cut across other identities. The absence of categories for cross-cutting alliances that might threaten existing power structures isn’t accidental - language reflects what coordination patterns already exist, not what alternatives might be possible.
A mistake theorist might object: “Okay, language is shaped by power, but I can still check if statements correspond to reality. The fact that ‘citizen’ is a political category doesn’t stop me from accurately determining who is or isn’t a citizen.”
But this misses the point. The question isn’t whether you can check correspondence - it’s what you’re paying attention to by using these categories at all. Every time you build a map using terms like “citizen,” you’re reinforcing nation-state structures. Every time you evaluate people on bravery, you’re validating military virtues. The statements might be “true” in the correspondence sense, but they’re doing political work by directing attention and strengthening existing categories.
From a conflict theory perspective, asking “what should I believe?” requires asking “what categories am I using, and what do those categories reinforce?” You can’t escape bias by just checking facts more carefully. The bias is in which facts you’re checking, which distinctions you’re making, which parts of reality you’re mapping at all.
The tools themselves are conservative - they bias toward existing structures just by being the tools that exist. This isn’t something you can solve by being more rigorous about correspondence.
Why Postmodernists Are Obsessive About Language
This framework explains something that often baffles mistake theorists: why postmodernist thinkers seem pathologically obsessed with language, semiotics, and categories.
From a mistake theory perspective, this focus looks disconnected from reality. Why spend so much time analyzing language when you could be studying the actual world? Words are just labels we put on things - what matters is the underlying reality, not the labels themselves.
But from a conflict theory perspective, the postmodernist obsession makes perfect sense. If language and categories shape perception, control what questions seem important, reinforce existing power structures, and actively reshape reality through use - then language isn’t just labels. It’s territory, not map. Controlling language is controlling reality itself.
When postmodernists deconstruct terms, question categories, and insist on examining the political implications of seemingly neutral language, they’re not being needlessly academic. They’re recognizing that the battle over how we describe the world is the battle over what the world becomes. Categories aren’t discovered - they’re imposed. And once imposed, they shape everything downstream.
Why This Is Epistemology (Not Just Strategy)
A natural objection at this point: “Okay, but this isn’t really epistemology. You can just do mistake theory internally - build accurate maps - and then do conflict theory externally when you communicate. Epistemology is about what you actually believe, not how you present yourself.”
This objection misses something important. Conflict theory applies internally too. The same attention-management dynamics operate on your own beliefs, not just your public statements.
The Epistemological Progression
Epistemology asks: What should you believe? Different traditions give different answers:
Correspondence Theory / Mistake Theory: Believe what corresponds to reality.
Pragmatism: Believe what is useful.
Conflict Theory / Postmodernism: Believe what is useful, but first ask: useful to whom?
This last move transforms epistemology from an individual question into a political one. When people and groups have conflicting interests, the same belief can be useful for one party and harmful for another. Epistemology becomes inseparable from whose interests you’re serving.
Internal Conflict Theory: You Manage Your Own Attention
Here’s where it becomes genuine epistemology rather than just communication strategy: you don’t just choose what to say - you choose what to believe, what to focus on, which parts of reality to map in detail.
Consider elite athletes. Many genuinely believe they’re the best in the world even where there is no evidence to support this belief - but the belief itself improves performance. They’re not just saying they’re confident for strategic reasons. They’ve actually constructed internal maps that serve their goals rather than accuracy. The epistemology is happening inside.
This is common in coaching and self-help contexts: “what you focus on expands,” “energy flows where attention goes.” These aren’t just motivational slogans. They’re describing internal attention management - choosing which parts of reality to dwell on based on what serves you, not what’s most accurate.
People do this constantly. Someone going through a difficult period might deliberately avoid thinking about certain problems - not denial exactly, but active attention management. Someone building a startup might cultivate unrealistic optimism because that’s what the situation requires. The question isn’t “is this accurate?” but “does focusing on this serve me?”
The Darker Version: Beliefs That Serve External Entities
Here’s where it gets more complex: internal beliefs don’t always serve the organism holding them.
Consider religious belief systems. They create attention patterns - what to think about, what to feel guilty about, what to hope for. These patterns might serve the religion’s propagation (the memetic structure, the institution, what you might call the egregore) rather than the individual carrying those beliefs.
A soldier who genuinely believes he should die for his country holds a belief that serves the nation-state but potentially harms him. He’s not being strategically patriotic - he’s internalized an epistemology that evaluates beliefs by their service to something larger than himself. The “useful to whom?” question has an answer, and the answer isn’t him.
This is the fractal nature of conflict theory epistemology: the same dynamics that operate between groups in public discourse also operate within individuals managing their own beliefs. And in both cases, you can ask whose interests are being served - and sometimes the answer is something outside the person doing the believing.
The Education Example
This plays out at scale in how societies shape belief. Consider two curricula:
Patriotic education: History centered on national achievements, values emphasizing loyalty and sacrifice, literature celebrating national heroes.
Pluralistic education: History showing multiple perspectives, values emphasizing universal human rights, literature from diverse traditions.
Both can present factually accurate information. Both create coherent maps. But they serve different interests - the first serves state power, the second serves cosmopolitan institutions. The question of which to teach is inherently political.
And here’s the key point: the officials promoting patriotic education probably genuinely believe in it. They’ve internalized an epistemology that evaluates beliefs partly by their service to the nation. The cosmopolitan educators have internalized a different epistemology. Neither group sees themselves as serving interests - they see themselves as simply correct.
Entangled With Strategy, But Still Epistemology
I should be clear: in practice, conflict theory blurs the line between genuine belief and strategic positioning. Conflict theorists don’t just evaluate statements by different criteria - they may lie outright, believe contradictory things2 in different contexts, and deceive themselves to deceive others more effectively.
This works in political, religious, and social domains where reality doesn’t provide immediate harsh feedback. It fails completely for physical questions - most conflict theorists agree that bridges need sound engineering and code with logic errors crashes. The approach works where you can sustain contradictions; it breaks where reality punishes inaccuracy too quickly.
But the entanglement with strategy doesn’t make it not epistemology. When someone genuinely shapes their internal beliefs based on what serves their interests (or their tribe’s interests, or their religion’s interests), they’re answering the epistemological question “what should I believe?” - just with different criteria than correspondence to reality.
These aren’t just different attitudes - they’re adaptations to different types of games.
Mistake Theory = Cooperative Games (PVE)
Think about situations where people share goals and need accurate information to succeed together:
A family: When you ask your wife where she’ll be in the evening, you expect an accurate answer. You’re both cooperating, trying to coordinate. Truth matters because mistakes hurt both of you. If she’s wrong about when she’ll be home, you both suffer the consequences of poor planning. There’s no incentive to manipulate - you’re on the same team.
Engineering and science: If you’re doing nuclear engineering, parameters need to be extremely accurate or nothing works. Writing computer code, the logic must be precise or the program crashes. You’re building complex systems on top of previous work. Mistakes compound. Everyone involved needs accurate maps because they’re cooperating against nature, against technical challenges. The “opponent” is reality itself, not each other.
This is why STEM fields can largely avoid the problems I described earlier. Most theories in physics or chemistry aren’t politically loaded because they don’t relate to social arrangements. When you’re measuring electron behavior or chemical reactions, loaded language and biased categories matter much less. Reality provides clear feedback independent of human interests.
Mistake theory epistemology evolved for these cooperative situations. When you’re playing together against the environment, you want to help each other build accurate maps and correct mistakes. The framework is optimized for collaborative truth-seeking.
Conflict Theory = Zero-Sum Games (PVP)
Now consider situations where people compete for scarce resources, status, or power:
Political discourse: Imagine two candidates in a debate. Neither is trying to help the other understand reality better. They’re trying to win votes, which is zero-sum - votes you gain, your opponent loses. Both know the other isn’t being fully honest. Both are saying whatever optimizes their chances of winning. Truth matters, but only instrumentally - as ammunition for attacking your opponent’s reputation or as a constraint you must work around.
In this environment, everything said is “dishonest at a deep level” - not necessarily lying, but not attempting genuine collaborative map-building either. You’re using loaded language strategically, directing attention toward facts that help you and away from facts that don’t, framing issues in categories that serve your interests.
Public discourse generally: Twitter, cable news, opinion journalism - these are zero-sum arenas with too many unaligned actors to maintain cooperative epistemology. When you make a public statement, you’re not trying to help everyone build better maps. You’re positioning yourself, your tribe, your cause.
The Alignment Gradient: It’s Not Binary
Epistemology scales with interest alignment. In high-trust zones (family3, startups), Mistake Theory dominates because you share a fate—bad maps hurt everyone. In mixed zones (corporate departments), you use Strategic Emphasis—spinning facts without breaking the system. In zero-sum zones (politics, war), you shift to Conflict Theory, where helping an opponent see reality is a strategic error.
This explains why leaders sound “dishonest” in public but “sensible” in private. They aren’t hypocrites; they are just adjusting their tactics to the specific level of alignment in the current game.
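As a sketch, you can treat the gradient as a simple function from interest alignment to epistemic mode; the thresholds and the zone scores below are invented for illustration.

```python
# Toy version of the alignment gradient: which epistemic mode fits a given
# level of interest alignment with your audience. Thresholds are assumptions.

def epistemic_mode(alignment: float) -> str:
    """alignment: 1.0 = shared fate (family, tiny startup), 0.0 = pure zero-sum."""
    if alignment >= 0.8:
        return "mistake theory: collaborative map-building, admit errors freely"
    if alignment >= 0.4:
        return "strategic emphasis: true statements, selected for favorable framing"
    return "conflict theory: statements are moves; clarifying reality for opponents is an error"

for zone, score in [("family", 0.95), ("corporate department", 0.55), ("electoral politics", 0.1)]:
    print(f"{zone}: {epistemic_mode(score)}")
```

The same person can sit at different points on this function in different parts of their day, which is exactly the leader-in-public versus leader-in-private pattern.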
The Alliance Question: Which Side Are You On?
In conflict theory, statements get evaluated primarily by tribal loyalty. Think about communism versus capitalism as competing alliances, each fighting for status, power, and resources.
Someone criticizes worker unions for corruption. Maybe the criticism is completely accurate. Doesn’t matter. From a conflict theory perspective, the question is: whose interests does this criticism serve? It helps capitalists and hurts labor movements. Therefore, the person making it is a soldier for capitalism, whether they realize it or not.
This is the essence of low decoupling. While a mistake theorist tries to separate the argument from the speaker (high decoupling), a conflict theorist insists they are inseparable. You cannot evaluate the text without the context.
This is why “Who is saying this?” is the first question a conflict theorist asks. It isn’t an ad hominem fallacy to them. It’s a necessary step to decode the message. A union leader criticizing the union is “internal reform,” while a CEO saying the exact same words is “union busting.” The identity of the speaker changes the meaning of the statement.
The Building vs. Warfare Trade-off
Here’s the core tension: these epistemologies have opposite strengths and weaknesses.
Mistake theorists are good at building. They can create complex technology, advance science, engineer solutions to technical problems. This requires genuine accuracy - you can’t fake your way through nuclear physics or software development. Reality punishes deviations from truth too quickly and consistently.
But mistake theorists are terrible at zero-sum conflicts. Someone who thinks discourse is about collaborative truth-seeking will get crushed in an arena where everyone else treats it as warfare. The honest participant in a dishonest game is simply exploited.
Conflict theorists are good at coordination and warfare. They understand propaganda, framing, attention control. They know how to win political battles, mobilize tribes, and accumulate power. Sometimes these skills serve good causes - civil rights movements had to master conflict theory to fight entrenched power.
But conflict theorists can’t build as well. When accuracy becomes secondary to effect, you can’t construct the complex systems that require genuine precision. You can’t do engineering or science if your epistemology prioritizes tribal loyalty over correspondence to reality.
The Power Scaling Problem
Political power scales better than technical power. This is a crucial asymmetry that explains why conflict theory dominates even when mistake theory produces better long-term outcomes.
Consider: a charismatic leader who can organize and control a million people has more immediate power than a brilliant engineer who builds something that helps a million people. The leader can direct those million people toward whatever goals serve their interests. The engineer provides value but doesn’t control behavior.
Think of the relationship between political leadership and technical expertise. The president of the United States has vastly more power than the most brilliant scientist or engineer in the country. Franklin D. Roosevelt had more power than Robert Oppenheimer. Political skills - the ability to mobilize people, control narratives, accumulate authority - translate to power more directly than technical skills.
This creates a selection pressure. In environments where controlling people matters more than building things, conflict theory skills are simply more valuable. Someone optimizing for personal power naturally develops conflict theory competencies because they provide faster returns.
This is why most people in substantial political power are conflict theorists. Politics is largely zero-sum competition for authority and resources. Someone who approaches this with mistake theory epistemology - treating discourse as collaborative truth-seeking - will get destroyed by opponents who understand it as warfare. The honest player in a dishonest game loses.
Let’s return to the Dawkins tweet with our full framework in place. The question wasn’t “is this a good argument?” The question was: what game was being played, and which epistemology correctly maps that game?
The Mistake Theory Reading
From a mistake theory perspective, Dawkins raised a legitimate question about logical consistency. If we accept self-identification as valid for gender, what’s the principled distinction that makes it invalid for race? This is straightforward philosophical inquiry - exactly the kind of question that academic discourse is built to handle.
The defenders of Dawkins saw him engaging in collaborative truth-seeking. He wasn’t attacking anyone; he was identifying what looked like an inconsistency in widely-held beliefs. The proper response would be to explain the distinction (if one exists) or acknowledge the tension and work through its implications. Punishing someone for asking questions corrupts the entire enterprise of intellectual inquiry.
This reading treats Twitter like an academic seminar - a space where people share the goal of building accurate maps and where questions are evaluated on their logical merit.
The Conflict Theory Reading
From a conflict theory perspective, Twitter is a zero-sum public arena, not a collaborative space. When Dawkins tweeted, he wasn’t entering a philosophy seminar. He was making a public statement that would be read by millions, including people actively fighting over trans rights, inclusion policies, and social recognition.
The statement’s effect matters more than its logical structure. By drawing a comparison between trans identity and Rachel Dolezal - a case widely understood as fraudulent appropriation - Dawkins directed attention toward questioning trans legitimacy. Whether he intended this or not is secondary. The impact was to provide intellectual cover for people who oppose trans rights.
Conflict theorists notice something mistake theorists often miss: Dawkins had an agenda. Maybe not a fully conscious one, but an agenda nonetheless. He didn’t randomly pick this comparison. He chose a loaded example that frames trans identity as potentially fraudulent. The “just asking questions” posture obscures this, but conflict theory sees through it - people don’t ask neutral questions about politically charged topics. The choice of what to question reveals your commitments.
The conflict theory reading treats Twitter accurately: as a space where statements are moves in ongoing battles between groups with opposed interests. You’re evaluated not by whether your logic is sound, but by whose cause your statement advances.
Which Analysis Is Correct?
For Twitter specifically, the conflict theory analysis maps reality more accurately. Twitter is a conflict arena. You cannot impose mistake theory norms on such a space without enforcement mechanisms that don’t exist there.
When someone tries to play mistake theory in a conflict theory environment, they’re pretending to “cooperate” in a defect-defect equilibrium. The result is predictable: they get treated as an enemy by those who correctly recognize that their “questions” serve opposing interests, while they remain confused about why their good-faith inquiry generated hostility.
The mistake theory crowd sees this as intellectual oppression - punishing someone for asking legitimate questions. But they’re wrong about the game being played.
Here’s the deeper issue: the mistake theorist thinks that being logical and factually correct is enough to count as benevolent. If I’m saying true things using sound reasoning, I’m playing fair, right? But from a conflict theory perspective, this is aggression disguised as benevolence. Selective attention on true facts that harm specific groups, while claiming neutrality, is not a benevolent act.
Saying “I’m just asking questions” or “I’m just being logical” doesn’t exempt you from the impact of directing attention toward claims that hurt others. Truth alone isn’t virtuous when you’re choosing which truths to emphasize in a zero-sum conflict. The conflict theorist asks: negative impact on whom? If you accept that trans rights are a live political battle, then highlighting comparisons that undermine trans legitimacy is taking a side, regardless of the logical merit of your question.
The conflict theorist correctly identifies that Dawkins’s statement serves anti-trans interests. The mistake theorist doesn’t even see this dimension. They’re so focused on evaluating logical consistency that they miss the obvious: powerful public intellectuals don’t accidentally stumble into politically charged comparisons. The statement does work in the world, and that work can be evaluated independently of whether the underlying question has merit.
This doesn’t mean conflict theory is always correct - it means it’s correct for mapping conflict arenas. When you’re actually in a cooperative space focused on building accurate models, mistake theory applies. But Twitter isn’t that space, and pretending it is doesn’t just make you vulnerable - it makes you fake.
Why Esoteric Writing Exists
This dynamic explains a puzzle about intellectual discourse: why do some thinkers resort to esoteric or Straussian writing - deliberately obscuring their ideas from public view?
The answer becomes clear through this framework. Academic seminars, private discussions, and carefully-gated intellectual spaces allow more tolerance for idea exploration with mistake theory norms. You can ask uncomfortable questions, probe at sacred beliefs, and follow arguments wherever they lead, because you’re genuinely trying to build accurate maps with others who share that goal.
But once an idea enters public discourse - once it hits Twitter or becomes a news story - it immediately gets pulled into conflict territory. The idea will be evaluated not on its merit but on whose interests it serves. Nuance collapses. Context disappears. The question shifts from “is this true?” to “which side is this helping?”
Esoteric writing is a mechanism for keeping ideas in the mistake theory realm for as long as possible. By writing in ways that only specialists can understand, by hiding controversial insights in dense academic prose, by using coded language that signals “this is for insiders,” thinkers create a buffer zone where ideas can be developed and tested before they get weaponized in public conflicts.
There’s a paradox in this post itself. I’ve written an analysis of conflict theory in mistake theory style - systematic, building concepts step by step, trying to help you understand how these frameworks work. I’m not advocating for any specific political position or tribal allegiance. From a mistake theory perspective, this looks like pure correspondence-seeking, just trying to map reality accurately.
But from a conflict theory perspective, this post has an agenda. I’m trying to achieve something specific in the world, and it’s worth being explicit about what.
What I’m Actually Trying to Do
I prefer a world where we mostly play win-win games. Where people are genuinely cooperating to build accurate maps and solve problems together. Where mistake theory epistemology dominates because we’re mostly aligned. That world would be better - more productive, more innovative, more humane.
But that’s not the world we live in. Public discourse is largely conflict territory. Politics is zero-sum competition. Power structures protect themselves through mechanisms that don’t care about correspondence to reality. And most mistake theorists - rationalists, STEM types, analytical thinkers - are walking around somewhat blind to these dynamics.
This post exists because I think people who are good at building, who genuinely want to play positive-sum games, who operate primarily with mistake theory epistemology, need to understand the situation they’re actually in. Not understanding conflict dynamics doesn’t make you more virtuous - it just makes you exploitable.
Look at the rationalist community. These are often good people. Effective altruism came from this scene - people actually trying to figure out how to do the most good with available resources. They’re thoughtful, they’re serious about getting things right, they genuinely care about accuracy and impact. These are people I want to have more influence in the world.
But they produce almost no successful politicians. They get confused and hurt when their logical arguments get dismissed. They don’t understand why stating true facts sometimes makes people angrier rather than updating their beliefs. They think if they just build better maps, eventually everyone will see the light.
This is naive. Not morally naive - strategically naive. You can’t win games you don’t understand you’re playing. If you want mistake theorists to have more power, if you want people who care about building accurate models to shape policy and institutions, they need to understand conflict theory. Not to become full conflict theorists, but to recognize when they’re in cooperative versus competitive environments and adjust accordingly.
The Development Path
I think it’s better to grow up as a mistake theorist and then learn conflict theory than the reverse. Mistake theory requires more cognitive load - it’s slow system thinking. You’re building complex models, tracking evidence, updating beliefs based on new information. It’s harder, and it develops capacities that are valuable even when you’re not using pure mistake theory.
Conflict theory is mostly fast-system thinking (in the Thinking, Fast and Slow sense). Most people doing politics aren’t explicitly Machiavellian with conscious strategies. They’re running on intuition - they just understand power dynamics, coalition building, when to attack and when to cooperate, without necessarily being able to articulate the rules. It’s pattern matching rather than systematic analysis.
The mistake theorist who learns conflict theory can be explicit about the dynamics, can choose when to cooperate and when to compete, can build alliances strategically while maintaining their ability to think clearly. The conflict theorist who tries to learn mistake theory late usually struggles - they’ve spent too long optimizing for winning political battles rather than building accurate models, and the cognitive habits are hard to reverse.
This is part of why I wrote this post analytically rather than politically. I want mistake theorists to understand conflict theory on their own terms first - as a framework that can be analyzed, understood, and applied when appropriate. Once you see the structure, you can decide how to use it.
A Warning About Fake Cooperation
One strategy I see from some rationalists, and I think it’s terrible: pretending to play cooperative games while actually playing competitive ones. The Dawkins move - going on Twitter, making politically charged statements, and then claiming “I’m just asking questions” or “I’m just being logical.”
This doesn’t work. People see through it. It comes across as either dishonest or oblivious, and both make you look bad. If you want to genuinely engage in mistake theory discourse, do it in spaces where that’s actually the norm - academic seminars, private discussions, carefully-gated intellectual communities. Don’t perform neutrality in conflict arenas while making moves that clearly serve specific interests.
If you want to play conflict theory, play it explicitly. Build coalitions, advocate for your positions, recognize you’re in zero-sum competition and act accordingly. But don’t pretend you’re just trying to have a productive discourse. It’s strategically weak and makes you look fake.
If you want pure mistake theory, stay in the lab. Work on technical problems where reality provides immediate feedback. Build things where accuracy is load-bearing. Those spaces still exist, and they’re valuable. Not everything needs to be politics.