Scott Alexander once wrote about the difference between "mistake theorists" who treat politics as an engineering discipline (a symmetrical collaboration in which everyone ultimately just wants the best ideas to win) and "conflict theorists" who treat politics as war (an asymmetrical conflict between sides with fundamentally different interests). Essentially, "[m]istake theorists naturally think conflict theorists are making a mistake"; "[c]onflict theorists naturally think mistake theorists are the enemy in their conflict."
More recently, Alexander considered the phenomenon of "bounded distrust": science and media authorities aren't completely honest, but are only willing to bend the truth so far, and can be trusted on the things they wouldn't lie about. Fox News wants to fuel xenophobia, but they wouldn't make up a terrorist attack out of whole cloth; liberal academics want to combat xenophobia, but they wouldn't outright fabricate crime statistics.
Alexander explains that savvy people, by figuring out what kinds of dishonesty an authority will engage in, end up mostly trusting the authority, whereas clueless people become more distrustful. Sufficiently savvy people end up inhabiting a mental universe where the authority is trustworthy, as when Dan Quayle denied that characterizing tax increases as "revenue enhancements" constituted fooling the public—because "no one was fooled".
Alexander concludes with a characteristically mistake-theoretic plea for mutual understanding:
The savvy people need to realize that the clueless people aren't always paranoid, just less experienced than they are at dealing with a hostile environment that lies to them all the time.
And the clueless people need to realize that the savvy people aren't always gullible, just more optimistic about their ability to extract signal from same.
But "a hostile environment that lies to them all the time" is exactly the kind of situation where we would expect a conflict theory to be correct and mistake theories to be wrong!—or at least very incomplete. Speaking as if the savvy merely have more skill at extracting signal from a "naturally" occurring source of lies obscures the critical question of what all the lying is for.
In a paper on "the logic of indirect speech", Pinker, Nowak, and Lee give the example of a pulled-over motorist telling a police officer, "Gee, officer, is there some way we could take care of the ticket here?"
This is, of course, a bribery attempt. The reason the driver doesn't just say so ("Can I bribe you into not giving me a ticket?") is that the driver doesn't know whether this is a corrupt police officer who accepts bribes, or an honest officer who will charge the driver with attempted bribery. The indirect language lets the driver communicate to the corrupt cop (in the possible world where this cop is corrupt), without being arrested by the honest cop, who doesn't think he can make an attempted-bribery charge stick in court on the evidence of such vague language (in the possible world where this cop is honest).
We need a conflict theory to understand this type of situation. Someone who assumed that all police officers had the same utility function would be fundamentally out of touch with reality: it's not that the corrupt cops are just "savvier", better able to "extract signal" from the driver's speech. The honest cops can probably do that, too. Rather, corrupt and honest cops are trying to do different things, and the driver's speech is optimized to help the corrupt cops in a way that honest cops can't interfere with (because the honest cops' objective requires working with a court system that is less savvy).
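As a toy expected-utility model of the driver's problem (all payoff numbers here are invented for illustration, not from the Pinker–Nowak–Lee paper):

```python
# Toy expected-utility model of indirect speech. The payoff numbers
# are made up for illustration.

TICKET = -100   # cost of just taking the ticket
BRIBE = -50     # cost of a successful bribe (cheaper than the ticket)
ARREST = -1000  # cost of being charged with attempted bribery

def expected_utility(strategy, p_corrupt):
    """Driver's expected payoff against an officer who is corrupt
    with probability p_corrupt."""
    p_honest = 1 - p_corrupt
    if strategy == "no bribe":
        return TICKET
    if strategy == "direct bribe":
        # The corrupt cop takes it; the honest cop arrests you.
        return p_corrupt * BRIBE + p_honest * ARREST
    if strategy == "indirect bribe":
        # The corrupt cop still understands; the honest cop can't make
        # the charge stick, so you merely get the ticket.
        return p_corrupt * BRIBE + p_honest * TICKET
    raise ValueError(strategy)

for s in ("no bribe", "direct bribe", "indirect bribe"):
    # The indirect bribe maximizes expected utility here: it captures
    # the gains from corruption without the arrest risk.
    print(s, expected_utility(s, p_corrupt=0.2))
```

The key structural fact is that the indirect strategy weakly dominates saying nothing whenever there's any chance the cop is corrupt, precisely because the honest cop's objective (securing a conviction) can't act on vague language.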
This kind of analysis carries over to Alexander's discussion of government lies—maybe even isomorphically. When a government denies tax increases but announces "revenue enhancements", and supporters of the regime effortlessly know what they mean, while dissidents consider it a lie, it's not that regime supporters are just savvier. The dissidents can probably figure it out, too. Rather, regime supporters and dissidents are trying to do different things. Dissidents want to create common knowledge of the regime's shortcomings: in order to organize a revolt, it's not enough for everyone to hate the government; everyone has to know that everyone else hates the government in order to confidently act in unison, rather than fear being crushed as an individual. The regime's proclamations are optimized to communicate to its supporters in a way that doesn't give moral support to the dissident cause (because the dissidents' objective requires common knowledge, not just savvy individual knowledge, and common knowledge requires unobfuscated language).
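The role of common knowledge here can be made concrete with a toy participation model (the threshold and fractions are invented for illustration):

```python
# Toy model of why dissidents need common knowledge rather than mere
# individual savvy. Threshold and fractions are invented for illustration.

def will_revolt(knows_regime_lied, expected_fraction_joining, threshold=0.9):
    """A citizen joins a revolt only if they know the regime lied AND
    expect enough others to join that they won't be crushed individually."""
    return knows_regime_lied and expected_fraction_joining >= threshold

# Obfuscated "revenue enhancement": every savvy citizen decodes the lie,
# but can't be sure the others decoded it, so each expects few to join.
savvy_only = will_revolt(True, expected_fraction_joining=0.3)

# An unobfuscated admission: everyone knows that everyone knows, so each
# citizen expects (nearly) everyone else to join.
common_knowledge = will_revolt(True, expected_fraction_joining=1.0)

print(savvy_only, common_knowledge)  # decoding alone doesn't trigger revolt
```

Even with universal individual knowledge, no one moves unless each person expects the others to move—which is exactly what obfuscated language prevents and an unobfuscated public statement provides.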
This kind of analysis is about behavior, information, and the incentives that shape them. Conscious subjectivity and awareness of the game dynamics are irrelevant. In the minds of regime supporters, "no one was fooled", because if you were fooled, then you aren't anyone: failing to be complicit with the reigning Power's law would be as insane as trying to defy the law of gravity.
On the other side, if blindness to Power has the same input–output behavior as conscious service to Power, then opponents of the reigning Power have no reason to care about the distinction. In the same way, when a predator firefly sends the mating signal of its prey species, we consider it deception, even if the predator is acting on instinct and can't consciously "intend" to deceive.
Thus, supporters of the regime naturally think dissidents are making a mistake; dissidents naturally think regime supporters are the enemy in their conflict.
It doesn't seem to me like the setting of the illustrative examples should matter, though? The problem of bounded distrust should be qualitatively the same whether your local authorities lie a lot or only a little. Any claims I advance about human rationality in Berkeley 2023 should also hold in Stalingrad 1933, or African Savanna −20,003, or Dyson Sphere Whole-Brain Emulation Nature Preserve 2133.
I think they're related! The general situation is: agent A broadcasts claim K, either because K is true and A wants Society to benefit from knowing this, or because A benefits from Society believing K. Agents B and C have bounded distrust towards A, and are deciding whether they should believe K. B says that K doesn't seem like the sort of thing A would lie about. From C's perspective, this could be because it really is true that K isn't the sort of thing that A would lie about—or it could be that A and B are in cahoots.
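The disagreement between B and C can be written as a small Bayesian sketch (all probability numbers are invented, and `posterior_k` is my toy helper, not anything from Alexander's post):

```python
# Toy Bayesian sketch of the disagreement between B and C over claim K.
# All probability numbers are invented for illustration.

def posterior_k(prior_k, p_assert_if_true, p_assert_if_false):
    """Posterior probability that K is true, given that A asserted K."""
    evidence_true = prior_k * p_assert_if_true
    evidence_false = (1 - prior_k) * p_assert_if_false
    return evidence_true / (evidence_true + evidence_false)

# B models A as nearly unwilling to lie about claims like K, so A's
# assertion is strong evidence:
b_view = posterior_k(prior_k=0.5, p_assert_if_true=0.9, p_assert_if_false=0.05)

# C entertains the hypothesis that A (perhaps with B's cooperation) would
# assert K almost as readily if it were false, so the assertion is weak
# evidence:
c_view = posterior_k(prior_k=0.5, p_assert_if_true=0.9, p_assert_if_false=0.7)

print(b_view, c_view)  # B ends up far more confident in K than C
```

On this picture, B and C can share a prior and share evidence and still end up far apart, because they disagree about the likelihood function—that is, about what A's incentives are. That's a conflict-theoretic disagreement about interests, not a mistake-theoretic gap in signal-extraction skill.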
Section IV of "Bounded Distrust" opens with the case where A = "credentialed experts", K = "ivermectin doesn't work for COVID", B = "Scott Alexander", and C = "Alexandros Marinos". But the problem should be the same if A = "Chief Ugg", K = "there's a lion across the river", or A = "the Dyson Sphere Whole-Brain Emulation Nature Preserve Tourism Board", K = "Norton AntiVirus works for cyber-shingles", &c.
The general problem is that agents with different interests sometimes have an incentive to distort shared maps, so it's very naïve to say "it's important for these two types of people to understand each other" as if differences in who one trusts were solely due to differences in map-correction skill (mistake theory), rather than differences in who one trusts to not distort shared maps to one's own detriment (conflict theory).
(Thanks for commenting! You're really challenging me to think about this more deeply. This post came about as a 20x wordcount expansion of a Tweet, but now that your criticism has forced me to generalize it, I'm a little worried that my presentation of the core rationality insight got "contaminated" by inessential details of my political differences with Scott; it seems like there should be a clearer explanation for my intuition that mistake theory corresponds with the "loyalist" rather than the "dissident" side of a conflict—something about how power can make contingent arrangements seem more "natural" than they really are?—and I'm not immediately sure how to make that crisp, which means my intuition might be wrong.)