Great post, I enjoyed it.
#1 - I hadn't thought of it in those terms, but that's a great example.
#2 - I think this relates to the involvement of the third-party audience. Free speech will be "an effective arena of battle for your group" if you think the audience will side with you once they learn the truth about what [outgroup] is up to. Suppose Alice and Bob are the rival groups, and Carol is the audience, and:
If this is really what's going on, Alice will be in favor of the debate continuing because she thinks it'll persuade Carol to join her, while Bob is opposed to the debate for the same reason. This is why I personally am pro-free-speech - because I think I'm often in the role of Carol, and supporting free speech is a "tell" for who's really on my side.
I think it has a lot more to do with status quo preservation than truthseeking. If I'm Martha Corey living in Salem, I'm obviously not going to support the continued investigations into the witching activities of my neighbours and husband, and the last reason for that is fear that the truth will come out - that I've been casting hexes on the townsfolk all this time.
I think a much simpler explanation is that continued debate increases the chances I'm put on trial, and I'd much rather have the status quo of not debating whether I'm a witch preserved. If it were a social norm in Salem to run annual witching audits on the townsfolk, perhaps I'd support a debate about not doing that any more. The witch-hunting guild might point a Kafkaesque finger at me in return, because they'd much rather keep up the audits.
Up stands Elizabeth Hubbard who calmly explains that if no wrongdoing has taken place then no negative consequences will occur, and that she is concerned by the lack of clarity and accountability displayed by those who would shut down such discussions before they've even begun.
In your example, what makes Alice (Elizabeth) the guru and Bob (Martha) the siren?
Isn't the fact that the buyer wants a lower price proof that the seller and buyer's values aren't aligned?
In almost all cases, the buyer will grossly exaggerate the degree to which values are not aligned in the hopes of driving the seller down in price. In most cases, the buyer has voluntarily engaged the seller (or even if they haven't, if they consider the deal worth negotiating then there must be some alignment of values).
Even if I think the price is already acceptable to me, I will still haggle insincerely because of the prospect of an even better deal.
It seems weird to me to call a buyer and seller's values aligned just because they both prefer outcome A to outcome B, when the buyer prefers C > A > B > D and the seller prefers D > A > B > C, which are almost exactly misaligned. (Here A = sell at current price, B = don't sell, C = sell at lower price, D = sell at higher price.)
I think the important value here is not the assets changing hands as part of the exchange, but rather the value each party stands to gain from the exchange. Both parties are aligned that shaking hands on the current terms is acceptable to them, but they will both lie about that fact if they think it helps them move towards C or D.
Or to put it another way, in your frame I don't think any kind of collaboration can ever be in anyone's interests unless you are aligned on Every Single Thing.
If I save a drowning person, in a mercenary way it is preferable to them that I not only save them but also give them my wallet. Therefore my saving them was not a product of aligned interests (desire to not drown + desire to help others) since the poor fellow must now continue to pay off his credit card debt when his preference is to not do that.
For me, B > A > D > C, and for the drowning man, A > B > C > D (Here A = rescue + give wallet, B = rescue, no wallet, C = no rescue, throw wallet into water, D = walk away)
What matters in the drowning relationship (and the reason for our alignment) is B > C. Whether or not I give him my wallet is independent of whether I save him, and the resulting alignment should be considered separately.
In your example, I'm focusing on the alignment of A and B. Both parties will be dishonest about their views on A and B if they think it gets them closer to alignment on C and D. That's the insincerity.
Hmm, the fact that C and D are even on the table makes it seem less collaborative to me, even if you are only explicitly comparing A and B. But I guess it is kind of subjective.
It's a question of whether drawing a boundary on the "aligned vs. unaligned" continuum produces an empirically-valid category; and to this end, I think we need to restrict the scope to the issues actually being discussed by the parties, or else every case will land on the "unaligned" side. Here, both parties agree on where they stand vis-a-vis C and D, and so would be "Antagonistic" in any discussion of those options, but since nobody is proposing them, the conversation they actually have shouldn't be characterized as such.
As I understood it, the whole point is that the buyer is proposing C as an alternative to A and B. Otherwise, there is no advantage to him downplaying how much he prefers A to B / pretending to prefer B to A.
IF/IE (Yandere/Tsundere): Alice (the Yandere) pretends to like Bob but in fact is trying to manipulate him into doing what she wants, while Bob (the Tsundere) pretends to hate Alice but in fact is totally on-board with her agenda.
- This description is a bit of a joke - I can't even imagine what this mode would look like, let alone think of any real-world examples.
Maybe love things? Or female things?
Cassandra/Mule: If Alice knew she were talking to a brick wall, she would give up; and if Bob knew Alice was trying to help, he would actually listen.
I've seen mules in the wild in internet forums (which, admittedly, is outside the scope of your post). They usually present as ardent defenders of the faith, repeating well-known talking points…and never updating, ever.
On the contrary, I'd say internet forum debating is a central example of what I'm talking about.
Do Cassandras always believe they are Gurus? What happens if a Cassandra catches on and tries to convince the Mule they're being sincere?
This "trying to convince" is where the discussion will inevitably lead, at least if Alice and Bob are somewhat self-aware. After the object-level issues have been tabled and the debate is now about whether Alice is really on Bob's side, Bob will view this as just another sophisticated trick by Alice. In my experience, Bob-as-the-Mule can only be dislodged when someone other than Alice comes along, who already has a credible stance of sincere friendship towards him, and repeats the same object-level points that Alice made. Only then will Bob realize that his conversation with Alice had been Cassandra/Mule.
(Example I've heard: "At first I was indifferent about whether I should get the COVID vaccine, but then I heard [detestable left-wing personalities] saying I should get it, so I decided not to out of spite. Only when [heroic right-wing personality] told me it was safe did I get it.")
Overview
This article is an extended reply to Scott Alexander's Conflict vs. Mistake.
Whenever the topic has come up in the past, I have always said I lean more towards conflict theory than towards mistake theory; however, on revisiting the original article, I realize that either I've been using those terms in a confusing way, or their usage has morphed in such a way that confusion is inevitable (or both). My opinion now is that the conflict/mistake dichotomy is overly simplistic because:
Instead, I suggest a model where there are 10 distinct modes of discourse, which are defined by which of the 16 roles each participant occupies in the conversation. The interplay between these modes, and the extent to which people may falsely believe themselves to occupy a certain role while in fact they occupy another, is (in my view) a more helpful way of understanding the issues raised in the Conflict/Mistake article.
The chart
Explanation of the chart
The bold labels in the chart are discursive roles. The roles are defined entirely by the mode of discourse they participate in (marked with the double lines), so for example there's no such thing as a "Troll/Wormtongue discourse," since the role of Troll only exists as part of a Feeder/Troll discourse, and Wormtongue as part of Quokka/Wormtongue. For the same reason, you can't say that someone "is a Quokka" full stop. (It's almost inevitable that people will try to interpret the roles in this way, as if they were personality archetypes, so I'll emphasize again that this is not what a role is - the same person may adopt different roles from one situation to the next.)
The roles are placed into quadrants based on which stance (sincere/insincere friendship/enmity) the person playing that role is taking towards their conversation partner.
The double arrows connect confusable roles - someone who is in fact playing one role might mistakenly believe they're playing the other, and vice-versa. The one-way arrows indicate one-way confusions - the person playing the role at the open end will always believe that they're playing the role at the pointed end, and never vice-versa. In other words, you will never think of yourself as occupying the role of Mule, Cassandra, Quokka, or Feeder (at least not while it's happening, although you may later realize it in retrospect).
Constructing the model
This model is not an empirical catalogue of conversations I've personally seen out in the wild, but an a priori derivation from a few basic assumptions. While in some regards this is a point in its favor, it's also its weakness - there are certain modes of discourse that the model "predicts" must exist, but where I have trouble thinking of any real-world examples, or even imagining hypothetically how such a conversation might go.
Four stances
We will start with the most basic kind of conversation - Alice and Bob are discussing some issue, and there are no other parties. On Alice's part, we can ask two questions:
Answering both questions creates a 2×2 grid with the 4 stances that Alice can adopt:
What do we mean by "sincerity"?
When we ask "Does Alice think...," we are sweeping a lot of complexity under the rug and effectively treating her mind as a black-box with no internal structure. We are taking a behaviorist/functionalist approach ("X is as X does") and leaving aside all questions of self-deception, motivated reasoning, elephant/rider relations, etc. So we compare what Alice says aloud with what the whole of her "elephant+rider+whatever apparatus" thinks: if the two line up, we say she's being "sincere," and if not, "insincere".
This is obviously a gross oversimplification, but I think it's reasonable here because (a) it's necessary for keeping the already large number of combinations manageable, and (b) when you're conversing with someone, you often don't really care what's going on inside their head; what you want to know is what kinds of responses to expect from whatever you say to them.
An example to illustrate the point:
This is still called "insincerity" in the current framework, because the effect from your boss's perspective is the same as if you were deliberately lying - i.e. your boss should discount the truth-trackingness of your arguments in the same way.
Limitation to two-party discussions
As mentioned, we are only considering two-party discussions. Three- or more-party discourse is not covered, such as:
(I might be able to get into those cases in a follow-up article, but let's keep it simple for now.)
However, the case of what you might call a "1½-party discussion" (where the speaker aims their message at a particular listener or group of listeners, but those listeners are not in a position to respond) is similar enough to two-party that we can still accommodate it here.
Sixteen roles / ten modes
Now, we can also ask the same questions to determine which stance Bob is employing. This means we now have a 4×4 grid with 16 roles which they may each occupy. The respective pairs of roles define the 10 possible modes of discourse. (There are only 10, because 6 of these pairings are just the same as others with Alice and Bob reversed, so we don't need to consider them separately.)
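(If it helps to see the counting spelled out, here is a minimal sketch in Python - the SF/IF/SE/IE abbreviations follow the chart, but the snippet itself is just my illustration, not part of the model.)

```python
from itertools import product, combinations_with_replacement

# The two yes-or-no questions give the 2x2 grid of stances:
# S/I = sincere/insincere, F/E = friendship/enmity.
stances = [s + v for s, v in product("SI", "FE")]   # ['SF', 'SE', 'IF', 'IE']

# 4 x 4 = 16 ordered (Alice, Bob) combinations - the 16 roles. A *mode*,
# however, doesn't care which participant we happen to call "Alice",
# so we count unordered pairs of stances:
modes = list(combinations_with_replacement(stances, 2))
symmetric  = [m for m in modes if m[0] == m[1]]   # both take the same stance
asymmetric = [m for m in modes if m[0] != m[1]]   # two different stances/roles
print(len(symmetric), len(asymmetric), len(modes))  # 4 6 10
```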
In the 4 symmetric modes, both Alice and Bob take the same stance towards each other and thus play the same role:
In the 6 asymmetric modes, Alice and Bob take different stances and thus play different roles. (Here, Alice's stance or role is given first, followed by a slash, then Bob's):
Explaining confusability
Two roles may be confused for one another when they differ only in their counterpart's sincerity. In other words, you know (a) whether you're being sincere or insincere, (b) whether you're expressing friendship or enmity, and (c) whether your counterpart is expressing friendship or enmity; but you can't really be sure of (d) whether your counterpart is being sincere or insincere. By toggling this unknown bit, you can see that there are two roles you might be playing, which, from your perspective, seem identical. So, for example, perhaps Alice thinks she's playing Wormtongue to Bob's Quokka. But maybe Bob is just as savvy as she is and is also manipulating her in return, which would make it a Chameleonic discourse. So, Alice is never entirely sure whether she's being a Wormtongue or a Chameleon.
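As a rough formalization of that rule (the four "bits" follow (a)-(d) above; the naming and the code are mine, not part of the chart):

```python
from itertools import product, combinations

# A role, from the perspective of the person playing it, is four bits:
# (a) my sincerity, (b) my friendship/enmity, (c) the friendship/enmity my
# counterpart is expressing, and (d) my counterpart's sincerity - the one
# bit I can't observe directly.
roles = list(product(["sincere", "insincere"],     # (a)
                     ["friendship", "enmity"],     # (b)
                     ["friendship", "enmity"],     # (c)
                     ["sincere", "insincere"]))    # (d)

def confusable(r1, r2):
    """Two roles are confusable iff they agree on (a)-(c) and differ only on (d)."""
    return r1[:3] == r2[:3] and r1[3] != r2[3]

pairs = [(r1, r2) for r1, r2 in combinations(roles, 2) if confusable(r1, r2)]
# Each of the 16 roles has exactly one confusable partner, giving 8 pairs -
# matching the four two-way and four one-way arrows discussed just below.
print(len(roles), len(pairs))  # 16 8
```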
However, four of the confusability relations are one-way only, because in each pair there's a "chump" role that nobody would intentionally take on:
But granted, there's something strangely arbitrary about these one-wayness arguments. Sure, the placement of the one-way arrows makes a nice symmetry on the chart, but is there some underlying principle behind which confusions are one-way and which are two-way? Is it possible that some people will reject the above arguments and insist "No, I'm going to be (Quokka, Cassandra, Mule, Feeder) intentionally"? Why can't we come up with similar arguments for the other four confusions? (Or can we?)
Prior art
This is not the first attempt at a taxonomy of discourse types. See also:
Summary of open questions
What exactly is going on with Yandere/Tsundere? Do such conversations ever occur in practice? Are they even imaginable?
It's clear that your judgment of whether your values are aligned with the other person's may change over the course of the conversation as you learn more about what their values actually are. Can this model capture that? If there's a certain kind of conversation that invariably follows the same sequence of modes, then perhaps that's a more empirically-valid category than these 10 modes.
How do we classify "agreeing for the wrong reasons"? Suppose Alice is leading a crusade against high-fructose corn syrup because she thinks it's a plot by the Illuminati to turn everyone into lizardpeople, while Bob thinks to himself "Well, her heart's in the right place; we'd all be healthier if we consumed less HFCS" and so he joins Alice's group while going along with the Illuminati story to avoid starting a pointless debate with her. What is this? I guess this is a certain flavor of Collegial, with bits of Rebel/Guru to the extent that Bob tries to subtly manipulate Alice into agreeing with him for the right reasons. (But this may be a situation where the simulacrum framework is more helpful.)
Are the four one-way-confusability arguments compelling? Are there any other confusions which are really one-way?
Is "Feeder/Troll" really the best way of characterizing SE/IE? The term "Troll" is maybe fraught with connotations of nihilism, and it's not clear how nihilism fits in here. (In theory, a nihilist has no friends or enemies.)
In general, how should we understand the Insincere Enmity stance? It seems pretty obvious what the other three stances mean, but this one gives rise to confusion.
Sincerity and value-alignment aren't binary; they lie on a continuum. Does it make sense to simplify them to yes-or-no questions?
Revisiting the original article
(Link again for convenience)
In this section I'm going to use the terms "mistake theory/-ist" and "conflict theory/-ist" in the way they're used in each respective quote that I'm responding to, even though it's not clear whether they have the same meaning in each quote, and I would prefer to avoid using the terms at all (as mentioned earlier).
Cassandra/Mule discourse is the most frustrating kind
Of course, it's possible that in this situation, Alice (i.e. the person referred to here as the "mistake theorist") is actually correct. And alternatively, it's also possible that Bob (the "conflict theorist") is correct. But now we see a third alternative - maybe neither Alice nor Bob are correct, and in fact this is a Cassandra/Mule discourse. Then, the conversation will go nowhere until one or both of them storm off in frustration.
(Now you can see why it was useful to coin all that jargon!)
"I'm not misanthropic, I just don't like you"
To the extreme "mistake theorist", the world looks like this:
They may therefore project their view onto anyone who disagrees with them (whom they call "conflict theorists") and assume that those people view the world like this:
But to me, "conflict theory" (= the negation of mistake theory) is nothing more than the acknowledgement that the whole chart exists in the real world - that some conversations are productive, some aren't, and others are worse than useless. The fact that enemies exist doesn't mean that friends don't also exist; the secondary-diagonal view ("Hobbesian individualism") is an extreme strawman that almost nobody actually believes. In fact, posing it as a "refutation" of conflict theory is sure to raise all kinds of alarm-bells in the minds of anyone with a bit of world-wariness - if they didn't already have a reason to be skeptical, the use of this classic confidence trick ("It's not good to be so distrustful of everyone you meet...") will certainly seal the deal.
In particular, the two extreme "diagonalist" views illustrated above both leave no space for the non-diagonal modes (Cassandra/Mule, Quokka/Wormtongue, and Feeder/Troll at least - although I continue to be unsure of the explanatory value of Yandere/Tsundere), without which the meta-level disagreements alluded to earlier ("What kind of conversation are we having right now?") cannot be understood.
Free speech
I found this part disorienting when I got to it:
Up until that point I had been mostly siding with the "conflict theorist" in each example, and on the topic of free speech I was thinking to myself: "Yes, obviously, as a conflict theorist I'm pro-free speech. How could I not be? The elite is full of evil people expressing Insincere Friendship, using their position of authority to spread false information, the believing of which will cause people to act in the elites' interest and not their own. Therefore it's essential that we have the right to speak up and expose their lies. The only people who could possibly be against this are either mistake theorists (who naïvely think the people doing the censorship will be well-intentioned guards against misinformation) or the Wormtongues who've gained their ears."
The reason for this discrepancy is that "free speech" may be referring to two distinct things. It may be a bit tricky to explain since a full analysis would require a treatment of three-party discussions, but briefly:
In one sense, we have a situation where a bunch of people have come together to work on a goal that they all share ("People for the Promotion of X"), and then Alice joins the club saying "How do you do, fellow pro-X-ers! I have some ideas for how we can achieve X more effectively," and then proceeds to give a bunch of proposals that are so egregiously bad that they would actually be harmful for X. Then, any "mistake theorists" in the club will say "Let's hear her out and respond with our counterarguments. If we succeed in convincing Alice, then we've gained a more effective supporter of X; if she convinces us, then we can fix our strategy," whereas the "conflict theorists" will say "No, this person is a bad actor trying to trick us into working contrary to X; she should be ejected from the club." (Confusingly, this kind of activity is commonly called "concern trolling" although it has nothing to do with the "Troll" role as I've defined it here. Oh well; the terminology is overloaded.)
In another sense, however, we might be thinking of free public speech, i.e. speech that's open for everyone to hear (but which you can tune out if you're not interested). The mistake theorist will regard this as the same as the club case, because they assume that everyone in the society, just like in the club, is pro-X. I, on the other hand, would respond with the pro-free-speech argument I outlined 3 paragraphs ago. But that's not because I'm a conflict theorist per se; the argument only makes sense when the conflict is thought to be "me and the audience versus the elites" as opposed to "me and the elites versus the audience." If one believed the latter, then one would be anti-free-speech for the reason given in the quotation; however, this is only one manifestation of conflict theory, and not the most common one nowadays - at least as far as I can tell!
Personal note: I'll take a Chavrusa over a Guru any day
This is more of an aesthetic preference than a rational argument, but I personally have a distaste for self-proclaimed Gurus (e.g.). It strikes me as sleazy and evasive. If someone thinks my professed values are bad but that I might be convinced to change them, then I'd much rather they challenge me with a stance of "Your values are bad and here's why" rather than tell me that those are not, in fact, my values, and that if I looked deep within myself I would realize that giving the Guru all my money was what I wanted to do all along. But hey, that's just me.
Concluding remarks
As with any other "insight"-style post, you should take this model as a starting point and not a conclusion. Its usefulness will depend on whether you can readily think of examples of the 10 modes of discourse, or whether on the contrary it seems like experience needs to be forced into the model. I happen to think that the 12-role interplay described in the "Cassandra/Mule" section above is an accurate description of the debates I've seen between rationalists and non- or post-rationalists. I see Chameleonic discourse in big-city local politics when debates are framed as being about what's best for "the" community, while Chavrusic discourse is what you might get at rationalist parties after having a bit too much to drink.
What do you think?