Practical Conflict Resolution: A Taxonomy of Disagreement

by eapache · 9 min read · 19th Jun 2020 · 8 comments


The previous posts in this sequence were largely setup; still important, but not intended to contain significant new material. Now, finally, we get to the good stuff. Once you’ve got the right attitude, good communication, and a decent grasp of what both sides are saying, how do you actually get to a successful resolution? For me, it boils down to understanding the building blocks of how we argue, and how we disagree.

Philosophers and linguists have spent millennia studying the nature of logic, rhetoric, and argument, all the way from Aristotle through to predicate logic and beyond. However, while rhetoric and disagreement are obviously related, the nature of disagreement itself is much less studied. The rationalist community has explored parts of it, coming up with some interesting ideas like double-cruxing, but as far as I know has only a few disconnected heuristics - no comprehensive theory.

In a very brief post in 2017 (well before I joined Less Wrong) I sketched out a basic categorization of disagreements with almost no explanation. Several years later, my core model remains almost the same. While there can be many forms of valid argument and many kinds of propositions to slot into those arguments, there are in fact only three kinds of atomic disagreement: fact, value, and meaning. As far as I can tell every disagreement must either belong to one of these categories, be "empty" (meaning there's technically no disagreement at all), or be a complex combination of smaller disagreements. I’ll tackle them one at a time, including tips for resolving each type, and then talk about how to understand and break down the more complex combinations.

Disagreements of Fact

Disagreements of fact are disagreements over how the world was, is, or will be. They are fundamentally empirical in nature: if I believe that there are only ten chickens on the planet and you believe that there are more, that’s something we can physically check; we just have to go out and count enough chickens. Disagreements about historical facts are often harder to resolve (we can’t just count the chickens alive in the year 1500 to see how many there were) but the factual nature of the disagreement remains; there is a single right answer, and we just have to find it.

Resolving disagreements of fact is the specialty of science and the scientific method. When a disagreement of fact is not directly resolvable through empirical observation, hunt for places where the core disagreement results in differing predictions about something that is directly observable. Maybe if there were as many chickens as you believe, the nutrient content of human skeletons from that era would back you up (I really don’t know, historical chicken population is not my specialty and this example is getting out of hand).

Of course, some disagreements of fact may not be perfectly resolvable with the technology we have available to us. The nutrient content of skeletons may give some indication of chicken population, but it’s not going to give us a precise count. In these cases, it’s best to fall back on reasoning based on Bayesian statistics. What are your prior confidence levels, and how do the various pieces of evidence affect them? What else can you easily empirically check which will impact those confidence levels?
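The Bayesian fallback can be made concrete. As a hypothetical sketch (the numbers are invented purely for illustration), suppose we start with a prior over two chicken-population hypotheses and update on one piece of indirect skeletal evidence:

```python
# Hypothetical sketch: Bayesian updating on indirect evidence.
# All numbers here are invented for illustration.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities given priors and P(evidence | hypothesis)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two competing hypotheses about the historical chicken population.
priors = {"few_chickens": 0.5, "many_chickens": 0.5}

# Suppose the observed skeletal nutrient levels are twice as likely
# if there were many chickens (an invented likelihood).
likelihoods = {"few_chickens": 0.3, "many_chickens": 0.6}

posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # posterior mass shifts toward "many_chickens"
```

Each additional piece of checkable evidence is just another call to the update, feeding the previous posteriors back in as priors.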

Even then, there are some cases where there just don’t seem to be any checkable predictions that come out of a conflict of fact (the various debates around string theory were like this for a while). The nice thing is that when you hit a disagreement like this, it somehow stops mattering. If there are no differences in the predictions that can be tested with current technology, then until that technology exists, the two possible worlds are by definition indistinguishable.

Finally, for cases about the future, it’s important to distinguish between disagreements about how the world will be (for example whether there will be more or fewer chickens tomorrow), and disagreements about how the world should be (for example whether we ought to breed more chickens). Disagreements about how the world will be can sometimes be resolved like historical facts, by looking for more immediately checkable predictions. They can also be resolved just by waiting until the future comes to pass. On the other hand, disagreements about how the world should be take us into our next type of disagreement: disagreements of value.

Disagreements of Value

Disagreements of value are disagreements over what we ought to value. This tends to play out more concretely in disagreements over how the world ought to be, and what we ought to do to get there. For example, if I believe that we should value chickens’ lives as much as human lives and you believe we should value them less, that is inherently a disagreement over value. There’s no checkable fact or testable prediction, now or in the future; the disagreement is fundamentally about what is important. Of course in practice you’re unlikely to see a direct disagreement over the value of chicken lives; you’re more likely to see a disagreement over whether humans should eat chickens or not, but it’s often the same thing.

Disagreements of value are difficult to deal with. This is often because there is really a complex multi-part disagreement masquerading as a simple value disagreement (for example a disagreement over whether we “ought” to be vegetarian may be about environmental factors as much as it is about the value of a chicken’s life). The key thing to pay attention to is whether the values under debate are instrumental or terminal.

If the values under debate are instrumental (for example vegetarianism as a means to value chicken life), then things are by definition complex, as there are at least two possible underlying disagreements. The root cause could be a disagreement over the terminal value (whether a chicken’s life should be valued) or a disagreement over the best way to achieve that terminal value (our consumption of chicken has caused a great increase in the total number of chickens, which might be a more effective way to value their lives). When you see a debate over an instrumental value, apply Hume’s guillotine to slice apart the pieces and find the more fundamental disagreement. Keep in mind that there’s nothing to stop both pieces from being sources of disagreement at once, in which case you should at least try to take them one at a time.

Recognizing instrumental value debates can be tricky, as can breaking them down into their constituent parts. In practice, one of the best ways to do both of these things is to simply ask the question “Why does that matter?”, and not accept “it just does” as an answer. When pressed, most people will be able to articulate that, for example, they actually value vegetarianism because they value the lives of animals.

The other way to recognize many instrumental value debates is to look for two apparently-unrelated values being traded off against one another. Imagine we’re building a coop for all of these chickens; if one person thinks we should prioritize security against foxes, while the other thinks we should prioritize the number of chickens it can hold, it might seem like they’re at an impasse. But this is actually an instrumental value debate that can easily be resolved; all we have to do is “normalize” the units under debate. Fox-security and number-of-chickens are not directly comparable values, but in practice they’re probably both backed by the same terminal value: maximizing the number of eggs we can collect per day. By normalizing the two sides into a single terminal value unit, we’re left with a simple disagreement of fact which can be resolved via experimentation: which approach results in more eggs?
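The “normalization” step can be pictured as converting both proposals into the shared terminal unit before comparing them. A minimal sketch, with made-up numbers standing in for what would really be measured by experiment:

```python
# Hypothetical sketch: normalizing two instrumental values into one
# terminal unit (eggs collected per day). All numbers are invented.

def expected_eggs(capacity, eggs_per_chicken, fox_loss_rate):
    """Eggs per day after accounting for chickens lost to foxes."""
    surviving = capacity * (1 - fox_loss_rate)
    return surviving * eggs_per_chicken

# Proposal A: a very secure coop that holds fewer chickens.
secure_coop = expected_eggs(capacity=40, eggs_per_chicken=0.8, fox_loss_rate=0.02)

# Proposal B: a bigger but leakier coop.
big_coop = expected_eggs(capacity=60, eggs_per_chicken=0.8, fox_loss_rate=0.40)

# Once both sides are expressed in the same unit, the apparent value
# debate becomes an ordinary disagreement of fact about the parameters.
print(secure_coop, big_coop)
```

The interesting part is not the arithmetic but the move it represents: once fox-security and capacity are both expressed as eggs per day, the remaining dispute is empirical (what are the real loss rates?) rather than evaluative.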

Unfortunately, if the values under debate are truly terminal (back to whether chickens’ lives should be valued as much as human lives) then there isn’t a good way to resolve this conflict. The conflict will exist until somebody changes their core values, and that’s incredibly hard to do. The best “hack” I’ve found is to come up with an unrelated value or problem which both participants agree is more important, and thus makes the current conflict either irrelevant or at least not worth arguing over. Whether a chicken’s life is worth a human life tends to take a backseat when the human’s house is on fire.

(note: I am not advocating arson as a means of avoiding debates about vegetarianism)

Disagreements of Meaning

The third kind of disagreement is a disagreement over meaning. This is best understood by examining the classic question: if a tree falls in the forest and nobody hears it, does it make a sound? While on the surface a disagreement on this point may seem to be a disagreement of fact, it’s almost always instead a disagreement of meaning.

Most reasonable people will agree to the same core facts of what happens when a tree falls in the forest. First, they’ll agree that it produces vibrations in the air, also known as sound waves. Second, they’ll agree that those sound waves dissipate before reaching anybody’s ears, as stipulated in the question. These two points actually cover all of the questions of fact relevant to the disagreement; the conflict is really over the meaning of the word “sound”. Does it refer to the simple production of sound waves (in which case the tree makes a sound) or does it refer to the sensation created by sound waves heard by a person (in which case it does not)?

The nice thing about disagreements of meaning is that they almost never matter. Language is socially negotiated, and at the end of the day word meanings are entirely arbitrary. The only thing you need to do to resolve a conflict like this is be very clear about your definitions, and the conflict magically evaporates. Replacing problem words with new nonsense words that have clear definitions is a great trick for this (borrowed from this Less Wrong post on the same topic).
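The nonsense-word trick can be made concrete. In this sketch the split into “soundwave” and “soundsense” is my own naming, invented for illustration; once each definition is pinned down, both parties give the same answer to every well-posed question:

```python
# Hypothetical sketch: dissolving a disagreement of meaning by
# replacing the ambiguous word "sound" with two precisely-defined ones.

falling_tree = {"produces_air_vibrations": True, "anyone_in_earshot": False}

def makes_soundwave(event):
    """'soundwave': physical vibrations in the air."""
    return event["produces_air_vibrations"]

def makes_soundsense(event):
    """'soundsense': the sensation produced in a hearer."""
    return event["produces_air_vibrations"] and event["anyone_in_earshot"]

# With the definitions separated, there is nothing left to argue about:
print(makes_soundwave(falling_tree))   # True
print(makes_soundsense(falling_tree))  # False
```

Both answers are correct, about different questions; the original “disagreement” was never about the world at all.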

The one case where the meaning of words does legitimately matter is in law. As a friend of mine so nicely put it, “laws are stored in words”, and interpreting the meaning of those words can impact how the law is applied, who goes to jail, etc. Ultimately though, word definitions are still arbitrary and will even shift over time, meaning that these disagreements are not resolvable without getting really deep into the philosophy of law (the question of literal meaning vs author’s intent, just to start). Fortunately we have a standard method for making these decisions anyway: judges and juries. The result is that the law evolves over time, just like the people that interpret it, and the language that stores it.

The other case where people like to argue that word meanings matter is when certain words are offensive, disrespectful, or even harmful (if that’s a thing you believe words can be). Fortunately this one is a bit more clear-cut: the use of these words is a thing people can disagree about, but it’s not a disagreement of meaning. It actually has two parts, tying up an instrumental or potentially terminal value (we should not offend or harm people) with a factual claim (some proportion or group of people are offended or harmed by a given word). The meaning of the word no longer matters at all.

Empty Disagreements

Empty disagreements are a later addition to this post, and are quite different from the other three types. In a certain sense they are not real disagreements at all, and are merely what happens when disagreement becomes disconnected from any tangible point. But in practice they are fairly common, and my goal with this sequence is ultimately a practical one.

Empty disagreement happens when there is no fundamental disagreement of fact, value, or meaning between two parties, but something in the situation causes them to start or continue a conflict regardless. This is usually related either to social status (when someone knows they’re wrong but won’t back down to avoid losing face), or to internal emotional state (when someone is caught up in the heat of the moment). In both cases, it is ideas from the prior posts of this sequence that are the key to a successful resolution.

Status-based conflicts are frequently best solved by changing venue, usually to one with a smaller audience. In most cases people are happy to resolve the conflict themselves once doing so would no longer cost them status. Things become trickier if this isn’t possible, or if the status issue is actually between the two people involved in the conflict. You can try to build enough trust to overcome the status issue, or compensate for it by making an unrelated concession, but ultimately you’ll have to resolve the status issue to resolve the conflict.

Similarly, heat-of-the-moment conflicts are usually best solved by committing more strongly to the four attitudes I described in the first post on attitude. Breathe deep, and aim for success instead of victory. Use humility to build the trust necessary to reach that point, and never lose sight of the fact that both sides are operating in good faith (mistakes in the heat of the moment are still fundamentally different from malice). If necessary, suggest taking a five-minute break to go to the washroom or get a drink of water; time away is often all that is really needed for people to cool down.

Complex Disagreements

As we’ve gone through the types so far, we’ve seen a couple of examples of complex disagreements masquerading as simpler forms of disagreement. This is typically how they show up in practice, since if the complexity is obvious the participants will break it apart themselves without thinking about it. The fact that instrumental values show up frequently in this way is also not a coincidence; the combination of a value with a fact to produce an instrumental value is one of the easiest signs of a complex disagreement that needs to be split up.

The other major sign of a complex disagreement is the use of the forms of propositional and predicate logic. Argument forms like modus ponens are how complex arguments get built up, and thus naturally how complex disagreements can be broken down. Of course, people rarely phrase their arguments in pure logical form, so you’ll probably have to do some steelmanning along the way, but if you’re lucky somebody will make their arguments in roughly the right shape.

As mentioned in the post on comprehension, regular practice is the best way to build these skills. Even when an argument is really trivial (for example “A five ounce bird could not carry a one pound coconut!” while talking about the carrying capacity of swallows), it can be worth breaking down. In its pure logical form, that example becomes something like:

  • P1: If a bird weighs five ounces, it cannot carry a coconut.
  • P2: Swallows weigh five ounces.
  • C: Swallows cannot carry coconuts.

Just like with instrumental values, we now have two different pieces (P1 and P2) where either could be the source of disagreement. By narrowing in on the root cause, or at least taking them one at a time, you’ve made the conflict smaller and more focused. Once you’ve gone down a few layers you’ll usually end up either at a testable disagreement of fact or a shared terminal value, and will be able to resolve it appropriately. The goal with a complex disagreement is always to break it down and deal with the pieces, not to swallow it whole.
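The decomposition can be represented directly, which makes it obvious that the conclusion only goes through if every premise is granted. This is a schematic sketch of my own (the premises come from the joke above; which premise gets disputed is an invented example):

```python
# Hypothetical sketch: a complex disagreement as explicit premises,
# so each one can be disputed (or tested) separately.

premises = {
    "P1": "If a bird weighs five ounces, it cannot carry a coconut.",
    "P2": "Swallows weigh five ounces.",
}

# Mark which premises the other party actually accepts.
accepts = {"P1": True, "P2": False}  # suppose they dispute swallow weight

def conclusion_follows(accepts):
    """Modus ponens only licenses the conclusion if every premise is granted."""
    return all(accepts.values())

# The real disagreement is now localized to the rejected premises.
disputed = [p for p, ok in accepts.items() if not ok]
print(conclusion_follows(accepts), disputed)  # False ['P2']
```

The payoff is the `disputed` list: instead of arguing about coconuts and swallows at once, you now have one focused question (how much does a swallow weigh?), which in this case is a plain disagreement of fact.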

Conclusion

When faced with a disagreement, I try to approach it the way I would approach a newly synthesized molecule - I know that it must be made of atoms, somehow, but I don't yet know which atoms, or in what structure. Every disagreement I have come across is made up of one or more of the atomic types in this post, stuck together (mostly) using the forms of propositional and predicate logic.

Typically, once you start peeling back the layers, you'll find that most parts of a disagreement aren't disagreements at all. Yes, you started out by arguing about X, but in fact X was a complex type, and three of its four component pieces were ones you actually agreed on. The underlying disagreement was about Y, which is either complex itself (break it down again!) or atomic. Wrapping it all up into a bullet-list algorithm, this is roughly how I approach conflict resolution in my real life:

  • Use logical forms to break down a complex disagreement into simpler parts.
  • Discard the parts that are actually agreed upon. Tackle the remaining parts one at a time.
  • Repeat.
  • Once you have atomic disagreements and can't repeat:
    • Use science and Bayesian statistics to resolve disagreements of fact.
    • Use clear definitions to resolve disagreements of meaning.
    • Use overriding values to try to avoid disagreements of terminal value (but watch out for values that are actually instrumental).
  • If there are no atomic disagreements, resolve the empty conflict by building trust, addressing status issues, and cooling runaway emotions.
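The bullet-list algorithm can be sketched as a recursive procedure. This is my own schematic rendering: the resolution steps are stubbed out as labels rather than real logic, since the actual resolving happens between people, not in code.

```python
# Hypothetical sketch of the decomposition loop described above.
# The "resolve" actions are placeholder labels, not real procedures.

def resolve(disagreement):
    if disagreement["kind"] == "complex":
        # Break it apart, discard the parts already agreed on,
        # and recurse on whatever remains.
        live_parts = [p for p in disagreement["parts"] if not p.get("agreed")]
        return [resolve(p) for p in live_parts]
    return {
        "fact": "apply science / Bayesian statistics",
        "meaning": "agree on clear definitions",
        "value": "look for an overriding shared value",
        "empty": "build trust, address status, cool emotions",
    }[disagreement["kind"]]

# A complex disagreement where one component turns out to be agreed upon.
example = {
    "kind": "complex",
    "parts": [
        {"kind": "fact", "agreed": True},
        {"kind": "meaning"},
        {"kind": "value"},
    ],
}
print(resolve(example))
```

Running it on the example drops the already-agreed factual component and returns one resolution strategy per remaining atomic piece, mirroring the "discard, tackle one at a time, repeat" loop.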

I hope reading this sequence proves as helpful to you as writing it was for me. I want to once again thank the person who prompted me to write it, as well as all the other people who read early drafts and provided invaluable feedback. You make me better.


Comments

People who agree on Is and Ought can disagree on strategy. Disagreement about strategy takes two broad categories: disagreement about the vocabulary of relationships to represent the strategy, and disagreement about the specific construction and execution of the strategy.

Now you can say that this is secretly just is and ought problems all the way down, but this is like Objectivists being able to come up with a selfish interpretation for everything: it says more about the flexibility (degrees of freedom) in their explanatory schema than about how predictive it actually is. I can equally claim that Is and Ought are just two different kinds of data that strategies can return. Which I do think is more reasonable, since we have direct access to neither Is nor Ought but can compare strategies' performance to one another.

As far as being predictive, I think I’ve done a clear job of that already. I’m not just saying you can fit any disagreement into my model with enough mental gymnastics; I’m saying that doing so is concretely useful in guiding the resolution of that disagreement. My model could very well be overly flexible or generally incorrect in some cases, but it’s the most useful model for this topic that I’ve come up with. If you think modelling disagreements at the strategy level is more useful, I would greatly enjoy reading your post on how to make use of that for conflict resolution.

If Is considerations are like Aristotle's Material Cause

Ought considerations like Final Cause

Language considerations like Formal Cause (if we extend language to include non-verbal but still symbolic representations of other sorts as well)

Then we only need to add Efficient Cause.

we have direct access to neither Is nor Ought but can compare strategies' performance to one another

I don’t understand this part. The only way in which we don’t have direct access to Is or Ought is a fairly philosophical one, and on that level we don’t have direct access to the performance of our strategies either?

>The only way in which we don’t have direct access to Is or Ought is a fairly philosophical one

Sounds like maybe you're not an indirect realist and if so it would take a bunch to reconcile on this.

Assuming indirect realism, then we don’t have direct access to the performance of our strategies either, so I’m not sure how that ends up being more useful.

We use comparison, which is itself a strategy. We do the same when we investigate an Is or an Ought.

I would be very interested in learning more about how to resolve the disagreements founded in social status or emotion. As an example, how does one build trust (in the moment or in advance) to overcome the status based disagreement? I personally find most of the arguments/disagreements I struggle with to be in these buckets.