Previously:

Two years ago, CFAR's doublecrux technique seemed "probably good" to me, but I hadn't really stress-tested it. And it was particularly hard to learn in isolation, without a "real" disagreement to work on.

Meanwhile, some people seemed skeptical about it, and I wasn't sure what to say to them other than "I dunno man this just seems obviously good? Of *course* you want to treat disagreements like an opportunity to find truth together, share models, and look for empirical tests you can run?"

But for people who didn't share that "of course", that wasn't very helpful.

For the past two years I've worked on a team where big disagreements come up pretty frequently, and where doublecrux has been more demonstrably helpful. I have a clearer sense of where and when the technique is important.

Intractable Disagreements

Some intractable disagreements are fine

If you disagree with someone on the internet, or a random coworker or something, often the disagreement doesn't matter. You and your colleague will go about your lives, one way or another. If you and your friends are fighting over "Who would win, Batman or Superman?", coming to a clear resolution just isn't the point.

It might also be that you and your colleague are engaged in some sort of coalition-politics fight over the Overton window, and most of the debate might be for the purpose of influencing the public. Or, you might be arguing about the Blue Tribe vs the Red Tribe as a way of signaling group affiliation, and earnestly understanding people isn't the point.

This makes me sad, but I think it's understandable and sometimes it's even actually important.

Such conversations don't need to be doublecrux-shaped, unless both participants want them to be.

Some disagreements are not fine

When you're building a product together, it actually matters that you figure out how to resolve intractable disagreements.

I mean "product" here pretty broadly – anything that somebody is actually going to use. It could be a literal app or widget, or an event, or a set of community norms, or a philosophical idea. You might literally sell it or just use it yourself. But I think there is something helpful about the "what if we were coworkers, how would we resolve this?" frame.

The important thing is "there is a group of people collaborating on it" and "there is a stakeholder who cares about it getting built."

If you're building a website, and one person thinks it should present all information very densely, and another person thinks it should be sleek and minimalist... somehow you need to actually decide what design philosophy to pursue. Options include (but aren't necessarily limited to):

  • Anarchy
  • One person is in charge
  • Two or more people come to consensus
  • People have domain specializations in which one person is in charge (or gets veto power).

Anarchy

To start with, what's wrong with the "everyone just builds what seems right to them and you hope it works out" option? Sometimes you're building a bazaar, not a cathedral, and this is actually fine. But it often results in different teams building different tools at cross purposes, wasting motion.

One person in charge?

In a hierarchical company, maybe there's a boss. If the decision is about whether to paint a bikeshed red or blue, the boss can just say "red", and things move on.

This is less straightforward in the case of "minimalism" vs "high information density."

First, is the boss even doing any design work? What if the boss and the lead designer disagree about aesthetics? If the lead designer hates minimalism they're gonna have a bad time.

Maybe the boss trusts the lead designer enough to defer to them on aesthetics. Now the lead designer is the decision maker. This is an improvement, but it just punts the problem down one level. If the lead designer is Just In Charge, a few things can still go wrong:

Other workers don't actually understand minimalism

"Minimalist websites" and "information dense websites" are designed very differently. This filters into lots of small design decisions. Sometimes you can solve this with a comprehensive style guide. But those are a lot of work to create. And if you're a small startup (or a small team within a larger company), you may not have have the resources for that. It'd be nice if your employees just actually understood minimalism so they could build good minimalist components.

The lead designer is wrong

Sometimes the lead designer's aesthetic isn't locally optimal, and this actually needs to be pointed out. If lead-designer Alice says "we're building a minimalist website", it might be important for another engineer or designer to say "Alice, you're making weird tradeoffs for minimalism that are harming the user experience."

Alice might think "Nah, you're wrong about those tradeoffs. Minimalism is great and history will bear me out on this." But Alice might also respect Bob's opinion enough to want to come to some kind of principled resolution. If Bob's been right about similar things before, what should Alice and Bob do, if Alice wants to find out she's wrong – if and only if she's actually wrong, and that her minimalist aesthetic is harming the user experience.

The lead designer is right, but other major stakeholders think she's wrong

Alternately, maybe Bob thinks Alice is making bad design calls, but Alice is actually just making the right calls. Bob has rare preferences that don't overlap much with the average user's, and they shouldn't necessitate a major design overhaul.

Initially, this will look the same to both parties as the previous option.

If Alice has listened to Bob's complaints a bunch, and Alice generally respects Bob but thinks he's wrong here, at some point she needs to say "Look Bob, we just need to actually build the damn product now, we can't rehash the minimalism argument every time we build a new widget."

I think it's useful for Bob to gain the skill of saying "Okay, fine," letting go of his frustration, and embracing the design paradigm.

But that's a tough skill. And meanwhile, Bob is probably going to spend a fair amount of time and energy being annoyed about having to build a product he's less excited about. And sometimes, Bob's work is less efficient because he doesn't understand minimalism and keeps building site-components subtly incompatible with it.

What if there were a process by which either Alice or Bob would update – a process that both of them considered fair?

You might just call that process "regular debate." But the problem is that regular debate often just doesn't work. Alice says "We need X, because Y." Bob says "No, we need A, because B." And somehow they both repeat those points over and over without ever changing each other's mind.

This wastes loads of time that could have been better spent building new site features.

Even if Alice is in charge and gets final say, it's still suboptimal for Bob to have lower morale and keep making subtly wrong widgets.

And even if Bob understands that Alice is in charge, it might still be suboptimal for Bob to feel like Alice never really understood exactly what Bob's concerns were.

What if there's no boss?

Maybe your "company" is just two friends in a basement doing a project together, and there isn't really a boss. In this case, the problem is much sharper – somehow you need to actually make a call.

You might solve this by deciding to appoint a decision-maker – changing the situation from a "no boss" problem to a "boss" problem. But if you're just two friends making a game together in your spare time, for fun, this might kinda suck. (If the whole point was to make it together as friends, a hierarchical system may be fundamentally un-fun and defeat the point.)

You might be doing a more serious project, where you agree that it's important to have clear coordination protocols and hierarchy. But it nonetheless feels premature to commit to "Alice is always in charge of design decisions." Especially if Bob and Alice both have reasonable design skills. And especially if it's early on in the project and they haven't yet decided what their product's design philosophy should be.

In that case, you can start with straightforward debate, or making a pros/cons list, or exploring the space a bit and hoping you come to agreement. But if you're not coming to agreement... well, you need to do something.

If "regular debate" is working for you, cool.

If "just talking about the problem" is working, obviously you don't have an issue. Sometimes the boss actually just says "we're doing it this way" and it doesn't require any extensive model sharing.

If you've never run into the problem of intractable-disagreement while collaborating on something important, this blogpost is not for you. (But, maybe keep it in the back of your mind in case you do run into such an issue)

But over about 1.5 years working on the LessWrong team, I've run into numerous deep disagreements, and my impression is that such disagreements are common – especially in domains where you're solving a novel problem. We've literally argued a bunch about minimalism, which isn't an especially unusual design disagreement. We've also had much weirder disagreements about integrity and intellectual progress and AI timelines and more.

We've resolved many (although not all) of those disagreements. In many cases, doublecrux has been helpful as a framework.

What's Doublecrux again?

If you've made it this far, presumably it seems useful to have some kind of process-for-consensus that works better than whatever you and your colleagues were doing by default.

Desiderata that I personally have for such a process:

  • Both parties can agree that it's worth doing
  • It should save more time than it costs (or produce value commensurate with the time you put in)
  • It works even when both parties have different frames or values
  • If necessary, it untangles confused questions, and replaces them with better ones
  • If necessary, it untangles confused goals, and replaces them with better ones
  • If people are disagreeing because of aesthetic differences like "what is beautiful/good/obviously-right", it provides a framework wherein people can actually change their mind about "what is beautiful and good and right."
  • Ultimately, it lets you "get back to work", and actually build the damn product, confident that you are going about it the right way.

[Many of these goals were not assumptions I started with. They're listed here because I kept running into failures relating to each one. Over the past 2 years I've had some success with each of those points]

Importantly, such a process doesn't necessarily need to answer the original question you asked. In the context of building a product, what's important is that you figure out a model of the world that you both agree on, which informs what actions to take.

Doublecrux is a framework that I've found helpful for the above concerns. But I think I'd consider it a win for this essay if I've at least clarified why it's desirable to have some such system. I share Duncan's belief that it's more promising to repair or improve doublecrux than to start from scratch. But if you'd rather start from scratch, that's cool.

Components of Doublecrux – Cognitive Motions vs Attitudes

There are two core concepts behind the doublecrux framework:

  • A set of cognitive motions:
    • Looking for the cruxes of your beliefs, and asking what empirical observations would change your mind about them. (Recursing until you find a crux you and your partner both share – the "doublecrux". A toy code sketch of this recursion follows the list below.)
  • A set of attitudes
    • Epistemic humility
      • "maybe I'm the wrong one"
    • Good faith
      • "I trust my partner to be cooperating with me"
    • Belief that objective reality is real
      • "there's an actual right answer here, and it's better for each of us if we've both found it"
    • Earnest curiosity
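
To make the "recurse until you find a shared crux" motion concrete, here's a toy sketch in code. Everything in it – the `Crux` structure, the function names, the idea that stances could be written down this cleanly – is a hypothetical illustration of the shape of the search, not a real tool; actual doublecrux is a conversation.

```python
from dataclasses import dataclass, field

@dataclass
class Crux:
    claim: str
    believed_true: bool  # which way this person currently leans on the claim
    # "if I changed my mind about these, I'd change my mind about the claim":
    sub_cruxes: list["Crux"] = field(default_factory=list)

def stances(crux: Crux) -> dict[str, bool]:
    """Recurse through the crux tree, collecting every claim this
    position rests on, tagged with the direction this person believes."""
    result = {crux.claim: crux.believed_true}
    for sub in crux.sub_cruxes:
        result.update(stances(sub))
    return result

def find_double_cruxes(mine: Crux, yours: Crux) -> set[str]:
    """A doublecrux: a claim both positions hinge on, which the two
    parties believe opposite things about -- the natural place to go
    looking for an empirical test."""
    my_stances, your_stances = stances(mine), stances(yours)
    return {claim for claim, lean in my_stances.items()
            if claim in your_stances and your_stances[claim] != lean}

# e.g. Alice and Bob's minimalism dispute, bottoming out in a shared crux:
alice = Crux("go minimalist", True,
             [Crux("dense pages overwhelm most users", True)])
bob = Crux("go minimalist", False,
           [Crux("dense pages overwhelm most users", False)])
print(find_double_cruxes(alice, bob))
# {'go minimalist', 'dense pages overwhelm most users'} (set order may vary)
```

In conversation, the interesting output is the deepest shared claim: "dense pages overwhelm most users" is something you could actually go test with users, in a way that "go minimalist" isn't.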

Of those, I think the set of attitudes is more important than the cognitive motions. If the "search for cruxes and empirical tests" thing isn't working, but you have the four attitudes, you can probably find other ways to make progress. Meanwhile, if you don't each have those four attitudes, you don't have the foundations necessary to doublecrux.

Using language for truthseeking, not politics

But I think the cognitive motions are helpful, for this reason: much of human language is, by default, politics rather than truthseeking. "Regular debate" often reinforces the use of language-as-politics, which activates brain modules that are optimizing to win – which involves strategic blindness. (I mean something a bit nuanced by "politics" here, beyond the scope of this post. But basically: optimizing beliefs and words for how you fit into the social landscape, rather than for what corresponds to objective reality.)

The "search for empirical tests is and cruxes-of-beliefs" motion is designed to keep each participant's brain in a "language-as-truthseeking" mode. If you're asking yourself "why would I change my mind?", it's more natural to be honest to yourself and your partner than if you're asking "how can I change their mind?"

Meanwhile, the focus on mutual, opposing cruxes keeps things fruitful. Disagreement is more interesting and useful than agreement – it provides an opportunity to actually learn. If people are doing language-as-politics, then disagreement is a red flag that you are on opposing sides and might be threatening each other (which might either prompt you to fight, or prompt you to "agree to disagree", preserving the social fabric by sweeping the problem under the rug).

But if you can both trust that everyone's truthseeking, then you can drill directly into disagreements without worrying about that, optimizing for learning, and then for building a shared model that lets you actually make progress on your product.

Trigger Action Plans

Knowing this is all well and good, but what might this translate into in terms of actions?

If you happen to have a live disagreement right now, maybe you can try doublecrux. But if not, what circumstances should prompt you to reach for it?

I've found the "Trigger Action Plan" framework useful for this sort of thing, as a basic rationality building-block skill. If you notice an unhelpful conversational pattern, you can build an association where you take some particular action that seems useful in that circumstance. (Sometimes, the generic trigger-action of "notice something unhelpful is happening ----> stop and think" is good enough)

In this case, a trigger-action I've found useful is:

TRIGGER: Notice that we've been arguing for a while, and someone has just repeated the same argument they made a little while ago (for the second, or especially third, time).

ACTION: Say something like: "Hey, I notice that we've been repeating ourselves a bit. I feel like this conversation is kinda going in circles..." followed by either "Would you be up for trying to formally doublecrux about this?" or following Duncan's vaguer suggestions about how to unilaterally improve a conversation (depending on how much shared context you and your partner have).
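
As a toy illustration of the trigger-action shape (not the noticing skill itself – that part happens in your head), here's what the pattern might look like written down as code. The repetition heuristic, the threshold, and the sample conversation are all made-up stand-ins:

```python
from collections import Counter

def trigger(points_made: list[str], threshold: int = 2) -> bool:
    """Fire when some argument has been made more than `threshold` times."""
    counts = Counter(p.strip().lower() for p in points_made)
    return any(n > threshold for n in counts.values())

ACTION = ("Hey, I notice that we've been repeating ourselves a bit -- "
          "would you be up for trying to formally doublecrux about this?")

conversation = ["we need X, because Y", "no, we need A, because B",
                "we need X, because Y", "we need X, because Y"]
if trigger(conversation):
    print(ACTION)
```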

Summary

  • Intractable disagreements don't always matter. But if you're trying to build something together, and disagreeing substantially about how to go about it, you will need some way to resolve that disagreement.
  • Hierarchy can obviate the need for resolution if the disagreement is simple, and if everyone agrees to respect the boss's decision.
  • If the disagreement has persisted awhile and it's still wasting motion, at the very least it's probably useful to do something differently. In particular, if you've been repeating the same arguments at each other, that's a sign the conversation is going in circles.
  • Doublecrux is a particular framework I've found helpful for resolving intractable disagreements (when they are important enough to invest serious energy and time into). It focuses the conversation into "truthseeking" mode, and in particular strives to avoid "political mode."
Comments

I think this is missing one of the most important benefits of things like double crux: the potential for strong updates outside the domain of the initial disagreement, for both parties. See also Benito's A Sketch of Good Communication.

Thanks – yeah, this is not at all meant to be comprehensive, and some future posts will delve into more details. But I hadn't had that particular connection in mind, and I think A Sketch of Good Communication is a particularly crisp take on it.

"Double crux is for building products" is true mostly because of the more general fact that epistemic rationality is for shared production relationships.

A few points. I am currently in a situation where the company I work for has been wasting millions, slipping schedule, and increasing risk due to picking the wrong environment for the project (Android instead of Linux. Android is necessary for apps, but terrible beyond belief for IoT, where one does not need the Play Store.) The owner of the company drank the Android Kool-Aid, and the project manager refuses to tell him what this marketing gimmick ended up costing. It would be impolitic for me to challenge the two of them, and it's not my money that is wasted. We have an occasional "this would be so much easier on Linux" moment, but it goes nowhere, because the decision has been made and any change in course, even if projected to save money, would be perceived as risky and would expose someone's incompetence. So we are stuck at the "agree to disagree" stage, and the project manager makes the call, without any interest in discussing the merits, and gets angry at any mention of an alternative.

Re your set of attitudes. I find that one does not need to believe in anything like "objective reality is real" to use the technique. So, let me modify your list a bit

  • Epistemic humility: "maybe I'm the wrong one" -> "maybe my approach is not the optimal one"
  • Good faith: "I trust my partner to be cooperating with me" (unchanged)
  • Belief that objective reality is real -> belief that better approaches are possible: "there's an actual right answer here, and it's better for each of us if we've both found it" -> "there is a chance of a better answer, where 'better' can be agreed on by all parties"
  • Earnest curiosity (unchanged)

Agreed, and I'd be more specific with the modifications:

"maybe I'm the wrong one" -> "Maybe my approach is not the optimal one" -> Maybe there are dimensions of optimization (like solution search costs or budget justification) that I'm weighting differently from my boss.

"I trust my partner to be cooperating with me" -> "I trust my partner (and am willing myself) to spend a bit of effort in finding the causes of disagreement, not just arguing the result"

And it goes both directions - be honest with yourself about what dimensions you're weighting more heavily than others for the decision, and what optimization outcomes might be different for you than for your customers and boss. A clear discussion about the various impacts and their relative importance can be very effective (true in some companies/teams, not in others. In some places, you have to either trust that the higher-ups are having these discussions, or convince yourself that the decisions aren't intolerable (or seek work elsewhere)).

On the object level, I write software that supports lots of IoT devices, some using Linux, some Android, some FreeRTOS, some Windows-ish, and a whole lot of "other" – microcontrollers with no real OS at all, just a baked-together event loop in firmware built very specifically for the device. There are very good reasons to choose any of them, depending on actual needs, and it's simply incorrect to say that Android is terrible for IoT. Very specifically, if you want a decent built-in update mechanism, want to support a lot of off-the-shelf i/o and touchscreens, and/or need some kinds of local connectivity (Bluetooth audio, for instance), Android's a very solid choice over Linux.



Thanks, all good points. Wish we all cared to apply modifications like that.

I also agree with the off-the-shelf support advantages of Android, though the update mechanism outside of the Play Store seems to be nothing special. As for weighing the dimensions differently, there is definitely a significant difference: he puts a premium on not rocking the boat, while mine is on delivering simple, maintainable, low-risk solutions. In general, simplicity is greatly undervalued. You probably can relate.

So, let me modify your list a bit

This sounds plausible (although I'm curious: is this something that you've seen succeed, or does it just seem like it should work?)

Have you tried anything like "unilaterally making the conversation better" with your boss(es), or does it just seem too entrenched?

1. I'm just removing an unnecessary assumption, to avoid the discussion about what it means to be right or wrong, and whether there is a single right answer.

2. I don't have the clout to change the boss's mind. Making suboptimal decisions based on implicit unjustified assumptions and incomplete information, and then getting angry when challenged, is something most humans do at some point.

I find myself wondering about disagreements (or subcomponents of disagreements) where appealing to objective reality may not be possible.

It seems like this is a special case of a broader type of process, fitting into the more general category of collaborative decision-making. (Here I'm thinking of the 5 approaches to conflict: compete, collaborate, avoid, accommodate, and compromise).

In the explicit product-as-widget case, there may always be an appeal to some objectively frameable question: what will make us more money? But even this can ignite debate: which is more important, short-term revenue or long-term revenue? I can imagine two people (perhaps one very young, and one very old) in dispute over a product design where they realize the root of the disagreement is this different personal timeline.

This example may be (or seem) intractable, but it's a toy example to illustrate the possibility that disagreements can arise over matters which are not purely objective. In such cases, I would imagine that doublecrux would pair extremely well with other established methods for collaborative problem-solving (e.g. interest-based negotiation). I even suspect that this method could enhance the resolution of strictly value-based disagreements, since values can be converted back into objectively measurable outcomes and thereby become the subject of doublecrux inquiry.

I think the method would be basically the same, replacing "why do you believe X" with "what is important to you about X" in the process of inquiry.

This is interesting because mediators (who are essentially facilitating interest-based negotiation) are generally trained not to seek factual truth; but that's usually facts about the past whereas doublecrux deals with facts about the future.

I think the method would be basically the same, replacing "why do you believe X" with "what is important to you about X" in the process of inquiry.

"Why do you believe X?" -> "Why do you want X?"

doublecrux deals with facts about the future.

That's a useful distinction, thanks for pointing it out.

But even this can ignite debate: which is more important, short-term revenue or long-term revenue? I can imagine two people (perhaps one very young, and one very old) in dispute over a product design where they realize the root of the disagreement is this different personal timeline.

Yeah, there are plenty of cases where people actually want different things. I think I agree that some kind of hybrid technique involving negotiation and doublecrux (among other things) might help.

Random exploration, don't really have a point yet:

Another case might be two people arguing over how to design a widget, where Carl wants to build a widget using the special Widget Design Technique that he invented. Damien wants to build the widget using Some Other Technique. And maybe it turns out Carl's crux is that if they use Carl's Special Widget Design Technique, Carl will look better and get more promotions.

I think resolving that sort of situation depends on other background elements that Double Crux won't directly help with.

If you're the CEO of a small organization, maybe you can manage to hire people who buy into the company's mission so thoroughly they won't try to coopt Widget Design processes for their personal gain. Or, you might also somehow construct incentives to keep skin-in-the-game, such that it's more in Carl's interest to have the company do well than to get to look good using his Special Widget Design Technique. Ideally, he's incentivized to actually have good epistemics about his technique, and see clearly whether it's better than Damien's Generic Technique (or Damien's own special technique).

This is all pretty hard though (especially as the company grows). And there's a bunch of stuff outside your control as CEO, because the outside world might still reward Carl more if he can tell a compelling story about how his special technique saved the day.

Perhaps it is possible in practice/process to disentangle value alignment issues from factual disagreements. Double-crux seems optimal at reaching consensus on factual truths (e.g., which widget will have a lower error rate?) and would at least *uncover* Carl's crux, if everybody participates in good faith, and therefore make it possible to nonetheless discover the factual truth. Then maybe punt the non-objective argument to a different process like incentive alignment as you discuss.

There's a pet peeve I have about how people colloquially use the phrase "doublecrux", that might be worth a full post someday but commenting here for now.

A full doublecrux necessarily begins with a lot of model sharing. If you've never considered why you might be wrong about a thing, there may be a lot of ground to cover as your colleague shares a deep, multifaceted model that looks at the world quite differently from you. Only afterwards do you have much chance of passing each other's ITT.

So, many doublecrux conversations start with "people just explaining what they think."

And, often, "explaining what you think" is enough for each person to go "oh, I see where you're coming from and what I was missing", and the conversation dissolves.

Which is fine and good! But, that means that >50% of what happens when people say "let's doublecrux" ends up just looking like... talking.

This is especially confusing if you're watching a doublecrux (without having doublecruxed before) and trying to grok what the rules or goals are.

I think, to avoid term dilution, it'd be best if doublecrux was reserved specifically for the part of the discussion where each person switches to "passing each other's Ideological Turing Test" and "taking personal responsibility for identifying why/whether they'd change their own mind". And then just call the first part of the conversation "model sharing."

i.e. I'd prefer the colloquial phrase to be "wanna model share, and maybe doublecrux?" over "wanna doublecrux about that?"

A useful technique in this (whether formally double-cruxing or just trying to get agreement on big group decisions) is to narrow the scope of the disagreement, so the discussion stays tied to concrete outcomes. Don't try to resolve whether minimal presentation or high information density is better as a paradigm in general. Do try to resolve, for the anticipated common (and uncommon but important) uses of our product, what range of cognitive expectations we should cater to, and how we can meet the needs of the entire (or at least the demand-weighted bulk of the) audience.