Epistemic Status: Thinking out loud, not necessarily endorsed, more of a brainstorm and hopefully discussion-prompt.

Double Crux has been making the rounds lately (mostly on Facebook, but I hope for this to change). It seems like the technique hasn't taken root as well as it should. What's up with that?

(If you aren't yet familiar with Double Crux I recommend checking out Duncan's post on it in full. There's a lot of nuance that might be missed with a simple description.)

Observations So Far

  • Double Crux hasn't percolated beyond circles directly adjacent to CFAR (it seems to be learned mostly by word of mouth). This might be evidence that it's too confusing or nuanced a concept to teach without word of mouth and lots of examples. It might also be evidence that we have not yet taught it very well.
  • "Double Crux" seems to refer to two things: the specific action of "finding the crux(es) you both agree the debate hinges on" and "the overall pattern of behavior surrounding using Official Doublecrux Technique". (I'll be using the phrase "productive disagreement" to refer to the second, broader usage)

Double Crux seems hard to practice, for a few reasons.

Filtering Effects

  • In local meetups where rationality-folk attempt to practice productive disagreement on purpose, they often have trouble finding things to disagree about. Instead, they:
    • are already filtered to have similar beliefs,
    • quickly realize their beliefs shouldn't be held that strongly (e.g. they disagree on Open Borders, but as soon as they start talking they admit that neither of them really has that strong an opinion), or
    • have wildly different intuitions about deep moral sentiments that are hard to make headway on in a reasonable amount of time - often untethered to anything empirical. (e.g. what's more important? Preventing suffering? Material freedom? Accomplishing interesting things?)

Insufficient Shared Trust

  • Meanwhile, in many online spaces, people disagree all the time. And even if both parties are nominally rationalists, they have an (arguably justified) distrust of people on the internet who don't seem to be arguing in good faith. So there isn't enough foundation to do a productive disagreement at all.
  • One failure mode of Double Crux is when people disagree on what frame to even be using to evaluate truth, in which case the debate recurses all the way to the level of basic epistemology. It often doesn't seem to be worth the effort to resolve that.
  • Perhaps most frustratingly: it seems to me that there are many longstanding disagreements between people who should totally be able to communicate clearly, update rationally, and make useful progress together, and those disagreements don't go away - people just eventually start ignoring each other or leave the dispute unresolved. (An example I feel safe bringing up publicly is the argument between Hanson and Yudkowsky, although this may be a case of the 'what frame are we even using' issue above.)

That last point is one of the biggest motivators of this post. If the people I most respect can't productively disagree in a way that leads to clear progress, recognizable from both sides, then what is the rationality community even doing? (Whether you consider the primary goal to be "raising the sanity waterline" or "building a small intellectual community that can solve particular hard problems", this bodes poorly.)

Possible Pre-Requisites for Progress

There are a large number of sub-skills you need in order to disagree productively. To have public norms surrounding disagreement, you not only need individuals to have those skills - each person also needs to trust that the others have them.

Here's a rough list of those skills. (Note: this is long, and it's less important that you read the whole list than that you notice the list is long - which is part of why Double Cruxing is hard.)

  • Background beliefs (listed in Duncan's original post)
    • Epistemic humility ("I could be the wrong person here")
    • Good Faith ("I trust the other person to be believing things that make sense to them, which I'd have ended up believing if I were exposed to the same stimuli, and that they are genuinely trying to find the truth")
    • Confidence in the existence of objective truth
    • Curiosity / Desire to uncover truth
  • Building-Block and Meta Skills (necessary, or at least very helpful, for learning everything else)
  • Notice you are in a failure mode, and step out. Examples:
    • You are fighting to make sure a side/argument wins
    • You are fighting to make another side/argument lose (potentially jumping on something that seems allied to something/someone you consider bad/dangerous)
    • You are incentivized to believe something, or not to notice something, because of social or financial rewards
    • You're incentivized not to notice something or think it's important because it'd be physically inconvenient/annoying
    • You are offended/angered/defensive/agitated
    • You're afraid you'll lose something important if you lose a belief (possibly 'bucket errors')
    • You're rounding a person's statement off to the nearest stereotype instead of trying to actually understand and respond to what they're saying
    • You're arguing about definitions of words instead of ideas
    • Notice "freudian slip" ish things that hint that you're thinking about something in an unhelpful way. (for example, while writing this, I typed out "your opponent" to refer to the person you're Double Cruxing with, which is a holdover from treating it like an adversarial debate)

(The "Step Out" part can be pretty hard and would be a long series of blogposts, but hopefully this at least gets across the ideas to shoot for)

  • Social Skills (e.g. not feeding into negative spirals, noticing what emotional state or patterns other people are in [*without* accidentally rounding them off to a stereotype])
    • Ability to tactfully disagree in a way that arouses curiosity rather than defensiveness
    • Leaving your colleague a line of retreat (i.e. not making them lose face if they change their mind)
    • Socially reward people who change their mind (in general, frequently, so that your colleague trusts that you'll do so for them)
    • Ability to listen (in a way that makes someone feel listened to) so they feel like they got to actually talk, which makes them inclined to listen as well
    • Ability to notice if someone else seems to be in one of the above failure modes (and then, ability to point it out gently)
    • Cultivate empathy and curiosity about other people so the other social skills come more naturally, and so that even if you don't expect them to be right, you see value in at least understanding their reasoning (fleshing out your model of how other people might think)
    • Ability to communicate in (and to listen to) a variety of styles of conversation, "code switching", learning another person's jargon or explaining yours without getting frustrated
    • Habit of asking clarifying questions that help your partner find the crux of their beliefs.
  • Actually Thinking About Things
    • Understanding when and how to apply math, statistics, etc
    • Practice thinking causally
    • Practice various creativity related things that help you brainstorm ideas, notice implications of things, etc
    • Operationalize vague beliefs into concrete predictions
  • Actually Changing Your Mind
    • Notice when you are confused or surprised and treat this as a red flag that something about your models is wrong (either you have the wrong model or no model)
    • Ability to identify what the actual cruxes of your beliefs are.
    • Ability to track small bits of evidence as they accumulate. If enough bits of evidence have accumulated that you should at least be taking an idea *seriously* (even if not changing your mind yet), go through the motions of thinking through what the implications WOULD be, to help future updates happen more easily. (A small worked example of counting bits appears just after this list.)
    • If enough evidence has accumulated that you should change your mind about a thing... like, actually do that. See the list of failure modes above that may prevent this. (That said, if you have a vague nagging sense that something isn't right even if you can't articulate it, try to focus on that and flesh it out rather than trying to steamroll over it)
    • Explore Implications: When you change your mind on a thing, don't just acknowledge it - actually think about what other concepts in your worldview should change. Do this:
      • because it *should* have other implications, and it's useful to know what they are...
      • because it'll help you actually retain the update (instead of letting it slide away when it becomes socially/politically/emotionally/physically inconvenient to believe it, or just forgetting)
    • If you notice your emotions are not in line with what you now believe the truth to be (on a System 2 level), figure out why that is.
  • Noticing Disagreement and Confusion, and then putting in the work to resolve it
    • If you have all the above skills, and your partner does too, and you both trust that this is the case, you can still fail to make progress if you don't actually follow up and schedule the time to talk through the issues thoroughly. For deep disagreements this can take years. It may or may not be worth it. But if there are longstanding disagreements that continuously cause strife, it may be worthwhile.
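
For concreteness, here's a minimal worked example of what "counting bits of evidence" can look like, using the log-odds form of Bayes' rule (the numbers are invented purely for illustration):

$$\log_2 \frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \log_2 \frac{P(H)}{P(\neg H)} \;+\; \log_2 \frac{P(E \mid H)}{P(E \mid \neg H)}$$

Each independent observation that is twice as likely if the claim is true as if it's false adds one bit. Starting from prior odds of 1:8 against the claim (-3 bits), three such observations bring you to 1:1 odds - not yet "change your mind", but well past "take the idea seriously and think through what its implications would be".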

Building Towards Shared Norms

When smart, insightful people disagree, at least one of them is doing something wrong, and it seems like we should be trying harder to notice and resolve it.

A rough sketch of a norm I'd like to see:

Trigger: You've gotten into a heated dispute where at least one person feels the other is arguing in bad faith (especially in public/online settings)

Action: Before arguing further:

  • stop to figure out if the argument is even worth it
  • if so, each person runs through some basic checks (e.g. "am *I* being overly tribal/emotional?")
  • instead of continuing to argue in public, where there's a lot more pressure to not lose face or to steer social norms, they continue the discussion privately, in whatever is the most human-centric way that's practical.
  • they talk until they at least succeed at Step 1 of Double Crux (i.e. agreeing on where they disagree, and hopefully figuring out a possible empirical test for it). Ideally, they also come to as much agreement as they can.
  • Regardless of how far they get, they write up a short post (maybe just a paragraph, maybe longer depending on context) on what they did end up agreeing on or figuring out. (The post should be something they both sign off on)

Comments

I am genuinely confused by the discourse around double crux. Several people I respect seem to think of DC as a key intellectual method. Duncan (curriculum director at CFAR) explicitly considers DC to be a cornerstone CFAR technique. However, I have tried to use the technique and gotten nowhere.

Ray deserves credit for identifying and explicitly discussing some of the failure modes I ran into. In particular, DC-style discussion frequently seems to recurse down to very fundamental issues in philosophy and epistemology. Twice I have tried to discuss a concrete practical issue via DC and wound up discussing utility aggregation; in these cases we were both utilitarians, and we still couldn't get the method to work.

I have to second Said Achmiz's request for public examples of double crux going well. I once asked Ray for an example via email and received a link to Sarah Constantin's blogpost. This post is quite good and caused me to update towards the view that DC can be productive. But it doesn't contain the actual DC conversation, just a summary of the events and the lessons learned. I want to see an actual, for-real, fully detailed example of DC being used productively. I don't understand why no such examples are publicly available.

whpearson's comment touches on why examples are rarely publicized.

I watched Constantin's Double-Crux, and noticed that, no matter how much I identified with one participant or another, they were not representing me. They explored reciprocally and got to address concerns as they came up, while the audience gained information about them unilaterally. They could have changed each other's minds without ever coming near points I considered relevant. Double-crux mostly accrues benefits to individuals in subtle shifts, rather than to the public in discrete actionable updates.

A good double-crux can get intensely personal. Double-crux has an empirical advantage over scientific debate because it focuses on integrating real, existing perspectives instead of attempting to simultaneously construct and deconstruct a solid position. On the flip side, you have to deal with real perspectives, not coherent platforms. Double-crux only integrates those two perspectives, cracked and flawed as they are. It's not debate 2.0 and won't solve the same problems that arguments do.

Zvi:
I also watched Constantin's Double-Crux, and feel that most of my understanding of how the process works comes from that observation rather than any posts, including Duncan's. I also agree that her post of results, while excellent, does not do the job of explaining the process that was done by watching the process live. I wonder to what extent having an audience made the process unfold in a way that was easier to follow; on the surface both of them were ignoring us, and as hamnox says they were not trying to respond to our possible concerns, but I still got the instinctive sense that having people watching was making the process better, or at least easier to parse.

The topic of that Crux was especially good for a demonstration, in that it involved a lot of disagreements over models, facts and probabilities. The underlying disagreements did not boil down to questions of philosophy.

I do think that finding out that the difference does boil down to philosophy or epistemology is a success mode rather than a failure mode - you've successfully identified important disagreements you can talk about now or another time, and ruled out other causes, so you don't waste further time arguing over things that won't change minds. It's an unhint: you now think you're worse off than you thought you were before, but you're actually better off than you were. It also points to the suggestion that if you're frequently having important disagreements that boil down to philosophy, perhaps you should do more philosophy!
Conor Moreton:
Strong agreement that identifying important root disagreements is success rather than failure. If people on opposite sides of the abortion debate got themselves boiled all the way down to virtue ethics vs. utilitarianism or some other similar thing, this would be miles better than current demonization and misunderstanding.

For me, the world is divided into roughly two groups:

1. People who I do not trust enough to engage in this kind of honest intellectual debate with, because our interests and values are divergent and all human communication is political.

2. Close friends, with whom, when we disagree, I naturally and automatically engage in something like "double crux", because it's the obvious general shape of how to figure out what we should do.

The latter set currently contains about two (2) people.

This is why I don't do explicit double crux.

I think there are disincentives to doing it on the internet: even if you expect good faith from your partner, you don't expect good faith from all the other viewers.

Because if you change your mind for all the world to see, people arguing in bad faith can use it as evidence that you can be wrong, and so are likely to be wrong about other things you say as well. Examples of this in the real world are politicians accused of flip-flopping on issues.

You touch on this with: "instead of continuing to argue in public where there's a lot more pressure to not lose face..."
Zvi:
We need public examples, agreed. I think this undersells the difficulty here. In an argument or discourse worth having, a lot of the beliefs feeding in are going to be things that:

A) Are hard to state with precision, or that require the sum of a lot of different claims.
B) Involve beliefs or implications that risk getting a very negative reaction on the internet. There are a lot of important facts about the world you do not want to be seen endorsing in public, as much as we wish it were not so.
C) Involve claims that you do not have a social right to make.
D) Involve claims you can't provide well-articulated evidence for, or can't without running into some of A-C.

In my experience, advanced actually-changing-minds discussions are very hard to follow and very easy to misconstrue. They involve saying things that make sense in context to the particular person you're talking to, but that often on the surface make absurd, immoral or taboo claims.

I still think trying to do this is Worth It. I would start by trying to think harder about what topics we can do this on in public, that dodge these problems while still being non-trivial enough to be worthwhile.
Raemon:
There'd likely be a multi-step plan, which depends on whether your goals are more "raise the sanity waterline" or "build an intellectual hub that makes rapid progress on important issues."

Step 1: Practice it in the rationality community. Generally get people on board with the notion that if there's an actually-important disagreement, people try to resolve it. This would require a few public examples of productive disagreement and double crux (I agree that lack-of-those is a major issue). Then, when people have a private dispute, they come back saying "Hey, this is what we talked about, this is what we agreed on, and these are any meta-issues we stumbled upon that we think others should know about re: productive disagreement."

Step 2: Do that in semi-public places (facebook, other communities we're part of, etc), in a way that lets nearby intellectual communities get a sense of it. (Maybe if we can come up with clear examples and better introduction articles, it'd be good to share those.) The next time you get into a political argument with your uncle, rather than angrily yelling at each other, try to meet privately and talk to each other and share it with your family. (Note: I have some uncles for whom I think this would work and some for whom it definitely wouldn't.) (This will require effort and emotional labor that may be uncomfortable.)

Step 3: After getting some practice doing productive disagreement and/or Double Crux in particular with random people, do it in somewhat higher-stakes environments. Try it when a dispute comes up at your company. (This may only work if you have the sort of company that already at least nominally values truthseeking/transparency/etc, so that it feels like a natural extension of the company culture rather than a totally weird thing you're shoving into it.)

Step 4: A lot of things could go wrong in between steps 1-3, but afterwards basically make deliberate efforts to expand it into wider circles (I would not leap to "try to get

I feel like, as a contrarian, it is my duty to offer to double-crux with people so they get some practice. :P When I've moved up to the East Bay, interested people should feel free to message me.

Zvi:
I too volunteer to double-crux with people to let them and myself get practice, either in-person in NYC or online, and encourage others to also reply and add their names to such a list.

I find that I never double crux because it feels too much like a Big Serious Activity I have to Decide to engage in or something. The closest I've gotten is having a TAP (trigger-action plan) where, during disagreements, I try to periodically ask myself what my cruxes are and then state them.

My first thought on reading the post on double crux was that it's not clear to me how much value it adds beyond previous ideas about productive disagreement. If I'm already thinking about the inferential distance, trying to find a place where I agree with my conversational partner to start from, and then building from there, I'm not sure what extra value the idea of cruxes has, and I'm not sure in what circumstances double crux would work where the naive "find a shared starting point and go from there" doesn't.

Obviously a...
Raemon:
One important thing is that Double Crux is not about finding a "shared starting point" (or at least, that depends a lot on what you mean by shared-starting-point, and I expect a lot of people to get confused). You're looking for a shared concrete disagreement; a related-but-different pattern is more like "look for what things we agree on so we can remember we're on the same side," which doesn't necessarily build the skill of productively, thoroughly resolving disagreements. I do think most of the time, if things are going well, you'll have constructed your belief systems such that you've already clearly identified cruxes, or when debating you proactively share "this is probably my crux" in a way that makes the Double Crux a natural extension of the productive-disagreement environment. (i.e. when I'm arguing with CFAR-adjacent rationalists, we rarely say "let's have a double crux to resolve this", but we often construct the dialog in a way that has DC thoroughly embedded in its DNA, to the point where it's not necessary to do it explicitly.)
magfrump:
I'm imagining a hierarchy of beliefs like:

  • school uniforms are good (disagreement)
  • because school uniforms reduce embarrassment (empirical disagreement, i.e. the crux)
  • which is good because I care about the welfare of students (agreement)

If I find the point of agreement and try to work toward the point of disagreement, I expect to come across the crux. If my beliefs don't live in this hierarchy, I'm not sure how searching for a crux is supposed to help (aside from telling me to build the hierarchy, which you could tell me directly). If my beliefs already live in this hierarchy, I'm not sure how searching for a crux does more than exploring the hierarchy.

So I feel like "double crux" is sitting on top of another skill, like "build an inferential bridge," which is actually doing all the work. Especially if you are just using the "DNA" of the technique, it feels like everything being written about double crux is obscuring the fact that you're actually talking about building inferential bridges.

Maybe my takeaway should be something like "the double crux is the way building an inferential bridge leads to resolving disagreements," and then things like the background of "genuinely care about your conversational partner's model of the world" filter through a chain like:

  • double crux is useful
  • because double crux is about a disagreement I care about
  • its use comes from letting me connect the disagreement to explicit belief hierarchies
  • and explicit belief hierarchies are good for establishing mutual understanding

So I'm starting to see double crux as a motivational tool, or a concept living within hierarchies of belief, rather than a standalone conceptual tool. But I'm not sure how this relates to the presentation of it I'm seeing here.
Raemon:
Part of my point with the post is that I think Double Crux is just one step in a long list of steps (i.e. the giant list of background skills necessary for it to be useful). I think it's the next step in a chain where every step is necessary. My belief that Double Crux is getting overloaded to mean both "literally finding the double crux" and "the entire process of productive disagreement" may be a bit of a departure from its usual presentation. I think your current take on it, and mine, may be fairly similar, and that these are in fact different from how it's usually described.

Some Meta-Data:

This took me about 5 hours to write.

My primary goal was to get as many thoughts down as I could so I could see them all at once, so that I could then think more clearly about how they fit together and where to go from there.

A second goal was to do that mindfully, in a way that helped me better think about how to think. What was my brain actually doing as it wrote this post? What could I have done instead? I'll be writing another post soonish exploring that concept in more detail.

A third goal was to prompt a conversation to help flesh out...

whpearson:
Datapoint: I'm okay with brain dumps.
gjm:
Me too, especially when (1) their authors acknowledge them as such and (2) there isn't any sign of a general tendency for everyone to post brain dumps all the time when a modest expenditure of effort would let them get their thoughts better organized.
Raemon:
Later on I'll be wanting to post brain dumps all the time, but I think the rate at which this will come to pass will roughly coincide with "people move their off-the-cuff posts to personal pages and then opt into the personal pages of people whose off-the-cuff posts they like"

This makes me want to try it :)

Would anyone else be interested in a (probably recurring if successful) "Productive disagreement practice thread"? Having a wider audience than one meetup's attendance should make it easier to find good disagreements, while being within LW would hopefully secure good faith.

I imagine a format where participants make top-level comments listing beliefs they think likely to generate productive disagreement, then others can pick a belief to debate one-on-one.

I see the technique of double crux as being useful, although there will not always be a double crux. Sometimes people will have a whole host of reasons for being for something, and merely convincing them to change their view on any one of them won't be enough to shift their overall view, even if they are a perfectly rational agent. Similarly, I don't see any reason why two people's cruxes have to overlap. Yet in practice, this technique seems to work reasonably well. I haven't thought enough about this to understand it very well yet.

Raemon:
Yeah - in the lengthy Double Crux article it's acknowledged that there can be multiple cruxes. But it's important to find whatever the most important cruxes are, instead of getting distracted by lots of things that sound like good arguments but aren't actually the core issue.