Epistemic Status: Thinking out loud, not necessarily endorsed, more of a brainstorm and hopefully discussion-prompt.
Double Crux has been making the rounds lately (mostly on Facebook, but I hope for this to change). It seems like the technique hasn't taken root as well as it should have. What's up with that?
(If you aren't yet familiar with Double Crux I recommend checking out Duncan's post on it in full. There's a lot of nuance that might be missed with a simple description.)
Observations So Far
- Double Crux hasn't percolated beyond circles directly adjacent to CFAR (it seems to be learned mostly by word of mouth). This might be evidence that it's too confusing or nuanced a concept to teach without word of mouth and lots of examples. Or it might be evidence that we have not yet taught it very well.
- "Double Crux" seems to refer to two things: the specific action of "finding the crux(es) you both agree the debate hinges on" and "the overall pattern of behavior surrounding using Official Doublecrux Technique". (I'll be using the phrase "productive disagreement" to refer to the second, broader usage)
Double Crux seems hard to practice, for a few reasons.
Filtering Effects
- In local meetups where rationality-folk attempt to practice productive disagreement on purpose, they often have trouble finding things to disagree about. Instead they:
- are already filtered to have similar beliefs,
- quickly realize their beliefs shouldn't be that strong (e.g. they disagree on Open Borders, but as soon as they start talking they admit that neither of them really has that strong an opinion), or
- have wildly different intuitions about deep moral sentiments that are hard to make headway on in a reasonable amount of time, often untethered to anything empirical (e.g. what's more important? Preventing suffering? Material freedom? Accomplishing interesting things?)
Insufficient Shared Trust
- Meanwhile in many online spaces, people disagree all the time. And even if they're both nominally rationalists, they have an (arguably justified) distrust of people on the internet who don't seem to be arguing in good faith. So there isn't enough foundation to do a productive disagreement at all.
- One failure mode of Double Crux is when people disagree on what frame to even be using to evaluate truth, in which case the debate recurses all the way to the level of basic epistemology. It often doesn't seem to be worth the effort to resolve that.
- Perhaps most frustratingly: it seems to me that there are many longstanding disagreements between people who should totally be able to communicate clearly, update rationally, and make useful progress together, and those disagreements don't go away; people just eventually start ignoring each other or leave the dispute unresolved. (An example I feel safe bringing up publicly is the argument between Hanson and Yudkowsky, although this may be a case of the 'what frame are we even using' issue above.)
That last point is one of the biggest motivators of this post. If the people I most respect can't productively disagree in a way that leads to clear progress, recognizable from both sides, then what is the rationality community even doing? (Whether you consider the primary goal to be "raising the sanity waterline" or "building a small intellectual community that can solve particular hard problems", this bodes poorly.)
Possible Pre-Requisites for Progress
There are a large number of sub-skills you need in order to productively disagree. To have public norms surrounding disagreement, you not only need individuals to have those skills; they also need to trust that each other have those skills as well.
Here's a rough list of those skills. (Note: this is long. It's less important that you read every item than that you register how long the list is; the length itself is a big part of why Double Cruxing is hard.)
- Background beliefs (listed in Duncan's original post)
- Epistemic humility ("I could be the one who's wrong here")
- Good Faith ("I trust the other person to believe things that make sense to them, which I'd have ended up believing if I were exposed to the same stimuli, and to be genuinely trying to find the truth")
- Confidence in the existence of objective truth
- Curiosity / Desire to uncover truth
- Building-Block and Meta Skills
- (Necessary, or at least very helpful, for learning everything else)
- Ability to gain habits (see Trigger Action Plans, Reflex/Routines, Habits 101)
- Ability to introspect and notice your internal states (Focusing and Noticing can help)
- Ability to induce a mental state or reframe
- Habit of gaining habits
- Notice you are in a failure mode, and step out. Examples:
- You are fighting to make sure a side/argument wins
- You are fighting to make another side/argument lose (potentially jumping on something that seems allied to something/someone you consider bad/dangerous)
- You are incentivized to believe something, or not to notice something, because of social or financial rewards
- You're incentivized not to notice something or think it's important because it'd be physically inconvenient/annoying
- You are offended/angered/defensive/agitated
- You're afraid you'll lose something important if you lose a belief (possibly 'bucket errors')
- You're rounding a person's statement off to the nearest stereotype instead of trying to actually understand and respond to what they're saying
- You're arguing about definitions of words instead of ideas
- Notice "freudian slip" ish things that hint that you're thinking about something in an unhelpful way. (for example, while writing this, I typed out "your opponent" to refer to the person you're Double Cruxing with, which is a holdover from treating it like an adversarial debate)
(The "Step Out" part can be pretty hard and would be a long series of blogposts, but hopefully this at least gets across the ideas to shoot for)
- Social Skills (e.g. not feeding into negative spirals, noticing what emotional state or patterns other people are in [*without* accidentally rounding them off to a stereotype])
- Ability to tactfully disagree in a way that arouses curiosity rather than defensiveness
- Leaving your colleague a line of retreat (i.e. not making them lose face if they change their mind)
- Socially reward people who change their mind (in general, frequently, so that your colleague trusts that you'll do so for them)
- Ability to listen (in a way that makes someone feel listened to) so they feel like they got to actually talk, which makes them inclined to listen as well
- Ability to notice if someone else seems to be in one of the above failure modes (and then, ability to point it out gently)
- Cultivate empathy and curiosity about other people so the other social skills come more naturally, and so that even if you don't expect them to be right, you can still see value in understanding their reasoning (fleshing out your model of how other people might think)
- Ability to communicate in (and to listen to) a variety of styles of conversation, "code switching", learning another person's jargon or explaining yours without getting frustrated
- Habit of asking clarifying questions that help your partner find the crux of their beliefs.
- Actually Thinking About Things
- Understanding when and how to apply math, statistics, etc
- Practice thinking causally
- Practice various creativity related things that help you brainstorm ideas, notice implications of things, etc
- Operationalize vague beliefs into concrete predictions
- Actually Changing Your Mind
- Notice when you are confused or surprised and treat this as a red flag that something about your models is wrong (either you have the wrong model or no model)
- Ability to identify what the actual cruxes of your beliefs are.
- Ability to track the small bits of evidence that accumulate over time (see the worked example after this list). If enough bits of evidence have accumulated that you should at least be taking an idea *seriously* (even if not changing your mind yet), go through the motions of thinking through what the implications WOULD be, to help future updates happen more easily.
- If enough evidence has accumulated that you should change your mind about a thing... like, actually do that. See the list of failure modes above that may prevent this. (That said, if you have a vague nagging sense that something isn't right even if you can't articulate it, try to focus on that and flesh it out rather than trying to steamroll over it)
- Explore Implications: When you change your mind on a thing, don't just acknowledge it; actually think about what other concepts in your worldview should change. Do this:
- because it *should* have other implications, and it's useful to know what they are,
- because it'll help you actually retain the update (instead of letting it slide away when it becomes socially/politically/emotionally/physically inconvenient to believe it, or just forgetting)
- If you notice your emotions are not in line with what you now believe the truth to be (on a System 2 level), figure out why that is.
- Noticing Disagreement and Confusion, and then putting in the work to resolve it
- If you have all the above skills, and your partner does too, and you both trust that this is the case, you can still fail to make progress if you don't actually follow up, and schedule the time to talk through the issues thoroughly. For deep disagreement this can take years. It may or may not be worth it. But if there are longstanding disagreements that continuously cause strife, it may be worthwhile.
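To make the "track accumulating bits of evidence" item above concrete, here is the standard Bayesian bookkeeping behind that phrase (my gloss, not something from Duncan's post; the numbers below are made up for illustration). Each observation contributes the log of its likelihood ratio, and those contributions simply add up in log-odds:

$$\log_2 \frac{P(H \mid E_1, \ldots, E_n)}{P(\neg H \mid E_1, \ldots, E_n)} = \log_2 \frac{P(H)}{P(\neg H)} + \sum_{i=1}^{n} \log_2 \frac{P(E_i \mid H)}{P(E_i \mid \neg H)}$$

(assuming the observations are independent given H). So if you start at 1:8 odds against a claim (-3 bits) and then accumulate four independent observations, each twice as likely if the claim is true (+1 bit apiece), you're at 2:1 in favor: plenty to start taking the idea seriously and thinking through implications, even if not enough to be confident.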
Building Towards Shared Norms
When smart, insightful people disagree, at least one of them is doing something wrong, and it seems like we should be trying harder to notice and resolve it.
Here's a rough sketch of a norm I'd like to see:
Trigger: You've gotten into a heated dispute where at least one person feels the other is arguing in bad faith (especially in public/online settings)
Action: Before arguing further:
- stop to figure out if the argument is even worth it
- if so, each person runs through some basic checks (e.g. "am *I* being overly tribal/emotional?")
- instead of continuing to argue in public, where there's a lot more pressure to save face or to steer social norms, they continue the discussion privately, in the most human-centric medium that's practical.
- they talk until they at least succeed at Step 1 of Double Crux (i.e. agreeing on where they disagree, and hopefully figuring out a possible empirical test for it; a toy sketch of this step follows the list). Ideally, they also come to as much agreement as they can.
- Regardless of how far they get, they write up a short post (maybe just a paragraph, maybe longer depending on context) on what they did end up agreeing on or figuring out. (The post should be something they both sign off on)
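(As an aside: Step 1 has a mechanical core that's easy to state, even though the conversation itself is the hard part. Here's a minimal toy sketch in Python; the beliefs in the example are invented for illustration. Each person lists their cruxes for the disputed claim, i.e. statements that would change their mind about it, and a double crux is any statement that appears on both lists.)

```python
def find_double_cruxes(cruxes_a: set, cruxes_b: set) -> set:
    """A crux is a statement such that changing your mind about it would
    change your mind about the disputed claim. A double crux is a
    statement that is a crux for *both* people."""
    return cruxes_a & cruxes_b

# Hypothetical example: a disagreement over a proposed policy.
alice_cruxes = {"the policy suppresses local wages",
                "enforcement costs exceed the benefits"}
bob_cruxes = {"the policy suppresses local wages",
              "the policy mostly helps the worst-off"}

print(find_double_cruxes(alice_cruxes, bob_cruxes))
# -> {'the policy suppresses local wages'}; now look for an empirical test
```

Of course, the set-intersection framing understates the work: most of the skill list above is about getting honest, accurate crux lists out of two human beings in the first place.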
I am genuinely confused by the discourse around double crux. Several people I respect seem to think of DC as a key intellectual method. Duncan (curriculum director at CFAR) explicitly considers DC to be a cornerstone CFAR technique. However, I have tried to use the technique and gotten nowhere.
Ray deserves credit for identifying and explicitly discussing some of the failure modes I ran into. In particular, DC-style discussion frequently seems to recurse down to very fundamental issues in philosophy and epistemology. Twice I have tried to discuss a concrete practical issue via DC and wound up discussing utility aggregation; in both cases we were both utilitarians, and we still couldn't get the method to work.
I have to second Said Achmiz's request for public examples of double crux going well. I once asked Ray for an example via email and received a link to a blog post by Sarah Constantin. The post is quite good and caused me to update towards the view that DC can be productive. But it doesn't contain the actual DC conversation, just a summary of the events and the lessons learned. I want to see an actual, for-real, fully detailed example of DC being used productively. I don't understand why no such examples are publicly available.
whperson's comment touches on why examples are rarely publicized.
I watched Constantin's Double-Crux, and noticed that, no matter how much I identified with one participant or another, they were not representing me. They explored reciprocally and got to address concerns as they came up, while the audience gained information about them unilaterally. They could have changed each other's minds without ever coming near points I considered relevant. Double-crux mostly accrues benefits to individuals in subtle shifts, rather than to the public in discrete actionable updates.
A good double-crux can get intensely personal. Double-crux has an empirical advantage over scientific debate because it focuses on integrating real, existing perspectives instead of attempting to simultaneously construct and deconstruct a solid position. On the flip side, you have to deal with real perspectives, not coherent platforms. Double-crux only integrates those two perspectives, cracked and flawed as they are. It's not debate 2.0 and won't solve the same problems that arguments do.
For me, the world is divided into roughly two groups:
1. People who I do not trust enough to engage in this kind of honest intellectual debate with, because our interests and values are divergent and all human communication is political.
2. Close friends, who, when we disagree, I engage in something like "double crux" naturally and automatically, because it's the obvious general shape of how to figure out what we should do.
The latter set currently contains about two (2) people.
This is why I don't do explicit double crux.
I think there are disincentives to doing it on the internet: even if you expect good faith from your partner, you don't expect good faith from all the other viewers.
If you change your mind for all the world to see, people acting in bad faith can use it as evidence that you can be wrong, and so are likely to be wrong about other things you say as well. A real-world example is politicians being accused of flip-flopping on issues.
You touch on this in the post, with the point about public arguments creating extra pressure not to lose face.
I feel like, as a contrarian, it is my duty to offer to double-crux with people so they get some practice. :P When I've moved up to the East Bay, interested people should feel free to message me.
I find that I never double crux because it feels too much like a Big Serious Activity I have to Decide to engage in or something. The closest I've gotten is having a TAP where, during disagreements, I periodically ask myself what my cruxes are and then state them.
My first thought on reading the post on double crux was that it's not clear to me how much value it adds beyond previous ideas about productive disagreement. If I'm already thinking about inferential distance, trying to find a place where I agree with my conversational partner to start from, and then building from there, I'm not sure what extra value the idea of cruxes adds, or in what circumstances double crux would work where the naive "find a shared starting point and go from there" approach doesn't.
Some Meta-Data:
This took me about 5 hours to write.
My primary goal was to get as many thoughts down as I could so I could see them all at once, so that I could then think more clearly about how they fit together and where to go from there.
A second goal was to do that mindfully, in a way that helped me better think about how to think. What was my brain actually doing as it wrote this post? What could I have done instead? I'll be writing another post soonish exploring that concept in more detail.
A third goal was to prompt a conversation to help flesh out these ideas.
This makes me want to try it :)
Would anyone else be interested in a (probably recurring if successful) "Productive disagreement practice thread"? Having a wider audience than one meetup's attendance should make it easier to find good disagreements, while being within LW would hopefully secure good faith.
I imagine a format where participants make top-level comments listing beliefs they think likely to generate productive disagreement, then others can pick a belief to debate one-on-one.
I see the technique of double-crux as being useful, although there will not always be a double crux. Sometimes people will have a whole host of reasons for being for something, and merely convincing them to change their view on any one of them won't be enough to shift their overall view, even if they are a perfectly rational agent. Similarly, I don't see any reason why two people's cruxes have to overlap. Yet in practice, this technique seems to work reasonably well. I haven't thought about this enough to understand it very well yet.