Argument A is a crux for Alice on position P if finding out that A is false would cause Alice to substantially change their view on P. If Alice thinks P, Bob thinks not-P, Alice thinks A, Bob thinks not-A, and A and not-A are cruxes for Alice and Bob respectively, then A is a double-crux for Alice and Bob's disagreement about P.
Suppose that I think that the sky is yellow. Suppose that Alice thinks that the sky is blue. Suppose that I think that the sky is yellow because noted sky-color historian Carol wrote that the sky is yellow. Suppose that Alice thinks that the sky is blue because they think that Carol wrote that the sky is blue. "Carol wrote X" is thus a double-crux for our disagreement about the color of the sky. We can now just check what the sky-color historian Carol actually wrote and effortlessly resolve our disagreement.
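The conditions in the definition above can be written down as a toy predicate. This is a hypothetical formalization (the dict-of-beliefs representation and every name in it are my own illustration, not part of the double-crux technique):

```python
def is_double_crux(alice, bob, position, statement):
    """Toy check of the double-crux conditions (illustrative formalization).

    `alice` and `bob` each carry a belief map and an `is_crux` predicate
    saying whether changing their mind about `statement` would change
    their mind about `position`.
    """
    return (
        alice["believes"][position] and not bob["believes"][position]        # they disagree on P
        and alice["believes"][statement] and not bob["believes"][statement]  # they disagree on A
        and alice["is_crux"](statement, position)                            # A is a crux for Alice
        and bob["is_crux"](statement, position)                              # not-A is a crux for Bob
    )

# The sky-color example: both parties would change their mind about the
# sky's color if they changed their mind about what Carol wrote.
alice = {
    "believes": {"sky is yellow": True, "Carol wrote yellow": True},
    "is_crux": lambda a, p: True,
}
bob = {
    "believes": {"sky is yellow": False, "Carol wrote yellow": False},
    "is_crux": lambda a, p: True,
}
print(is_double_crux(alice, bob, "sky is yellow", "Carol wrote yellow"))  # True
```

Note that if either party's `is_crux` returned False (they disagree about Carol, but it wouldn't move them on the sky), the statement would be a mere disagreement, not a double-crux.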
Note that cruxes can be conjunctions. That is, I can think that the sky is yellow because sky-color historians Dave and Erin both said that the sky is yellow, where neither individual historian would have been sufficient.
Do double-cruxes even always exist? By Aumann's agreement theorem, the answer is sort of yes, conditional on a couple of assumptions (roughly, common priors and common knowledge of each other's posteriors). In practice, CFAR instructors have told me that they have always found a double-crux when they went looking for one, although it sometimes took upwards of 5 hours.
I don't have a reliable way of finding double-cruxes, but I have found a few. Here are some strategies that I think help.
Epistemic status: both strategies are highly experimental.
The search for a double-crux between two parties is not an argument; it's a conversation aimed at both parties coming out with truer beliefs. However, it can feel a lot like an argument, resulting in parties trying to talk over each other, overstating their beliefs, etc.
One way to prevent this is to explicitly designate one party as the "proposer" and the other as the "listener." The role of the proposer is to try to find their own cruxes for the position and state them to the listener. The role of the listener is to ask clarifying questions and indicate whether they agree or disagree with the proposer's cruxes and whether those cruxes are also cruxes for them.
If a proposer crux is found that is also a listener crux, then you have succeeded.
If no such crux is found, then you presumably have a proposer crux that the listener disagrees with, but isn't a listener crux. If the crux is easy to check empirically (e.g., by looking something up), then it might be worth it just to check.
(If no such crux exists, then I think something has gone wrong. It seems like if the listener agrees with no cruxes but disagrees on the position, then the proposer hasn't listed all the cruxes.)
At this point, there are two options. The first option is to switch proposer/listener roles and try to find a double-crux that way. This is probably what you should do. The other extremely highly super experimental move is to try to recursively find a double-crux about the meta-position that the proposer crux isn't a listener crux.
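One pass of the proposer/listener process can be sketched as a loop over the proposer's cruxes. This is a toy model under my own assumptions (the dict representation and function name are illustrative, not a real procedure from the technique):

```python
def find_double_crux(proposer_cruxes, listener, position):
    """Toy sketch of one proposer/listener pass (names are illustrative).

    proposer_cruxes: statements the proposer considers cruxes for `position`.
    listener: dict with the listener's beliefs and their own crux set.
    Returns a double-crux if one turns up on this pass, else None,
    at which point you would swap roles and repeat.
    """
    for crux in proposer_cruxes:
        disagrees = not listener["believes"].get(crux, False)
        also_their_crux = crux in listener["cruxes"]
        if disagrees and also_their_crux:
            return crux  # a proposer crux the listener disputes and shares: double-crux
    return None

# Hypothetical run using the Carol example from earlier:
listener = {
    "believes": {"Carol wrote yellow": False},
    "cruxes": {"Carol wrote yellow"},
}
find_double_crux(["Carol wrote yellow"], listener, "sky is yellow")
# returns "Carol wrote yellow"
```

The `None` branch corresponds to the situation above: every proposer crux was either accepted by the listener or disputed without being a crux for them, so you switch roles (or recurse on the meta-position).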
Note that if the double-crux you find is not easily resolvable, you can recursively apply this process to find a series of (hopefully) simpler and simpler cruxes. This sometimes results in you realizing that your complex philosophical thesis really hinges on whether a certain statistic that can easily be looked up is greater than 12.
Oftentimes, the reason I hold a belief isn't some set of arguments that I find strong; it's that I have a model that implies that belief. The general notion of a crux can capture this, with the crux being "my model of the situation captures the relevant factors," but it might be worth handling this case separately.
This process is similar to Proposer/Listener, but instead of directly trying to find cruxes, the proposer unpacks the next level of their model. The listener finds parts of the model that they disagree with, checking whether changing that part to match their own model produces a different implication. If it does, then that part of the model is a crux for the listener. If it happens to also be a crux for the proposer, then a double-crux has been found.
The reason model elaboration is different is that I often have parts of my model that control the output without my realizing it. Since my model is so deeply ingrained in my thinking, I often assume that everyone's model contains the same basic moving parts. Explicitly choosing to explain your model tends to unearth these hidden assumptions more reliably than "find cruxes."
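The "swap a part and see if the implication changes" check is essentially a sensitivity analysis, and can be sketched as a toy function. Everything here (the assumptions-dict representation, `conclude`, the example model) is my own illustration:

```python
def model_cruxes(conclude, mine, theirs):
    """Toy sensitivity check (illustrative, not a real procedure).

    conclude: function from an assumptions dict to a conclusion.
    mine/theirs: each party's assumption values, keyed by part name.
    A part is a crux for me if swapping in the other party's version
    of just that part changes my conclusion.
    """
    baseline = conclude(mine)
    cruxes = []
    for part in mine:
        if part in theirs and theirs[part] != mine[part]:
            swapped = dict(mine, **{part: theirs[part]})  # replace one part only
            if conclude(swapped) != baseline:
                cruxes.append(part)
    return cruxes

# Hypothetical model: I conclude "the sky is yellow" iff a reliable
# historian said so.
conclude = lambda m: m["historian_said_yellow"] and m["historian_is_reliable"]
mine = {"historian_said_yellow": True, "historian_is_reliable": True}
theirs = {"historian_said_yellow": False, "historian_is_reliable": True}
model_cruxes(conclude, mine, theirs)  # ["historian_said_yellow"]
```

Parts where both models already agree (here, the historian's reliability) never show up, which matches the intuition that hidden shared assumptions only matter once someone's model actually differs there.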
Ideally, you would find someone who also knows what double-cruxing is, find a position you both disagree on, and find a double-crux. Do this if possible.
Otherwise, there are two TAPs (trigger-action plans) that are useful for applying the principle of double-cruxing to daily life:
- Crux checking
  - Trigger: someone makes an argument at you.
  - Action: check whether it's a crux for you.
  - This can save a lot of time: if someone is arguing with you, you don't really care whether they're right about things that aren't cruxes for you.
- Paraphrasing
  - Trigger: someone explains something to you.
  - Action: paraphrase it back at them.
  - This TAP makes it more likely that you understand what people are saying by making you pass a mini Ideological Turing Test.