Double crux is one of CFAR's newer concepts, and one that's forced a re-examination and refactoring of a lot of our curriculum (in the same way that the introduction of TAPs and Inner Simulator did previously). It rapidly became a part of our organizational social fabric, and is one of our highest-EV threads for outreach and dissemination, so it's long overdue for a public, formal explanation.
Note that while the core concept is fairly settled, the execution remains somewhat in flux, with notable experimentation coming from Julia Galef, Kenzi Amodei, Andrew Critch, Eli Tyre, Anna Salamon, myself, and others. Because of that, this post will be less of a cake and more of a folk recipe—this is long and meandering on purpose, because the priority is to transmit the generators of the thing over the thing itself. Accordingly, if you think you see stuff that's wrong or missing, you're probably onto something, and we'd appreciate having it added here as commentary.
To a first approximation, a human can be thought of as a black box that takes in data from its environment, and outputs beliefs and behaviors (that black box isn't really "opaque" given that we do have access to a lot of what's going on inside of it, but our understanding of our own cognition seems uncontroversially incomplete).
When two humans disagree—when their black boxes output different answers—there are often a handful of unproductive things that can occur.
The most obvious (and tiresome) is that they'll simply repeatedly bash those outputs together without making any progress (think most disagreements over sports or politics; two people just shouting "triangle!" and "circle!" louder and louder). On the second level, people can (and often do) take the difference in output as evidence that the other person's black box is broken (i.e. they're bad, dumb, crazy) or that the other person doesn't see the universe clearly (i.e. they're biased, oblivious, unobservant). On the third level, people will often agree to disagree, a move which preserves the social fabric at the cost of truth-seeking and actual progress.
Double crux in the ideal solves all of these problems, and in practice even fumbling and inexpert steps toward that ideal seem to produce a lot of marginal value, both in increasing understanding and in decreasing conflict-due-to-disagreement.
This post will occasionally delineate two versions of double crux: a strong version, in which both parties have a shared understanding of double crux and have explicitly agreed to work within that framework, and a weak version, in which only one party has access to the concept, and is attempting to improve the conversational dynamic unilaterally.
In either case, the following things seem to be required:
- Epistemic humility. The number one foundational backbone of rationality seems, to me, to be how readily one is able to think "It's possible that I might be the one who's wrong, here." Viewed another way, this is the ability to take one's beliefs as object, rather than being subject to them and unable to set them aside (and then try on some other belief and productively imagine "what would the world be like if this were true, instead of that?").
- Good faith. An assumption that people believe things for causal reasons; a recognition that having been exposed to the same set of stimuli would have caused one to hold approximately the same beliefs; a default stance of holding-with-skepticism what seems to be evidence that the other party is bad or wants the world to be bad (because as monkeys it's not hard for us to convince ourselves that we have such evidence when we really don't).1
- Confidence in the existence of objective truth. I was tempted to call this "objectivity," "empiricism," or "the Mulder principle," but in the end none of those quite fit. In essence: a conviction that for almost any well-defined question, there really truly is a clear-cut answer. That answer may be impractically or even impossibly difficult to find, such that we can't actually go looking for it and have to fall back on heuristics (e.g. how many grasshoppers are alive on Earth at this exact moment, is the color orange superior to the color green, why isn't there an audio book of Fight Club narrated by Edward Norton), but it nevertheless exists.
- Curiosity and/or a desire to uncover truth. Originally, I had this listed as truth-seeking alone, but my colleagues pointed out that one can move in the right direction simply by being curious about the other person and the contents of their map, without focusing directly on the territory.
At CFAR workshops, we hit on the first and second through specific lectures, the third through osmosis, and the fourth through osmosis and a lot of relational dynamics work that gets people curious and comfortable with one another. Other qualities (such as the ability to regulate and transcend one's emotions in the heat of the moment, or the ability to commit to a thought experiment and really wrestle with it) are also helpful, but not as critical as the above.
How to play
Let's say you have a belief, which we can label A (for instance, "middle school students should wear uniforms"), and that you're in disagreement with someone who believes some form of ¬A. Double cruxing with that person means that you're both in search of a second statement B, with the following properties:
- You and your partner both disagree about B as well (you think B, your partner thinks ¬B).
- The belief B is crucial for your belief in A; it is one of the cruxes of the argument. If it turned out that B was not true, that would be sufficient to make you think A was false, too.
- The belief ¬B is crucial for your partner's belief in ¬A, in a similar fashion.
In the example about school uniforms, B might be a statement like "uniforms help smooth out unhelpful class distinctions by making it harder for rich and poor students to judge one another through clothing," which your partner might sum up as "optimistic bullshit." Ideally, B is a statement that is somewhat closer to reality than A—it's more concrete, grounded, well-defined, discoverable, etc. It's less about principles and summed-up, induced conclusions, and more of a glimpse into the structure that led to those conclusions.
(It doesn't have to be concrete and discoverable, though—often after finding B it's productive to start over in search of a C, and then a D, and then an E, and so forth, until you end up with something you can research or run an experiment on).
At first glance, it might not be clear why simply finding B counts as victory—shouldn't you settle B, so that you can conclusively choose between A and ¬A? But it's important to recognize that arriving at B means you've already dissolved a significant chunk of your disagreement, in that you and your partner now share a belief about the causal nature of the universe.
If B, then A. Furthermore, if ¬B, then ¬A. You've both agreed that the states of B are crucial for the states of A, and in this way your continuing "agreement to disagree" isn't just "well, you take your truth and I'll take mine," but rather "okay, well, let's see what the evidence shows." Progress! And (more importantly) collaboration!
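In propositional terms, what the two of you have agreed on is that the crux structure "if B then A, and if not-B then not-A" is logically the same thing as "A if and only if B"—with the caveat (my gloss, not anything from the post above) that real cruxes are usually probabilistic rather than strict implications. A quick truth-table check confirms the equivalence:

```python
# Sketch (illustrative only): verify that the crux structure
# (B -> A) and (~B -> ~A) is equivalent to the biconditional A <-> B.
from itertools import product

def crux_form(a, b):
    # (B -> A) and (~B -> ~A), written with "p -> q" as "(not p) or q"
    return ((not b) or a) and (b or (not a))

def biconditional(a, b):
    return a == b

assert all(crux_form(a, b) == biconditional(a, b)
           for a, b in product([True, False], repeat=2))
print("crux structure == biconditional")
```

So finding a genuine double crux means the two of you now agree that A and B stand or fall together, which is exactly why settling B settles A.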
This is where CFAR's versions of the double crux unit are currently weakest—there's some form of magic in the search for cruxes that we haven't quite locked down. In general, the method is "search through your cruxes for ones that your partner is likely to disagree with, and then compare lists." For some people and some topics, clearly identifying your own cruxes is easy; for others, it very quickly starts to feel like one's position is fundamental/objective/un-break-downable. A few tips that seem to help:
- Increase noticing of subtle tastes, judgments, and "karma scores." Often, people suppress a lot of their opinions and judgments due to social mores and so forth. Generally loosening up one's inner censors can make it easier to notice why we think X, Y, or Z.
- Look forward rather than backward. In places where the question "why?" fails to produce meaningful answers, it's often more productive to try making predictions about the future. For example, I might not know why I think school uniforms are a good idea, but if I turn on my narrative engine and start describing the better world I think will result, I can often sort of feel my way toward the underlying causal models.
- Narrow the scope. A specific test case of "Steve should've said hello to us when he got off the elevator yesterday" is easier to wrestle with than "Steve should be more sociable." Similarly, it's often easier to answer questions like "How much of our next $10,000 should we spend on research, as opposed to advertising?" than to answer "Which is more important right now, research or advertising?"
- Do "Focusing" and other resonance checks. It's often useful to try on a perspective, hypothetically, and then pay attention to your intuition and bodily responses to refine your actual stance. For instance: (wildly asserts) "I bet if everyone wore uniforms there would be a fifty percent reduction in bullying." (pauses, listens to inner doubts) "Actually, scratch that—that doesn't seem true, now that I say it out loud, but there is something in the vein of reducing overt bullying, maybe?"
- Seek cruxes independently before anchoring on your partner's thoughts. This one is fairly straightforward. It's also worth noting that if you're attempting to find disagreements in the first place (e.g. in order to practice double cruxing with friends) this is an excellent way to start—give everyone the same ten or fifteen open-ended questions, and have everyone write down their own answers based on their own thinking, crystallizing opinions before opening the discussion.
Overall, it helps to keep the ideal of a perfect double crux in the front of your mind, while holding the realities of your actual conversation somewhat separate. We've found that, at any given moment, increasing the "double cruxiness" of a conversation tends to be useful, but worrying about how far from the ideal you are in absolute terms doesn't. It's all about doing what's useful and productive in the moment, and that often means making sane compromises—if one of you has clear cruxes and the other is floundering, it's fine to focus on one side. If neither of you can find a single crux, but instead each of you has something like eight co-cruxes of which any five are sufficient, just say so and then move forward in whatever way seems best.
(Variant: a "trio" double crux conversation in which, at any given moment, if you're the least-active participant, your job is to squint at your two partners and try to model what each of them is saying, and where/why/how they're talking past one another and failing to see each other's points. Once you have a rough "translation" to offer, do so—at that point, you'll likely become more central to the conversation and someone else will rotate out into the squinter/translator role.)
Ultimately, each move should be in service of reversing the usual antagonistic, warlike, "win at all costs" dynamic of most disagreements. Usually, we spend a significant chunk of our mental resources guessing at the shape of our opponent's belief structure, forming hypotheses about what things are crucial and lobbing arguments at them in the hopes of knocking the whole edifice over. Meanwhile, we're incentivized to obfuscate our own belief structure, so that our opponent's attacks will be ineffective.
(This is also terrible because it means that we often fail to even find the crux of the argument, and waste time in the weeds. If you've ever had the experience of awkwardly fidgeting while someone spends ten minutes assembling a conclusive proof of some tangential sub-point that never even had the potential of changing your mind, then you know the value of someone being willing to say "Nope, this isn't going to be relevant for me; try speaking to that instead.")
If we can move the debate to a place where, instead of fighting over the truth, we're collaborating on a search for understanding, then we can recoup a lot of wasted resources. You have a tremendous comparative advantage at knowing the shape of your own belief structure—if we can switch to a mode where we're each looking inward and candidly sharing insights, we'll move forward much more efficiently than if we're each engaged in guesswork about the other person. This requires that we want to know the actual truth (such that we're incentivized to seek out flaws and falsify wrong beliefs in ourselves just as much as in others) and that we feel emotionally and socially safe with our partner, but there's a doubly-causal dynamic where a tiny bit of double crux spirit up front can produce safety and truth-seeking, which allows for more double crux, which produces more safety and truth-seeking, etc.
First and foremost, it matters whether you're in the strong version of double crux (cooperative, consent-based) or the weak version (you, as an agent, trying to improve the conversational dynamic, possibly in the face of direct opposition). In particular, if someone is currently riled up and conceives of you as rude/hostile/the enemy, then saying something like "I just think we'd make better progress if we talked about the underlying reasons for our beliefs" doesn't sound like a plea for cooperation—it sounds like a trap.
So, if you're in the weak version, the primary strategy is to embody the question "What do you see that I don't?" In other words, approach from a place of explicit humility and good faith, drawing out their belief structure for its own sake, to see and appreciate it rather than to undermine or attack it. In my experience, people can "smell it" if you're just playing at good faith to get them to expose themselves; if you're having trouble really getting into the spirit, I recommend meditating on times in your past when you were embarrassingly wrong, and how you felt prior to realizing it compared to after realizing it.
(If you're unable or unwilling to swallow your pride or set aside your sense of justice or fairness hard enough to really do this, that's actually fine; not every disagreement benefits from the double-crux-nature. But if your actual goal is improving the conversational dynamic, then this is a cost you want to be prepared to pay—going the extra mile, because a) going what feels like an appropriate distance is more often an undershoot, and b) going an actually appropriate distance may not be enough to overturn their entrenched model in which you are The Enemy. Patience- and sanity-inducing rituals recommended.)
As a further tip that's good for either version but particularly important for the weak one, model the behavior you'd like your partner to exhibit. Expose your own belief structure, show how your own beliefs might be falsified, highlight points where you're uncertain and visibly integrate their perspective and information, etc. In particular, if you don't want people running amok with wrong models of what's going on in your head, make sure you're not acting like you're the authority on what's going on in their head.
Speaking of non-sequiturs, beware of getting lost in the fog. The very first step in double crux should always be to operationalize and clarify terms. Try attaching numbers to things rather than using misinterpretable qualifiers; try to talk about what would be observable in the world rather than how things feel or what's good or bad. In the school uniforms example, saying "uniforms make students feel better about themselves" is a start, but it's not enough, and going further into quantifiability (if you think you could actually get numbers someday) would be even better. Often, disagreements will "dissolve" as soon as you remove ambiguity—this is success, not failure!
Finally, use paper and pencil, or whiteboards, or get people to treat specific predictions and conclusions as immutable objects (if you or they want to change or update the wording, that's encouraged, but make sure that at any given moment, you're working with a clear, unambiguous statement). Part of the value of double crux is that it's the opposite of the weaselly, score-points, hide-in-ambiguity-and-look-clever dynamic of, say, a public political debate. The goal is to have everyone understand, at all times and as much as possible, what the other person is actually trying to say—not to try to get a straw version of their argument to stick to them and make them look silly. Recognize that you yourself may be tempted or incentivized to fall back to that familiar, fun dynamic, and take steps to keep yourself in "scout mindset" rather than "soldier mindset."
This is the double crux algorithm as it currently exists in our handbook. It's not strictly connected to all of the discussion above; it was designed to be read in context with an hour-long lecture and several practice activities (so it has some holes and weirdnesses) and is presented here more for completeness and as food for thought than as an actual conclusion to the above.
1. Find a disagreement with another person
- A case where you believe one thing and they believe the other
- A case where you and the other person have different confidences (e.g. you think X is 60% likely to be true, and they think it’s 90%)
2. Operationalize the disagreement
- Define terms to avoid getting lost in semantic confusions that miss the real point
- Find specific test cases—instead of (e.g.) discussing whether you should be more outgoing, instead evaluate whether you should have said hello to Steve in the office yesterday morning
- Wherever possible, try to think in terms of actions rather than beliefs—it’s easier to evaluate arguments like “we should do X before Y” than it is to converge on “X is better than Y.”
3. Seek double cruxes
- Seek your own cruxes independently, and compare with those of the other person to find overlap
- Seek cruxes collaboratively, by making claims (“I believe that X will happen because Y”) and focusing on falsifiability (“It would take A, B, or C to make me stop believing X”)
- Spend time “inhabiting” both sides of the double crux, to confirm that you’ve found the core of the disagreement (as opposed to something that will ultimately fail to produce an update)
- Imagine the resolution as an if-then statement, and use your inner sim and other checks to see if there are any unspoken hesitations about the truth of that statement
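The "compare lists" move in step 3 can be sketched mechanically. This is a purely hypothetical illustration of the data involved, not anything from CFAR's handbook: each person independently writes down the statements they consider crucial along with their judgment of each, and a double crux is any statement both flagged as crucial but judge oppositely.

```python
# Hypothetical sketch of step 3: find statements both parties consider
# crucial to the disagreement but judge oppositely (a "double crux").
def find_double_cruxes(mine, theirs):
    """mine/theirs: dicts mapping a crux statement to that person's
    judgment of it (True = "I believe this", False = "I believe its negation")."""
    return sorted(
        s for s in mine
        if s in theirs and mine[s] != theirs[s]
    )

# Toy usage with the school-uniforms example from the post:
mine = {
    "uniforms reduce clothing-based class judgments": True,
    "uniforms are expensive for poor families": False,
}
theirs = {
    "uniforms reduce clothing-based class judgments": False,
    "schools should prioritize self-expression": True,
}
print(find_double_cruxes(mine, theirs))
# -> ['uniforms reduce clothing-based class judgments']
```

Of course, the hard part in practice is generating the lists at all, which is what the collaborative moves in step 3 are for; the comparison itself is the easy, mechanical bit.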
We think double crux is super sweet. To the extent that you see flaws in it, we want to find them and repair them, and we're currently betting that repairing and refining double crux is going to pay off better than trying something totally different. In particular, we believe that embracing the spirit of this mental move has huge potential for unlocking people's abilities to wrestle with all sorts of complex, heavy, hard-to-parse topics (like existential risk, for instance), because it provides a format for holding a bunch of partly-wrong models at the same time while you distill the value out of each.
Comments appreciated; critiques highly appreciated; anecdotal data from experimental attempts to teach yourself double crux, or teach it to others, or use it on the down-low without telling other people what you're doing extremely appreciated.
- Duncan Sabien
One reason good faith is important is that even when people are "wrong," they are usually partially right—there are flecks of gold mixed in with their false belief that can be productively mined by an agent who's interested in getting the whole picture. Normal disagreement-navigation methods have some tendency to throw out that gold, either by allowing everyone to protect their original belief set or by replacing everyone's view with whichever view is shown to be "best," thereby throwing out data, causing information cascades, disincentivizing "noticing your confusion," etc.
The central assumption is that the universe is like a large and complex maze that each of us can only see parts of. To the extent that language and communication allow us to gather info about parts of the maze without having to investigate them ourselves, that's great. But when we disagree on what to do because we each see a different slice of reality, it's nice to adopt methods that allow us to integrate and synthesize, rather than methods that force us to pick and pare down. It's like the parable of the three blind men and the elephant—whenever possible, avoid generating a bottom-line conclusion until you've accounted for all of the available data.
The agent at the top mistakenly believes that the correct move is to head to the left, since that seems to be the most direct path toward the goal. The agent on the right can see that this is a mistake, but it would never have been able to navigate to that particular node of the maze on its own.
A suggestion: don't avoid feelings. Instead, think of feelings as deterministic and carrying valuable information; feelings can be right even when our justifications are wrong (or vice versa).
Ultimately, this whole technique is about understanding the causal paths which led to both your own beliefs, and your conversational partner's beliefs. In difficult areas, e.g. politics or religion, people inevitably talk about logical arguments, but the actual physical cause of their belief is very often intuitive and emotional - a feeling. Very often, those feelings will be the main "crux".
For instance, I've argued that the feelings of religious people offer a much better idea of religion's real value than any standard logical argument - the crux in a religious argument might be entirely a crux of feelings. In another vein, Scott's thrive/survive theory offers psychological insight on political feelings more than political arguments, and it seems like it would be a useful crux generator - i.e., would this position seem reasonable in a post-scarcity world, or would it seem reasonable during a zombie outbreak?
Let's use the school uniforms example.
The post mentions "uniforms make students feel better about themselves" as something to avoid. But that claim strongly suggests a second statement for the claimant: "I would have felt better about myself in middle school, if we'd had uniforms." A statement like that is a huge gateway into productive discussion.
First and foremost, that second statement is very likely the true cause for the claimant's position. Second, that feeling is something which will itself have causes! The claimant can then think back about their own experiences, and talk about why they feel that way.
Of course, that creates another pitfall to watch out for: argument by narrative, rather than statistics. It's easy to tell convincing stories. But if one or both participants know what's up, then each participant can produce a narrative to underlie their own feelings, and then the real discussion is over questions like (1) which of those narratives is more common in practice, (2) should we assign more weight to one type of experience, (3) what other types of experiences should we maybe consider, and (4) does the claim make sense even given the experience?
Anecdotal data time! We tried this at last week’s Chicago rationality meetup, with moderate success. Here’s a rundown of how we approached the activity, and some difficulties and confusion we encountered.
Before the meeting, some of us came up with lists of possibly contentious topics and/or strongly held opinions, and we used those as starting points by just listing them off to the group and seeing if anyone held the opposite view. Some of the assertions on which we disagreed were:
We paired off, with each pair in front of a blackboard, and spent about 15 minutes on our first double crux, after the resolution of which the conversations mostly devolved. We then came together, gave feedback, switche...
Personally, I am still eagerly waiting for CFAR to release more of their methods and techniques. A lot of them seem to be already part of the rationalist diaspora's vocabulary -- however, I've been unable to find descriptions of them.
For example, you mention "TAP"s and the "Inner Simulator" at the beginning of this article, yet I haven't had any success googling those terms, and you offer no explanation of them. I would be very interested in what they are!
I suppose the crux of my criticism isn't that there are techniques you haven't released yet, nor that rationalists are talking about them, but that you mention them as though they were common knowledge. This, sadly, gives the impression that LWers are expected to know about them, and reinforces the idea that LW has become a kind of elitist clique. I'm worried that you are using this in order to make aspiring rationalists, who very much want to belong, come to CFAR events, to be in the know.
Decided to contribute a bit: here's a new article on TAPs! :)
I'm excited to try this out in both strong and weak forms.
There are parallels of getting to the crux of something in design and product discovery research. It is called Why Laddering. I have used it when trying to understand the reasons behind a particular person's problem or need. If someone starts too specific it is a great way to step back from solutions they have preconceived before knowing the real problem (or if there is even one).
It also attempts to get to a higher level in a system of needs.
Are there ever times when double crux has resulted in narrowing with each crux? Or do the cruxes generally become more general?
There is the reverse as well called How Laddering which tries to find solutions for more general problems.
It sounds like the 'reverse double crux' would be to find a new, common solution after a common crux has been found.
Thanks for writing this up! One thing I particularly like about this technique is that it seems to really help with getting into the mindset of seeing disagreements as good (not an unpleasant thing to be avoided), and seeing them as good for the right reasons - for learning more about the world/your own beliefs/changing your mind (not a way to assert status/dominance/offload anger etc.)
I feel genuinely excited about paying more attention to where I disagree with others and trying to find the crux of the disagreement now, in a way I didn't before reading this post.
I'm going to go out and state that the chosen example of "middle school students should wear uniforms" fails the prerequisite of "Confidence in the existence of objective truth", as do many (most?) "should" statements.
I strongly believe that there is no objectively true answer to the question "middle school students should wear uniforms", as the truth of that statement depends mostly not on the understanding of the world or the opinion about student uniforms, but on the interpretation of what the "should" m...
I think you're basically making correct points, but that your conclusion doesn't really follow from them.
Remember that double crux isn't meant to be a "laboratory technique" that only works under perfect conditions—it's meant to work in the wild, and has to accommodate the way real humans actually talk, think, and behave.
You're completely correct to point out that "middle school students should wear uniforms" isn't a well-defined question yet, and that someone wanting to look closely at it and double crux about it would need to boil down a lot of specifics. But it's absolutely the sort of phrase that someone who has a well-defined concept in mind might say, at the outset, as a rough paraphrase of their own beliefs.
You're also correct (in my opinion/experience) to point out that "should" statements are often a trap that obfuscates the point and hinders progress, but I think the correct response there isn't to rail against shoulds, but to do exactly the sort of conversion that you're recommending as a matter of habit and course.
People are going to say things like "we should do X," and I think letting that get under one's skin from the outset is unproductive, whereas going, ah, cool, I can think of like four things you might mean by that—is it one of these? ... is a simple step that any aspiring double cruxer or rationalist is going to want to get used to.
I've waited to make this comment because I wanted to read this carefully a few times first, but it seems to me that the "crux" might not be doing a lot here, compared to simply getting people to actually think about and discuss an issue, instead of thinking about argument in terms of winning and losing. I'm not saying the double crux strategy doesn't work, but that it may work mainly because it gives the people something to work at other than winning and losing, and something that involves trying to understand the issues.
One thing that I have not...
Hi, some friends and I tried to practice this a couple days back. So I guess these are the main takeaway points (two years later! Haha):
It's hard to practice with "imagined" disagreements or devil's advocates; our group often blanked out when we dug deeper. E.g., one thing we tried to discuss was organ donation after death. We all agreed that it was a good idea, and had a hard time imagining why someone wouldn't think it was a good idea.
Choice of topic is important - some lighter topics might not work that well. We tackled something lighter afterwards - "Apple products
This set of strategies looks familiar. I've never called it double crux or anything like that, but I've used a similar line in internet arguments before.
Taking a statement that disagrees with me; assuming my opponent is sane and has reasons to insist that that statement is true; interrogating (politely) to try to find those reasons (and answering any similar interrogations if offered); trying to find common ground where possible, and work from there to the point of disagreement; eventually either come to agreement or find reasons why we do not agree that d...
I'm seeing similarities between this and Goldratt's "Evaporating Cloud". You might find it worthwhile to read up on applications of EC in the literature on Theory of Constraints, if you haven't already.
(This is Dan from CFAR)
Here are a few examples of disagreements where I'd expect double crux to be an especially useful approach (assuming that both people hit the prereqs that Duncan listed):
2 LWers disagree about whether attempts to create "Less Wrong 2.0" should try to revitalize Less Wrong or create a new site for discussion.
2 LWers disagree on whether it would be good to have a norm of including epistemic effort metadata at the start of a post.
2 EAs disagree on whether the public image of EA should make it seem commonsense and relatable ...
Does anyone else, other than me, have a problem with noticing when the discussion they're having is getting more abstract? I'm often reminded of this fact when debating some topic. This is relating to the point on "Narrowing the scope", and how to notice the need to do this.
Interesting. Some time ago I was planning on writing some things on how to have an argument well, but I found a lot of it was already covered by Eliezer in "37 ways words can be wrong". I think this covers a lot of the rest of it! Things like "Spot your interlocutor points so you can get to the heart of the matter; you can always unspot them later if they turn out to be more crucial than you realized."
One thing I've tried sometimes is actively proposing reasons for my interlocutor's beliefs when they don't volunteer any, and seeing i...
If double crux felt like the Inevitable Correct Thing, what other things would we most likely believe about rationality in order for that to be the case?
I think this is a potentially useful question to ask for three reasons. One, it can be a way to install double crux as a mental habit -- figure out ways of thinking which make it seem inevitable. Two, to the extent that we think double crux really is quite useful, but don't know exactly why, that's Bayesian evidence for whatever we come up with as potential justification for it. But, three, pinning down su...
I hope very minor nitpicks make acceptable comments. I apologize if not.
In the section about Prerequisites, specifically about The belief in an objective truth, the example "is the color orange superior to the color green" does not function as a good example in my opinion. This is because it is not a well-posed problem for which a clear-cut answer exists. At the very least, the concept of "superior" is too vague for such an answer to exist. I would suggest adding "for the specific purpose we have in mind", or removing that example.
This is well written and makes me want to play.
I think the cartoon could benefit from a concrete, short A and B. It might also benefit from rewording the final statements to "If I change my mind about B, that would change my mind about A too."
Take some of your actual double crux sessions, boil A and B down into something short, and try the comic out with that concrete example.
Has anyone here tried building a UI for double crux conversations? The format has the potential to transform contentious conversations and debates, but right now it's unheard of outside of this community, mostly due to difficulties in execution.
Graph databases (the structure underlying Roam Research) would be the perfect format, and the conversations would benefit greatly from a standardized visual approach (much easier than whiteboarding or trying to write every point down). The hard part would be standardizing the representation, which would involve having several conversations and debating the best way to break them down.
If anyone's interested in this, let me know.
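For what it's worth, here is a rough sketch of what a graph schema for a double crux conversation might look like: claims as nodes, edges marking "is a crux for" relations. All names here (the class, its methods, the example claims) are hypothetical, not any existing tool's API.

```python
# Minimal illustrative schema for a double crux conversation graph.
# Nodes are claims; directed edges record that one claim is a crux for another.

class CruxGraph:
    def __init__(self):
        self.claims = {}      # claim_id -> {"text": ..., "holder": ...}
        self.crux_edges = []  # (crux_id, claim_id): crux_id is a crux for claim_id

    def add_claim(self, claim_id, text, holder):
        self.claims[claim_id] = {"text": text, "holder": holder}

    def mark_crux(self, crux_id, claim_id):
        self.crux_edges.append((crux_id, claim_id))

    def cruxes_for(self, claim_id):
        # All claims marked as cruxes for the given claim.
        return [c for c, parent in self.crux_edges if parent == claim_id]

g = CruxGraph()
g.add_claim("A1", "School uniforms are a good idea", "Alice")
g.add_claim("B1", "Uniforms reduce bullying", "Alice")
g.mark_crux("B1", "A1")
print(g.cruxes_for("A1"))  # -> ['B1']
```

A real implementation would need per-person crux markings and some way to attach evidence to nodes, but even this much structure would make the conversation's shape visible in a way a whiteboard doesn't.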
How much will you bet that there aren't better strategies for resolving disagreement?
Given the complexity of this strategy, it seems to me that in most cases it is more effective to do some combination of the following:
1) Agree to disagree
2) Change the subject of disagreement
3) Find new friends who agree with you
4) Change your beliefs, not because you believe they are wrong, but because other people believe they are wrong
5) Violence (I don't advocate this in general, but in practice it's what humans have done throughout history when they disagree)
Nice to see the Focusing/Gendlin link (but it's broken!)
Are you familiar with "Getting to Yes" by Fisher and Ury of the Harvard Negotiation Project?
They were initially trained by Marshall Rosenberg, who drew on Gendlin and Rogers. I don't know if Double Crux works, but NVC mediation really does. Comparison studies for effectiveness would be interesting!
For more complex/core issues, Convergent Facilitation by Miki Kashtan is brilliant, as is www.restorativecircles.org by Dominic Barter, who is very insightful on our unconscious informal "justice systems".
Fast Consensing, Sociocracy and Holacracy are also interesting.
Yes, but the idea is that a proof within one axiomatic system does not constitute a proof within another.
2. ...
If someone uses different rules than you to decide what to believe, then things that you can prove using your rules won't necessarily be provable using their rules.
What if the disagreeing parties have radical epistemological differences? Double crux seems like a good strategy for resolving disagreements between parties that have an epistemological system in common (and access to the same relevant data), because getting to the core of the matter should expose that one or both of them is making a mistake. However, between two or more parties that use entirely different epistemological systems - e.g. rationalism and empiricism, or skepticism and "faith" - double crux should, if used correctly, eventually lead ...
This looks like a good method to derive lower-level beliefs from higher-level beliefs. The main thing to consider when taking a complex statement of belief from another person, is that it is likely that there is more than one lower-level belief that goes into this higher-level belief.
In doxastic logic, a belief is really an operator on some information. At the most basic level, we are believing, or operating on, sensory experience. More complex beliefs rest on the belief operation applied to knowledge or understanding, where I define knowledge as belief of some in...
This is my response in which I propose a different approach.
For the author and the audience: what are your favourite patience- and sanity-inducing rituals?
Correct me if I'm wrong. You are searching for a sentence B such that:
1) if B then A
2) if not B, then not A. Which implies if A then B.
Which implies that you are searching for an equivalent statement. How can an equivalent statement have explanatory power?
This account has been posting spam since April 2017 (though all of their old comments have been deleted and are visible only on their overview and comments pages).
"In essence: a conviction that for almost any well-defined question, there really truly is a clear-cut answer."
Every question is formulated using words, and words are either defined by other words, or defined by pointing to a number of examples, which means that all words are ultimately defined by pointing to examples. Pointing to examples does not and cannot make anything "well-defined", so no question is well defined, nor does any question have a really truly clear-cut answer.
This comment might look like a joke, but it is not; I think it is pretty much as true as anything can be (since the point is that nothing is ever absolutely precise, the argument can't claim absolute precision itself).