I've looked into this question a little, but not very far. The following are some trailheads that I have on the list to investigate when I get around to it. My current estimation is that all of these are, at best, tangential to the problem that I (and it sounds like you) am interested in: getting to the truth of epistemic disagreements. My impression is that there's lots of work in the world on resolving disputes, but not many people are interested in resolving disputes in order to get the right answer. But I haven't looked very hard.
- The philosopher Robert Stalnaker has a theory of conversations that involves building up a series of premises that both parties agree with. If either party makes a claim that the other doesn't buy, you back up and substantiate that claim. Or something like that. I can't currently find a link to the essay in which he outlines this method (anyone have it?), but this seems the most interesting to me of all the things on this list.
- H/T to Nick Beckstead, who shared this with Anna, who shared it with me.
- There's a book called How to Have Impossible Conversations. I haven't read it yet, but it seems to be mostly about having reasonable conversations about heated political / culture war style topics.
- Erisology is the study of disagreement, a term coined by John Nerst.
- Argument mapping is a thing that some people claim is useful for disagreement resolution. I'm not very impressed, though.
- Bay NVC teaches something called "convergent facilitation", which is about making decisions that accommodate everyone's needs, and running meetings rapidly.
- There's circling, which a number of rationalists have gotten value from, including for resolving disagreement.
Most of the things that I know about that seem to be in the vein of what you want have come from our community. As you say, there's CFAR's Double Crux. Paul wrote this piece as a precursor to an AI alignment idea. Anna Salamon has been thinking about some things in this space lately. I use a variety of homegrown methods. Arbital was a large-scale attempt to solve this problem. I think the basic idea of AI safety via debate is relevant, if only for theoretical reasons (Double Crux makes use of the same principle of isolating the single most relevant branch in a huge tree of possible conversations, but Double Crux and AI safety via debate use different functions for evaluating which branch is "most relevant").
I happen to have written about another framework for disagreement resolution today, though this one in particular is very much in the same family as Double Crux.