I don't want people to trust me, because I think trust would result in us getting the wrong answer.
I want people to read the words I write, think it through for themselves, and let me know in the comments if I got something wrong.
This is a refreshing conclusion. I'm happy to point out what I think you're getting wrong, but I have to note that this feels pretty cooperative. Already.
If I'm a Bayesian reasoner honestly reporting my beliefs about some question, and you're also a Bayesian reasoner honestly reporting your beliefs about the same question, we should converge on the same answer, not because we're cooperating with each other, but because it is the answer.
I think you're doing a bit of sleight of hand here. If I were to punch you in the face, I could say that this would damage your face -- not because we're fighting each other, simply because my fist is converging with your face. And while it's true that it's the fist-to-face impact that's doing the damage, and that this screens off intent... this probably won't happen unless we're fighting. Likewise, if we're playing an adversarial game, why the heck would I give away my informational advantage without at least trying to deceive you?
That is to say, yes, "honestly reporting beliefs" is what converges people on the same answer because it's true, but doing this is cooperation.
But correct epistemology does not involve conflicting interests.
Here's a disproof by example: "You are going to do the dishes"
You can't divorce the two, because the truth about reality depends on how people try to achieve their interests. And we don't tend to focus on facts that do not interest us.
Accordingly, when humans successfully approach the Bayesian ideal, it doesn't particularly feel like cooperating with your beloved friends, who see you with all your blemishes and imperfections but would never let a mere disagreement interfere with loving you. It usually feels like just perceiving things—resolving disagreements so quickly that you don't even notice them as disagreements.
So, it depends on the nature of the disagreement. If it's just "when will the bus arrive?", then yeah, that's sufficiently free of emotional charge that it doesn't feel like much: there's little motive for dishonesty, and it will often resolve before it's noticed as a disagreement.
If it's something much more meaningful, like "It's okay if people see what you look like under your makeup" or "Despite this injury, you're okay", it starts to feel like something.
These things can still be resolved "bus schedule fast", when the disagreement really is that simple and people stay honest. It can even be fast enough that no one notices what happened. Yet "Love, imperfections and all" is actually a fairly decent description for its length. So is "Honesty, in an unusually strict sense".
There are techniques for resolving economic or interpersonal conflicts that involve both parties adopting a more cooperative approach, each being more willing to do what the other party wants (while the other reciprocates by doing more of what the first one wants). Someone who had experience resolving interpersonal conflicts using techniques to improve cooperation might be tempted to apply the same toolkit to resolving dishonest disagreements.
It might very well work for resolving the disagreement. It probably doesn't work for resolving the disagreement correctly, because cooperation is about finding a compromise amongst agents with partially conflicting interests, and in a dishonest disagreement in which both parties have non-epistemic goals, trying to do more of what the other party functionally "wants" amounts to catering to their bias, not systematically getting closer to the truth.
Interpersonal conflicts are about dishonest disagreements. Because if we're both being honest about "Who is going to do the dishes", then just like the bus time disagreement, it resolves before we notice it as a "conflict".
"You're going to do the dishes, because I don't wanna." "Actually, I think you're going to do the dishes today because I did them yesterday, and you're smart enough to recognize that 'I always get what I want because I say so' is factually untrue. So you will choose to do the thing that gets you out of as much dish washing as possible. Which is doing them your half of the time." "Okay, you're right."
Except, like.. you usually don't have to say it out loud unless someone has been dishonest, because "I get what I want because I say so" is just pretty obviously wrong. So it's just "Hey, is there a reason you haven't done the dishes yet today?", because the underlying "Because you're smart enough to know you won't be able to get away with shirking" goes unsaid. And the response is just "Shoot, thanks for reminding me".
Heck, even physical violence goes that way. I can't count the number of fights I've avoided by responding to "Wanna fight!?" with "Ok". It's Aumann agreement over who is about to get their ass kicked if the fight were to happen. "I am gonna beat you up!" "I doubt it" "Me too, actually. Nvm"
To "compromise" a bit, not for the sake of social-cohesion-at-the-cost-of-truth but because you make a good point that I don't want to get lost, "compromising" on things by keeping the dishonesty and splitting the difference is indeed a failure mode worth pointing out.
If the goal becomes "sing 'Kumbaya' together" rather than "track reality", then the reality you're not tracking is probably gonna come back to bite you. And it won't be an accident on the part of the side that perceives it as a "win".
Most real-world disagreements of interest don't look like the bus arrival or math problem examples—qualitatively, not as a matter of trying to prove quantitatively harder theorems. Real-world disagreements tend to persist; they're predictable—in flagrant contradiction of how the beliefs of Bayesian reasoners would follow a random walk. From this we can infer that typical human disagreements aren't "honest", in the sense that at least one of the participants is behaving as if they have some other goal than getting to the truth.
Importantly, this characterization of dishonesty is using a functionalist criterion: when I say that people are behaving as if they have some other goal than getting to the truth, that need not imply that anyone is consciously lying; "mere" bias is sufficient to carry the argument.
Dishonest disagreements end up looking like conflicts because they are disguised conflicts.
I like a lot of the content in this post but this is where I get off the bus. I do not believe that Aumann's Agreement Theorem shows that all disagreements are dishonest; this is because Aumann's Agreement Theorem shows an end state and is not an argument that coming to consensus is an efficient algorithm.
I only know of one paper attempting to argue that Aumann agreement is efficient, and I don't believe it succeeds. It argues that you only need to communicate a relatively low number of bits between two agents, but the computational limit it puts on how much thinking you have to do is utterly impractical—to get a 50% chance of arriving within 50 percentile points of one another, you must execute a vast number of subroutine calls, which is not remotely efficient.
So while I believe Aumann Agreement applies to unbounded agents, I don't believe it has practical implications for bounded agents like us.
I think in practice the crux of most disagreement is failure to understand something about how the other person thinks, with various practical or psychological difficulties enabling this failure. Game theory mostly enters this picture when it wants to further disrupt something on that path to understanding. Sufficiently persistent disagreements can involve tying yourself into philosophical knots, so that your thinking becomes infeasible to understand for (in particular) those who disagree, probably also for yourself.
Ideologies develop extensive lore held together by obscure patterns, which gets infeasible for even non-professional insiders to follow in detail. Some questions are just difficult, and the only existing reasonable hypotheses are very complicated. In both of these cases, there emerges a stratum of professionals who can follow the arguments, with outside interactions on the topic becoming impractical.
Disagreement feeds on preference for nuance, on cultural accumulation. Legibility breaks the cycle (even for very complicated ideas), where it can be found. Sometimes legibility only grows on the soil of cultural accumulation, cultivated with nuance. Quite often, it can be found directly, if nuance is cut away.
I don't think Aumann's agreement theorem is a good way to motivate your normative judgments, though I basically agree with your conclusions. I read Duncan's post as well and did not really understand why he called you out. You both seem non-malevolent to me.
Bayesianism generalizes logical reasoning to uncertain claims, subject to certain consistency assumptions. Obviously humans are not ideal Bayesians. But in a deeper sense, maybe we're not supposed to be. Not in an instrumental sense where being Bayesian is incompatible with some kind of good life, but rather in an epistemic sense. Maybe there is some mathematical theory of reasoning, we'll call it Glorpism, of which humans are an approximation, and it is easier for humans to become more Glorpish than it is for us to become more Bayesian, and becoming more Glorpish is powerful and general in the sense we expect epistemic rationality to be. Glorpism may not have agreement guarantees in the way that Bayesianism does.
Sam Eisenstat's Condensation is an example of something like this, although I don't think it's The Thing. Importantly, Condensation only has the translation theorem to the extent that models are hierarchically organized in a nice way, which does not always hold. (Apologies for any errors, feel free to correct me.)
I also think a purely functionalist account of reasoning error deletes a lot of information. For example, a Ruby who says, "oh, my bad" upon being confronted with evidence from computer analysis of photographs that the different images are all gray is different from a Ruby who changes the topic or flies into a rage. Among the first type of Ruby, those who systematically downgrade or restructure how they assign credence to their color-intuitions after admitting their error are different from those who "bounce back" to their original epistemic state. The best of these is well-modelled by mistake theory; the worst two, by conflict theory.
In real life, I think honest humans often agree to disagree. I do not fully understand why this is and consider this an important problem in the theory of powerful reasoners. I think part of it is that humans perform reasoning using words. Honest words correspond to natural categories but natural categories have an intrinsic misgeneralization problem. If you have two objects, korgs and spangs, which both have exactly half the properties each of bleggs and rubes, but different sets of these properties, then honest people might categorize them differently as bleggs and rubes. But this process is happening below the level of introspective access, so dissolving the question / debucketing has to be done "out loud" in the chamber of consciousness. The act of debucketing / rectifying definitions is a constraint problem with the constraints supplied by one's introspection on hypotheticals. In general this can take exponential time in the number of traits used to define bleggs and rubes. (I do not have a proof of this, and expect the answer is sensitive to the formulation of the problem. This last claim is purely mathematical intuition.)
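A toy sketch of the categorization problem above (the traits, objects, and weights here are all invented for illustration): two honest classifiers who learned from the same bleggs and rubes, but who happened to internalize different trait weightings, will label a half-and-half object differently, with no dishonesty anywhere in the process.

```python
# Toy illustration (traits and weights invented): two honest
# categorizers, using different trait weightings, classify the same
# half-and-half object differently -- no lying required.

BLEGG = {"color": "blue", "shape": "egg", "texture": "furred", "light": "glows"}
RUBE = {"color": "red", "shape": "cube", "texture": "smooth", "light": "dark"}

# A "korg" shares exactly half its properties with each category.
KORG = {"color": "blue", "shape": "cube", "texture": "furred", "light": "dark"}

def classify(obj, weights):
    """Weighted match-count against each prototype; ties broken toward blegg."""
    blegg_score = sum(w for t, w in weights.items() if obj[t] == BLEGG[t])
    rube_score = sum(w for t, w in weights.items() if obj[t] == RUBE[t])
    return "blegg" if blegg_score >= rube_score else "rube"

# Alice happens to weight color and texture; Bob weights shape and light.
alice = {"color": 2, "shape": 1, "texture": 2, "light": 1}
bob = {"color": 1, "shape": 2, "texture": 1, "light": 2}

print(classify(KORG, alice))  # prints "blegg"
print(classify(KORG, bob))    # prints "rube"
```

Neither weighting is dishonest; each fits the original training examples equally well. The disagreement lives in the hidden weights, which is why debucketing has to be done "out loud" to be resolved.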
Also, our equivalent of Bayesian evidence is our sense-data, which is stored in an extremely unreliable compression system.
I don't think parties "are competing to get their preferred belief accepted" in typical persistent disagreements among people I respect. Instead:
ETA: 5. And yeah, sometimes people are trying to get the other person to believe something no matter if it's true. Not all the time though.
Aumann agreement is pragmatically wrong: with bounded compute, you can't necessarily converge even at the meta level, on the evidence-convergence procedures themselves.
Real-world disagreements tend to persist; they're predictable—in flagrant contradiction of how the beliefs of Bayesian reasoners would follow a random walk. From this we can infer that typical human disagreements aren't "honest", in the sense that at least one of the participants is behaving as if they have some other goal than getting to the truth.
Technically, it's sufficient if one of the participants incorrectly believes this about the other. If I incorrectly believe you're a bus arrival time scammer, we won't converge despite both being honest.
I see how it makes sense, but it was super freaking hard for me to understand why "Cassandra/Mule". These words mean so many things, I guess.
In figuring out what would constitute good conduct and productive discourse, it’s important to appreciate how bizarre the human practice of “discourse” looks in light of Aumann’s dangerous idea.
Within the rationalsphere, Aumann's Theorem is simplified into something like "reasonable people can't agree to differ". Yet they can, because the conditions on AAT are much more stringent than an informal understanding of what it is to be reasonable. So AAT lacks real-world applicability.
There is a further problem that prior beliefs can include beliefs about epistemology, about what constitutes evidence.
Controversies that are sufficiently deep, or which cut across cultural boundaries, run into a problem where not only do the parties disagree about the object-level issue, they also disagree about underlying questions of what constitutes truth, proof, evidence, etc. "Satan created the fossils to mislead people" is an example of one side rejecting the other side's evidence as even being evidence. It's a silly example, but there are much more robust ones.
There’s only one reality.
Why do people keep saying that?
It's true that if there were more than one world, that would undercut the ability of different thinkers to come to a uniform conclusion about it, but there is no corollary that the existence of a single world guarantees anything epistemically. It's a necessary condition for convergence, but not a sufficient one.
If there is one world, that doesn't guarantee that there are agents within it capable of understanding it; and even if there are intelligent and rational agents, convergence on a single all-encompassing truth could still be impossible for a number of further reasons. Problems include the inadequacy of empirical evidence to address all questions, the reliance of logic on axioms (the Münchhausen trilemma), the reliance of epistemology on epistemology (the problem of the criterion), etc.
None of those problems has anything to do with conflicts, dishonesty or different values -- although they undoubtedly exist as well.
If I’m a Bayesian reasoner honestly reporting my beliefs about some question, and you’re also a Bayesian reasoner honestly reporting your beliefs about the same question, we should converge on the same answer, not because we’re cooperating with each other, but because it is the answer.
Bayes has both problems simultaneously. It is dependent on evidence, so it has all the problems of empiricism; and it has the problems of rationalism because it starts with priors. The argument for Bayesianism is that even agents with wildly differing priors can eventually agree, given sufficient evidence. But that is an argument about ideal Bayesians: in reality, the amount of evidence available might be too limited to allow convergence. The ability of realistic Bayesians to formulate hypotheses is also limited. Alice puts most of her credence on the one hypothesis that seems best supported to her, out of the hypotheses she has heard of or thought up, but Bob might have a better hypothesis that's not in her set.
Real-world disagreements tend to persist; they’re predictable—in flagrant contradiction of how the beliefs of Bayesian reasoners would follow a random walk. From this we can infer that typical human disagreements aren’t “honest”, in the sense that at least one of the participants is behaving as if they have some other goal than getting to the truth.
No, that isn't the inevitable conclusion, because there are so many other sources of disagreement.
That is to say: I do not understand how high-trust, high-cooperation dynamics work. I’ve never seen them. They are utterly outside my experience and beyond my comprehension.
I find the black-and-white framing of that very odd. You've spent most of your life in business and academia, which seem to me high co-operation compared to politics and crime, for instance.
Meta note to the mods: I'd personally prefer posts with this level of personal dispute not to make it onto the frontpage, even if they are used as a frame to argue more general points.
I don't think I agree with "used as a frame to argue more general points"? I think the general points are core to the conflict, and most of what is being discussed.
Like, if two people are disagreeing over theories of physics, and the debate gets heated, it's still the case that posts showing the evidence and arguments between the two theories are timeless and good, even if it's motivated by a personal conflict.
I don't know, there's still something about this post I don't like, which is that if I showed up with no context (and in fact I didn't have any, because I didn't read the post this is responding to and didn't realize it was also on the frontpage), my reaction would be "uh, what kind of site is this where personal beef is front and center?" (As it was, my actual reaction was "oh, I thought this might be something worth reading, but it's just Zack beefing with someone again", so I quickly skimmed it, saw more beefing, and then skipped looking at it closely.)
(And just to be super clear, I think it's quite reasonable to post this on LessWrong, just personally for me I don't like that it ended up on the frontpage, which is what I'm registering here, even if that ends up being inconsistent with the mod team's promotion principles.)
I think it’s appropriate for some people to be alarmed that this can happen to you here. And I agree it’s sad for people to be openly hostile to others in an ongoing way (tbc, I think the quote of Duncan's in the OP is this, more than anything in the rest of the OP by Zack).
Do you have the same objection to the post I'm responding to getting Frontpaged (and in fact, Curated)?
To be clear, I think it was obviously correct for "Truth or Dare" to be Frontpaged (it was definitely relevant and timeless, even if I disagree with it); I'm saying I don't think it's consistent for a direct response to a Frontpage (Curated!) post to somehow not qualify for Frontpage.
(It wasn't obvious the post you are responding to was made on LW, since the text of your post only links to Duncan's blog, not to the post on LW. I think the distinction between that post being from just Duncan's blog vs. also LW specifically is a crux for flagging this on LW being reasonable. Though it's still a delicate balance to avoid encouraging infinite feuds, inciting events unavoidably have externalities in making responses to them possible or necessary. A little bit of anything like that is never a problem directly, but it gets to feed relevant norms a little bit, making it less convenient to course correct later.)
Thanks; I edited the link (on this Less Wrong mirrorpost).
encouraging infinite feuds
"Feuds", is that really what people think? (I think it's fine for people to criticize me, and that it's fine for me to reply.) I'm really surprised at the contrast between the karma and the comment section on this one—currently 10 karma in 26 votes (0.38 karma/vote). Usually when I score that poorly, it's because I really messed up on substance, and there's a high-karma showstopper comment explaining what I got so wrong, but none of the comments here seem like showstoppers.
As an edge-case, I can imagine a frontpage post making a cutting remark about a person while still overall meeting the frontpage criteria, and then a person writing a response post just addressing the personal aspects of the cutting remark, and that not meeting the frontpage criteria.
I don't think that's what's happening here though, this reads to me as substantively engaging (critically) with the core / thrust of the frontpage post it's responding to.
If I'm a Bayesian reasoner honestly reporting my beliefs about some question, and you're also a Bayesian reasoner honestly reporting your beliefs about the same question, we should converge on the same answer, not because we're cooperating with each other, but because it is the answer
If I am a Bayesian reasoner honestly reporting my beliefs about some question, and you're also a Bayesian reasoner honestly reporting your beliefs about the same question, and the question is what games we want to play together, the answer and whether we converge to it depend on other factors as well :)
In "Truth or Dare", Duncan Sabien articulates a phenomenon in which expectations of good or bad behavior can become self-fulfilling: people who expect to be exploited and feel the need to put up defenses both elicit and get sorted into a Dark World where exploitation is likely and defenses are necessary, whereas people who expect beneficence tend to attract beneficence in turn.
Among many other examples, Sabien highlights the phenomenon of gift economies: a high-trust culture in which everyone is eager to help each other out whenever they can is a nicer place to live than a low-trust culture in which every transaction must be carefully tracked for fear of enabling free-riders.
I'm skeptical of the extent to which differences between high- and low-trust cultures can be explained by self-fulfilling prophecies as opposed to pre-existing differences in trust-worthiness, but I do grant that self-fulfilling expectations can sometimes play a role: if I insist on always being paid back immediately and in full, it makes sense that that would impede the development of gift-economy culture among my immediate contacts. So far, the theory articulated in the essay seems broadly plausible.
Later, however, the post takes an unexpected turn:
As a reader of the essay, I reply: wait, who? Am I supposed to know who this Davies person is? Ctrl-F search confirms that they weren't mentioned earlier in the piece; there's no reason for me to have any context for whatever this section is about.
As Zack Davis, however, I have a more specific reply, which is: yeah, I don't think that button does what you think it does. Let me explain.
In figuring out what would constitute good conduct and productive discourse, it's important to appreciate how bizarre the human practice of "discourse" looks in light of Aumann's dangerous idea.
There's only one reality. If I'm a Bayesian reasoner honestly reporting my beliefs about some question, and you're also a Bayesian reasoner honestly reporting your beliefs about the same question, we should converge on the same answer, not because we're cooperating with each other, but because it is the answer. When I update my beliefs based on your report on your beliefs, it's strictly because I expect your report to be evidentially entangled with the answer. Maybe that's a kind of "trust", but if so, it's in the same sense in which I "trust" that an increase in atmospheric pressure will exert force on the exposed basin of a classical barometer and push more mercury up the reading tube. It's not personal and it's not reciprocal: the barometer and I aren't doing each other any favors. What would that even mean?
In contrast, my friends and I in a gift economy are doing each other favors. That kind of setting featuring agents with a mixture of shared and conflicting interests is the context in which the concepts of "cooperation" and "defection" and reciprocal "trust" (in the sense of people trusting each other, rather than a Bayesian robot trusting a barometer) make sense. If everyone pitches in with chores when they can, we all get the benefits of the chores being done—that's cooperation. If you never wash the dishes, you're getting the benefits of a clean kitchen without paying the costs—that's defection. If I retaliate by refusing to wash any dishes myself, then we both suffer a dirty kitchen, but at least I'm not being exploited—that's mutual defection. If we institute a chore wheel with an auditing regime, that reëstablishes cooperation, but we're paying higher transaction costs for our lack of trust. And so on: Sabien's essay does a good job of explaining how there can be more than one possible equilibrium in this kind of system, some of which are much more pleasant than others.
If you've seen high-trust gift-economy-like cultures working well and low-trust backstabby cultures working poorly, it might be tempting to generalize from the domains of interpersonal or economic relationships, to rational (or even "rationalist") discourse. If trust and cooperation are essential for living and working together, shouldn't the same lessons apply straightforwardly to finding out what's true together?
Actually, no. The issue is that the payoff matrices are different.
Life and work involve a mixture of shared and conflicting interests. The existence of some conflicting interests is an essential part of what it means for you and me to be two different agents rather than interchangeable parts of the same hivemind: we should hope to do well together, but when push comes to shove, I care more about me doing well than you doing well. The art of cooperation is about maintaining the conditions such that push does not in fact come to shove.
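The mixed-motive structure of the dishes example can be made concrete with a toy payoff matrix (the numbers are invented, chosen only to exhibit the standard prisoner's-dilemma ordering):

```python
# Invented payoff numbers with the prisoner's-dilemma ordering: each
# housemate does better shirking no matter what the other does, yet
# mutual shirking leaves both worse off than mutual washing.

PAYOFF = {  # (my move, your move) -> (my payoff, your payoff)
    ("wash", "wash"): (3, 3),    # cooperation: clean kitchen, shared cost
    ("wash", "shirk"): (0, 5),   # I'm exploited: you free-ride on my work
    ("shirk", "wash"): (5, 0),   # I free-ride: benefits without the cost
    ("shirk", "shirk"): (1, 1),  # mutual defection: dirty kitchen for both
}

def best_reply(your_move):
    """My payoff-maximizing move, holding your move fixed."""
    return max(("wash", "shirk"), key=lambda m: PAYOFF[(m, your_move)][0])

print(best_reply("wash"), best_reply("shirk"))  # prints: shirk shirk
```

The point of the sketch is the temptation payoff: defecting against a cooperator pays 5 > 3. An honest epistemic exchange has no analogous cell in its matrix, which is the asymmetry the next paragraph turns on.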
But correct epistemology does not involve conflicting interests. There's only one reality. Bayesian reasoners cannot agree to disagree. Accordingly, when humans successfully approach the Bayesian ideal, it doesn't particularly feel like cooperating with your beloved friends, who see you with all your blemishes and imperfections but would never let a mere disagreement interfere with loving you. It usually feels like just perceiving things—resolving disagreements so quickly that you don't even notice them as disagreements.
Suppose you and I have just arrived at a bus stop. The bus arrives every half-hour. I don't know when the last bus was, so I don't know when the next bus will be: I assign a uniform probability distribution over the next thirty minutes. You recently looked at the transit authority's published schedule, which says the bus will come in six minutes: most of your probability-mass is concentrated tightly around six minutes from now.
We might not consciously notice this as a "disagreement", but it is: you and I have different beliefs about when the next bus will arrive; our probability distributions aren't the same. It's also very ephemeral: when I ask, "When do you think the bus will come?" and you say, "six minutes; I just checked the schedule", I immediately replace my belief with yours, because I think the published schedule is probably right and there's no particular reason for you to lie about what it says.
Alternatively, suppose that we both checked different versions of the schedule, which disagree: the schedule I looked at said the next bus is in twenty minutes, not six. When we discover the discrepancy, we infer that one of the schedules must have been outdated, and both adopt a distribution with most of the probability-mass in separate clumps around six and twenty minutes from now. Our initial beliefs can't both have been right—but there's no reason for me to weight my prior belief more heavily just because it was mine.
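The bus-stop story can be sketched numerically. (The clump shape below, a discretized Gaussian, is an arbitrary modeling choice, just to make "probability-mass concentrated around six minutes" concrete.)

```python
import math

minutes = list(range(1, 31))

def clump(center, width=1.0):
    """A belief tightly concentrated around `center` minutes from now."""
    raw = [math.exp(-0.5 * ((m - center) / width) ** 2) for m in minutes]
    total = sum(raw)
    return [p / total for p in raw]

# My prior: uniform over the next thirty minutes.
mine = [1 / 30] * 30

# Your belief: most mass within a minute or two of six minutes from now.
yours = clump(6)

# On hearing "six minutes; I just checked the schedule", I adopt your
# distribution outright: your report screens off my uninformative prior.
mine = yours[:]

# Conflicting-schedules case: the schedules say 6 and 20, and either is
# equally likely to be the outdated one, so we both adopt the mixture.
both = [0.5 * a + 0.5 * b for a, b in zip(clump(6), clump(20))]
modes = sorted(minutes[i] for i in sorted(range(30), key=both.__getitem__)[-2:])
print(modes)  # prints [6, 20]: the two clumps of probability-mass
```

Note that in both cases neither party weights a belief more heavily for being theirs: in the first case my prior is simply replaced, and in the second we converge on the same symmetric mixture.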
At worst, approximating ideal belief exchange feels like working on math. Suppose you and I are studying the theory of functions of a complex variable. We're trying to prove or disprove the proposition that if an entire function satisfies f(x+1)=f(x) for real x, then f(z+1)=f(z) for all complex z. I suspect the proposition is false and set about trying to construct a counterexample; you suspect the proposition is true and set about trying to write a proof by contradiction. Our different approaches do seem to imply different probabilistic beliefs about the proposition, but I can't be confident in my strategy just because it's mine, and we expect the disagreement to be transient: as soon as I find my counterexample or you find your reductio, we should be able to share our work and converge.
Most real-world disagreements of interest don't look like the bus arrival or math problem examples—qualitatively, not as a matter of trying to prove quantitatively harder theorems. Real-world disagreements tend to persist; they're predictable—in flagrant contradiction of how the beliefs of Bayesian reasoners would follow a random walk. From this we can infer that typical human disagreements aren't "honest", in the sense that at least one of the participants is behaving as if they have some other goal than getting to the truth.
Importantly, this characterization of dishonesty is using a functionalist criterion: when I say that people are behaving as if they have some other goal than getting to the truth, that need not imply that anyone is consciously lying; "mere" bias is sufficient to carry the argument.
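The "random walk" property invoked above is the law of conservation of expected evidence: under a Bayesian's own predictive distribution, the expected posterior equals the prior, so the direction of her next belief movement is unpredictable to her. A minimal worked example (the coin setup and the 0.7 bias are invented for illustration):

```python
# Two hypotheses about a coin: H1 = fair (P(heads) = 0.5),
# H2 = heads-biased (P(heads) = 0.7).  Prior on H2 is 0.5.

prior = 0.5
p_heads = prior * 0.7 + (1 - prior) * 0.5   # my predictive P(heads)

post_heads = prior * 0.7 / p_heads          # posterior on H2 after heads
post_tails = prior * 0.3 / (1 - p_heads)    # posterior on H2 after tails

# The belief moves whichever way the coin lands...
print(post_heads, post_tails)

# ...but, weighted by my own predictive probabilities, the expected
# posterior is exactly the prior: no predictable drift.
expected = p_heads * post_heads + (1 - p_heads) * post_tails
print(expected)  # equals the prior, 0.5 (up to float rounding)
```

A disagreement that predictably persists, with each side's belief predictably staying put or drifting in its own favored direction, is therefore evidence that at least one side isn't updating like this.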
Dishonest disagreements end up looking like conflicts because they are disguised conflicts. The parties to a dishonest disagreement are competing to get their preferred belief accepted, where beliefs are being preferred for some reason other than their accuracy: for example, because acceptance of the belief would imply actions that would benefit the belief-holder. If it were true that my company is the best, it would follow logically that customers should buy my products and investors should fund me. And yet a discussion with me about whether or not my company is the best probably doesn't feel like a discussion about bus arrival times or the theory of functions of a complex variable. You probably expect me to behave as if I thought my belief is better "because it's mine", to treat attacks on the belief as if they were attacks on my person: a conflict rather than a disagreement.
"My company is the best" is a particularly stark example of a typically dishonest belief, but the pattern is very general: when people are attached to their beliefs for whatever reason—which is true for most of the beliefs that people spend time disagreeing about, as contrasted to math and bus-schedule disagreements that resolve quickly—neither party is being rational (which doesn't mean neither party is right on the object level). Attempts to improve the situation should take into account that the typical case is not that of truthseekers who can do better at their shared goal if they learn to trust each other, but rather of people who don't trust each other because each correctly perceives that the other is not truthseeking.
Again, "not truthseeking" here is meant in a functionalist sense. It doesn't matter if both parties subjectively think of themselves as honest. The "distrust" that prevents Aumann-agreement-like convergence is about how agents respond to evidence, not about subjective feelings. It applies as much to a mislabeled barometer as it does to a human with a functionally-dishonest belief. If I don't think the barometer readings correspond to the true atmospheric pressure, I might still update on evidence from the barometer in some way if I have a guess about how its labels correspond to reality, but I'm still going to disagree with its reading according to the false labels.
There are techniques for resolving economic or interpersonal conflicts that involve both parties adopting a more cooperative approach, each being more willing to do what the other party wants (while the other reciprocates by doing more of what the first one wants). Someone who had experience resolving interpersonal conflicts using techniques to improve cooperation might be tempted to apply the same toolkit to resolving dishonest disagreements.
It might very well work for resolving the disagreement. It probably doesn't work for resolving the disagreement correctly, because cooperation is about finding a compromise amongst agents with partially conflicting interests, and in a dishonest disagreement in which both parties have non-epistemic goals, trying to do more of what the other party functionally "wants" amounts to catering to their bias, not systematically getting closer to the truth.
Cooperative approaches are particularly dangerous insofar as they seem likely to produce a convincing but false illusion of rationality, despite the participants' best subjective conscious intentions. It's common for discussions to involve more than one point of disagreement. An apparently productive discussion might end with me saying, "Okay, I see you have a point about X, but I was still right about Y."
This is a success if the reason I'm saying that is downstream of you in fact having a point about X and me in fact having been right about Y. But another state of affairs that would result in me saying that sentence is that we were functionally playing a social game in which I implicitly agreed to concede on X (which you visibly care about) in exchange for you ceding ground on Y (which I visibly care about).
Let's sketch out a toy model to make this more concrete. "Truth or Dare" uses color perception as an illustration of confirmation bias: if you've been primed to make the color yellow salient, it's easy to perceive an image as being yellower than it is.
Suppose Jade and Ruby consciously identify as truthseekers, but really, Jade is biased to perceive non-green things as green 20% of the time, and Ruby is biased to perceive non-red things as red 20% of the time. In our functionalist sense, we can model Jade as "wanting" to misrepresent the world as being greener than it is, and Ruby as "wanting" to misrepresent the world as being redder than it is.
Confronted with a sequence of gray objects, Jade and Ruby get into a heated argument: Jade thinks 20% of the objects are green and 0% are red, whereas Ruby thinks they're 0% green and 20% red.
As tensions flare, someone who didn't understand the deep disanalogy between human relations and epistemology might propose that Jade and Ruby should strive to be more "cooperative" and establish higher "trust."
What does that mean? Honestly, I'm not entirely sure, but I worry that if someone takes high-trust gift-economy-like cultures as their inspiration and model for how to approach intellectual disputes, they'll end up giving bad advice in practice.
Cooperative human relationships result in everyone getting more of what they want. If Jade wants to believe that the world is greener than it is and Ruby wants to believe that the world is redder than it is, then naïve attempts at "cooperation" might involve Jade making an effort to see things Ruby's way at Ruby's behest, and vice versa. But Ruby is only going to insist that Jade make an effort to see it her way when Jade says an item isn't red. (That's what Ruby cares about.) Jade is only going to insist that Ruby make an effort to see it her way when Ruby says an item isn't green. (That's what Jade cares about.)
If the two (perversely) succeed at seeing things the other's way, they would end up converging on believing that the sequence of objects is 20% green and 20% red (rather than the 0% green and 0% red that it actually is). They'd be happier, but they would also be wrong. In order for the pair to get the correct answer, without loss of generality, when Ruby says an object is red, Jade needs to stand her ground: "No, it's not red; no, I don't trust you and won't see things your way; let's break out the Pantone swatches." But that doesn't seem very "cooperative" or "trusting".
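The toy model's arithmetic can be checked with a quick simulation. The 20% bias rates come from the text above; the sample size and seed are made up for illustration.

```python
import random

random.seed(0)
N = 100_000  # number of objects; every one is actually gray

# Jade mislabels a gray object "green" 20% of the time;
# Ruby mislabels a gray object "red" 20% of the time.
jade_green = sum(1 for _ in range(N) if random.random() < 0.20)
ruby_red = sum(1 for _ in range(N) if random.random() < 0.20)

# Each reporter, taken at face value, claims ~20% of objects have their color.
# Under mutual sycophancy, the pair converges on ~20% green AND ~20% red,
# versus the ground truth of 0% green and 0% red.
print(f"Jade reports green: {jade_green / N:.1%}")
print(f"Ruby reports red:   {ruby_red / N:.1%}")
print("Ground truth:       0.0% green, 0.0% red")
```

The simulation just makes the failure mode vivid: averaging or deferring between two oppositely-biased reporters produces a belief that inherits *both* biases, rather than canceling them.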
At this point, a proponent of the high-trust, high-cooperation dynamics that Sabien champions is likely to object that the absurd "20% green, 20% red" mutual-sycophancy outcome in this toy model is clearly not what they meant. (As Sabien takes pains to clarify in "Basics of Rationalist Discourse", "If two people disagree, it's tempting for them to attempt to converge with each other, but in fact the right move is for both of them to try to see more of what's true.")
Obviously, the mutual sycophancy outcome is clearly not what proponents of trust and cooperation consciously intend. The problem is that mutual sycophancy seems to be the natural outcome of treating interpersonal conflicts as analogous to epistemic disagreements and trying to resolve them both using cooperative practices, when in fact the decision-theoretic structures of those situations are very different. The text of "Truth or Dare" seems to treat the analogy as a strong one; it wouldn't make sense to spend so many thousands of words discussing gift economies and the eponymous party game and then draw a conclusion about "what constitutes good conduct and productive discourse", if gift economies and the party game weren't relevant to what constitutes productive discourse.
"Truth or Dare" seems to suggest that it's possible to escape the Dark World by excluding the bad guys. "[F]rom the perspective of someone with light world privilege, [...] it did not occur to me that you might be hanging around someone with ill intent at all," Sabien imagines a denizen of the light world saying. "Can you, um. Leave? Send them away? Not be spending time in the vicinity of known or suspected malefactors?"
If we're talking about holding my associates to a standard of ideal truthseeking (as contrasted to a lower standard of "not using this truth-or-dare game to blackmail me"), then, no, I think I'm stuck spending time in the vicinity of people who are known or suspected to be biased. I can try to mitigate the problem by choosing less biased friends, but when we do disagree, I have no choice but to approach that using the same rules of reasoning that I would use with a possibly-mislabeled barometer, which do not have a particularly cooperative character. Telling us that the right move is for both of us to try to see more of what's true is tautologically correct but non-actionable; I don't know how to do that except by my usual methodology, which Sabien has criticized as characteristic of living in a dark world.
That is to say: I do not understand how high-trust, high-cooperation dynamics work. I've never seen them. They are utterly outside my experience and beyond my comprehension. What I do know is how to keep my footing in a world of people with different goals from me, which I try to do with what skill and tenacity I can manage.
And if someone should say that I should not be trusted when I try to explain what constitutes good conduct and productive discourse ... well, I agree!
I don't want people to trust me, because I think trust would result in us getting the wrong answer.
I want people to read the words I write, think it through for themselves, and let me know in the comments if I got something wrong.