I keep hearing this phrase, "collaborative truthseeking." Question: what kind of epistemic work is the word "collaborative" doing?

Like, when you (respectively I) say a thing and I (respectively you) hear it, that's going to result in some kind of state change in my (respectively your) brain. If that state change results in me (respectively you) making better predictions than I (respectively you) would have in the absence of the speech, then that's evidence for the hypothesis that at least one of us is "truthseeking."

But what's this "collaborative" thing about? How do speech-induced state changes result in better predictions if the speaker and listener are "collaborative" with each other? Are there any circumstances in which the speaker and listener being "collaborative" might result in worse predictions?


Assumption: Most people are not truthseeking.

Therefore, a rational truthseeking person's priors would still be that the person they are debating with is optimizing for something else, such as creating an alliance, or competing for status.

Collaborative truthseeking would then be what happens when all participants trust each other to care about truth: not only does each of them care about truth privately, but this value is also common knowledge.

If I believe that the other person genuinely cares about truth, then I will take their arguments more seriously, and if I am surprised, I will be more likely to ask for more info.

If "collaborative" is qualifying truth-seeking, perhaps we can see it more easily by contrast with non-collaborative truthseeking. So what might that look like?

  • I might simply be optimizing for the accuracy of my beliefs, instead of whether or not you also discover the truth.
  • I might be optimizing competitively, where my beliefs are simply judged on whether they're better than yours.
  • I might be primarily concerned about learning from the environment or from myself as opposed to learning from you.
  • I might be following only my interests, instead of joint interests.
  • I might be behaving in a way that doesn't incentivize you to point out things useful to me, or discarding clues you provide, or in a way that fails to provide you clues.

This suggests collaborative truthseeking is done 1) for the benefit of both parties, 2) in a way that builds trust and mutual understanding, and 3) in a way that uses that trust and mutual understanding as a foundation.

There's another relevant contrast, where we could look at collaborative non-truthseeking, or contrast "collaborative truthseeking" as a procedure with other procedures that could be used (like "allocating blame"), but this one seems most related to what you're driving at.

I share Richard Kennaway's feeling that this is a rather strange question because the answer seems so obvious; perhaps I'm missing something important. But:

"Collaborative" just means "working together". Collaborative truthseeking means multiple people working together in order to distinguish truth from error. They might do this for a number of reasons, such as these:

  • They have different skills that mesh together to let them do jointly what they could not do so well separately.
  • The particular truths they're after require a lot of effort to pin down, and having more people working on that can get it done quicker.
  • They know different things; perhaps the truth in question can be deduced by putting together multiple people's knowledge.
  • There are economies of scale; e.g., a group of people could get together and buy a bunch of books or a fast computer or a subscription to some information source, which is almost as useful to each of them as if they'd paid its full price on their own.
  • There are things they can do together that nudge their brains into working more effectively (e.g., maybe adversarial debate gets each person to dig deeper for arguments in a particular direction than they would have done without the impetus to compete and win).

There is a sense in which collaborative truth-seeking is built out of individual truth-seeking. It just happens that sometimes the most effective way for an individual to find what's true in a particular area involves working together with other individuals who also want to do that.

Collaborative truth-seeking may involve activities that individual truth-seeking (at least if that's interpreted rather strictly) doesn't because they fundamentally require multiple people, such as adversarial debate or double-cruxing.

Being "collaborative" isn't a thing that in itself brings benefits. It's a name for a variety of things people do that bring benefits. Speech-induced state changes don't result in better predictions because they're "collaborative"; engaging in the sort of speech whose induced state changes seem likely to result in better predictions is collaboration.

And yes, there are circumstances in which collaboration could be counterproductive. E.g., it might be easier to fall into groupthink. Sufficiently smart collaboration might be able to avoid this by explicitly pushing the participants to explore more diverse positions, but empirically it doesn't look as if that usually happens.

Related: collaborative money-seeking, where people join together to form a "company" or "business" that pools their work in order to produce goods or services that they can sell for profit, more effectively than they could if not working together. Collaborative sex-seeking, where people join together to form a "marriage" or "relationship" or "orgy" from which they can derive more pleasure than they could individually. Collaborative good-doing, where people join together to form a "charity" which helps other people more effectively than the individuals could do it on their own. Etc.

(Of course businesses, marriages, charities, etc., may have other purposes besides the ones listed above, and often do; so might groups of people getting together to seek the truth.)

There are two cultures in this particular trade-off: collaborative and adversarial.

I pitch collaborative as, "let's work together to find the answer (truth)" and I pitch adversarial as, "let's work against each other to find the answer (truth)".

Internally the stance is different. For collaborative, it might look something like, "I need to consider the other argument and then offer my alternative view". For adversarial, it might look something like, "I need to advocate harder for my view because I'm right". (not quite a balanced description)

Collaborative: "I don't know if that's true, what about x" Adversarial "you're wrong because of x".

Culturally, 99% of either is fine as long as all parties agree on the culture and act accordingly. The two cultures overlap at least partially.

Bad collaboration is not being willing to question the other's position; bad adversarialism is not being willing to question one's own position, and blindly advocating.

I see adversarial culture as going downhill in conversation quality faster, because it's harder to keep a healthy separation between "you are wrong" and "and you should feel bad (or dumb) about it", or "only an idiot would have an idea like that".

In a collaborative process, the other person is not an idiot, because there's an assumption that we are working together. If an adversarial process cuts to the depth of beliefs about our interlocutor, then from my perspective it gets un-pretty very quickly. Although skilled scientists use both all the time, and keep a clean separation between the person and the idea.

In an adversarial environment, I've known of some brains to take the feedback, "you are wrong because x" and translate it to, "I am bad, or I should give up, or I failed" and not "I should advocate for my idea better".

At the end of an adversarial argument is a very strong flip, Popperian style: "I guess I am wrong, so I take your side."

The end of a collaborative process is when I find myself taking sides; up until that point, it's not always clear what my position is. And even at the end of a collaborative process I might be internally resting on the best outcome of the collaboration so far, while tomorrow that might change.

I see the possibility, at each step of a collaboration, of comfortably saying, "thank you for adding something here". I find that harder, with more friction, in adversarial cultures.

I advocate for collaborative over adversarial culture because of the bleed-through from epistemics to inherent interpersonal beliefs. Humans are not perfect arguers, or it would not matter so much. Because we are playing with brains, mixing the territory of beliefs with interpersonal relationships, I prefer collaborative to adversarial, but I could see a counterargument that emphasised the value of the opposite position.

I can also see that it doesn't matter which culture one is in, so long as there is clarity around it being one and not the other.

Collaborative: "I don't know if that's true, what about x" Adversarial "you're wrong because of x".

Culturally 99% of either is fine as long as all parties agree on the culture and act like it.

Okay, but those mean different things. "I don't know if that's true, what about x" is expressing uncertainty about one's interlocutor's claim, and entreating them to consider x as an alternative. "You're wrong because of x" is a denial of one's interlocutor's claim for a specific reason.

I find myself needing to say both of these things, but in different situations, each of which probably occurs more than 1% of the time. This would seem to contradict the claim that 99% of either is fine!

A culture that expects me to refrain from saying "You're wrong because of x" even if someone is in fact wrong because of x (because telling the truth about this wouldn't be "collaborative") is trying to decrease the expressive power of language and is unworthy of the "rationalist" brand name.

I advocate for collaboration over adversarial culture because of the bleed through from epistemics to inherent interpersonal beliefs.

I advocate for a culture that discourages bleed-through from epistemics to inherent

…
This is largely tangential to your point (with which I agree), but I think it's worth pointing out that if Kevin really manages to be wrong about everything, you'd be able to get the right answer just by taking his conclusions and inverting them--meaning whatever cognitive processes he's using to get the wrong answer 100% of the time must actually be quite intelligent. [https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence]

if Kevin really manages to be wrong about everything, you'd be able to get the right answer just by taking his conclusions and inverting them

That only works for true-or-false questions. In larger answer spaces, he'd need to be wrong in some specific way such that there exists some simple algorithm (the analogue of "inverting") to compute the right answers from those wrong ones.
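The binary case can be made concrete with a toy sketch (the questions and the `kevin` name are made up for illustration, following the "Kevin" of the parent comment):

```python
# Hypothetical sketch: a forecaster who is wrong about every yes/no
# question can be mechanically inverted into a perfect forecaster.
ground_truth = {
    "Will it rain tomorrow?": True,
    "Will the bridge hold?": False,
    "Is the market efficient?": True,
}

def kevin(question):
    """Wrong about everything: always returns the negation of the truth."""
    return not ground_truth[question]

def inverted_kevin(question):
    """The simple 'inverting' algorithm: negate Kevin's answer."""
    return not kevin(question)

# Inverting a perfectly anti-correlated binary forecaster recovers the truth.
assert all(inverted_kevin(q) == ground_truth[q] for q in ground_truth)

# With a larger answer space, mere wrongness is not enough: if Kevin
# wrongly answers 3 to a question whose answer lies in range(10), that
# only eliminates one of ten candidates; no fixed inversion rule exists
# unless Kevin is wrong in some structured, predictable way.
```

In other words, a perfectly anti-reliable binary source carries exactly as much information as a perfectly reliable one; outside the binary case, "wrong" merely rules out one answer among many.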

If multiple parties engage in adversarial interactions (e.g., debate, criminal trial, ...) with the shared goal of arriving at the truth, then as far as I'm concerned that's still an instance of collaborative truth-seeking.

On the other hand, if at least one party is aiming to win rather than to arrive at the truth, then I don't think they're engaging in truth-seeking at all. (Though maybe it might sometimes be effective to have a bunch of adversaries all just trying to win, and then some other people, who had better be extremely smart and …

The reason this question comes up in the first place is because there are multiple conversation and debate styles that have different properties, and you need some kind of name to distinguish them. Naming things is hard, and I'm not attached to any particular name.

The thing I currently call "Adversarial Collaboration" is where two people are actively working together, in a process that is adversarial, but where they have some kind of shared good faith that if each of them represents their respective viewpoint well, the truth will emerge.

A different thing, which I'd currently call "Adversarial Truthseeking", is like the first one, but where there's not as much of a shared framework of whether and how the process is supposed to produce net truth. Two people meet in the wild, think each other are wrong, and argue.

What I currently call "Collaborative Truthseeking" typically makes sense when two people are building a product together on a team. It's not very useful to say "you're wrong because X", because the goal is not to prove ideas wrong, it's to build a product. "You're wrong because X, but Y might work instead" is more useful, because it actually moves you closer to a working model. It can also do a fairly complex thing of reaffirming trust, such that people remain truthseeking rather than trying to win.

And yes, each of these can be "collaborative" in some sense, but you need some kind of word for the difference. (There are also things where you're doing something that looks collaborative but isn't truthseeking, and something that looks adversarial but isn't truthseeking.) And each of those tends to involve fairly different mental states, which facilitate different mental motions. Adversarial truthseeking seems most likely (to me) to result in people treating arguments as soldiers and other political failure modes. Abram's hierarchy of conversational styles [https://www.lesswrong.com/posts/WB49uKgMkQRbKaHme/combat-vs-nurture-and-meta-contrarianism] is a …

What I currently call “Collaborative Truthseeking” typically makes sense when two people are building a product together on a team. It’s not very useful to say “you’re wrong because X”, because the goal is not to prove ideas wrong, it’s to build a product. “You’re wrong because X, but Y might work instead” is more useful, because it actually moves you closer to a working model. It can also do a fairly complex thing of reaffirming trust, such that people remain truthseeking rather than trying to win.

What if we’re building a product together, and I think you’re wrong about something, but I don’t know what might work instead? What should I say to you?

(See, e.g., this exchange, and pretend that cousin_it and I were members of a project team, building some sort of web app or forum software together.)

There are a couple of significant aspects of that exchange that make it look more collaborative than adversarial to me. Copying the text here for reference:

The first sentence could have been worded "Allowing <object> is an invitation to XSS." This would have (to me) come across as a bit harsher. The "Isn't?" frame gives it more of a sense of "hey, you know this, right?". It signifies that the relationship is between two people who reasonably know what they're doing, whereas the other phrasing would have communicated an undertone of "you're wrong and should have known better and I know better than you." (How strong the undertone is depends on the existing relationship. In this case I think it would have been relatively weak.)

Moreover, the second sentence actually just fits the collaborative frame as I specified it: cousin_it specifically says "the product didn't need the features that required <object>", therefore there's no more work to be done. And meanwhile says "I enjoyed your post", which indicates that they generally like what you did. All of this helps reinforce "hey, we're on the same side, building a thing together."

(I do suspect you could find an example that doesn't meet these criteria but still is a reasonable workplace exchange. I don't think you *never* need to say "hey, you're wrong here", just that if you're saying it all the time without helping to solve the underlying problems, something is off about your team dynamics. Probably not going to have time to delve much further into this for now, though.)

Should this be its own post?

Yes, because I can only upvote it once if it remains an answer on this question. Also, because it'll be useful to refer to in future discussions.

This feels sort of on the edge of "is useful outside of the current discussion." It'd be fine to write up as its own post, but my current feeling is that it's accomplishing most of its value as an answer to this question. [This is just my opinion of what feels vaguely right as a user, not intended to be normative.]

I roughly endorse this description. (I specifically think the "99% of either is fine" is a significant overstatement, but I probably endorse the weaker claim of "both styles can generally work if people are trying to do the same thing")

I don't hear this phrase much, so I suspect it's heavily context-specific in its usage. If I were to use it at work, it'd probably be ironic, as a euphemism for "let me correct your thinking".

I can imagine it being used as a way to explicitly agree that the participants in a discussion are there to each change their minds, or to understand and improve their models, by comparing and exchanging beliefs with each other. Truth-seeking is the intent to change your beliefs, collaborative truth-seeking is the shared intent to change the group members' beliefs.


People coming together to work on a common goal can typically accomplish more than if they worked separately. This is such a familiar thing that I am unclear where your perplexity lies.

What conditions must obtain for an interaction between people to constitute “coming together to work on a common goal”? How commonly do said conditions obtain? Are they in effect in all, most, some, or none of the interactions between commenters on Less Wrong?

These are non-trivial questions.

What conditions must obtain for an interaction between people to constitute “coming together to work on a common goal”?

That people have a common goal, and that they come together to work on it. Ok, I'm being deliberately tautologous there, but these are ordinary English words that we all know the meanings of, put together in plain sentences. I am not seeing what is being asked by your question, or by Zack's. Examples of the phenomenon are everywhere (as are examples of its failure).

As for how to do real work as a group (an expression meaning the same as "coming together to work on a common goal"), and how much of it is going on at any particular place and time, these are non-trivial questions. They have received non-trivial quantities of answers. To consider just LW and the rationalsphere, see for example various criticisms of LessWrong as being no more than a place to idly hang out (a common purpose, but a rather trifling one compared with some people's desires for the place); MIRI; CFAR; FHI; rationalist houses; meetups; and so on. In another sphere, the book "Moral Mazes" (recently discussed here) illustrates some failures of collaboration.

I do not see how the OP gives any entry into these questions, but I look forward to seeing other people's responses to it.

[Off topic] Data point: the repeated "(respectively I/you)" at the beginning of the post made that paragraph several times harder to read for me than it otherwise would have been.

(Publication history note: lightly adapted from a 4 May 2017 Facebook status update. I pulled the text out of the JSON-blob I got from exporting my Facebook data, but I'm not sure how to navigate to the status update itself without the permalink or pressing the Page Down key too many times, so I don't remember whether I got any good answers from my Facebook friends at the time.)

Here's the link, found from my account by searching for "collaborative truthseeking". There is a "Posts from Anyone/You/Your Friends" radio control on the left of the search page, so it should probably work on your own posts as well.