Say you have a belief. Person A agrees with you. Person B disagrees with you. They both seem pretty smart. You should probably adjust your probabilities in light of their beliefs.

You reason that A is more intelligent, because he agrees with you, and B is less intelligent, because he doesn't. This adjusts your probabilities toward being more certain that you're right.

This is something that happens in real life, obviously.

When assessing someone's reliability, do you ignore the issue you seek knowledge about? How do you deal with this?


A partial way around this is to not double-count the basic evidence - if a person's belief is based on some evidence, count the evidence, not the belief.

If person A agrees with you but has exactly the same evidence as you, this should only slightly change your beliefs, reducing your estimated chance of making a mistake or being insane; it doesn't actually say anything new about the evidence. Remember, ideal logicians count each piece of evidence only once, even if it comes from someone else's mouth. If person B disagrees with you, is rational, and has a different set of evidence, you should change your opinion to reflect the new evidence.
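
A minimal numerical sketch of the "count each piece of evidence only once" point (the prior, the 3:1 likelihood ratio, and the code are my own illustration, not anything from the comment): if A's agreement rests on exactly the evidence you already have, folding it in a second time overstates your confidence.

```python
# Illustrative only: prior odds 1:1 on hypothesis H, one piece of evidence E
# with likelihood ratio 3:1 in favour of H. Person A saw the same E and agrees.

def posterior(prior_odds, likelihood_ratios):
    """Multiply prior odds by each likelihood ratio; return P(H)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

print(posterior(1.0, [3.0]))        # 0.75 (E counted once, as it should be)
print(posterior(1.0, [3.0, 3.0]))   # 0.9  (E counted again via A's agreement)
```

The jump from 0.75 to 0.9 reflects nothing new about the world; it is purely the double-counting the comment warns about.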

Also, people who disagree with you about things you don't have much evidence about have a reasonably good chance of being better informed than you on the subject, or at least having different evidence about the subject.

Moreover, the correct answer based on all the evidence may not even look anything like the "average" of the beliefs. In the trivial case, imagine two people see a coin and both observe different flipping events where a handful of heads come up. Both update in the direction of a biased coin, but aren't certain. After sharing data each should be more certain than either was alone that the coin is biased towards heads.
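
A toy version of the coin example, with invented numbers (the two specific hypotheses and the 50/50 prior are my assumptions, not the commenter's): pooling the two observers' flips yields more confidence in a heads bias than either sample supports alone.

```python
# Illustrative numbers only: "fair" coin (P(heads) = 0.5) vs. "heads-biased"
# coin (P(heads) = 0.75), with a 50/50 prior over the two hypotheses.
# Each observer independently sees 4 heads and 1 tail.

def p_biased(heads, tails, p_bias=0.75, prior=0.5):
    """Posterior probability that the coin is heads-biased, given the counts."""
    like_fair = 0.5 ** (heads + tails)
    like_biased = p_bias ** heads * (1 - p_bias) ** tails
    return prior * like_biased / (prior * like_biased + (1 - prior) * like_fair)

print(f"{p_biased(4, 1):.2f}")   # ~0.72  one observer's data alone
print(f"{p_biased(8, 2):.2f}")   # ~0.86  both observers' data pooled
```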

In theory, seeking out people whose beliefs differ from your own and asking them about their knowledge and experiences should leave you better informed than doing the same with people you already agree with about most things. They don't even have to be "rationalists" for it to benefit you; they only need to be honest about their experiences...

On the other hand, they might need to be rationalists to reciprocally benefit from talking with you.

One doesn't necessarily trust everyone the same amount for all issues. I will trust Terence Tao a lot more about math than I will Richard Dawkins. But, I'd trust Dawkins a lot more about biology than I would trust Tao.

You reason that A is more intelligent, because he agrees with you, and B is less intelligent, because he doesn't. This adjusts your probabilities toward being more certain that you're right.

This is something that happens in real life, obviously.

Yes, but it probably shouldn't. Unless someone is a subject matter expert, their personal intelligence shouldn't be that relevant. Moreover, as described above, this sounds dangerously like double-counting evidence, in much the same way that people repeat cached thoughts in response to evidence rather than updating.


If two people tell me conflicting stories about some issue I know little about, like Finnish nationalism or something, but one of them is a hardcore Scientologist, I'm more likely to believe the other. Quoting from http://lesswrong.com/lw/3jq/some_rationality_tweets/, "If one person makes multiple claims, this introduces a positive correlation between the claims."

So how do I deal with this?

I would like to emphasize that there is a difference between intelligence and reliability.

In that light, I would suggest implementing some kind of system to try to account for A and B's reliability, and in what subjects one is more reliable than the other. This can be intractable if you don't have a lot of history to go off of, so maybe you can toss out questions to A and B on other issues to try and determine where their opinions cluster. As long as the Scientologist was talking about something outside the cluster of crazy (if that is possible), you can probably take them seriously.
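
As a rough illustration of what "some kind of system" could look like, here is a toy per-topic reliability ledger (the class, the Beta-prior bookkeeping, and the example names are all hypothetical, sketched under the assumptions above):

```python
from collections import defaultdict

class ReliabilityLedger:
    """Toy per-(person, topic) track record with a uniform Beta(1, 1) prior."""

    def __init__(self):
        # (person, topic) -> [claims that checked out + 1, claims that didn't + 1]
        self.record = defaultdict(lambda: [1, 1])

    def observe(self, person, topic, was_right):
        hits, misses = self.record[(person, topic)]
        self.record[(person, topic)] = [hits + was_right, misses + (not was_right)]

    def reliability(self, person, topic):
        hits, misses = self.record[(person, topic)]
        return hits / (hits + misses)  # posterior mean of the Beta distribution

ledger = ReliabilityLedger()
ledger.observe("B", "programming", True)
ledger.observe("B", "rationality", False)
print(round(ledger.reliability("B", "programming"), 2))  # 0.67, trust more here
print(round(ledger.reliability("B", "rationality"), 2))  # 0.33, trust less here
```

With enough history the estimates separate by subject, which is the point: reliability is tracked per cluster of topics rather than as one number per person.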

I was actually recently thinking of a series of conversations I had with Person A and Person B regarding the nature of Eliezer. I shared an article from a Sequence (not sure which one) with A and B, and Person B told me that Eliezer's writing struck him as someone who was pretending to be smart. I asked him to defend the claim, and he was very vague (this could be a good test, too: if B can defend their beliefs better than A, maybe you should treat their arguments with more respect). I spoke with Person A about it, and he pointed out several reasons why Person B wasn't terribly reliable in situations involving disagreement, rationality, intellectual honesty, curiosity, etc. In this case, I already had a long history of analyzing A, but not much to go off of with B. Combined with A having excellent examples and arguments, it was pretty obvious that B was just wrong. So instead of judging Person B as less intelligent, I just updated my assessment of his reliability in issues relating to skepticism and rationality.

Of course, in practice, this isn't neat or easy. It was painful to entertain the seemingly zero-sum conflict between Eliezer's and Person B's reliability. Especially when I was independently reading about cults and death spirals.

I was actually recently thinking of a series of conversations I had with Person A and Person B regarding the nature of Eliezer.

That does it. Eliezer is now a Greater Deity, and his followers argue about the Nature of God and exegesis of the sacred texts - I foresee a schism!

:-)


But reliability in different areas correlates. Someone who is wrong about one thing is more likely to be wrong about other things, relative to someone who is right, in my experience.

(However, I bet there'd be a few cases where the people who believed the truth about something were less correct than average about other things, like conspiracy theorists, who are right occasionally.)

But yeah. That article is really good; it pretty much discusses just what I'm talking about.

I agree. I was just trying to help come up with a way to find where that correlation breaks, or is countered by other external factors. A trivial example: a Mormon dentist who is extremely irrational about religion, and yet exceedingly rational about tooth decay.

I like to think of myself as a protoBorg: I seek the strengths of other people, and add them to my own. Except since I'm much more bounded than they are, I often just make a note that if I am interested in increasing my uniqueness in the field of X, then I could start by talking to Person B.

"When assessing someone's reliability, do you ignore the issue you seek >knowledge about?"

So getting back to that, I would say that instead of ignoring the issue, I try to see how the person fits in a network of subjects, and mark their reliability up or down in issues where they most closely touch the issue at hand. I still think of Person B as excellent with programming, history, current events, general knowledge. I just don't trust his self-awareness as much as I did before discussing the Greater Deity's nature with the heathen.

As has been said by others: ideally, to the extent that I consider X likely without reference to A, A's belief in X means I should consider A more credible, and to the extent that A is credible without reference to X, A's belief in X means I should consider X more likely.

You're right that in real life, humans skip the "without reference to" part and end up in loops, where we consider A more credible because A believes X, which we consider likely, and then we treat A's belief in X as further reason to consider X likely. In other words, we double-count evidence, as various folks have said. Welcome to being humans rather than ideal reasoners.
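
One way to make the loop concrete is a toy model in which A tells the truth with probability `cred` and X has prior probability `p_x` (the model and all numbers are my own simplification, not anything from the thread): a single clean pass updates each quantity from the other's prior value, while feeding the updated numbers back into each other manufactures certainty from no new evidence.

```python
# Toy model, numbers invented: A asserts X. A tells the truth with
# probability `cred`; X has prior probability `p_x`.

def update_belief(p_x, cred):
    """P(X | A asserts X), using A's credibility judged without reference to X."""
    return p_x * cred / (p_x * cred + (1 - p_x) * (1 - cred))

def update_credibility(cred, p_x):
    """P(A is credible | A asserts X), using a prior on X held without reference to A."""
    return cred * p_x / (cred * p_x + (1 - cred) * (1 - p_x))

p_x, cred = 0.6, 0.7
print(update_belief(p_x, cred), update_credibility(cred, p_x))  # one clean pass: ~0.78 each

# The loop: feed each updated number back into the other, with no new evidence.
for _ in range(5):
    p_x, cred = update_belief(p_x, cred), update_credibility(cred, p_x)
print(p_x, cred)  # both have raced past 0.9999: certainty from nowhere
```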

When it's possible to ignore other people's testimony and credibility and just look at X without reference to anyone else's opinion about it, that can be a helpful way of digging out of this particular tangle. When that isn't possible, I try to ask myself why I trust a particular source, when I came to consider them credible, etc. ... and if I don't have a good answer, I try to ask myself whether I'd believe different things if I didn't consider them credible.

I say "try" because I often fail at this. There are things I believe because someone told them to me once and I trusted them and I've long since forgotten the derivation.

We do what we can do, right?

Of course, there's a million other sources of noise in our judgments, many of which drown this one out.