Published 27 November 2006.

    A bias is a certain kind of obstacle to our goal of obtaining truth - its character as an "obstacle" stems from this goal of truth - but there are many obstacles that are not "biases".

    If we start right out by asking "What is bias?", we come at the question in the wrong order.  As the proverb goes, "There are forty kinds of lunacy but only one kind of common sense."  The truth is a narrow target, a small region of configuration space to hit.  "She loves me, she loves me not" may be a binary question, but E=mc^2 is a tiny dot in the space of all equations, like a winning lottery ticket in the space of all lottery tickets.  Error is not an exceptional condition; it is success which is a priori so improbable that it requires an explanation.

    We don't start out with a moral duty to "reduce bias", because biases are bad and evil and Just Not Done.  This is the sort of thinking someone might end up with if they acquired a deontological duty of "rationality" by social osmosis, which leads to people trying to execute techniques without appreciating the reason for them.  (Which is bad and evil and Just Not Done, according to Surely You're Joking, Mr. Feynman, which I read as a kid.)

    Rather, we want to get to the truth, for whatever reason, and we find various obstacles getting in the way of our goal.  These obstacles are not wholly dissimilar to each other - for example, there are obstacles that have to do with not having enough computing power available, or information being expensive.  It so happens that a large group of obstacles seem to have a certain character in common - to cluster in a region of obstacle-to-truth space - and this cluster has been labeled "biases".

    What is a bias?  Can we look at the empirical cluster and find a compact test for membership?  Perhaps we will find that we can't really give any explanation better than pointing to a few extensional examples, and hoping the listener understands.  If you are a scientist just beginning to investigate fire, it might be a lot wiser to point to a campfire and say "Fire is that orangey-bright hot stuff over there," rather than saying "I define fire as an alchemical transmutation of substances which releases phlogiston."  As I said in The Simple Truth, you should not ignore something just because you can't define it.  I can't quote the equations of General Relativity from memory, but nonetheless if I walk off a cliff, I'll fall.  And we can say the same of biases - they won't hit any less hard if it turns out we can't define compactly what a "bias" is.  So we might point to conjunction fallacies, to overconfidence, to the availability and representativeness heuristics, to base rate neglect, and say:  "Stuff like that."

    With all that said, we seem to label as "biases" those obstacles to truth which are produced, not by the cost of information, nor by limited computing power, but by the shape of our own mental machinery.  Perhaps the machinery is evolutionarily optimized to purposes that actively oppose epistemic accuracy; for example, the machinery to win arguments in adaptive political contexts.  Or the selection pressure ran skew to epistemic accuracy; for example, believing what others believe, to get along socially.  Or, in the classic heuristics-and-biases pattern, the machinery operates by an identifiable algorithm that does some useful work but also produces systematic errors: the availability heuristic is not itself a bias, but it gives rise to identifiable, compactly describable biases.  Our brains are doing something wrong, and after a lot of experimentation and/or heavy thinking, someone identifies the problem in a fashion that System 2 can comprehend; then we call it a "bias".  Even if we can do no better for knowing, it is still a failure that arises, in an identifiable fashion, from a particular kind of cognitive machinery - not from having too little machinery, but from the shape of the machinery itself.

    "Biases" are distinguished from errors that arise from cognitive content, such as adopted beliefs, or adopted moral duties.  These we call "mistakes", rather than "biases", and they are much easier to correct, once we've noticed them for ourselves.  (Though the source of the mistake, or the source of the source of the mistake, may ultimately be some bias.)

    "Biases" are distinguished from errors that arise from damage to an individual human brain, or from absorbed cultural mores; biases arise from machinery that is humanly universal.

    Plato wasn't "biased" because he was ignorant of General Relativity - he had no way to gather that information; his ignorance did not arise from the shape of his mental machinery.  But if Plato believed that philosophers would make better kings because he himself was a philosopher - and this belief, in turn, arose because of a universal adaptive political instinct for self-promotion, and not because Plato's daddy told him that everyone has a moral duty to promote their own profession to governorship, or because Plato sniffed too much glue as a kid - then that was a bias, whether Plato was ever warned of it or not.

    Biases may not be cheap to correct.  They may not even be correctable.  But when we look upon our own mental machinery and see a causal account of an identifiable class of errors, and when the problem seems to come from the evolved shape of the machinery, rather than from there being too little machinery or from bad specific content, then we call that a bias.

    Personally, I see our quest in terms of acquiring personal skills of rationality, in improving truthfinding technique.  The challenge is to attain the positive goal of truth, not to avoid the negative goal of failure.  Failurespace is wide: infinite errors in infinite variety.  It is difficult to describe so huge a space:  "What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world."  Success-space is narrower, and therefore more can be said about it.

    While I am not averse (as you can see) to discussing definitions, we should remember that this is not our primary goal.  We are here to pursue the great human quest for truth: for we have desperate need of the knowledge, and besides, we're curious.  To this end let us strive to overcome whatever obstacles lie in our way, whether we call them "biases" or not.

    Comments

    We seem to mostly agree about what we are about here, but it seems damn hard to very precisely define exactly what. I guess I'll focus on coming up with concrete examples of bias and concrete mechanisms for avoiding it, and set aside for now the difficult task of defining it.

    "it seems damn hard to very precisely define exactly what"

    Robin, I don't see why a definition offered in terms of the origin of a phenomenon ("the shape of our mental machinery") should be any less a definition (or any less precise) than one that directly describes the characteristics of the phenomenon. Why isn't the former sufficient?

    Pdf, I didn't mean to imply that Eliezer's approach was inferior to the approach I was taking, just that all the approaches run into problems when you try to become more precise.

    Is there a well-defined difference between the shape of one's mental machinery and its limited computing power?

    Oh, how curious. I've been reading on here a while, and I think I had previously misunderstood the adopted meaning of the word "bias"... using the term as it's socially used, that is to say, a prior reason for holding a certain belief over another due to convenience. A judge might be biased because one side is paying him; a jury member might be biased because their sister is the one on trial. Are these "mistakes"? Or do they fall under a certain type of cognitive bias that is similar among all humans? *ponder*

    I would call a judge who is favoring a side because they're paying him "biased", and not "mistaken" or any such thing. But it's not a cognitive bias. The word "bias" has legitimate meanings other than what EY is saying, so it would have been clearer if the article used the term "cognitive bias" at least at the outset.

    I would argue a corrupt judge only seems biased, since biased people, as I understand the term, are not aware of their underlying preferences. That also might be the common ground with a cognitive bias: you are never directly aware of its presence and can only infer it by analysis.

    Biases seem like they could be understood in terms of logical validity. Even if you reason solely from true premises, you could still adopt an invalid argument (i.e. a fallacy: a conclusion that does not actually follow from the premises, however true they are). I suggest the definition that biases are whatever cause people to adopt invalid arguments.

    I suggest the definition that biases are whatever cause people to adopt invalid arguments.

    False or incomplete/insufficient data can cause the adoption of invalid arguments.

    Contrast this with:

    The control group was told only the background information known to the city when it decided not to hire a bridge watcher. The experimental group was given this information, plus the fact that a flood had actually occurred. Instructions stated the city was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the control group concluded the flood was so unlikely that no precautions were necessary; 57% of the experimental group concluded the flood was so likely that failure to take precautions was legally negligent. A third experimental group was told the outcome and also explicitly instructed to avoid hindsight bias, which made no difference: 56% concluded the city was legally negligent.

    I.e., on average, it doesn't matter whether people try to avoid hindsight bias: knowledge of the prior outcome corresponds almost literally to the conclusion "the prior outcome should've been deemed very likely".

    To avoid it, you literally have to INSIST on NOT knowing what actually happened, if you aim to accurately represent the decision-making process as it originally unfolded.

    Or, if you do have the knowledge, you might end up having to force yourself to assign an extra 1 : 10 odds factor against the actual outcome (or worse) in order to compensate.
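    For concreteness, here is a minimal sketch of what that kind of correction might look like, in Python with made-up numbers; the 1 : 10 odds factor is just the figure suggested above, not a calibrated constant:

        # Sketch of an odds-based hindsight correction (assumed 1:10 factor, not calibrated).
        def debias_probability(p_with_hindsight, odds_factor=10.0):
            """Discount a probability judged with outcome knowledge by dividing
            its odds by odds_factor, i.e. shifting the odds against the known outcome."""
            odds = p_with_hindsight / (1.0 - p_with_hindsight)
            corrected_odds = odds / odds_factor
            return corrected_odds / (1.0 + corrected_odds)

        # With hindsight you judge the flood to have been "70% foreseeable";
        # after a 1:10 odds correction that drops to roughly 19%.
        print(debias_probability(0.70))  # ~0.19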

    This definition of bias seems problematic. If a putative bias is caused by absorbed cultural mores, then supposedly it is not a bias. But that causal chain can be tricky to track down; we go on thinking something is a 'bias' until we find the black swan culture where the bias doesn't exist, and then realize that the problem was not inherent in our mental machinery. But is that distinction even worth making, if we don't know what caused the bias?

    I suspect the distinction is worth making because even if we don't know what caused the bias, we can use the label of a bias "not inherent in our mental machinery" as a marker for future study of its cause.

    For example, I read in a contemporary undergraduate social psychology textbook that experimental results found that a common bias affected subjects from Western cultures more strongly than it affected subjects from more interdependent cultures such as China and Japan.

    [Obviously, my example is useless. I just don't have access to that book at the current moment. I will update this comment with more detail when I'm able.]

    The Simple Truth link should be http://yudkowsky.net/rational/the-simple-truth/

    Thanks, fixed!

    Typo: "and besides, were curious." ~ s/were/we're/.

    I wonder when a venerable old article reaches the "any remaining bugs become features" stage.

    There's still "things that arent true", instead of "things that aren't true", in the second paragraph.