https://carcinisation.com/2020/01/27/ignorance-a-skilled-practice/

Excerpt:

The Global Knowledge Game

To illustrate that global knowledge is a game, consider a story about Alexander Luria, who studied illiterate Russian peasants and their semi-literate children. Consider especially this version of the story, prepared in the 1970s to provide morale and context to reading teachers (John Guthrie, 1977). Essentially, Luria discovered that the illiterate, unschooled peasants were highly resistant to syllogisms and word games. The adult peasants would only answer questions based on their own knowledge, and stubbornly refused to make deductions from given premises. “All bears are white where it is snowy. It is snowy in Nova Zembla. What color are the bears in Nova Zembla?” “I don’t know, I have never been to Nova Zembla.” Children with only a year or two of education, however, were easily able to engage in such abstract reasoning. They quickly answered the syllogisms and drew inferences from hypothetical facts outside of their own observation.

In this story, I argue, Luria’s peasants are indexical geniuses, who refuse to engage in unproven syllogistic games. They are not interested in a global, universal game. Their children, however, are easily introduced to this game by the process of schooling and literacy.

Interestingly, a more recent group of researchers claim that illiterate people do fine at making inferences against experience, if the context is given as a distant planet (Dias et al., 2005). I am not offering this as true, but as a story about how expecting people to operate in the “global knowledge game” might portray them as stupider than they really are, if they simply choose not to play in that game. This is to segue into the next hermeneutic pass, in which we are told that the hype surrounding “cognitive bias” is really a sort of science magic trick, an illusion designed to portray indexical geniuses, like Luria’s peasants and ourselves, as global fools.

The paper is “The Bias Bias in Behavioral Economics,” by Gerd Gigerenzer (2018). If you, like me, have ever been fascinated by cognitive bias research, this is a brutal paper to come to terms with. Gigerenzer examines several purported biases in what I would call analytic reasoning or the global knowledge game, and finds explanations for these purported biases in the indexical reality of humans.

For instance, some apparent “biases” that people display about probability are not actually errors. For the small (and in most cases, merely finite) samples that reality has to offer, people’s “biased” intuitions are more accurate than a “globally correct” answer would be (that is, the correct answer if the sample were infinite). In tossing fair coins, people tend to intuit that irregular strings are more probable than more regular strings (e.g. that HHHT is more probable than HHHH in a sequence of coin flips). This simple intuition can’t be correct, though, because given infinite coin flips, each string is as likely as any other, and if the sequence is only four flips, after HHH, each outcome is equally likely. But for small, finite numbers of flips greater than the string length, Gigerenzer argues, it is the human intuition that is correct, not the naive global solution: HHHT does take less time to show up than HHHH in repeated simulations, and is more commonly encountered in small samples. To drive home his point, he offers a bet:

If you are still not convinced, try this bet (Hahn and Warren, 2010), which I will call the law-of-small-numbers bet:

You flip a fair coin 20 times. If this sequence contains at least one HHHH, I pay you $100. If it contains at least one HHHT, you pay me $100. If it contains neither, nobody wins.
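To make both claims concrete, here is a minimal Monte Carlo sketch (an illustration added here, not code from Gigerenzer or from Hahn and Warren; the helper names are arbitrary). It estimates the mean number of flips until each pattern first appears, and how often each pattern shows up somewhere in a 20-flip sequence, which is what the bet turns on.

    import random

    rng = random.Random(0)
    TRIALS = 100_000

    def flips_until(pattern):
        """Flip a fair coin until `pattern` first appears as a run; return the flip count."""
        window, n = "", 0
        while window != pattern:
            window = (window + rng.choice("HT"))[-len(pattern):]
            n += 1
        return n

    # Claim 1: HHHT tends to show up sooner than HHHH.
    for pattern in ("HHHT", "HHHH"):
        mean_wait = sum(flips_until(pattern) for _ in range(TRIALS)) / TRIALS
        print(f"mean flips until {pattern}: {mean_wait:.1f}")
    # The exact expectations are 16 flips for HHHT and 30 for HHHH.

    # Claim 2 (the bet): how often does each pattern appear somewhere in 20 flips?
    contains = {"HHHH": 0, "HHHT": 0}
    for _ in range(TRIALS):
        seq = "".join(rng.choice("HT") for _ in range(20))
        for pattern in contains:
            contains[pattern] += pattern in seq
    p_hhhh = contains["HHHH"] / TRIALS
    p_hhht = contains["HHHT"] / TRIALS
    print(f"P(HHHH in 20 flips) ~ {p_hhhh:.3f}")
    print(f"P(HHHT in 20 flips) ~ {p_hhht:.3f}")
    print(f"your expected winnings ~ ${100 * (p_hhhh - p_hhht):.2f}")
    # HHHT turns up in more of the sequences, so the bet favors the person offering it.

The two results are consistent with the naive answer that any particular length-4 string has probability 1/16: HHHH occurrences cluster together (HHHHH already contains two of them), so fewer 20-flip sequences contain at least one, even though the long-run frequencies are identical.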

More broadly, cognitive bias proponents find fault with their subjects for treating “logically equivalent” language statements as having different meanings, when context reveals that these “logically irrelevant” cues frequently do reveal rich meaning in practice. For instance, people react differently to the “same” information presented negatively vs. positively (10% likelihood of death vs. 90% likelihood of survival). Cognitive bias proponents frame this as an error, but Gigerenzer argues that when people make this “error,” they are making use of meaningful context that a “bias-free” robot would miss.

Comments:

Somewhat duplicating noggin-scratcher's comment.

HHHT does take less time to show up than HHHH in repeated simulations, and is more commonly encountered in small samples.

This is true in specific technical ways and false in specific technical ways. It's far from obvious to me that the true ways are more important than the false ways. Here are some ways we can cash this out, along with what I think are the results:

  • Continue flipping until we generate either sequence, then stop. Which did we most likely encounter? Equally likely.
  • Continue flipping until we generate a specific sequence. What's the expected stopping time? Lower for HHHT.
  • Generate a sample of size > 5. What's the expected number of (possibly overlapping) occurrences of each? Equal (see the sketch after this list).
  • Generate a sample of size > 5. How likely is each sequence to show up at least once? HHHT is more likely. This is the bet.
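A compact sketch of the first and third items in the list above (an added illustration, not the commenter's code; the variable names are arbitrary), reading "how many" as the expected number of overlapping occurrences:

    import random

    rng = random.Random(0)
    TRIALS = 100_000

    # Item 1: flip until the last four flips spell either pattern; tally the winner.
    race = {"HHHH": 0, "HHHT": 0}
    for _ in range(TRIALS):
        window = ""
        while window not in race:
            window = (window + rng.choice("HT"))[-4:]
        race[window] += 1
    print("first to appear:", {p: round(c / TRIALS, 3) for p, c in race.items()})
    # Roughly 50/50: both patterns require HHH first, and the next flip settles it.

    # Item 3: average number of (overlapping) occurrences in a 20-flip sample.
    totals = {"HHHH": 0.0, "HHHT": 0.0}
    for _ in range(TRIALS):
        seq = "".join(rng.choice("HT") for _ in range(20))
        for p in totals:
            totals[p] += sum(seq[i:i + 4] == p for i in range(17))
    print("mean occurrences:", {p: round(t / TRIALS, 3) for p, t in totals.items()})
    # Both come out near 17/16 ~ 1.06: each of the 17 window positions matches a
    # given four-flip pattern with probability 1/16, regardless of the pattern.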

Even accepting the claim as basically true, it's because the end overlaps the beginning, not because of regularity. This is a type of regularity, I admit, but I don't believe it's well correlated with what people will perceive as statistical regularity. I think you get the same results with HTH versus HTT (replacing HHHH and HHHT respectively).

I would expect a less pronounced version of the same effect. Both get to HT together, but if you're looking for HTH and get HTT, you're starting over and hoping for your first H on the next flip, whereas if you're looking for HTT and get HTH, you've got a small head start, because that last H can serve as the first H going forward: HT[HTT].
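The same kind of simulation (again an added sketch, not from the paper or the comments) can check the HTH versus HTT prediction:

    import random

    rng = random.Random(0)
    TRIALS = 100_000

    def flips_until(pattern):
        """Flip a fair coin until `pattern` first appears; return the flip count."""
        window, n = "", 0
        while window != pattern:
            window = (window + rng.choice("HT"))[-len(pattern):]
            n += 1
        return n

    for pattern in ("HTT", "HTH"):
        mean_wait = sum(flips_until(pattern) for _ in range(TRIALS)) / TRIALS
        appears = sum(pattern in "".join(rng.choice("HT") for _ in range(20))
                      for _ in range(TRIALS))
        print(f"{pattern}: mean wait {mean_wait:.2f}, "
              f"P(in 20 flips) ~ {appears / TRIALS:.3f}")
    # Exact mean waiting times: 8 flips for HTT, 10 for HTH. HTT is also somewhat
    # more likely to show up at least once in 20 flips -- the same direction as
    # HHHT vs HHHH, but a smaller gap, since HTH overlaps itself less than HHHH does.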

In this story, I argue, Luria’s peasants are indexical geniuses, who refuse to engage in unproven syllogistic games. They are not interested in a global, universal game. Their children, however, are easily introduced to this game by the process of schooling and literacy.

I've noticed a weaker version of this effect when interacting with people who are especially not like me, for example in my zen practice. By "not like me" I mean not the sort of person who readily plays the "global, universal game", looking for abstract models to explain every situation and applying them in new ones. All these people still went to school, and all of them can play this game to some extent, but not to the extent I'm willing to (they didn't spend 10 years on voluntary higher education in mathematics and then work jobs where they are paid to create abstractions).

The differences are impressive. Here's just a very small sample of what I have in mind.

In the chant books for our zen center we have little marks showing where to do things like ring bells. Depending on what chants are being done that day and in what order, some of the bells change. For example, we always start with the same sequence of bells but then the transition from one chant to another can vary depending on what came before.

A few months back someone, not me, updated the chant book and thought to abstract out some of the details of this, marking in a separate section how those things work and putting in notes referencing that section. I saw it and thought "ah, finally, someone made this clearer by abstracting away the complicated details". Other people were confused, and the less like me they were the more confused they were. In the end we had to change it back.

Creating abstractions, while natural to me and some others, was extremely confusing to those on the other end of this spectrum who didn't know what to do when the details were not directly there for them to interact with. I expect this cognitive difference between people goes a long way to explaining some kinds of conflicts we see.

You flip a fair coin 20 times. If this sequence contains at least one HHHH, I pay you $100. If it contains at least one HHHT, you pay me $100. If it contains neither, nobody wins.

Nits could be picked about this working more because "occurrences of a given substring matched continuously within a longer string" is a different question from "odds of a given string", rather than because irregular strings are inherently more probable or because of any difference between finite and infinite strings.

Specifically the part where, for the HHHT player, if the string is at HHH, then either they get a successful match from a T on the next flip or the string stands at HHHH and they can still hope for H[HHHT] a mere one flip later (compared to the HHHH player having to start over from zero whenever a T comes up). Benefiting greatly there from the target string overlapping with itself as you slide a 4-wide frame along the larger sequence.

The imbalance thus created would presumably still appear if you were to count matches on a similar sliding basis along an infinite string. Or equally disappear in the finite case if you only look at discrete chunks of 4 flips at a time (and treat that 20 flip sequence as 5 independent nonoverlapping trials).

So the claim would have to be that the bias is adaptive because we're more likely to need to intuitively estimate odds about occurrences in continuous series rather than discrete chunks. Which isn't implausible, but is less intrinsically obvious than the idea that we'd more often encounter finite cases than infinite ones.
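A short sketch of the contrast described above (an added illustration; the helper names are arbitrary): scanning the 20 flips with a sliding 4-wide frame versus treating them as five separate 4-flip trials.

    import random

    rng = random.Random(0)
    TRIALS = 100_000

    def appears_sliding(seq, pattern):
        """Pattern found anywhere in the sequence (sliding 4-wide frame)."""
        return pattern in seq

    def appears_chunked(seq, pattern):
        """Pattern found in one of the non-overlapping 4-flip chunks."""
        return any(seq[i:i + 4] == pattern for i in range(0, len(seq), 4))

    hits = {(p, mode): 0 for p in ("HHHH", "HHHT") for mode in ("sliding", "chunked")}
    for _ in range(TRIALS):
        seq = "".join(rng.choice("HT") for _ in range(20))
        for p in ("HHHH", "HHHT"):
            hits[(p, "sliding")] += appears_sliding(seq, p)
            hits[(p, "chunked")] += appears_chunked(seq, p)

    for key in sorted(hits):
        print(key, round(hits[key] / TRIALS, 3))
    # Sliding frame: HHHT shows up at least once noticeably more often than HHHH.
    # Non-overlapping chunks: both land near 1 - (15/16)^5 ~ 0.276, i.e. the gap
    # vanishes when the 20 flips are treated as five independent 4-flip trials.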

Why are you calling this a nitpick? IMO it's a major problem with the post -- I was very unhappy that no mention was made of this obvious problem with the reasoning presented.

Why are you calling this a nitpick?

Because the central idea of the post isn't really about that specific probability puzzle, and can in theory stand alone to succeed or fail on other merits - regardless of whether that particular illustrative example is actually a good choice.

Possibly there are better examples in the full paper linked, but I couldn't comment on that either way because I've only read this excerpt/summary.

This reminded me of Cargo Cult Science by Richard Feynman, particularly this part:

For example, there have been many experiments running rats through all kinds of mazes, and so on—with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and, still the rats could tell.

He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

Now, from a scientific standpoint, that is an A‑Number‑1 experiment. That is the experiment that makes rat‑running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat‑running.

I looked into the subsequent history of this research. The subsequent experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic of Cargo Cult Science.


Ironically, both the actual study by Mr. Young and his identity seem to be a bit of a mystery. H/t Gwern.

That's hilarious, and a little sad!

Sorry Mr. Feynman, we couldn't replicate your memories.