Andrew Gelman has a post up today discussing a particularly illustrative instance of narrative fallacy involving the recent plagiarism discussion surrounding Karl Weick. I think there are also some interesting lessons in there about generalizing from fictional evidence.

In particular, Gelman says, "Setting aside [any] issues of plagiarism and rulebreaking, I argue that by hiding the source of the story and changing its form, Weick and his management-science audience are losing their ability to get anything out of it beyond empty confirmation."

I am wondering if anyone has explicitly looked into connections between generalizing from fictional evidence and confirmation bias. It sounds intuitively plausible that if you are going to manipulate fictional evidence for your purposes, you'll almost always come out believing the evidence has confirmed your existing beliefs. I would be highly interested in documented accounts where the opposite has happened and fictional evidence actually served as a correction factor.

For what it's worth, I personally enjoy a watered-down version of the moral that Weick attempts to extract from the story discussed in Gelman's post. My high school math teacher always used to tell us, "When you don't know what to do, do something." I think he said it because he was constantly pissed about questions left completely blank on his math exams, and wanted students to write down scribblings or ideas so he could at least give them some partial credit, but it has been more motivational than that for me.

12 comments

"When you don't know what to do, do something."

He'd be right at home in corporate management.

As a teaching assistant in a mathematics department, I consistently tell students NOT to write things down if they don't know anything, and the professors around me tell them the same.

If you are in the habit of writing things down with no idea whether they are true or false, relevant or irrelevant, you are (1) wasting the time of the people who are grading and (2) losing track of whether or not you actually know something (especially if you're rewarded for it).

It probably has much more to do with (1) than (2) though, just because grading 400 finals is exhausting enough without having to wade through a paragraph of meaningless scribbles every test.

My high school math teacher used to always say to us, "When you don't know what to do, do something." I think he said it because he was constantly pissed about questions left completely blank on his math exams, and wanted students to write down scribblings or ideas so he could at least give them some partial credit

I actually hate that practice of math teachers - encouraging students to "write something down" so they can give partial credit. Often this means that when students run into a particularly difficult problem, at some point they stop actually trying to solve the problem and start intentionally making mistakes so that they come up with some sort of answer - you know, they make shit up. This does not seem to be a skill that teachers should encourage students to develop. I don't even want to think about how many points I've gotten on exams for writing down things that I knew were patently false.

This may be the intuitive line of thinking, but in the course of life, action seems to be far more effective than inaction. There have been many times when I haven't done anything and have kicked myself afterward for not at least putting forth some effort vaguely aimed at the goal, because even that little bit would have been better than the alternative. It doesn't seem like bad pragmatic advice to suggest that people move to action rather than sit passively. We all know how one can "fake it till they make it," and while that does not build the most efficient system, it does give a person a chance to stay afloat where otherwise, if they did nothing, they would sink.

in the course of life, action seems to be far more effective than inaction

To stop this generalization from running too far: it depends on the situation, namely the probabilities, rewards, and risks. But assuming there is no danger, yes, even doing random things is an opportunity to learn; and we often do better than random.

About the teacher-vs-student situation, this can be solved by proper incentives. What about giving +1 point for a correct answer, -1 point for an incorrect answer, and 0 points for a blank? Then rational students will stop making up stuff that has less than a 50% probability of being right.
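Spelling out the arithmetic behind that threshold: under +1/−1/0 scoring, a guess that is correct with probability p has expected value

E[score] = p·(+1) + (1 − p)·(−1) = 2p − 1,

which is positive exactly when p > 1/2, so a rational student only writes down a guess they think is more likely right than wrong.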

To be clear, the problem isn't that the professors are encouraging action over inaction. It's that they are encouraging students to spend some of their limited amount of action points making things up rather than actually making progress toward solving the problem.

There may be some severe overlap between confirmation bias and generalizing from fictional evidence, but then again, the entire point of an anecdote, real or fake, is to take something you've already accepted as true and demonstrate it through a more accessible lens. I don't mean to say that the studies people run in addition to anecdotes have no weight, but, for the most part, when assessing a situation or a point, we tend to self-select the anecdotes most useful to whatever we originally believe, and those tend to stick with us more often than the exact studies which carry actual weight in demonstrating the argument.

[anonymous]

I agree that often the point of an anecdote is to reinforce something you've taken to be true, from the teller's perspective. But from the listener's perspective, hearing an anecdote is often viewed as a way of exposing yourself to valid incoming evidence.

I, along with thousands of others, willfully embrace this bad mode of reasoning often when I read Yelp reviews, for example. I think, "I wonder if Restaurant X is good... hmm... let's see what contrived, one-off experiences that others found noteworthy enough to report..."

Of course, elements of Yelp can be very helpful, and to the extent that I am careful to apply filters, look at statistically common reviews, account for selection bias, and so forth, it's not that dangerous to just generalize from Yelp reviews. But just think of all that stuff I said which I need to do to ensure careful interpretation of Yelp reviews! And Yelp reviews can almost always be taken as true (or a 'true perspective' at least). Imagine how much harder that problem becomes when reading fictional sources of input.

As an extreme example, I have an anecdote (har har) from my childhood about anecdotal reasoning. My dad was a corpsman in the Marines and often overestimated his own medical prowess because of his experience. Once I had to get stitches very close to the corner of my eye (from a nasty scrape during a basketball game). My dad thought the prices for "just getting stitches" were outrageous. He sought out some anecdotal opinions about the doctor, and others had plenty of one-off stories about why they didn't like this particular doctor.

So my dad (very incorrectly) reasoned that it was better for him to take out my stitches at home. Luckily, I wasn't injured, but my mom and dad sure had a pretty bad fight about it. Obviously, he was suffering from more severe biases than just generalizing from fictional evidence, but the stories he used to justify his preferred action were basically just embellished tales of doctor dislike. Presumably they were mostly fiction and the doctor was perfectly skilled (perhaps he just didn't have a good bedside manner or something).

Anyway, my point is that this stuff comes up in different ways, and often. The teller is often motivated to believe their own conclusion. But listeners may seek out anecdotes for lots of other reasons. In the small town where I'm from, a drop of anecdote is worth more than a gallon of higher quality evidence, for sure.

Devil's advocate here: fictional 'evidence' can play the same role as a hypothetical; it creates a situation in which you can apply your intuition. If I encounter a story (e.g., an anecdote), it gets weighted to the extent that the story makes sense. If it doesn't make sense, it doesn't get weighted because I'll assume unknown factors or assign a low probability that it can be generalized. It doesn't matter so much, in either case, if the story is true or not.

Claiming that a story is true when it isn't is requesting more serious consideration of the plausibility of the scenario than it deserves, but if the story is possible, it might as well have happened, since everything can happen once.

For example, with respect to the map anecdote, a person can use their own critical faculty (and experience) to decide the role that luck would have had in the survival of the soldiers. The anecdote certainly expressed the idea that impressions and attitude matter, but I also know that a map of the Starbucks locations in Nashville won't be so helpful in finding the Starbucks in Memphis. (...Except that there often will be one off any interstate exit near the downtown. So I might be lucky if I'm on the interstate and both cities have "Main" streets with a Starbucks. And it helps in any case to drive around.)

And there's another example, because I can't say I've actually ever seen a Starbucks on a street called Main Street, but the entire scenario seems plausible and illustrates the point.

[anonymous]

I'm not sure I fully agree. Sure, if I can simulate draws from my posterior beliefs about reality, then I can reason from what those draws tell me. Scientists do this all the time when they simulate from a posterior distribution, and we hardly call this "fictional evidence" although, in a weak sense, it is. If that is all you are claiming, then I agree with you.
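As a minimal sketch of what I mean by simulating from a posterior (the Beta-Bernoulli setup and the numbers here are purely illustrative, not anything from Gelman's post):

```python
# A minimal sketch, assuming a Beta-Bernoulli model with made-up data:
# 7 successes in 10 trials and a uniform Beta(1, 1) prior.
import numpy as np

rng = np.random.default_rng(0)
successes, trials = 7, 10

# The posterior over the unknown success rate is Beta(1 + successes, 1 + failures);
# each draw is a candidate "story" that is consistent with the observed data.
draws = rng.beta(1 + successes, 1 + (trials - successes), size=10_000)

# Reasoning from these draws is constrained by the model and the data in a way
# that a freely invented narrative is not.
print(draws.mean())                        # posterior mean
print(np.quantile(draws, [0.025, 0.975]))  # 95% credible interval
```

The point is only that each simulated "narrative" is tethered to an explicit model, which is exactly what invented anecdotes lack.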

But when we create narratives to account for the evidence we see, we're almost never doing so by strictly drawing from a well-confirmed posterior. We're almost always doing whatever is simplest and whatever gratifies our immediate urges for availability and confirmation. In this sense, how can you really trust the narratives you generate in fiction? Sure, they might seem plausible to you, but how do you know? Have you really gone and made a probability calculation describing the whole chain of propositions necessary for your fictional narrative to be true? Almost surely not.

Therein lies great danger when you say something like: "... but if the story is possible, it might as well have happened, since everything can happen once." I suggest that your Starbucks/Main Street example is a bad one, since these are rather specific details over which a given person's daily experience is likely to produce an accurate posterior distribution. Most instances of narrative fallacy are not this simple, and it would be a little disingenuous to claim that examples like that somehow lend validity to the entire practice of generalizing from fictional evidence.

More to the point, you should consider the LW post The Logical Fallacy of Generalizing from Fictional Evidence.

And in particular, in re-reading that, I noticed that Eliezer had hit upon Andrew Gelman's point as well:

Yet in my estimation, the most damaging aspect of using other authors' imaginations is that it stops people from using their own. As Robert Pirsig said:

"She was blocked because she was trying to repeat, in her writing, things she had already heard, just as on the first day he had tried to repeat things he had already decided to say. She couldn't think of anything to write about Bozeman because she couldn't recall anything she had heard worth repeating. She was strangely unaware that she could look and see freshly for herself, as she wrote, without primary regard for what had been said before."

Remembered fictions rush in and do your thinking for you; they substitute for seeing—the deadliest convenience of all.

I suggest that your Starbucks/Main Street example is a bad one, since these are rather specific details over which a given person's daily experience is likely to produce an accurate posterior distribution.

There's a confusion regarding the example, due to my writing, because I meant to argue that the map of Nashville would not be useful for navigating Memphis. My thesis (however buried) was that a person can use anecdotes (fabricated or not) to evaluate how compelling an idea is. By analogy with the locations of Starbucks in different cities, I don't buy the idea that faith in a map is more important than the information content of the map, even if it somehow played a role in lost soldiers navigating their way out of the mountains.

I nearly always counter-weight my thoughts with counter-arguments, which is the way my brain organizes information but which makes my writing difficult to follow; I'll work on that. In my original comment above, I spent some time on the idea that, to some extent, the information of a map is relevant in a distinct but similar context; in my analogy, for example, cities have spatial patterns in common (and mountains will too). But that was just a distracting counterpoint...

So in the end I think we agree mostly. My thesis was that a person needs to be critical of the relevance of anecdotes.

Where we might disagree is in the significance of the size of the domain in real life where anecdotes are the best means we have of organizing, extracting and relaying information. For example,

a probability calculation describing the whole chain of propositions necessary for a fictional narrative to be true

is going to be more or less useless in the cases where we are most dependent on narratives. Narratives help us integrate thinking over a non-linear network of ideas developed over a lifetime of experience. If estimating probabilities over a linear chain of propositions is feasible, then it's a different kind of problem, one more suited to analytic methods.

Back to the object level, what was the problem/idea the authors were trying to express with their story about the soldiers? That "perspective and attitude" matters (more than? sometimes just as much as? can compensate for lack of?) real knowledge about the territory. It's a pretty amorphous, fuzzy idea to begin with. I consider it a success that they were able to capture the idea at all, but I wouldn't consider it worth actually quantifying.

[anonymous]

Regarding the last quote from your math teacher: I had heard a variant that goes something like, "It's more useful to be wrong right now than to be correct a few weeks later." The idea being that confronting whatever incorrect conclusions you have directly, as soon as possible, is better in terms of improvement if you can rapidly iterate from that failure.

Obviously, this doesn't work well in many situations (anything with a high cost of failure), but it is a problem-solving method I've not heard promoted on LessWrong that is good to have in your toolbox.

[This comment is no longer endorsed by its author]