My school has a weekly event on Thursdays where someone can give a 15-25 minute lecture about a topic of their choice during the lunch break. The standard attendance is about 20-30, aged between 14 and 18, and some teachers drop by if the topic is related to their subject. It's heavily interlinked with the philosophy department, in that topics are typically about religion or ethics, so the audience is generally more philosophically informed than average. A good percentage are theists or deists, and there's a very high chance that the subject will be more thoroughly discussed in the philosophy club the day after.

In a previous lecture a few months ago I tried to explain some standard biases, the map/territory distinction, Bayes, and generally attempted to compress the core Sequences into 25 minutes. Despite a lot of interest from the head of the philosophy department, it didn't go as well as I'd hoped for the rest of the audience. The problem was that I tried to close too many inferential gaps in too many areas in too short a timespan, so this time I thought I should take a single rationality idea and go into it in depth. The trouble is that I don't know which one to choose for maximum impact. I've already decided against cryonics because I don't feel confident that I know enough about it.

So what do you think I should talk about for maximum sanity-waterline-raising impact?


Maybe 'short inferential distances' would be a good place for you to start?

Although three commenters have already suggested talking about inferential distances, I am afraid it is quite hard to make such a presentation both interesting and believable. Telling teenagers that you have to carefully explain an idea at a level accessible to your audience, and that perhaps even then the audience will not understand if they lack some important knowledge or experience... hm. The audience (the real one at the presentation, not the hypothetical one spoken about) would interpret it either as the banal "it is useless to try to explain things to idiots", or as simple bragging along the lines of "I am so smart that you have to study a lot to understand me". When I was 16, if somebody had told me that creationists might not accept my argument not because of their fanaticism and stupidity but because of some "inferential distance", I would have thought he secretly sided with the creationists. There is little appreciation for subtleties at that age.

If I had to suggest a topic: take something simple, relatively non-controversial, and easy to explain. Pick one or more biases or fallacies and present them together with realistic illustrative examples. The base rate fallacy may work fine. Take a quasi-realistic example, such as cancer testing or a court trial, which your audience would consider important. Make them guess the answer - that will make it interactive and therefore more interesting. After they get it wrong (they reliably will), show the right answer, which makes a surprising point. You have all the ingredients for a good talk.
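A worked version of the cancer-test example makes the surprise concrete. The numbers below are the commonly used illustrative ones (1% prevalence, 80% sensitivity, 9.6% false positive rate), not anything specified in this thread:

```python
# Base rate fallacy, worked through with Bayes' theorem.
# Illustrative numbers only (the classic mammography example).

prevalence = 0.01            # P(disease): 1% of those tested have it
sensitivity = 0.80           # P(positive | disease)
false_positive_rate = 0.096  # P(positive | no disease)

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
posterior = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.078, not 0.8
```

Most people's intuitive guess is far higher than 8%, which is exactly the surprising gap the talk can exploit.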

Telling teenagers that you have to carefully explain an idea at a level accessible to your audience, and that perhaps even then the audience will not understand if they lack some important knowledge or experience... hm.

I think this idea is worth mentioning at the beginning, but of course in the most accessible way, and briefly. My preferred way is to describe an ancient setting (for less mindkilling, don't even mention evolution, just say "hunters in a jungle") where any knowledge is easily transferred. If someone says "there are gazelles near the river", everyone knows what "gazelle" means and what "river" means. In our society, if you pick up a scientific journal from a field you don't understand, you probably won't understand the articles. And yet it feels like we should understand everything quickly. This is an example of a bias; we call it "expecting short inferential distances". (Now move on to other biases.)

This is an example of a bias; we call it "expecting short inferential distances".

"Inferential distance" is LW jargon. Does the bias have a standard name?

Illusion of transparency is thinking that the contents of MIND1 and MIND2 must be similar, ignoring that MIND2 does not have information that strongly influences how MIND1 thinks.

Expecting short inferential distances is underestimating the vertical complexity (information that requires knowledge of other information) of a MAP.

EDIT: I don't know if there is a standard name for this, and it would not surprise me if there isn't. It seems to me that most biases are about how minds work and communicate, while "inferential distance" is about maps that did not exist in the ancient environment.

After they get it wrong (they reliably will)

When I did my presentation before, a substantial fraction of the audience actually got the quasi-trick question about the breast cancer test probability correct.

substantial fraction

More than half?

I don't remember, but less than half. Maybe a third.

Excellent suggestion! That's what I would recommend too, mainly because this is the most common mistake presenters of all levels make. Concentrating on it explicitly is likely to make a presentation much more accessible and useful.

I would call it "why we need classes in rebellion" or something (to get their attention) and then talk about the Milgram obedience experiment and Asch's conformity experiment. Two classic results that demonstrate the power of authority and peer pressure.

Then get your audience talking about how to foster independence without going too far in the other direction - e.g. teachers who nearly always teach the core curriculum, but once in a while throw out a huge whopper to test whether you're paying attention. (This works, btw. I had a history teacher who tested our credulity by telling us that the pseudo-documentary Punishment Park was real. I googled it and called him a liar next class, and he gave me an A. The vivid memory of being terrified but right was useful for a few years.)

Confirmation bias and the problem of short inferential distances are probably the two that would have the largest-scale impact if people really understood them. Either of them would be a good choice.

It's hard for me to gauge your audience, so maybe this wouldn't be terribly useful, but a talk outlining logical fallacies (especially lesser-known ones) and why they are fallacies seems like it would have a high impact since I think the layperson commits fallacies quite frequently. Or should I say, I observe people committing fallacies more often than I'd like :p

I recommend taking something like the K&T paper and adding presentation material. I'm not a huge fan of approaching the presentation with the attitude of 'take something big and compress it' in general: I think 'take something deep, explore it, mention there are lots of other areas this deep' is more fruitful.

Maybe something like this? Basically, the idea that science and psychology can impact both our philosophy and our daily lives: that just as learning about human anatomy and disease can make us healthier, a similar approach can be taken with minds.

It follows this advice by painting a picture of what is possible, getting people excited about rationality, rather than teaching them a single tip that may not be useful in isolation.

I would probably focus on evidence. Why you really need evidence to raise any belief - yes, any belief, not just those in "scientific" domains - even to the level of attention. That absence of evidence really is evidence of absence. The difference between genuine Bayesian evidence and reasoning by the representativeness heuristic. That no amount of clever arguing will help you reach the right answer if you start by writing the bottom line, rather than following the winds of evidence.
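A minimal sketch of the "absence of evidence" point, with made-up numbers chosen only to illustrate the algebra: if seeing E would raise your probability for H, then failing to see E must lower it.

```python
# If P(E | H) > P(E | ~H), then not observing E is evidence against H.
# All numbers below are arbitrary illustrations, not from this thread.

p_h = 0.5          # prior P(H)
p_e_given_h = 0.7  # evidence is likely if H is true
p_e_given_not_h = 0.2

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

posterior_given_e = p_e_given_h * p_h / p_e
posterior_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

print(posterior_given_e)      # ~0.78 > 0.5: seeing E raises P(H)
print(posterior_given_not_e)  # ~0.27 < 0.5: not seeing E lowers P(H)
```

The two posteriors move in opposite directions from the prior whenever P(E|H) differs from P(E|~H) - that is the whole content of the slogan.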

In my experience, philosophy students learn plenty about formal logic and argumentative fallacies, not so much about good inductive reasoning.

There are several entertaining videos that demonstrate inattentional blindness. The conjunction fallacy is very easy to demonstrate. Eyewitness testimony isn't reliable. Memories we're absolutely certain of could easily be fake. Conclusion: when we're certain we rarely have reason to be certain (not nuanced but nuance is boring).
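For the conjunction fallacy point, the underlying rule is simply that P(A and B) can never exceed P(A). A quick simulation with arbitrary, purely illustrative probabilities makes that concrete:

```python
# Conjunction rule: P(A and B) <= P(A) for any events A and B.
# This is what Linda-style questions quietly violate when people rank
# "bank teller and feminist" above "bank teller". Arbitrary numbers below.

import random

random.seed(0)
trials = 100_000
count_a = 0
count_a_and_b = 0

for _ in range(trials):
    a = random.random() < 0.3   # arbitrary P(A)
    b = random.random() < 0.8   # arbitrary P(B), independent here for simplicity
    count_a += a
    count_a_and_b += (a and b)

print(count_a / trials)        # estimate of P(A)
print(count_a_and_b / trials)  # never larger, beyond sampling noise
```

The simulation is overkill for such a simple inequality, but showing actual counts tends to land better with an audience than stating the abstract rule.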