Followup to:  Explainers Shoot High, Illusion of Transparency

My first true foray into Bayes For Everyone was writing An Intuitive Explanation of Bayesian Reasoning, still one of my most popular works.  This is the Intuitive Explanation's origin story.

In December of 2002, I'd been sermonizing in a habitual IRC channel about what seemed to me like a very straightforward idea:  How words, like all other useful forms of thought, are secretly a disguised form of Bayesian inference.  I thought I was explaining clearly, and yet there was one fellow, it seemed, who didn't get it.  This worried me, because this was someone who'd been very enthusiastic about my Bayesian sermons up to that point.  He'd gone around telling people that Bayes was "the secret of the universe", a phrase I'd been known to use.

So I went into a private IRC conversation to clear up the sticking point.

 

And he still didn't get it.

I took a step back and explained the immediate prerequisites, which I had thought would be obvious -

He didn't understand my explanation of the prerequisites.

In desperation, I recursed all the way back to Bayes's Theorem, the ultimate foundation stone of -

He didn't know how to apply Bayes's Theorem to update the probability that a fruit is a banana, after it is observed to be yellow.  He kept mixing up p(b|y) and p(y|b).
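(For concreteness, here is the little calculation that was getting scrambled, written out as a minimal sketch with invented numbers - the prior and likelihoods below are illustrative assumptions, not figures from the actual conversation:)

```python
# A minimal illustration of the banana/yellow update, with made-up numbers.
# p(y|b) is how likely a banana is to be yellow; p(b|y) is how likely a
# yellow fruit is to be a banana.  They are not the same quantity.

p_b = 0.2              # prior: probability a random fruit is a banana (assumed)
p_y_given_b = 0.9      # likelihood: probability a banana is yellow (assumed)
p_y_given_not_b = 0.1  # probability a non-banana is yellow (assumed)

# Total probability of observing "yellow"
p_y = p_y_given_b * p_b + p_y_given_not_b * (1 - p_b)

# Bayes's Theorem: p(b|y) = p(y|b) * p(b) / p(y)
p_b_given_y = p_y_given_b * p_b / p_y

print(round(p_b_given_y, 2))  # 0.69 -- noticeably different from p(y|b) = 0.9
```

The point is just that p(b|y) depends on the prior p(b) as well as on p(y|b), which is why the two can't be swapped.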

It seems like a small thing, I know.  It's strange how small things can trigger major life-realizations.  Any former TAs among my readers are probably laughing:  I hadn't realized, until then, that instructors got misleading feedback.  Robin commented yesterday that the best way to aim your explanations is feedback from the intended audience, "an advantage teachers often have".  But what if self-anchoring also causes you to overestimate how much understanding appears in your feedback?

I fell prey to a double illusion of transparency.  First, I assumed that my words meant what I intended them to mean - that my listeners heard my intentions as though they were transparent.  Second, when someone repeated back my sentences using slightly different word orderings, I assumed that what I heard was what they had intended to say.  As if all words were transparent windows into thought, in both directions.

I thought that if I said, "Hey, guess what I noticed today!  Bayes's Theorem is the secret of the universe!", and someone else said, "Yes!  Bayes's Theorem is the secret of the universe!", then this was what a successful teacher-student interaction looked like: knowledge conveyed and verified.  I'd read Pirsig and I knew, in theory, about how students learn to repeat back what the teacher says in slightly different words.  But I thought of that as a deliberate tactic to get good grades, and I wasn't grading anyone.

This may sound odd, but until that very day, I hadn't realized why there were such things as universities.  I'd thought it was just rent-seekers who'd gotten a lock on the credentialing system.  Why would you need teachers to learn?  That was what books were for.

But now a great and terrible light was dawning upon me.  Genuinely explaining complicated things took months or years, and an entire university infrastructure with painstakingly crafted textbooks and professional instructors.  You couldn't just tell people.

 

You're laughing at me right now, academic readers; but think back and you'll realize that academics are generally very careful not to tell the general population how difficult it is to explain things, because it would come across as condescending.  Physicists can't just say, "What we do is beyond your comprehension, foolish mortal" when Congress is considering their funding.  Richard Feynman once said that if you really understand something in physics you should be able to explain it to your grandmother.  I believed him.  I was shocked to discover it wasn't true.

But once I realized, it became horribly clear why no one had picked up and run with any of the wonderful ideas I'd been telling people about Artificial Intelligence.

If I wanted to explain all these marvelous ideas I had, I'd have to go back, and back, and back.  I'd have to start with the things I'd figured out before I was even thinking about Artificial Intelligence, the foundations without which nothing else would make sense.

Like all that stuff I'd worked out about human rationality, back at the dawn of time.

Which I'd considerably reworked after receiving my Bayesian Enlightenment.  But either way, I had to start with the foundations.  Nothing I said about AI was going to make sense unless I started at the beginning.  My listeners would just decide that emergence was a better explanation.

And the beginning of all things in the reworked version was Bayes, to which there didn't seem to be any decent online introduction for newbies.  Most sources just stated Bayes's Theorem and defined the terms.  This, I now realized, was not going to be sufficient.  The online sources I saw didn't even say why Bayes's Theorem was important.  E. T. Jaynes seemed to get it, but Jaynes spoke only in calculus - no hope for novices there.

So I mentally consigned everything I'd written before 2003 to the trash heap - it was mostly obsolete in the wake of my Bayesian Enlightenment, anyway - and started over at what I fondly conceived to be the beginning.

(It wasn't.)

And I would explain it so clearly that even grade school students would get it.

(They didn't.)

I had, and have, much left to learn about explaining.  But that's how it all began.

Comments

Eliezer, the so-called "expert blind spot" is IMHO one of the most important problems in lecture-based education and even scientific communication. One of my dreams is to make AI that helps address this, via cognitive models of both experts and novices. This is one of my holy grails. The AI would understand what was said, and "translate" the message to each novice individually, taking advantage of their pre-existing knowledge. In some cases, this "translation" would involve lengthy tutoring with new concepts and knowledge.

Yes, the dependencies need to be built first. I like this idea of a package-management system for humans.

Physicists can't just say, "What we do is beyond your comprehension, foolish mortal" ... "if you really understand something in physics you should be able to explain it to your grandmother." ... I was shocked to discover it wasn't true.

This seems inconsistent with the rest of your post, which argues that you can explain physics to grandmothers or Congressmen, as long as you have the opportunity to iterate back and forth to verify understanding on both sides. While the practical implications of both may be the same ("It's not worth the time to try"), there's a big difference between "not worth doing" and "impossible".

Well, yes, you can explain physics to sufficiently intelligent and diligent grandmothers or Congresspersons, over the course of months using adequate textbooks, lectures, and homework exercises.

It seems, then, that the goal is to motivate (and hence emotionally reward) diligence and intelligence.

It occurs to me that explaining things to people who "don't get it" is often actually a matter of them not wanting to get it - but being polite enough to feign interest (even to themselves) all the way through the conversation. Most likely, their "wanting to get it" is a "belief in belief" - it's part of their identity and personal integrity that they wish to listen to evidence, but in reality their emotional brain is sending out "what is the point of paying attention to any of this gibberish?" signals, and so they let their mind drift off to other things while they "try" to follow along. They likely do not even realize they are doing this.

This is consistent with my own anecdata, which is that engaging people emotionally before you start explaining something to them, and genuinely praising them - without being condescending - each time they reach a milestone along the path towards understanding your explanation, tends to have a much higher chance of succeeding in them "grasping" the explanation and actually attempting to incorporate it into their world-view.

The problem I'm currently working on is that when they do attempt to incorporate it, if it winds up causing cognitive dissonance with something else that's already in their world-view, they will often become irrationally hostile to me for having "slipped in" an "enemy soldier".

It's like the saying that good thinking is being able to hold two diametrically opposed thoughts at the same time and still continue with whatever we're doing.

When asked to explain in a few words what he had accomplished, Feynman said, "Buddy, if I could tell you in a minute what I had done, it would not be worth the Nobel Prize." Though not exactly the same thing, that's kind of contradictory to what he said about explaining physics to your grandmother. He also said he wasn't able to explain what he did to his father.

Feynman said "The first principle is that you must not fool yourself, and you are the easiest person to fool." In a book 'Some time with Feynman', when he talks about working on problems he says you have to fool yourself. It's in a different sense. He says that when you are attacking a formidable problem you may doubt how you'll be able to solve something where others haven't been able to. But then you fool yourself saying you're kind of special and you'll be able to solve and keep working on it.

Usually, what is said gets taken out of context, or only one side of it is taken, because most people want things to be either black or white, when most things come in varying shades of gray.

over the course of months

Doubtless. But you may be able to pare it down to something at the same time manageable in a short time and yet neither trite nor vague. I think Feynman did this in the book QED.

There are indeed cases where it does take someone who understands the subject, months of iteration to explain it. But when someone says, "I can't explain it to you," that can either mean:

a) the time to do so is genuinely cost-prohibitive, or the listener is very stupid

OR

b) the would-be explainer doesn't really understand it and has been operating in a sort of "Chinese room", manipulating symbols without understanding their connection to everything else.

In my experience, a) is the exception, not the rule.

This may sound odd, but until that very day, I hadn't realized why there were such things as universities. I'd thought it was just rent-seekers who'd gotten a lock on the credentialing system. Why would you need teachers to learn? That was what books were for.

It's not clear why it isn't true as originally intended. Books are enough for understanding anything, you'd just need good from-the-ground-up textbooks and probably months or years to read them. Teachers are out of this loop, and from personal experience I see teacher-mediated learning as inefficient, given a motivated student and the availability of good textbooks.

Universities institutionalize the very process of learning, which helps when motivation is weak and the goal is not even on the horizon; as a result, universities supply a larger number of trained people than would be possible by just printing good textbooks.

Books are enough for understanding anything, you'd just need good from-the-ground-up textbooks and probably months or years to read them.

In practice, this isn't true. Some people really do have trouble learning from books. Simply reading the book aloud to them is enough to overcome the block.

I don't know where the problem originates, however. It seems strange to chalk it up to lack of motivation or stupidity, given the people I know.

In other words, books contain all of the knowledge necessary to understand anything but not everyone can pick up the understanding itself from a book. Why, I don't know.

There's one major difference: people can answer learner-generated questions and engage in conversation, while books cannot. Reading the book aloud to someone probably isn't enough; reading it aloud and then having a Q&A session after (or better yet, during) can be a major improvement.

Is it sufficient to read the book aloud to them even if you don't understand it yourself? If so why isn't there a profession of ill-educated freelance book-readers?

Many tutors are more or less exactly that.

Really? One on one? I've certainly been to many 'read-out-the-textbook' lectures, but there's a good point to those, which is why I went. One on one I'd feel very robbed.

there's a good point to those

What's that?

You can ask questions from an expert on the fly.

That's not enough to make me not hate lectures.

We must not overlook the number one reason something is difficult to explain - that what one is trying to explain is nonsense. (This is not specifically directed at anyone posting here.)

douglas: I think that counts as a subset of my b), in that if it's nonsense, the would-be explainer doesn't understand it.

Silas - yes, good point, but it's an important subset, in that the person attempting to do the explaining often overlooks it. When was the last time you were having trouble explaining or understanding something and asked yourself, "Is this just nonsense?"

douglas: Actually, for me that happens quite a bit when on the "having trouble understanding" side, but I'm just cynical like that. For example, I interrogate people in online discussions about the difference in meaning between "Sony's problem was setting the PS3's price point too high" and "Sony's problem was setting the PS3's price too high." (Yes, I know what a price point is, but it doesn't seem to affect their statement.)

OTOH, when on the "having trouble explaining" side, I often do find gaps in my knowledge that force me to concede I don't really understand the topic, and in that sense I am "overlooking" the possibility that it's nonsense.

Silas - I like your example of interrogation. You rabble-rouser, and I say that with utmost respect and love. I've had to throw out a couple of deeply cherished beliefs in my time, and it can be brutal. I try to go back to the question, "What does the evidence indicate?", and then I have to be willing to look at evidence that I had neglected because I was too fixed or biased to consider it. I must admit, when I look at the state of the world, I don't have a hard time believing that much of what currently passes as sense is actually nonsense. Ya know?

For the record, Sony's biggest problem is the lack of a killer app for the PS3. When Final Fantasy 13 or Metal Gear Solid 4 are finished, we might just see actual PS3 sales. (Or so I believe, extrapolating from my own behavior; I don't buy a system unless there is a game for it that I want to play.)

I like to think I'm pretty good at explaining things; it is easier to explain when you have back-and-forth feedback than when you're writing a textbook (because when somebody doesn't get something, you can keep throwing words at the topic until something sticks) but sometimes all you have is one shot...

I had a similar run-in when I tried to go through the Cantor Diagonal Argument with a bunch of gifted 13-year-olds. I thought I had them following right through to the end, but when I came to the conclusion, they cried: "But infinity is infinity!"

Not quite as concrete as Bayesian inference, but it's still a difficult concept. Some of those students would probably never think of that lecture again, and some, after some years of ruminating and/or majoring in math, would finally understand what I was getting at. After having that run-in, I actually switched over to teaching conditional probability (in particular, the Monty Hall problem) as my "fun" math lecture.
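(For readers who haven't met it: the Monty Hall problem is a conditional-probability puzzle whose answer is easy to check by simulation. Here is a minimal sketch, assuming the standard rules that the host always opens a door hiding a goat and always offers the switch - the code is illustrative, not anything from the lecture described above.)

```python
import random

def play(switch, trials=100_000):
    """Simulate Monty Hall, assuming the host always opens a goat door
    (never the contestant's pick) and always offers the switch."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial choice
        # Host opens a door that is neither the pick nor the car
        # (which goat door he opens doesn't change the win rates below)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # about 1/3
print(play(switch=True))   # about 2/3
```

Sticking with the first pick wins about a third of the time; switching wins about two-thirds, which is the counterintuitive conditional-probability lesson.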

But you may be able to pare it down to something at the same time manageable in a short time and yet neither trite nor vague. I think Feynman did this in the book QED.

QED was one of my favorite books when I was nine years old. I was shocked when I grew up and read The Feynman Lectures and realized that QED hadn't taught me a single bit of physics.

How do you know QED didn't teach you a single bit of physics?

  1. If you assimilated the corresponding bits of the Feynman lectures (or any other physics you encountered along the way) at all more easily for having read QED at age 9, then it did teach you some physics, albeit in a sense hard to quantify.

  2. If reading its hand-waving stuff about light taking all possible paths at once increased the probability you'd have assigned to (say) something like the Aharonov-Bohm effect if anyone had thought to ask you how likely you thought it was, then it did teach you some physics, even in the "Technical" sense. (Whether more or less than one bit depends on how much that probability increased.)

If having notions like path integrals, phase, and stationary action waved at you unintimidatingly didn't push your thinking about physics in the direction of clearer understanding, then it seems that you were either (1) already implausibly acquainted with them for even an extraordinarily bright 9-year-old, or (2) implausibly impervious to such things for someone capable of reading and enjoying QED. Of course, something could be implausible to me but still true.

I see QED as a bit like stating the axioms of a mathematical theory. You can, in principle, derive the whole theory from the axioms, but in practice it takes generations of ingenuity to come up with the tools to do that. We take courses in mathematics not just to learn the axioms, but also, and primarily, to learn the vast library of tricks that let us do something useful with the axioms.

Similarly, I remember my first or second physics course, either mechanics or electromagnetism. The inside of the cover had, as I recall, all the "axioms", the fundamental laws from which everything could be derived. Those fit inside the cover. But, just as in a mathematical subject, the main body of the subject was the library of tricks that let us actually make specific predictions from those fundamental laws.

Feynman, as I recall, was very up front in QED about what it did and did not contain. He was explicit about it not including the tricks that we would need to learn to apply the fundamental principles to real predictions about real situations.

However, I would not really call the book "vague" or even "hand-waving", any more than I would call the inside cover of my physics textbook "hand-waving" or even "not physics". It was seriously lacking, yes, admittedly so. But not at all in the way that, say, quantum mechanics popularizations typically are. Popularizations include neither the axioms (fundamental laws) of the theory, nor the tricks, but instead are filled with metaphor and impressionistic talk and not a small amount of pop philosophy. Not the same thing at all as QED (I mean QED the book, not the subject of quantum electrodynamics).

"I'd thought it was just rent-seekers who'd gotten a lock on the credentialing system. Why would you need teachers to learn? That was what books were for. But now a great and terrible light was dawning upon me [...]"

You know, I'm starting to suspect you were right the first time.

If I wanted to explain all these marvelous ideas I had, I'd have to go back, and back, and back. I'd have to start with the things I'd figured out before I was even thinking about Artificial Intelligence, the foundations without which nothing else would make sense.

As much as I am enjoying the sequences, I can tell you right now that they are not written in a way that makes them very accessible to the layperson. The Simple Truth is pretty solid, but the rest of the foundations still haven't come into focus for me. I can see the buildings and rooftops, but I still don't feel like I have the foundations.

I am picking up more than enough knowledge and understanding to start filling the gaps myself but I still stop and wonder how tab A fits into slot B. Eventually it clicks and when it does I know how I could have said it to my past self in only a few sentences. But this is an unfair comparison for me to make against you.

The only reason I bring this up is because you seem very interested in explaining your ideas. You are doing great but I don't think you are quite where you want to be yet. I have no problem with the quality of your work but, if you are anything like me, you would have a problem with it if you were given the chance to see it through my eyes. I suspect you would be surprised and say, "Oh, wow, that isn't what I intended at all."

This is why I have been adding my thoughts on the older articles as I read them. I am hoping that somehow my first exposure can be useful feedback for you. (Well, that and talking about it helps me remember things more accurately.)

In any case, thank you very much for the hard work. I am at 140 of 584 on the list and am looking forward to the rest.

Give students a short intelligence report, have them underline all expressions of uncertainty, then have them express their understanding of the report by writing above each expression of uncertainty the numerical probability they believe was intended by the writer of the report. This is an excellent learning experience, as the differences among students in how they understand the report are typically so great as to be quite memorable.

In one experiment, an intelligence analyst was asked to substitute numerical probability estimates for the verbal qualifiers in one of his own earlier articles. The first statement was: "The cease-fire is holding but could be broken within a week." The analyst said he meant there was about a 30-percent chance the cease-fire would be broken within a week. Another analyst who had helped this analyst prepare the article said she thought there was about an 80-percent chance that the cease-fire would be broken. Yet, when working together on the report, both analysts had believed they were in agreement about what could happen. Obviously, the analysts had not even communicated effectively with each other, let alone with the readers of their report.

...I personally recall an ongoing debate with a colleague over the bona fides of a very important source. I argued he was probably bona fide. My colleague contended that the source was probably under hostile control. After several months of periodic disagreement, I finally asked my colleague to put a number on it. He said there was at least a 51-percent chance of the source being under hostile control. I said there was at least a 51-percent chance of his being bona fide. Obviously, we agreed that there was a great deal of uncertainty. That stopped our disagreement. The problem was not a major difference of opinion, but the ambiguity of the term probable.

--Heuer, Psychology of Intelligence Analysis, chapter 12 (very good book; recommended)

Richard Feynman once said that if you really understand something in physics you should be able to explain it to your grandmother.  I believed him.

Curiously enough, there is a recording of an interview with him where he argues almost exactly the opposite, namely that he can't explain something in sufficient detail to laypeople because of the long inferential distance.