Sometime between the ages of 3 and 4, a human child becomes able, for the first time, to model other minds as having different beliefs (Perner, Leekam, & Wimmer, 1987).  The child sees a box, sees candy in the box, and sees that Sally sees the box.  Sally leaves, and then the experimenter, in front of the child, replaces the candy with pencils and closes the box so that the inside is not visible.  Sally returns, and the child is asked what Sally thinks is in the box.  Children younger than 3 say "pencils"; children older than 4 say "candy".
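A toy sketch of what the older child is doing that the younger child is not (my own illustration, not from the post): the trick is tracking the state of the world and Sally's belief as two separate variables, and only updating a belief when its owner observes the change.

```python
# Toy model of the false-belief task. The variable names and the
# models_other_minds flag are illustrative assumptions, not anything
# from the original experiment.

world = "candy"        # the box actually contains candy
sally_belief = world   # Sally sees the candy and forms a belief

# Sally leaves; the experimenter swaps the contents.
world = "pencils"
# sally_belief is deliberately NOT updated: Sally didn't see the swap.

def predicted_answer(models_other_minds: bool) -> str:
    """What the child says Sally thinks is in the box."""
    # A child who models other minds reports Sally's (now stale) belief;
    # a younger child reports the actual state of the world.
    return sally_belief if models_other_minds else world
```

A child over 4 behaves like `predicted_answer(True)` and answers "candy"; a child under 3 behaves like `predicted_answer(False)` and answers "pencils".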

Our ability to visualize other minds is imperfect.  Neural circuitry is not as flexible as a program fed to a general-purpose computer.  An AI with fast read-write access to its own memory might be able to create a distinct, simulated visual cortex to imagine what a human "sees".  We humans have only one visual cortex, and if we want to imagine what someone else is seeing, we've got to simulate it using our own visual cortex - put our own brains into the other mind's shoes.  And because you can't reconfigure your memory to simulate a new brain from scratch, pieces of you leak into your visualization of the Other.


The diagram above is from Keysar, Barr, Balin, & Brauner (2000).  The experimental subject, the "addressee", sat in front of an array of objects, with the view shown on the left.  On the other side, across from the addressee, sat the "director", with the view shown on the right.  The addressee's view was unblocked, which also allowed the addressee to see which objects were not visible to the director.

The experiment used the eye-tracking method: the direction of a subject's gaze can be measured using computer vision.  Tanenhaus et al. (1995) had previously demonstrated that when people understand a spoken reference, their gaze fixates on the identified object almost immediately.

The key test was when the director said "Put the small candle next to the truck."  As the addressee can clearly observe, the director only knows about two candles, the largest and medium ones; the smallest candle is occluded.

And, lo and behold, subjects' eyes fixated on the occluded smallest candle an average of 1,487 milliseconds before they correctly identified the medium-sized candle as the one the director must have meant.

This seems to suggest that subjects first computed the meaning according to their own knowledge - their brain's default settings - and only afterward adjusted for the other mind's different knowledge.

Numerous experiments suggest that where there is adjustment, there is usually under-adjustment, which leads to anchoring.  In this case, "self-anchoring".
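One toy way to see why under-adjustment produces self-anchoring (my own illustration; the adjustment factor k is an assumption, not a measured quantity): start from your own perspective as the anchor and move only partway toward the other mind's perspective.

```python
# Anchoring-and-adjustment sketch. With full adjustment (k = 1) the
# estimate recovers the other's view exactly; any k < 1 leaves a
# residue of one's own view in the estimate - self-anchoring.

def others_view_estimate(own_view: float, other_view: float,
                         k: float = 0.6) -> float:
    """Estimate another mind's perspective by adjusting away from our own.

    k is the (hypothetical) adjustment factor; under-adjustment means k < 1.
    """
    return own_view + k * (other_view - own_view)
```

For example, with `own_view=3.0`, `other_view=2.0`, and `k=0.6`, the estimate lands at 2.4 - closer to the other's view than the anchor, but still contaminated by it.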

Barr (2003) argues that the processes are actually more akin to contamination and under-correction; we can't stop ourselves from leaking over, and then we can't correct for the leakage.  Different process, same outcome:

We can put our feet in other minds' shoes, but we keep our own socks on.

Barr, D. J. (2003). Listeners are mentally contaminated. Poster presented at the 44th annual meeting of the Psychonomic Society, Vancouver.

Keysar, B., Barr, D. J., Balin, J. A., & Brauner, J. S. (2000). Taking perspective in conversation: The role of mutual knowledge in comprehension. Psychological Science, 11, 32-38.

Perner, J., Leekam, S. R., & Wimmer, H. (1987). Three-year-olds’ difficulty with false belief: The case for a conceptual deficit. British Journal of Developmental Psychology, 5(2), 125–137.

Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634.


Sally returns, and the child is asked what Sally thinks is in the box. Children younger than 3 say "pencils", children older than 4 say "candy".

I really hope the test is not merely to ask such a question, since that mixes in a test of language mastery with the fundamental test. Better that it be a test that elicits behavior that reveals the child's belief about the other person's belief, such as a game where the successful strategy happens to involve fooling the other person or taking opportunistic advantage of a false belief. Better still that there be different tests that test for the belief in different ways.

After making the above point I decided to Google the issue. I located a study that purports to show:

Results from these experiments seem to demonstrate that 3-year-old children may not understand the relevant sense of the word "think," and therefore, that the common version of the paradigm of the familiar container and the unexpected object is not suitable for assessing their understanding of false belief.

This "common version" is as follows:

The children are then asked what they originally thought the container held, and what another container will hold.

So the "common version" involves asking them questions, i.e. testing using language, which is what I was criticizing above.

(This is about past-self-belief-attribution rather than other-person-belief-attribution, but the study has the same implication for both.)

By the way, those referred-to results about past-self-belief-attribution suggest that the fundamental change, if any, that occurs in children is not other-mind belief attribution, but self-or-other-mind belief attribution - that is, belief attribution per se, rather than merely other-minds belief attribution. As a result, I would take slight issue with:

Our ability to visualize other minds is imperfect.

This statement implies a contrast between visualizing self-mind and visualizing other-minds, whereas the experimental results seem to show (if they show anything - the language issue needs to be dealt with) that our skill at visualizing own-mind and other-mind matures in parallel, with the same significant advance between 3 and 4. It seems likely that these are at bottom the same skill.

(What I suspect is that nonverbal tests will show the same maturation from 3 to 4.)

I'm tired of bookmarking all of Eliezer's posts. They're all really good.

Tire no more.

Not that I'd encourage Author Bias or anything. Tries not to look guilty

In fact... though I think the benefits outweigh the negatives, if the management would rather delete such naughtiness as that last comment I'll in no way be offended.

Constant - You make an excellent point. One question that often needs to be asked when reading experiments is: "Does the conclusion follow from the evidence presented?" (I often find the answer to be 'maybe not'.) Is the ability to visualize a learned skill? Can you train someone to be better at it? I grew up with three brothers all about my age, and my mother would often ask, "How would you like it if your brother did that to you?" This had an effect on me (or am I just imagining it?). Anyone know of such a study?

Odd. I would have spent that small space of time deliberating whether to move the candle that the director couldn't see, just to pull one over on the director.

Sometime between the age of 3 and 4, a human child becomes able, for the first time, to model other minds as having different beliefs.

I'm having to actually teach mine this - when I read her a book at bedtime, she insists on being able to see it at the same time, and I have to point out to her that I can't read it unless I can see it too. She seems to be getting the idea that if I can't see it, she doesn't get it read to her ... giving her a direct incentive to learn something she's capable of learning is really effective.

Huh. My first thought on comprehending Keysar et al.'s experiment was that it would make a good test for detecting telepaths trying to conceal their abilities (as, for example, in Babylon 5). Not something we're ever likely to need in real life, of course, but it could serve the purpose of a Voight-Kampff test in somebody's B-5 fan-fic.

My first reaction to this was, "Duh, of course people saw things from their own perspective."

Then I realized that was hindsight bias.

I guess the first step is realizing I have a problem. :)