The explanation from the ancestral environment seems likely. However, there is also a rational argument for refusing to accept a claim unless all the steps have been laid out from your own knowledge to the claim. While there are genuine truth seekers who have genuinely found truth and whom we should, ideally, believe, a blanket policy of simply taking these people at their word has the unfortunate side effect of also rendering us vulnerable to humbug, because we are not equipped to tell the humbug apart from true statements many steps removed from our knowledge.
At the same time, people do not universally reject claims that are many steps removed from their own experience. After all, scientists have made headway with the public. And unfortunately, humbug also regularly makes headway. There have always been niches in society for people claiming esoteric knowledge.
The young seem especially vulnerable to accepting whatever they are told. Santa Claus and all that, but also any nonsense fed to them by their schools. Schools for the young are particularly effective instruments for indoctrinating a population. In contrast, the old tend to be quite a bit more resistant to new claims - for better and for worse.
An evolutionary explanation for this is fairly easy to come up with, I think. Children have a survival need to learn as much as they can as quickly as they can, and adults have a vital role as their teachers. In their respective roles, it is best for adults to be unreceptive to new claims, so that their store of knowledge remains a reliable archive of lessons from the past, and it is best for the young to accept whatever they are told without wasting a lot of time questioning it.
It is too easy to come up with a just-so story like this. How would you rephrase it to make it testable?
Here is a counterstory:
Children have a survival need to learn only well-tested knowledge; they cannot afford to waste their precious developmental years believing wrong ideas. Adults, however, have already survived their juvenile years, and so they are presumably more fit. Furthermore, once an adult successfully reproduces, natural selection no longer cares about them; neither senescence nor gullibility affects an adult's fitness. Therefore, we should expect children to be skeptical and adults to be gullible.
This counterstory doesn't work.
A child's development is not consciously controlled, and children are protected by adults, so temporarily believing incorrect things doesn't harm their development at all.
If you wish to produce a counterstory, make it an actually plausible one. Even if it were the case that children tended to be more skeptical of claims, your story would REMAIN obviously false; whereas Constant's story would remain an important factor, and would raise the question of why we don't see what would be expected given the relevant facts.
As long as we're on the subject of evolutionary psychology/sociobiology/whatever: if someone tries to argue against it by saying it's just a bunch of reactionaries trying to justify inequity, you can point to the data which says it ain't so. Another soldier sent against the army of reductionism defeated; surely a signal from Providence that all will be assimilated.
For example, talking as if you think "simpler explanation" is a knockdown argument for evolution (which it is)
I don't quite agree - by itself, X being "simpler" is a reason to increase my subjective belief that X is true (assuming that X is in a field where simplicity generally works) but it's not enough to prove e.g. creationism false. Rather, it is the total lack of evidence for anything supernatural that is the knockdown argument - if I had reason to believe that even one instance of say, ghosts or the effects of prayer were true, then I'd have to think that creationism was possible as well.
This is certainly an insightful post. I'm not sure the example is that compelling though.
If you argue with a young earth creationist, they could full well understand what you mean, but simply disagree and claim that "God did it" is a simpler explanation still. In fact, if we were to presuppose that an intelligent being of infinite power existed and created things, it seems it would actually be a simpler explanation.
Most people, though perhaps not all, who have no belief in an omnipotent designer will pretty quickly accept evolution. So that might not be t...
When you say "A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don't recurse far enough, you're just talking to yourself."
this strongly reminds me of what it is like to try talking, as an atheist, with a Christian about any religious issue. I concluded years ago that I just shouldn't try anymore, that reasonable verbal exchange is not possible...
I suppose that I should recurse... but how, and how far, I am not sure.
The best way to disabuse a Christian of any false notions - under the assumption that those notions are false - would be to lead them to Less Wrong. :P
I don't agree. I think the best way to disabuse them of such notions would be to lead them to extremely high status atheists including a community of highly attractive potential mates. You change group affiliation beliefs by changing desired group affiliation.
So yes, your suggestion may lead more Christians to toss their Christianity, but mine makes them more rational thinkers.
The same influences that make people toss Christianity are also what will influence people to become more rational. Leading people to lesswrong on average makes them scoff then add things to their stereotype cache.
Which is the greater sin, if Christianity is wrong?
If Christianity is wrong then I'd say neither. ;)
This reminds me of teaching. I think good teachers understand short inferential distances at least intuitively if not explicitly. The 'shortness' of inference is why good teaching must be interactive.
Psychohistorian: it depends on what you mean by "simple" and "explanation". The sense in which "it's the simplest explanation" is a powerful argument for something is not one in which "God did it" is the simplest explanation for anything.
Eliezer_Yudkowsky: I've seen the kinds of failures of explanation you refer to, and there's also the possibility that the explainer just isn't capable of explaining all of the inferential steps because he doesn't know them. In that case, the explainer is basically "manipulating symbols without understanding them". This is why I've formulated the following principle (sort of a corollary to what you've argued here):
"If you can't explain your idea/job/research to a layman, given enough time, and starting from reference to things he already understands, you don't understand it yourself."
I have experienced this problem before-- the teacher assumes you have prior knowledge that you just do not have, and all of what he says afterwards assumes you've made the logical leap. I wonder to what extent thoughtful people will reconstruct the gaps in their knowledge assuming the end conclusion is correct and working backwards to what they know in order to give themselves a useful (but possibly incorrect) bridge from B to A. For example, I recently heard a horrible biochem lecture about using various types of protein sequence and domain homology to predict function and cellular localization. Now, the idea that homology could be used to partially predict these things just seemed logical, and I think my brain just ran with the idea and thought about how I would go about using the technique, and placed everything he said piece-wise into that schema. When I actually started to question specifics at the end of the lecture, it became clear that I didn't understand anything the man was saying at all outside of the words "homology" and "prediction", and I had just filled in what seemed logical to me. How dangerous is it to try to "catch up" when people take huge inferential leaps?
TGGP: Yes for people below some IQ threshold. No for someone of the same IQ as the explainer.
(I probably should have added the intelligence criterion the first time around, I guess, but I was simplifying a bit.)
This is an excellent post, Eliezer!
Taking this phenomenon into consideration not only gives me cause to go back over my own teaching technique (of a rather specialized trade) and make sure I am not leaving out any steps that seem obvious to me (the specialist), but, like Laura, it helps me to understand times when I was baffled by a speaker or writer whose tone implied I'd be an idiot not to follow along easily.
When I write for a very bright "puzzle-solving-type" audience, I do the mental equivalent of deleting every fourth sentence or at least the tail part of every fourth sentence to prevent the reader from getting bored. I believe that practice helps my writings to compete with the writings around it for the critical resource of attention. There are of course many ways of competing for attention, and this is one of the least prejudicial to rational thought. I recommend this practice only in forums in which the reader can easily ask followup questi...
Richard, you may or may not care that having read the above my willingness to read anything you write in future has somewhat decreased.
I would add, Richard, that writing "dear reader" on a medium like this comes off as patronizing.
Some of your claims about the EEA are counterintuitive to me. Basically, it's not obvious that all information not strictly private would have been public. I'm thinking, for example, of present-day isolated cultures in which shamans are trained for several years: surely not all of their knowledge can be produced in a publicly comprehensible form. There must be a certain amount of "Eat this herb -- I could tell you why, but it would take too long to explain". Or so I imagine.
So how much of your description of knowledge in the EEA is your guesstimation, and how much is the consensus view? And where can I find papers on the consensus view? My Google-fu fails me.
I find an easy way to get some of the complicated inferential jumps for free is to find a similar set of inferential jumps they have made in a similar subject. It is much easier to correct a "close" inferential jump than it is to create a new one out of thin air.
Example: When discussing the concept of programming you can use the concept of an assembly line to get their head into a procedural mode of thinking. Once they think about an object visiting a bunch of stations in a factory you can replace "object" with "program" and "station" with "line of code." They still have no idea how programming works, but they can suddenly create a bunch of inferential jumps based on assembly lines.
In my experience, they now start asking questions about programming as related to assembly lines and you can fill in the gaps as you find them.
"So what happens at the end of the line?"
"Well, the program generally loops back around and starts over."
"Oh. So it follows the same line forever?"
"Not necessarily. Sometimes the line takes a detour and heads off into a new area of the plant for awhile. But it generally will come ...
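To make the analogy concrete, here is a minimal sketch of a "program as assembly line" in Python. The station names, the detour condition, and all the numbers are invented purely for illustration:

```python
import random

def read_sensor():              # station 1: take in a piece of work
    return random.randint(0, 150)

def process(value):             # station 2: do something to it
    return value * 2

def report(value):              # station 3: send it out the door
    print(f"processed value: {value}")

def special_handling(value):    # a detour into another area of the plant
    print(f"detour for unusually large value: {value}")

def main():
    for _ in range(5):          # the line loops back around and starts over
        value = read_sensor()
        if value > 100:         # sometimes the work takes a detour
            special_handling(value)
        else:
            report(process(value))

if __name__ == "__main__":
    main()
```

Once someone holds the assembly-line picture, each element of a sketch like this maps onto a concept they already have, so the remaining inferential steps are short.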
As someone who has done (some) teaching, I think this is absolutely correct. In fact, the most difficult thing I find about teaching is trying to find the student's starting knowledge, and then working from there. If the teacher does not go back enough 'inferential steps', the student won't learn anything - or worse, they might think they know when they don't.
Excellent stuff.
Now that I think of it, this reminds me of something Richard Dawkins used to say at some talks: that we (the modern audience) could give Aristotle a tutorial. Being a fantasist myself, I've sometimes wondered how that could be possible. Leaving aside the complications of building a time machine (I leave that to other people), I wondered how it would be to actually meet Aristotle and explain to him some of the things we now know about life, the universe & everything.
First of all, I'd have to learn ancient Greek, of course, or no communication would be possible...
Actually, evolution might be the easiest one. It's inevitable if you have variation and selection. It's a really pretty theory.
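A minimal toy simulation makes the "inevitable" part vivid. Everything here is made up: fitness is just a number, and the population size, mutation size, and number of generations are arbitrary.

```python
# Toy model of variation plus selection: each generation, offspring are
# slightly perturbed copies of their parents (variation), and only the
# fitter half survives to reproduce (selection). Mean fitness climbs
# without anyone designing anything.
import random

random.seed(0)
population = [random.random() for _ in range(100)]

for generation in range(20):
    # variation: each parent leaves two slightly mutated offspring
    offspring = [p + random.gauss(0, 0.05) for p in population for _ in range(2)]
    # selection: keep the fitter half
    offspring.sort(reverse=True)
    population = offspring[:100]
    if generation % 5 == 0:
        mean = sum(population) / len(population)
        print(f"generation {generation}: mean fitness {mean:.2f}")
```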
I don't know how hard it would be to convey that observation and experimentation will take you farther than just theorizing.
If I brought back some tech far advanced over Aristotle's period (and I wonder what would be most convincing), it might add weight to my arguments.
And personally, even if I had a time machine and the knowledge of ancient Greek, I don't know how hard it would be to get him to listen to a woman.
From a practical point of view teaching the germ theory of disease would probably have the most immediate benefit.
A hunting or sniper rifle, a pistol, a remote controlled helicopter with wireless video, broad spectrum antibiotics, powerful painkillers, explosives.
A plastic bottle out of the trash. It's transparent but flexible and almost weightless. See how well the lid has been made? It makes a water-tight seal.
It might be the most valuable object in Greece.
And then when you've got his attention, show him decimal notation.
And stirrups for his horse. And lances.
Once he's hooked, show him why things float. And how a ball rolling down an inclined plane moves 1, 4, 9, 16 as it accelerates.
Show him Cartesian geometry. And how to play go with lines scratched in the ground and coloured stones. Make a recorder and play him some songs.
He'll teach you Greek.
Show him how to send messages using flashing mirrors. Show him Playfair's cipher. Perspective drawing. How to make a magnifying glass. Newton's cradle. Make a model boat out of bronze.
I suspect in a day in Ancient Greece, you'd see so many easily solved problems that my list would look naive. You don't need modern technology. You need the things that were discovered just after the mediaevals recovered what the Greeks already knew.
I suspect in a day in Ancient Greece, you'd see so many easily solved problems that my list would look naive.
This is one of the more interesting approaches to the Connecticut Yankee in King Arthur's Court (as I dub this species of thought problem) - that you don't need any special preparation because your basic background means that you'll spend the rest of your life in the past rushing around yelling 'don't do that, you idiot, do it this way!'
Diplomacy might actually be the best preparation.
See also: www.justfuckinggoogleit.com, www.lmgtfy.com, Reddit anti-"repost" rage and the comments like this that appear in practically every online community.
This is the reason it's a Bad Thing that so many of the deeper concepts of Mormonism have become public knowledge. The first question I get asked, upon revealing that I'm a Mormon, is often, "So, you believe that if you're good in this life, you'll get your own planet after you die?" There are at least three huge problems with this question, and buried deep beneath them, a tiny seed of truth. But I can't just say "The inferential distance is too great for me to immediately explain the answer to that question. Let me tell you about the Plan o...
This is one of those things that I realize is so obvious once I thought about it, but until it was pointed out to me, I would have never seen it.
...To Mazur’s consternation, the simple test of conceptual understanding showed that his students had not grasped the basic ideas of his physics course: two-thirds of them were modern Aristotelians...“I said, ‘Why don’t you discuss it with each other?’” Immediately, the lecture hall was abuzz as 150 students started talking to each other in one-on-one conversations about the puzzling question. “It was complete chaos,” says Mazur. “But within three minutes, they had figured it out. That was very surprising to me—I had just spent 10 minutes trying to explain t
If each step has some probability of faulty inference, then longer inference chains are exponentially less likely to be valid; the length of chain you can still trust falls off roughly in inverse proportion to the per-step error rate. Long handwaved inferences can have unbelievably low probability of correctness, and thus be incredibly weak as evidence.
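To put toy numbers on it (the probabilities and chain lengths below are arbitrary): if each step is independently valid with probability p, an n-step chain is valid with probability p^n.

```python
# Toy illustration of how per-step reliability compounds over a chain:
# an n-step chain of independently checked steps, each valid with
# probability p, is valid with probability p**n.
for p in (0.99, 0.95, 0.90):
    for n in (1, 10, 30, 100):
        print(f"per-step fidelity {p:.2f}, {n:3d} steps -> chain valid with probability {p**n:.3f}")
```

Solving p^n = q for the longest chain that stays above a confidence threshold q gives n = ln(q)/ln(p), which for p close to 1 is roughly ln(1/q) divided by the per-step error rate (1 - p).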
Furthermore, informal arguments very often rely on 'I can't imagine an alternative' in several of their steps, and this itself has proven unreliable. It is also too easy to introduce, deliberately or otherwise, a huge number of impl...
The lack of this knowledge got me a nice big "most condescending statement of the day award" in lab a year ago.
I don't think this is quite right, but taking up the challenge may be helpful when writing:
I have always figured that if I can't explain something I'm doing to a group of bright undergraduates, I don't really understand it myself, and that challenge has shaped everything I have written.
Daniel Dennett
And if you think you can explain the concept of "systematically underestimated inferential distances" briefly, in just a few words, I've got some sad news for you...
"I know [evolution] sounds crazy -- it didn't make sense to me at first either. I can explain how it works if you're curious, but it will take me a long time, because it's a complicated idea with lots of moving parts that you probably haven't seen before. Sometimes even simple questions like 'where did the first humans come from?' turn out to have complicated answers."
And if you think you can explain the concept of "systematically underestimated inferential distances" briefly, in just a few words, I've got some sad news for you...
"This is going to take a while to explain."
Did I do it? Did I win rationalism?!
Expecting short inferential distances - wouldn't that be a case of rational thought producing beliefs which are themselves evidence? :P Manifested in over-explaining to the point of cognitive dissonance? How about applying Occam's Razor and going the shorter distance: improve the clarity of the source by means of symbolism through a reflective correction (as if to compensate for the distortion in the other lens). To me it means to steel-man the opponent's argument to the point where it becomes non-falsifiable. See, the fact that science works by falsificati...
I think this concept may be fundamental in explaining the mechanics of cohesion and division in society. It could help explain why politics tends to get more and more divided. Especially on the internet, but also IRL, people tend to confirm their ideas rather than confront them with different ones, as first observed by Peter C. Wason ("On the failure to eliminate hypotheses in a conceptual task", Quarterly Journal of Experimental Psychology, 1960), and confirmed since. Or, one could argue, that reinforcing one's idea...
It's been nearly a century since relativity and quantum physics took the stage, and we have gathered considerably more data, and filled in a great many areas since then, yet we are still waiting for the next big breakthrough.
The problem may be in the way we approach scientific discovery.
Currently, when a new hypothesis is advanced, it is carefully considered as to how well it stands on its own and whether, eventually, it is worthy of being adopted as a recognized theory and becoming a part of our overall model of how everything works.
This might be the prob...
Homo sapiens' environment of evolutionary adaptedness (aka EEA or "ancestral environment") consisted of hunter-gatherer bands of at most 200 people, with no writing. All inherited knowledge was passed down by speech and memory.
In a world like that, all background knowledge is universal knowledge. All information not strictly private is public, period.
In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else. When you discover a new oasis, you don't have to explain to your fellow tribe members what an oasis is, or why it's a good idea to drink water, or how to walk. Only you know where the oasis lies; this is private knowledge. But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge. When you explain things in an ancestral environment, you almost never have to explain your concepts. At most you have to explain one new concept, not two or more simultaneously.
In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises.
In the ancestral environment, anyone who says something with no obvious support is a liar or an idiot. You're not likely to think, "Hey, maybe this guy has well-supported background knowledge that no one in my band has even heard of," because it was a reliable invariant of the ancestral environment that this didn't happen.
Conversely, if you say something blatantly obvious and the other person doesn't see it, they're the idiot, or they're being deliberately obstinate to annoy you.
And to top it off, if someone says something with no obvious support and expects you to believe it - acting all indignant when you don't - then they must be crazy.
Combined with the illusion of transparency and self-anchoring, I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience - or even communicating with scientists from other disciplines. When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners, assuming that things should be visible in one step, when they take two or more steps to explain. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.
A biologist, speaking to a physicist, can justify evolution by saying it is "the simplest explanation". But not everyone on Earth has been inculcated with that legendary history of science, from Newton to Einstein, which invests the phrase "simplest explanation" with its awesome import: a Word of Power, spoken at the birth of theories and carved on their tombstones. To someone else, "But it's the simplest explanation!" may sound like an interesting but hardly knockdown argument; it doesn't feel like all that powerful a tool for comprehending office politics or fixing a broken car. Obviously the biologist is infatuated with his own ideas, too arrogant to be open to alternative explanations which sound just as plausible. (If it sounds plausible to me, it should sound plausible to any sane member of my band.)
And from the biologist's perspective, he can understand how evolution might sound a little odd at first - but when someone rejects evolution even after the biologist explains that it's the simplest explanation, well, it's clear that nonscientists are just idiots and there's no point in talking to them.
A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don't recurse far enough, you're just talking to yourself.
If at any point you make a statement without obvious justification in arguments you've previously supported, the audience just thinks you're a cult victim.
This also happens when you allow yourself to be seen visibly attaching greater weight to an argument than is justified in the eyes of the audience at that time. For example, talking as if you think "simpler explanation" is a knockdown argument for evolution (which it is), rather than a sorta-interesting idea (which it sounds like to someone who hasn't been raised to revere Occam's Razor).
Oh, and you'd better not drop any hints that you think you're working a dozen inferential steps away from what the audience knows, or that you think you have special background knowledge not available to them. The audience doesn't know anything about an evolutionary-psychological argument for a cognitive bias to underestimate inferential distances leading to traffic jams in communication. They'll just think you're condescending.
And if you think you can explain the concept of "systematically underestimated inferential distances" briefly, in just a few words, I've got some sad news for you...