Raising the Sanity Waterline

To paraphrase the Black Belt Bayesian:  Behind every exciting, dramatic failure, there is a more important story about a larger and less dramatic failure that made the first failure possible.

If every trace of religion was magically eliminated from the world tomorrow, then—however much improved the lives of many people would be—we would not even have come close to solving the larger failures of sanity that made religion possible in the first place.

We have good cause to spend some of our efforts on trying to eliminate religion directly, because it is a direct problem.  But religion also serves the function of an asphyxiated canary in a coal mine—religion is a sign, a symptom, of larger problems that don't go away just because someone loses their religion.

Consider this thought experiment—what could you teach people that is not directly about religion, which is true and useful as a general method of rationality, which would cause them to lose their religions?  In fact—imagine that we're going to go and survey all your students five years later, and see how many of them have lost their religions compared to a control group; if you make the slightest move at fighting religion directly, you will invalidate the experiment.  You may not make a single mention of religion or any religious belief in your classroom, you may not even hint at it in any obvious way.  All your examples must center about real-world cases that have nothing to do with religion.

If you can't fight religion directly, what do you teach that raises the general waterline of sanity to the point that religion goes underwater?

Here are some such topics I've already covered—not avoiding all mention of religion, but it could be done:

But to look at it another way—

Suppose we have a scientist who's still religious, either full-blown scriptural-religion, or in the sense of tossing around vague casual endorsements of "spirituality".

We now know this person is not applying any technical, explicit understanding of...

  • ...what constitutes evidence and why;
  • ...Occam's Razor;
  • ...how the above two rules derive from the lawful and causal operation of minds as mapping engines, and do not switch off when you talk about tooth fairies;
  • ...how to tell the difference between a real answer and a curiosity-stopper;
  • ...how to rethink matters for themselves instead of just repeating things they heard;
  • ...certain general trends of science over the last three thousand years;
  • ...the difficult arts of actually updating on new evidence and relinquishing old beliefs;
  • ...epistemology 101;
  • ...self-honesty 201;
  • ...etcetera etcetera etcetera and so on.

When you consider it—these are all rather basic matters of study, as such things go.  A quick introduction to all of them (well, except naturalistic metaethics) would be... a four-credit undergraduate course with no prerequisites?

But there are Nobel laureates who haven't taken that course!  Richard Smalley if you're looking for a cheap shot, or Robert Aumann if you're looking for a scary shot.

And they can't be isolated exceptions.  If all of their professional compatriots had taken that course, then Smalley or Aumann would either have been corrected (as their colleagues kindly took them aside and explained the bare fundamentals) or else regarded with too much pity and concern to win a Nobel Prize.  Could you—realistically speaking, regardless of fairness—win a Nobel while advocating the existence of Santa Claus?

That's what the dead canary, religion, is telling us: that the general sanity waterline is currently really ridiculously low.  Even in the highest halls of science.

If we throw out that dead and rotting canary, then our mine may stink a bit less, but the sanity waterline may not rise much higher.

This is not to criticize the neo-atheist movement.  The harm done by religion is a clear and present danger, or rather, a current and ongoing disaster.  Fighting religion's directly harmful effects takes precedence over its use as a canary or experimental indicator.  But even if Dawkins, and Dennett, and Harris, and Hitchens should somehow win utterly and absolutely to the last corner of the human sphere, the real work of rationalists will be only just beginning.

 

Part of the sequence The Craft and the Community

Next post: "A Sense That More Is Possible"

(start of sequence)

208 comments

I already mentioned this as a comment to another post, but it's worth repeating here: The human brain has evolved some "dedicated hardware" for accelerating certain tasks.

I already mentioned in that other post that one such hardware was for recognizing faces, and that false positives generated by this hardware caused us to have a feeling of hauntedness and ghosts (because the brain receives a subconscious signal indicating the presence of a face, but on consciously looking around we see no one there).

Another such hardware (which I only briefly alluded to in the other post) was "agency detection", i.e. trying to figure out whether a certain event occurred "naturally", or because another agent (a friend, a foe, or a neutral?) caused it to happen. False positives from this hardware would cause us to "detect agency" where none was; and if the event seems far beyond any human's capacity to control, then, since humans seem to be the most powerful "natural" beings in the universe, the agent in question must be something supernatural, like God.

I don't have all the details worked out, but it seems plausible that agency-detection could have been naturally selected for, perhaps to be able to integrate better into a society, and to help with knowing when it is appropriate to cooperate and when it is appropriate to defect. It's a useful skill to be able to differentiate between "something good happened to me, because this person wanted something good to happen to me and made it happen. They cooperated (successfully). I should become their friend." versus "something good happened to me, despite this person wanting something bad to happen to me, but it backfired on them. They defected (unsuccessfully). I should be wary of them."

From there, bring in Anna Salamon's and Steve Rayhawk's ideas about tag-along selection, and it seems like religion really may be a tag-along evolutionary attribute.

Anyway, I used to be scared of ghosts and the dark and stuff like that, but once I found out about the face-recognition hardware and its false positives (and other hardware, such as sound-location), this fear disappeared almost completely and almost instantaneously.

I was already atheist or agnostic (depending on what definitions you assign to those words) when I found out about the hardware false-positives, so I can't say for sure whether had I been religious, this would have converted me.

But if it worked at making me stop "believing"[1] in ghosts, then perhaps it could work at making people stop believing in God as well.

1: Here I am using the term "believe" in the sense of Yvain's post on haunted rationalists. Like everyone else, I would assert that ghosts didn't really exist, and would be willing to make a wager that they didn't exist. And yet, like everyone else, I was still scared of them.

Excellent description. Reminds me a little of Richard Dawkins in "The God Delusion," explaining how otherwise useful brain hardware 'misfires' and leads to religious belief.

You mention agency detection as one of the potential modules that misfire to bring about religious belief. I think we can generalize that a little more and say fairly conclusively that the ability to discern cause and effect was favored by natural selection, and that, given limited mental resources, it certainly favored errors where a cause was perceived even when there was none, rather than the opposite. In the simplest scenario, imagine hearing a rustling in the bushes: you're better off always assuming there's a cause and checking for predators and enemies. If you wrote it off as nothing, you'd soon be removed from the gene pool.

Relatedly, there is evidence that the parts of the brain responsible for our ability to picture absent or fictional people are the same ones used in religious thought. It's understandable why these were selected for: if you come back to your cave to find it destroyed or stolen, it helps to imagine the neighboring tribe raiding it.

These two mechanisms seem to apply to religion: people see a cause behind the most mundane events, especially rare or unusual events. Of course they disregard the giant sample size of times such events failed to happen, but those are of course less salient. It's a quick hop to imagining an absent/hidden/fictional person - an agent - responsible for causing these events.

Undermining religion on rational grounds must thus begin with destroying the idea that there is necessarily an agent intentionally causing every effect. This should get easier: market economies are famously results of human action, but not of human design - any given result may be the effect of an agent's action, but not necessarily one the agent intended. Thus, such results are not fundamentally different from, say, storms: effects of physical causes but with no intent behind them.

It would probably also help to remind people of sample size. I recently heard a story by a religious believer who based her faith on her grandfather's survival in the Korean War, which happened against very high odds. Someone like that must be reminded that many people did not survive similar incidents, and that there is likely no force behind it but random chance. It is much like life: if life is possible on only 0.000000001% of planets, and arises on the same tiny percentage of those, then given enough planets you will still have life.
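
(To make that arithmetic concrete: 0.000000001% is a fraction of 10^-11, so 10^-11 of 10^-11 of, say, 10^24 planets still comes out to about 100 planets with life. The 10^24 here is only an assumed order-of-magnitude planet count for the observable universe, not a figure from the comment.)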

Agency misfires and causal misfires can help to suggest religion. For that suggestion to get past your filters, the sanity waterline has to be low. I don't invent a new religion every time I see a face in the clouds or three dandelions lined up in a row.

Neither do I, though I'm often tempted to find a reason for why my iPod's shuffle function "chose" a particular song at a particular time. ["Mad World" right now.]

It seems that our mental 'hardware' is very susceptible to agency and causal misfires, leaving an opening for something like religious belief. Robin explained religious activities and beliefs as important in group bonding [http://www.overcomingbias.com/2009/01/why-fiction-lies.html], but the fact that religion arose may just be a historical accident. It's likely that something would have arisen in the same place as a group bonding mechanism - perhaps religion just found the gap first. From an individual perspective, this hardly means that the sanity waterline is low. In fact, evolutionarily speaking, playing along may be the sanest thing to do.

The relevant sentence from Robin's post: "Social life is all about signaling our abilities and cooperativeness, and discerning such signals from others." As Norman points out [link below], self-deception makes our signals more credible, since we don't have to act as believers if we are believers. As a result, in the ancestral environment at least, it's "sane" to believe what others believe and not subject it to a conscious and costly rationality analysis. You'd basically expend resources to find out a truth that would make it more difficult for me to deceive others, which is costly in itself.

Of course today, the payoff from signaling group membership is far lower than ever before, which is why religious belief, and especially costly religious activities, violate sanity. Which, perhaps, is why secularism is on the rise: http://www.theatlantic.com/doc/200803/secularism

I think this is a good answer to Eliezer's thought experiment. Teach those budding rationalists about the human desire to conform even in the face of the prima facie ridiculousness of the prevailing beliefs.

Teach them about greens and blues; teach them about Easter Islanders building statues with their last failing stock of resources (or is that too close to teaching about religion?). Teach them how common the pattern is: when something is all around you, you are less likely to doubt its wisdom.

Human rationality (at least for now) is still built on the blocks and modules provided to us by evolution. They can lead us astray, like the "posit agency" module firing when no agent is there. But they can also be powerful correctives. A pattern-recognizing module is a dangerous thing when we create imaginary patterns... but, oh boy, when there actually is a pattern there, let that module rip!

If I recall, that trope corresponds to prior points stating that humans are driven by evolutionary heuristics to assign agency-based causality to a random probability distribution. However, the laconic does summarize that fallacy rather well. Narrative examples such as tropes do tend to ease comprehension. +1 Karma

This should get easier: market economies are famously results of human action, but not of human design - any given result may be the effect of an agent's action, but not necessarily one the agent intended. Thus, such results are not fundamentally different from, say, storms: effects of physical causes but with no intent behind them.

The conspiracy theory of economics remains prevalent, however, and very difficult to disabuse people of. So I'm not sure this is that helpful a handle to disabuse people of religion.

Here's another way of evaluating the sanity of religious belief:

It's arguable that the original believers of religion were insane (e.g. shamans with schizotypal personality disorder, temporal lobe epilepsy, etc.), yet with each subsequent believer in your culture, you are less and less insane to believe in it. Through most of past history, it would only take a few insane or gullible people with good oratorical skills getting together to make religion sanely believable.

If you are religious because you see spirits, you are insane. If you are religious because your friend Shaman Bob sees spirits and predicts the rainfall, you aren't very smart, but you aren't insane either. If you are religious because your whole tribe believes in the spirits seen by Shaman Bob and has indoctrinated you from birth, you are not insane at all, you are a typical human.

Even better:

Evidence for the existence of God: my ancestors saw God and talked to him, and he did really great things for them, and so they passed down stories about it so that we'd remember. Everybody knows that.

Evidence for the existence of Jesus: same.

Evidence for the existence of Hercules: same.

Evidence for the existence of Socrates: same.

Evidence for the existence of Newton: same. Okay, we have a few more records of this one.

Exactly. These are all sane beliefs, even though only some of them are rational.

When a coin comes out tails ten times in a row, you'll bet on it being rigged strictly depending on your prior belief about how much you expect it to be rigged. Evidence only makes sense given your prior belief, inferred from other factors. If I hear a report of a devastating hurricane, I believe it more than if the very same report stated that a nuclear bomb went off in a city, and I won't believe it at all if it stated that the little green men have landed in front of the White House.
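
A minimal sketch of that point, in code (an illustration only; the assumption that a "rigged" coin always lands tails is mine, not the commenter's): after ten tails in a row, the posterior is driven almost entirely by the prior.

    # Illustration: how much the posterior that the coin is rigged depends on
    # the prior, after observing ten tails in a row.  Assumes a rigged coin
    # always lands tails, while a fair coin lands tails with probability 0.5.
    def posterior_rigged(prior_rigged, num_tails=10):
        """P(rigged | num_tails tails in a row), by Bayes' theorem."""
        likelihood_rigged = 1.0              # rigged coin: tails every time (assumed)
        likelihood_fair = 0.5 ** num_tails   # fair coin: (1/2)^10
        joint_rigged = prior_rigged * likelihood_rigged
        joint_fair = (1.0 - prior_rigged) * likelihood_fair
        return joint_rigged / (joint_rigged + joint_fair)

    for prior in (0.5, 0.01, 1e-6, 1e-12):
        print(f"prior {prior:.0e} -> posterior {posterior_rigged(prior):.6f}")

With a 50% prior the ten tails make rigging a near-certainty; with a one-in-a-million prior the same evidence leaves it at roughly a tenth of a percent.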

This is one of the principles of rationality I'm proud to say I discovered on my own: http://heresiology.blogspot.com/2006/01/intelligent-design-1.html

Short version: An interesting formation on Earth might be a sign of human involvement; an interesting formation on Mars, not so much.

It's arguable that the original believers of religion were insane (e.g. shamans with schizotypal personality disorder, temporal lobe epilepsy, etc.), yet with each subsequent believer in your culture, you are less and less insane to believe in it.

But this would be true only if the subsequent believers were not taking into account previous believers as evidence - if they had all come to the same view independently. Otherwise we have an information cascade.

Information cascades may be irrational, but they seem fully sane and neurotypical.

If you are religious because your whole tribe believes in the spirits seen by Shaman Bob and has indoctrinated you from birth, you are not insane at all, you are a typical human.

The point is that a typical contemporary human is insane. The problem doesn't go away if everyone is suffering from it. Death is still bad even if everyone dies, and believing in nonsense is still insane, even if everyone bends to some reason to do so.

Yes, there is a "problem" that everyone is suffering from. But the problem is stupidity, not insanity. There is no reasonable basis to assign insanity to typical contemporary humans just because their brains can't achieve the rationality that a minority of human brains can, unless someone actually has some arguments showing some brain malfunction.

Believing in nonsense is not at all insane if your brain is hardwired to be biased towards certain types of nonsense, or if you aren't smart enough to figure out that you are encountering nonsense. And this is exactly how normal human beings are.

The normal, healthy, and sane functioning of typical contemporary human brains is to be susceptible to certain biases. That's the whole thesis of Overcoming Bias. The sooner that we atypical rationalists get used to this, the better, because myopically characterizing bias as insanity will disguise the fact that one of the biggest threats to rationality is certain perfectly healthy processes in the typical human brain.

Taboo "sane". "Neurotypical" might be a good substitute.

If you want people to repeat this back, write it in a test, maybe even apply it in an academic context, a four-credit undergrad course will work.

If you want them to have it as the ground state of their mind in everyday life, you probably need to have taught them songs about it in kindergarten.

If you want them to have it as the ground state of their mind in everyday life, you probably need to have taught them songs about it in kindergarten.

I don't know; I agree with you about the likely effects of the four-credit class, but OB has had substantial effects on me and various other people I know, despite not reaching us in kindergarten. Why does OB work as well as it does?

Also, I think it's the way OB's teachings get reinforced daily. You don't just study one course and then forget about it: if you read OB/LW regularly, you get constant tiny nudges in the right direction. There's research suggesting that frequent small events have a stronger effect on one's happiness than rare big ones, and I suspect it's the same when it comes to learning new patterns of thought. Our minds are constantly changing and adapting, so if you just make a change once, it'll be drowned out in the sea of other changes. You'll want to bring it up to the point where it becomes self-reinforcing, and that takes time.

This is the reason why I suspect Eliezer's book won't actually have as big an effect as many may think. Most people will probably read it, think it amazing, think they absolutely have to apply it to their normal lives... then go on and worry about their bills and partners and forget about the book. The main benefit will be for those who'll actually be startled enough to go online and find out more - if they end up as regular readers of OB and LW, or find some other rationality resource, then they have hope. Otherwise, probably not.

This is a very good point that I'll try to keep in mind, and another solution would be to have a decent community.

Perhaps Eliezer's book should have a note -- please read one chapter per day?

I don't know, I came in and read a little over a year's worth of Eliezer's OB posts in a couple months' exploration, and I think it had a pretty solid impact on me.

Unrepresentative sample. Nobody would start reading OB unless they were already at least a rationalist-wannabe.

I agree about the unrepresentative sample. It would be interesting to try teaching OB in a small class-sized four-credit college seminar, with a follow-up a year later, to see if the material can be presented so as to have impact on ordinary university students, or on ordinary students at a selective university. Probably worthwhile as an experiment, after we do some more basic research seeing if we can detect this "rationality" thing in a survey or something of the OB readership (so we'd know what to test for).

But even given that OB is starting with all or mostly rationalist wannabes, I'm surprised at the impact it's had on my and others' thinking, relative to what happens to rationalist wannabes who don't read OB, or who aren't members of this community.

I'd be interested in trying to drag the age range down as low as possible - could 13-year-olds handle uncut OB? I think yes.

I can only speak for myself here, but personally what changed my thinking after reading OB was understanding both how things work, and why they necessarily must be that way and no other. Now when I think about that, I realize it allowed me to completely prune many search trees and redirect a lot of wasted effort "sitting on fences".

I started reading OB because I liked Robin Hanson as an economist. I continued reading because I liked Yudkowsky as a writer. I agree I'm still part of an unrepresentative sample (people who are willing to read and consider Yudkowsky's long ramblings), but not everyone found the site because of an interest in rationality per se.

Unfortunately, anyone taking a college course probably would be interested in rationality qua rationality. But the lessons are still valuable for those poor souls who, like I once was, are still religious despite it. The same for those who are religious fence-sitters.

There was a time in history when religion was completely eliminated from social and scientific life -- the Soviet period, roughly from the 1920s to the 1980s.

I'm not informed well enough to judge the effects the removal of religion had on Soviet science. Granted, the country went from rubble to Sputnik and nuclear weapons, but it is hard for me to untangle the causes of this -- there were other powerful factors at work (e.g. "if you don't do good science, we'll send you and your family to the Gulag").

One thing, however, is certain -- after the Soviet Union collapsed, religion won back its lost ground in a matter of a few years. The memetic sterilization that had been going on for several generations didn't help at all.

Now, about 20 years after the collapse, we see quite a lot of academics publicly mentioning God in their TV interviews, and you'll never hear a public politician mentioning that he is an atheist -- after doing so, his career would be instantly ruined.

To sum up, I have to agree with the posters suggesting that the 'God-shaped hole' wanting to be filled is innate. Figuring out whether religion is an epistemic need, a signaling tool, or both of these mixed in some proportion is another story.

It doesn't have to be a 'God-shaped hole' -- there probably is a hole, and over the past few millennia, the Goddists have learned some excellent strategies to fill it, and to exploit it for the replication of their memes. People like Sagan and Dawkins have spent their lives trying to show that science, properly understood and appreciated, fills the hole better, fits it more truly, than do the ideas of religion.

Bottom line: we're not selling Sweet'n'Low here. If we slap "I Can't Believe It's Not Christ!" on the jar, if we act as though religion is the "real thing," and we've got a convenient stop-gap, people are going to want to go back to the "real thing" every time.

Agreed, the term 'God-shaped hole' is misleading. Actually, I didn't mean any specific monotheistic God, but rather 'One or more anthropomorphic entities with supernatural powers who created the observable world'.

Yes, the Goddists learned to exploit the Hole quite well, but couldn't it be because the Hole provided a better environment for survival of memes involving powerful anthropomorphic entities than for other kinds of memes?

As for science filling the hole better, I of course agree with this, but a layperson may have a different definition of 'better' for this context. You, Dawkins, Sagan and most OB/LW readers define 'better' as 'more closely corresponding to reality', while a layperson may define 'better' as 'making me feel more comfortable'.

(Also, I don't quite understand what part of my post can be interpreted as suggesting to "act as though religion is the "real thing," or that scientific worldview is a quick-and-easy hole filler -- it obviously isn't. Perhaps I wasn't clear enough -- I'm not a native English speaker.)

(Also, I don't quite understand what part of my post can be interpreted as suggesting to "act as though religion is the "real thing," or that scientific worldview is a quick-and-easy hole filler -- it obviously isn't. Perhaps I wasn't clear enough -- I'm not a native English speaker.)

Sorry, I didn't mean to imply that you were implying that religion is epistemically the real thing. More that...our sense of sweetness is supposed to detect sugar. Sugar is the real referent of our pleasure in sweet-tasting things, while something like sucralose is simply a substitute, a way of replacing it. I worry that by saying "God-shaped hole," we imply that the supernatural -- whether or not it exists -- really is the original referent of the desires which religion exploits. This could be true, but I do not think it is, and I do not think it is a point we should concede just yet.

I just read a nice blog post at neurowhoa.blogspot.com/2009/03/believer-brains-different-from-non.html, covering research on brain differences between believers and non-believers. The takeaway from the recent study was "religious conviction is associated with reduced neural responsivity to uncertainty and error". I'm hesitant to read too much into this particular study, but if there is something to it, then the best way to spread rational thought would be to try to correct for this deficiency. Practicing not to let uncertainty or errors slide by, no matter how small, would result in a positive habit and develop one's rationality skills.

It seems to me that the principal issue is that, even if you know all those things... that doesn't guarantee that you're actually applying them to your own beliefs or thought processes. There is no "view source" button for the brain, nor even a way to get a stack trace of how you arrived at a particular conclusion... and even if there were, most of us, most of the time, would not push the button or look at the trace, if we were happy with our existing/expected results.

In addition, most people are astonishingly bad at reasoning from the general to the specific... which means that if you don't mention religion explicitly in your hypothetical course, very few people will actually apply the skills in a religious context... especially if that part of their life is working out just fine, from their point of view.

It may be fictional evidence, but I think S.P. Somtow's idea that "The breaking of joy is the beginning of wisdom" has some applicability here... as even highly-motivated individuals have trouble learning to see their beliefs, as beliefs -- and therefore subject to the skills of rationality.

That is, if you think something is part of the territory, you're not going to apply something you think of as map-reading skills.

Hm, in fact, here's an interesting example. One of my students in the Mind Hackers' Guild just posted to our forum, complaining that by eliminating all his negative motivation regarding work, he now had no positive motivation either. But it was not apparent to him that the very fact he considered this a problem, was also an example of negative motivation.

That's because even though I teach people that ALL negative motivation is counterproductive for achieving long-term, directional goals (as opposed to very short-term or avoidance goals), people still assume that "negative motivation" means "motivations I don't like, or already know are irrational"... and so they make exceptions for all the things they think are "just the way it is". (Like in this man's case, an irrational fear linked to his need to "pay the bills".)

And this happens routinely with people, no matter how explicitly and repeatedly I state that, "no, you have to include those too". It seems like people still have to go through the process at least once or twice with someone pointing one of these out, before they "get it" that those other motivations also "count".

Heck, truth be told, I still sometimes take a while to find what hidden assumption in my thinking is leading to interference... even at times when I'd happily push the "view source" button or look at the stack trace... if only that were possible.

But since I routinely and trivially notice these map-territory confusions when my students do them, even without a view-source button -- heck, I can spot them from just a few words in the middle of their forum posts! -- I have to conclude that there is something innate at issue, besides me just not being a good enough teacher. After all, if I can spot these things in them, but not in myself, there must be some sort of bias at work.

I suspect you are right; the issue isn't that these people haven't "learned" relevant abstractions or tools. They just don't have enough incentives to apply those tools in these contexts. I'm not sure you can "teach" incentives, so I'm not sure there is anything you can teach which will achieve the goal stated. So I'd ask the question: how can we give people incentives to apply their tools to cases like religion?

It's not incentive either. I have plenty of incentive, and so do my students. It's simply that we don't notice our beliefs as beliefs, if they're already in our heads. (As opposed to the situation when vetting input that's proposed as a new belief.)

Since we don't have any kind of built-in function for listing ALL the beliefs involved in a given decision, we are often unaware of the key beliefs that are keeping us stuck in a particular area. We sit there listing all the "beliefs" we can think of, while the single most critical belief in that area isn't registering as a "belief" at all; it just fades in as part of our background assumptions. To us, it's something like "water is wet" -- sure it's a belief, but how could it possibly be relevant to our problem?

Usually, an irrational fear associated with something like, "but how will I pay the bills?" masquerades as simple, factual logic. But the underlying emotional belief is usually something more like, "If I don't pay the bills, then I'm an irresponsible person and no-one will love me." The underlying belief is invisible because we don't look underneath the "logic" to find the emotion hiding underneath.

Unfortunately, all reasoning is motivated reasoning, which means that to find your irrational beliefs in a given area, you have to first dig up a nontrivial number of rationalizations... knowing that the rationalization you're looking for is probably something you specifically created to prevent you from thinking about the motivation involved in the first place! (After all, revealing to others that you think you're irresponsible isn't good genetic fitness... and if you know, that makes it more likely you'll unintentionally reveal it.)

A simple tool, by the way, for digging up the motivation behind seemingly "factual" statements and beliefs is to ask, "And what's bad about that?" or "And what's good about that?".... usually followed by, "And what does that say/mean about YOU?" You pretty quickly discover that nearly everything in the universe revolves around you. ;-)

I'd say there're two problems: one is incentives, as you say; the other is making "apply these tools to your own beliefs" a natural affordance for people -- something that just springs to mind as a possibility, the way drinking a glass of liquid springs to mind on seeing it (even when you're not thirsty, or when the glass contains laundry detergent).

Regarding incentives: good question. If rationality does make peoples' lives better, but it makes their lives better in ways that aren't obvious in prospect, we may be able to "teach" incentives by making the potential benefits of rationality more obvious to the person's "near"-thinking system, so that the potential benefits can actually pull their behavior. (Humans are bad enough at getting to the gym, switching to more satisfying jobs in cases where this requires a bit of initial effort, etc., that peoples' lack of acted-on motivation to apply rationality to religion does not strongly imply a lack of incentives to do so.)

Regarding building a "try this on your own beliefs" affordance (so that The Bottom Line or other techniques just naturally spring to mind): Cognitive-Behavioral Therapy people explicitly teach the "now apply this method to your own beliefs, as they come up" steps, and then have people practice those steps as homework. We should do this with rationality as well (even in Eliezer's scenario where we skip mention of religion). The evidence for CBT's effectiveness is fairly good AFAICT; it's worth studying their teaching techniques.

I think there's a question of understanding here, not just incentives. The knowledge of minds as cognitive engines, or the principle of the bottom line, is the knowledge that in full generality you can't draw an accurate map of a city without seeing it or having some other kind of causal interaction with it. This is one of the things that readers have cited as the most important thing they learned from my writing on OB. And it's the difference between being told an equation in school to use on a particular test, versus knowing under what (extremely general) real-world conditions you can derive it.

Like the difference between being told that gravity is 9.8 m/s^2 and being able to use that to answer written questions about gravity on a test or maybe even predict the fall of clocks off a tower, but never thinking to apply this to anything except gravity. Versus being able to do and visualize the two steps of integral calculus that get you from constant acceleration A to 1/2 A t^2, which is much more general than gravity.
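
Spelled out, and taking initial velocity and position to be zero for simplicity (my simplifying assumption, not something stated in the comment), those two steps are:

    \[
    v(t) = \int_0^t A \, dt' = A t,
    \qquad
    x(t) = \int_0^t A t' \, dt' = \tfrac{1}{2} A t^2 .
    \]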

If you knew on a gut level - as knowledge - that you couldn't draw a map of a city without looking at it, I think the issue of incentives would be a lot mooter. There might still be incentives whether or not to communicate that understanding, whether or not to talk to others about it, etc., but on a gut level, you yourself would just know.

Even if you "just know", this doesn't grant you the ability to perform an instantaneous search-and-replace on the entire contents of your own brain.

Think of the difference between copying code, and function invocation. If the function is defined in one place and then reused, you can certainly make one change, and get a multitude of benefits from doing so.

However, this relies on the original programmer having recognized the pattern, and then consistently using a single abstraction throughout the code. But in practice, we usually learn variations on a theme before we learn the theme itself, and don't always connect all our variations.

And this limitation applies equally to our declarative and procedural memories. If there's not a shared abstraction in use, you have to search-and-replace... and the brain doesn't have very many "indexes" you can use to do the searching with -- you're usually limited to searching by sensory information (which can include emotional responses, fortunately), or by existing abstractions. ("Off-index" or "table scan" searches are slower and unlikely to be complete, anyway -- think of trying to do a search and replace on uses of the "visitor" pattern, where each application has different method names, none of which include "visit" or use "Visitor" in a class name!)
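
A toy illustration of the copying-versus-invocation point (hypothetical code, not anything from the thread): a wrong rule captured in one shared function gets corrected once for all callers, while copied variations each have to be hunted down separately.

    # Copied variations: the rule is duplicated, so fixing it means finding
    # every copy -- the "search-and-replace" problem described above.
    def book_price_with_tax(price):
        return price * 1.08   # rule written out here...

    def food_price_with_tax(price):
        return price * 1.08   # ...and again here, under a different name.

    # Shared abstraction: one definition, many invocations.  Correcting
    # TAX_RATE once fixes every caller at the same time.
    TAX_RATE = 1.08

    def price_with_tax(price):
        return price * TAX_RATE

    print(price_with_tax(10.0), book_price_with_tax(10.0))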

It seems to me that yours and Robin's view of minds still contains some notion of a "decider" -- that there's some part of you that can just look and see something's wrong and then refuse to execute that wrongness.

But if mind is just a self-modifying program, then not only are we subject to getting things wrong, we're also subject to recording that wrongness, and perpetuating it in a variety of ways... recapitulating the hardware wrongs on a software level, in other words.

And so, while you seem to be saying, "if people were better programmers, they'd write better code"... it seems to me you're leaving out the part where becoming a better programmer has NO effect...

On all the code you've already written.

Think of something you might have said to Kurt Gödel: He was a theist. (And not a dualist: he thought materialism was wrong.) In fact he believed the world is rational, and also that it is a Leibnizian monadology with God as the central monad. He was certainly NOT guilty of not applying Eliezer's list of "technical, explicit understandings," as far as I can see. I should point out that he separated the question about religion: "Religions are, for the most part, bad -- but religion is not." (Gödel in Wang, 1996.)

To return to the question asked in the original post:

what could you teach people that is not directly about religion, which is true and useful as a general method of rationality, which would cause them to lose their religions?

My first reaction to the question -- too many constraints. I can't quickly think of anything that satisfies all three of them. However, if I'm allowed to drop one constraint, I'd drop the second one ("useful as a general method of rationality"), and my answer would be evolution.

In my experience, understanding evolution down to chemistry, down to predictable interactions of very simple parts that have nothing mystical or anthropomorphic about them can have a tremendous impact on one's further thinking.

I'd second that. In fact, I think that knowing about evolution is probably a necessary prerequisite to being a rational atheist. Even Dawkins admits that it would have been pretty difficult not to believe in God before there was a plausible naturalist explanation for the complexity of life.

Of course, it's possible to know that there is a plausible naturalist explanation without really understanding the nuts and bolts of how it works (I'd probably put myself in that category), but maybe really understanding it does help to hammer home the point: if The Creator wasn't necessary to make life, what exactly is He for?

My father grew up in a heavily religious family, and rejected religion at an early age. I'd say he was a clever fellow, but the turning point wasn't intelligence, it was what a horrible little bastard he was as a child, as any of his siblings would tell you.

If you just don't give a shit, all the emotional manipulation in the world will just wash over you like water off a duck's back. And that's all religion really has going for it, appealing to hope, to fear, to love, to respect, to piety, to community.

If you can teach people truly not to care, a huge rotten portion of their psyche falls away. There is a cost, of course... but this will do the job the post demands.

Recently I contemplated writing an "Atheist's Bible", to present the most important beliefs of atheists. Eventually I realized that this Atheist Bible would not mention atheism. "Atheism" is just the default belief state we were born with. Atheism isn't having reasons not to believe religion; it's not having reasons to believe religion. If one knows how the world works, there are no gaps for religion to fill.

The French Encyclopedia of the late 18th century was by design an atheist work; it carried out this design by not mentioning religion.

On the contrary, I would argue that our default belief state is one full of scary monsters trying to kill us and whirling lights flying around overhead and oh no what is this loud noise and why am I wet

...I can't imagine a human ancestor in that kind of situation not coming up with some kind of desperate Pascal's wager of, "I'll do this ritualistic dance to the harvest goddess because it's not really that much trouble to do in the grand scheme of things, and man if there's any chance of improving the odds of a good harvest, I'm shakin' my rain-maker." Soon you can add, "and everyone else says it works" to the list, and bam, religion.

On the contrary, I would argue that our default belief state is one full of scary monsters trying to kill us and whirling lights flying around overhead and oh no what is this loud noise and why am I wet

There is no mention of God in that state; therefore it is atheism. Any person who does not believe in God is an atheist. Anyone who has never thought about whether there is a god, or doesn't have the concept of god, is therefore an atheist.

Plug that data into a brain that's been optimized by evolution for thinking about agents and their motives rather than about atmospheric physics, and it's no surprise that you get outputs like "Who threw that rain at me!? What'd I ever do to you, rain-agent? Why are you pissed off at me? What can I do to make you do what I want you to do?"

What I meant was, the moment anyone comes up with such a concept, it would appear so completely and undeniably sensible that it would instantly take hold as accepted truth, and would only become dislodged by the combined philosophical efforts of humanity's greatest minds over thousands of years.

It's not technically "default", but that's like saying a magnet is not attracted to a nearby piece of iron "by default" because there's no nearby piece of iron implied by the existence of the magnet. It's technically true, but it kind of misses the important description of a property of magnets.

This book might already exist, in the form of "System of Nature" by Baron d'Holbach. It's a refreshing read, highly recommended :-)

Edit: You can read it here at Project Gutenberg: Vol. 1, Vol.2.

There are a couple of large gorillas in this room.

First, the examples of great scientists who were also religious show that you don't have to be an atheist to make great discoveries. I think the example of Isaac Newton is especially instructive: not only did Newton's faith not interfere with his ability to understand reality, it also constituted the core of his motivation to do so (he believed that by understanding Nature he would come to a greater understanding of God). Faraday's example is also significant: his faith motivated him to refuse to work on chemical weapons for the British government.

Second, evidence shows that religious people are happier. Now, this happiness research is of course murky, and we should hesitate to make any grand conclusions on the basis of it. But if it is true, it is deeply problematic for the kind of rationality you are advocating. If rationalists should "just win", and we equate winning with happiness, and the faithful are happier than atheists, then we should all stop reading this blog and start going to church on Sundays.

There are subtleties here that await discovery. Note for example Taleb's hypothesis that the ancients specifically promoted religion as a way of preventing people from going to doctors, who killed more people than they saved until the 19th century. Robin made a similar point about the cost effectiveness of faith healing.