It ain’t a true crisis of faith unless things could just as easily go either way.

—Thor Shenkel

Many in this world retain beliefs whose flaws a ten-year-old could point out, if that ten-year-old were hearing the beliefs for the first time. These are not subtle errors we’re talking about. They would be child's play for an unattached mind to relinquish, if the skepticism of a ten-year-old were applied without evasion. As Premise Checker put it, "Had the idea of god not come along until the scientific age, only an exceptionally weird person would invent such an idea and pretend that it explained anything."

And yet skillful scientific specialists, even the major innovators of a field, even in this very day and age, do not apply that skepticism successfully. Nobel laureate Robert Aumann, of Aumann’s Agreement Theorem, is an Orthodox Jew: I feel reasonably confident in venturing that Aumann must, at one point or another, have questioned his faith. And yet he did not doubt successfully. We change our minds less often than we think.

This should scare you down to the marrow of your bones. It means you can be a world-class scientist and conversant with Bayesian mathematics and still fail to reject a belief whose absurdity a fresh-eyed ten-year-old could see. It shows the invincible defensive position which a belief can create for itself, if it has long festered in your mind.

What does it take to defeat an error that has built itself a fortress?

But by the time you know it is an error, it is already defeated. The dilemma is not “How can I reject long-held false belief X?” but “How do I know if long-held belief X is false?” Self-honesty is at its most fragile when we’re not sure which path is the righteous one. And so the question becomes:

How can we create in ourselves a true crisis of faith, that could just as easily go either way?

Religion is the trial case we can all imagine.2 But if you have cut off all sympathy and now think of theists as evil mutants, then you won’t be able to imagine the real internal trials they face. You won’t be able to ask the question:

What general strategy would a religious person have to follow in order to escape their religion?

I’m sure that some, looking at this challenge, are already rattling off a list of standard atheist talking points—“They would have to admit that there wasn’t any Bayesian evidence for God’s existence,” “They would have to see the moral evasions they were carrying out to excuse God’s behavior in the Bible,” “They need to learn how to use Occam’s Razor—”

Wrong! Wrong wrong wrong! This kind of rehearsal, where you just cough up points you already thought of long before, is exactly the style of thinking that keeps people within their current religions.  If you stay with your cached thoughts, if your brain fills in the obvious answer so fast that you can't see originally, you surely will not be able to conduct a crisis of faith.

Maybe it’s just a question of not enough people reading Gödel, Escher, Bach at a sufficiently young age, but I’ve noticed that a large fraction of the population—even technical folk—have trouble following arguments that go this meta.3 On my more pessimistic days I wonder if the camel has two humps.

Even when it’s explicitly pointed out, some people seemingly cannot follow the leap from the object-level “Use Occam’s Razor! You have to see that your God is an unnecessary belief!” to the meta-level “Try to stop your mind from completing the pattern the usual way!” Because in the same way that all your rationalist friends talk about Occam’s Razor like it’s a good thing, and in the same way that Occam’s Razor leaps right up into your mind, so too, the obvious friend-approved religious response is “God’s ways are mysterious and it is presumptuous to suppose that we can understand them.” So for you to think that the general strategy to follow is “Use Occam’s Razor,” would be like a theist saying that the general strategy is to have faith.

“But—but Occam’s Razor really is better than faith! That’s not like preferring a different flavor of ice cream! Anyone can see, looking at history, that Occamian reasoning has been far more productive than faith—”

Which is all true. But beside the point. The point is that you, saying this, are rattling off a standard justification that’s already in your mind. The challenge of a crisis of faith is to handle the case where, possibly, our standard conclusions are wrong and our standard justifications are wrong. So if the standard justification for X is “Occam’s Razor!” and you want to hold a crisis of faith around X, you should be questioning if Occam’s Razor really endorses X, if your understanding of Occam’s Razor is correct, and—if you want to have sufficiently deep doubts—whether simplicity is the sort of criterion that has worked well historically in this case, or could reasonably be expected to work, et cetera. If you would advise a religionist to question their belief that “faith” is a good justification for X, then you should advise yourself to put forth an equally strong effort to question your belief that “Occam’s Razor” is a good justification for X.4

If “Occam’s Razor!” is your usual reply, your standard reply, the reply that all your friends give—then you’d better block your brain from instantly completing that pattern, if you’re trying to instigate a true crisis of faith.

Better to think of such rules as, “Imagine what a skeptic would say—and then imagine what they would say to your response—and then imagine what else they might say, that would be harder to answer.”

Or, “Try to think the thought that hurts the most.”

And above all, the rule:

Put forth the same level of desperate effort that it would take for a theist to reject their religion.

Because if you aren’t trying that hard, then—for all you know—your head could be stuffed full of nonsense as bad as religion.

Without a convulsive, wrenching effort to be rational, the kind of effort it would take to throw off a religion—then how dare you believe anything, when Robert Aumann believes in God?

Someone (I forget who) once observed that people had only until a certain age to reject their religious faith. Afterward they would have answers to all the objections, and it would be too late. That is the kind of existence you must surpass. This is a test of your strength as a rationalist, and it is very severe; but if you cannot pass it, you will be weaker than a ten-year-old.

But again, by the time you know a belief is an error, it is already defeated. So we’re not talking about a desperate, convulsive effort to undo the effects of a religious upbringing, after you’ve come to the conclusion that your religion is wrong. We’re talking about a desperate effort to figure out if you should be throwing off the chains, or keeping them. Self-honesty is at its most fragile when we don’t know which path we’re supposed to take—that’s when rationalizations are not obviously sins.

Not every doubt calls for staging an all-out Crisis of Faith. But you should consider it when:

  • A belief has long remained in your mind;
  • It is surrounded by a cloud of known arguments and refutations;
  • You have sunk costs in it (time, money, public declarations);
  • The belief has emotional consequences (note this does not make it wrong);
  • It has gotten mixed up in your personality generally.

None of these warning signs are immediate disproofs. These attributes place a belief at risk for all sorts of dangers, and make it very hard to reject when it is wrong. And they hold for Richard Dawkins’s belief in evolutionary biology, not just the Pope’s Catholicism.

Nor does this mean that we’re only talking about different flavors of ice cream. Two beliefs can inspire equally deep emotional attachments without having equal evidential support. The point is not to have shallow beliefs, but to have a map that reflects the territory.

I emphasize this, of course, so that you can admit to yourself, “My belief has these warning signs,” without having to say to yourself, “My belief is false.”

But what these warning signs do mark is a belief that will take more than an ordinary effort to doubt effectively. It will take more than an ordinary effort to doubt in such a way that if the belief is in fact false, you will in fact reject it. And where you cannot doubt in this way, you are blind, because your brain will hold the belief unconditionally.  When a retina sends the same signal regardless of the photons entering it, we call that eye blind.

When should you stage a Crisis of Faith?

Again, think of the advice you would give to a theist: If you find yourself feeling a little unstable inwardly, but trying to rationalize reasons the belief is still solid, then you should probably stage a Crisis of Faith. If the belief is as solidly supported as gravity, you needn’t bother—but think of all the theists who would desperately want to conclude that God is as solid as gravity. So try to imagine what the skeptics out there would say to your “solid as gravity” argument. Certainly, one reason you might fail at a crisis of faith is that you never really sit down and question in the first place—that you never say, “Here is something I need to put effort into doubting properly.”

If your thoughts get that complicated, you should go ahead and stage a Crisis of Faith. Don’t try to do it haphazardly; don’t try it in an ad-hoc spare moment. Don’t rush to get it done with quickly, so that you can say, “I have doubted, as I was obliged to do.” That wouldn’t work for a theist, and it won’t work for you either. Rest up the previous day, so you’re in good mental condition. Allocate some uninterrupted hours. Find somewhere quiet to sit down. Clear your mind of all standard arguments; try to see from scratch. And make a desperate effort to put forth a true doubt that would destroy a false—and only a false—deeply held belief.

Elements of the Crisis of Faith technique have been scattered over many essays:

  • Avoiding Your Belief’s Real Weak Points—One of the first temptations in a crisis of faith is to doubt the strongest points of your belief, so that you can rehearse your good answers. You need to seek out the most painful spots, not the arguments that are most reassuring to consider.
  • The Meditation on Curiosity—Roger Zelazny once distinguished between “wanting to be an author” versus “wanting to write,” and there is likewise a distinction between wanting to have investigated and wanting to investigate. It is not enough to say, “It is my duty to criticize my own beliefs”; you must be curious, and only uncertainty can create curiosity. Keeping in mind conservation of expected evidence may help you update yourself incrementally: for every single point that you consider, and each element of new argument and new evidence, you should not expect your beliefs to shift more (on average) in one direction than another (a short numerical sketch of this follows the list). Thus you can be truly curious each time about how it will go.
  • Original Seeing—To prevent standard cached thoughts from rushing in and completing the pattern.
  • The Litany of Gendlin and the Litany of Tarski—People can stand what is true, for they are already enduring it. If a belief is true, you will be better off believing it, and if it is false, you will be better off rejecting it. You would advise a religious person to try to visualize fully and deeply the world in which there is no God, and to, without excuses, come to the full understanding that if there is no God then they will be better off believing there is no God. If one cannot come to accept this on a deep emotional level, one will not be able to have a crisis of faith. So you should put in a sincere effort to visualize the alternative to your belief, the way that the best and highest skeptic would want you to visualize it. Think of the effort a religionist would have to put forth to imagine, without corrupting it for their own comfort, an atheist’s view of the universe.
  • Tsuyoku Naritai!—The drive to become stronger.
  • The Genetic Heuristic—You should be extremely suspicious if you have many ideas suggested by a source that you now know to be untrustworthy, but by golly, it seems that all the ideas still ended up being right.
  • The Importance of Saying “Oops”—It really is less painful to swallow the entire bitter pill in one terrible gulp.
  • Singlethink—The opposite of doublethink. See the thoughts you flinch away from, that appear in the corner of your mind for just a moment before you refuse to think them. If you become aware of what you are not thinking, you can think it.
  • Affective Death Spirals and Resist the Happy Death Spiral—Affective death spirals are prime generators of false beliefs that it will take a Crisis of Faith to shake loose. But since affective death spirals can also get started around real things that are genuinely nice, you don’t have to admit that your belief is a lie, to try and resist the halo effect at every point—refuse false praise even of genuinely nice things. Policy debates should not appear one-sided.
  • Hold Off On Proposing Solutions—Don’t propose any solutions until the problem has been discussed as thoroughly as possible. Make your mind hold off on knowing what its answer will be; and try for five minutes before giving up—both generally, and especially when pursuing the devil’s point of view.
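Here is the numerical sketch promised in the Meditation on Curiosity item above, a quick check of conservation of expected evidence. The numbers are made up purely for illustration; the identity they verify is just the law of total probability:

    # Conservation of expected evidence, with made-up numbers:
    # before seeing the evidence, the expected posterior equals the prior,
    # so you cannot plan for the evidence to shift you in a chosen direction.

    p_h = 0.3              # prior probability of hypothesis H
    p_e_given_h = 0.8      # P(evidence | H)
    p_e_given_not_h = 0.2  # P(evidence | not-H)

    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

    posterior_if_e = p_e_given_h * p_h / p_e
    posterior_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

    expected_posterior = posterior_if_e * p_e + posterior_if_not_e * (1 - p_e)
    print(expected_posterior)  # 0.3, equal to the prior (up to rounding)

Whatever numbers you substitute, the expected posterior comes out equal to the prior; any anticipated update in one direction is balanced by the possible update in the other, which is what lets you stay genuinely curious about each new piece of evidence.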

And a number of standard techniques discussed in How to Actually Change Your Mind and Map and Territory are particularly relevant here as well.

But really, there’s rather a lot of relevant material, here and on Overcoming Bias. There are ideas I have yet to properly introduce. There is the concept of isshokenmei—the desperate, extraordinary, convulsive effort to be rational. The effort that it would take to surpass the level of Robert Aumann and all the great scientists throughout history who never broke free of their faiths.

The Crisis of Faith is only the critical point and sudden clash of the longer isshokenmei—the lifelong uncompromising effort to be so incredibly rational that you rise above the level of stupid damn mistakes. It’s when you get a chance to use the skills that you’ve been practicing for so long, all-out against yourself.

I wish you the best of luck against your opponent. Have a wonderful crisis!

1See “Occam’s Razor” (in Map and Territory).

2Readers born to atheist parents have missed out on a fundamental life trial, and must make do with the poor substitute of thinking of their religious friends.

3See “Archimedes’s Chronophone” (http://lesswrong.com/lw/h5/archimedess_chronophone) and “Chronophone Motivations” (http://lesswrong.com/lw/h6/chronophone_motivations).

4Think of all the people out there who don’t understand the Minimum Description Length or Solomonoff induction formulations of Occam’s Razor, who think that Occam’s Razor outlaws many-worlds or the simulation hypothesis. They would need to question their formulations of Occam’s Razor and their notions of why simplicity is a good thing. Whatever X in contention you just justified by saying “Occam’s Razor!” is, I bet, not the same level of Occamian slam dunk as gravity.

250 comments

This is an unusually high quality post, even for you Eliezer; congrats!

It seems that it takes an Eliezer-level rationalist to make an explicit account of what any ten-year-old can do intuitively. For those not quite Eliezer-level or not willing to put in the effort, this is really frustrating in the context of an argument or debate.

I suspect that there are many people in this world who are, by their own standards, better off remaining deluded. I am not one of them; but I think you should qualify statements like "if a belief is false, you are better off knowing that it is false".

It is even possible that some overoptimistic transhumanists/singularitarians are better off, by their own standards, remaining deluded about the potential dangers of technology. You have the luxury of being intelligent enough to be able to utilize your correct belief about how precarious our continue...

Many in this world retain beliefs whose flaws a ten-year-old could point out

Very true. Case in point: the belief that "minimum description length" or "Solomonoff induction" can actually predict anything. Choose a language that can describe MWI more easily than Copenhagen, and they say you should believe MWI; choose a language that can describe Copenhagen more easily than MWI, and they say you should believe Copenhagen. I certainly could have told you that when I was ten...

The argument in this post is precisely analogous to the following:

Bayesian reasoning cannot actually predict anything. Choose priors that result in the posterior for MWI being greater than that for Copenhagen, and it says you should believe MWI; choose priors that result in the posterior for Copenhagen being greater than that for MWI, and it says you should believe Copenhagen.

The thing is, though, choosing one's own priors is kind of silly, and choosing one's own priors with the purpose of making the posteriors be a certain thing is definitely silly. Priors should be chosen to be simple but flexible. Likewise, choosing a language with the express purpose of being able to express a certain concept simply is silly; languages should be designed to be simple but flexible.
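A small aside to make the preceding analogy concrete. The function and numbers below are made up purely for illustration: when two hypotheses assign the same probability to the observed data, Bayes' theorem hands back whatever prior was chosen, which is why picking the prior in order to force a posterior is circular.

    def posterior_a(prior_a, likelihood_a, likelihood_b):
        """P(A | data) for two exhaustive hypotheses A and B."""
        prior_b = 1.0 - prior_a
        evidence = likelihood_a * prior_a + likelihood_b * prior_b
        return likelihood_a * prior_a / evidence

    # Identical predictions for the data, as with interpretations that
    # agree on all experiments performed so far.
    like_a = like_b = 0.5

    print(posterior_a(0.9, like_a, like_b))  # 0.9 -- the prior passes straight through
    print(posterior_a(0.1, like_a, like_b))  # 0.1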

It seems to me that you're waving the problem away instead of solving it. For example, I don't know of any general method for devising a "non-silly" prior for any given parametric inference problem. Analogously, what if your starting language accidentally contains a shorter description of Copenhagen than MWI?

3[anonymous]
If you're just doing narrow AI, then look at your hypothesis that describes the world (e.g. "For any two people, they have some probability X of having a relationship we'll call P. For any two people with relationship P, every day, they have a probability Y of causing perception A."), then fill in every parameter (in this case, we have X and Y) with reasonable distributions (e.g. X and Y independent, each with a 1/3 chance of being 0, a 1/3 chance of being 1, and a 1/3 chance of being the uniform distribution). Yes, I said "reasonable". Subjectivity is necessary; otherwise, everyone would have the same priors. Just don't give any statement an unusually low probability (e.g. a probability practically equal to zero that a certain physical constant is greater than Graham's number), nor any statement an unusually high probability (e.g. a 50% probability that Christianity is true). I think good rules are that the language your prior corresponds to should not have any atoms that can be described reasonably easily (perhaps 10 atoms or less) using only other atoms, and that every atom should be mathematically useful. If the starting language accidentally contains a shorter description of Copenhagen than MWI? Spiffy! Assuming there is no evidence either way, Copenhagen will be more likely than MWI. Now, correct me if I'm wrong, but MWI is essentially the idea that the set of things causing wavefunction collapse is empty, while Copenhagen states that it is not empty. Supposing we end up with a 1/3 chance of MWI being true and a 2/3 chance that it's some other simple thing, is that really a bad thing? Your agent will end up designing devices that will work only if a certain subinterpretation of the Copenhagen interpretation is true and try them out. Eventually, most of the simple, easily-testable versions of the Copenhagen interpretation will be ruled out--if they are, in fact, false--and we'll be left with two things: unlikely versions of the Copenhagen interpretation, and

(Do I get a prize for saying "e.g." so much?)

Yes. Here is an egg and an EEG.

-1Ronny Fernandez
The minimum description length formulation doesn't allow for that at all. You are not allowed to pick whatever language you want, you have to pick the optimal code. If in the most concise code possible, state 'a' has a smaller code than state 'b', then 'a' must be more probable than 'b', since the most concise codes possible assign the smallest codes to the most probable states. So if you wanna know what state a system is in, and you have the ideal (or close to ideal) code for the states in that system, the probability of that state will be strongly inversely correlated with the length of the code for that state.
2Oscar_Cunningham
Aren't you circularly basing your code on your probabilities but then taking your priors from the code?
0Ronny Fernandez
Yep, but that's all the proof shows: the more concise your code, the stronger the inverse correlation between the probability of a state and the code length of that state.
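For readers following this exchange, the relationship being invoked is the standard one between ideal code lengths and probabilities: an optimal prefix code gives a state of probability p a code of roughly -log2(p) bits, so shorter codes go to more probable states, and the lengths are derived from the probabilities rather than chosen freely. A minimal sketch, with made-up probabilities:

    import math

    # Ideal code lengths for a toy distribution over four states.
    state_probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

    for state, p in state_probs.items():
        print(f"state {state}: p = {p}, ideal code length = {-math.log2(p):.0f} bits")
    # a: 1 bit, b: 2 bits, c: 3 bits, d: 3 bits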
7[anonymous]
I haven't read anything like this in my admittedly limited readings on Solomonoff induction. Disclaimer: I am only a mere mathematician in a different field, and have only read a few papers surrounding Solomonoff. The claims I've seen revolve around "assembly language" (for some value of assembly language) being sufficiently simple that any biases inherent in the language are small (some people claim constant multiple on the basis that this is what happens when you introduce a symbol 'short-circuiting' a computation). I think a more correct version of Anti-reductionist's argument should run, "we currently do not know how the choice of language affects SI; it is conceivable that small changes in the base language imply fantastically different priors." I don't know the answer to that, and I'd be very glad to know if someone has proved it. However, I think it's rather unlikely that someone has proved it, because 1) I expect it will be disproven (on the basis that model-theoretic properties tend to be fragile), and 2) given the current difficulties in explicitly calculating SI, finding an explicit, non-trivial counter-example would probably be difficult. Note that is not such a counter-example, because we do not know if "sufficiently assembly-like" languages can be chosen which exhibit such a bias. I don't think the above thought-experiment is worth pursuing, because I don't think we even know a formal (on the level of assembly-like languages) description of either CI or MWI.
0Ronny Fernandez
Not Solomonoff, minimum description length, I'm coming from an information theory background, I don't know very much about Solomonoff induction.
0[anonymous]
OP is talking about Solomonoff priors, no? Is there a way to infer on minimum description length?
0Ronny Fernandez
What is OP?
0Vladimir_Nesov
EY
0[anonymous]
I meant Anti-reductionist, the person potato originally replied to... I suppose grandparent would have been more accurate.
0Ronny Fernandez
He was talking about both.
1[anonymous]
So how do you predict with minimum description length?
1lessdazed
With respect to the validity of reductionism, out of MML and SI, one theoretically predicts and the other does not. Obviously.

Bo, the point is that what's most difficult in these cases isn't the thing that the 10-year-old can do intuitively (namely, evaluating whether a belief is credible, in the absence of strong prejudices about it) but something quite different: noticing the warning signs of those strong prejudices and then getting rid of them or getting past them. 10-year-olds aren't specially good at that. Most 10-year-olds who believe silly things turn into 11-year-olds who believe the same silly things.

Eliezer talks about allocating "some uninterrupted hours", but for me a proper Crisis of Faith takes longer than that, by orders of magnitude. If I've got some idea deeply embedded in my psyche but am now seriously doubting it (or at least considering the possibility of seriously doubting it), then either it's right after all (in which case I shouldn't change my mind in a hurry) or I've demonstrated my ability to be very badly wrong about it despite thinking about it a lot. In either case, I need to be very thorough about rethinking it, both because that way I may be less likely to get it wrong and because that way I'm less likely to spend the rest of my life worrying that I missed somethin...

0wizzwizz4
If I believe something that's wrong, it's probably because I haven't thought about it, merely how nice it is that it's true, or how I should believe it… or I've just been rehearsing what I've read in books about how you should think about it. A few uninterrupted hours is probably enough to get the process of actually thinking about it started.

Some interesting, useful stuff in this post. Minus the status-cocaine of declaring that you're smarter than Robert Aumann about his performed religious beliefs and the mechanics of his internal mental state. In that area, I think Michael Vassar's model for how nerds interpret the behavior of others is your God. There's probably some 10 year olds that can see through it (look everybody, the emperor has no conception that people can believe one thing and perform another). Unless this is a performance on your part too, and there's shimshammery all the way down!

"How do I know if long-held belief X is false?"

Eliezer, I guess if you already are asking this question you are well on your way. The real problem arises when you didn't even manage to pinpoint the possibly false belief. And yes I was a religious person for many years before realizing that I was on the wrong way.

Why didn't I question my faith? Well, it was so obviously true to me. The thing is: did you ever question heliocentrism? No? Why not? When you ask the question "How do I know if Heliocentrism is false?" you are already on your ...

Good post but this whole crisis of faith business sounds unpleasant. One would need Something to Protect to be motivated to deliberately venture into this masochistic experience.

All these posts present techniques for applying a simple principle: check every step on the way to your belief. They adapt this principle to be more practically useful, allowing a person to start on the way lacking necessary technical knowledge, to know which errors to avoid, which errors come with being human, where not to be blind, which steps to double-check, what constitutes a step and what a map of a step, and so on. All the techniques should work in background mode, gradually improving the foundations, propagating the consequences of the changes to m...

Fact check: MDL is not Bayesian. Done properly, it doesn't even necessarily obey the likelihood principle. Key term: normalized maximum likelihood distribution.

My father is an atheist with Jewish parents, and my mother is a (non-practicing) Catholic. I was basically raised "rationalist", having grown up reading my father's issues of Skeptical Inquirer magazine. I find myself in the somewhat uncomfortable position of admitting that I acquired my belief in "Science and Reason" in pretty much the same way that most other people acquire their religious beliefs.

I'm pretty sure that, like everyone else, I've got some really stupid beliefs that I hold too strongly. I just don't know which ones they are!

Great post. I think that this sort of post on rationality is extremely valuable. While one can improve everyday judgment and decision making by learning about rationality from philosophy, econ and statistics, I think that these informal posts can also make a significant difference to people.

The recent posts on AI theorists and EY's biography were among my least favorite on OB. If you have a choice, please spend more time on either technical sequences (e.g. stuff on concepts/concept space, evolutionary bio, notion of bias in statistics) or stuff on rationality like this.

A good reminder. I've recently been studying anarcho-capitalism. It's easy to get excited about a new, different perspective that has some internal consistency and offers alternatives to obvious existing problems. Best to keep these warnings in mind when evaluating new systems, particularly when they have an ideological origin.

0Normal_Anomaly
EDIT: This comment is redacted. Replace "anarcho-capitalism" with "singularitarianism" and that's the experience I'm having. It's not so much wondering if a long-held belief is false as wondering if the new belief I'm picking up is false.

"Try to think the thought that hurts the most."

This is exactly why I like to entertain religious thoughts. My background, training, and inclination are to be a thoroughgoing atheist materialist, so I find that trying to make sense of religious ideas is good mental exercise. Feel the burn!

In that vein, here is an audio recording of Robert Aumann speaking on "The Personality of God".

Also, the more seriously religious had roughly the same idea, or maybe it's the opposite idea. The counterfactuality of religious ideas is part of their strength, apparently.

Here's a doubt for you: I'm a nerd, I like nerds, I've worked on technology, and I've loved techie projects since I was a kid. Grew up on SF, all of that.

My problem lately is that I can't take Friendly AI arguments seriously. I do think AI is possible, that we will invent it. I do think that at some point in the next hundreds of years, it will be game over for the human race. We will be replaced and/or transformed.

I kind of like the human race! And I'm forced to conclude that a human race without that tiny fraction of nerds could last a good long tim...

1JohnH
Have you ever heard of the term hubris? If you can't imagine ways in which the human race can be destroyed by non-nerds, then that shows a lack of imagination, not that it cannot be done. Also, it isn't like nerds and non-nerds are actually a different species; people that do not have a natural aptitude for a subject are still capable of learning the subject. If nerds all moved to nerdtopia, other people would study what material there was on the subject and attempt to continue on. If this is not possible, then you have applied the term nerd so broadly that it contains the majority of people, and all that would be left are people that are incapable of fully taking care of themselves without some form of outside assistance and would thus destroy the human race by sheer ineptitude at basic survival skills.
1Дмитрий Зеленский
The vast majority of people are both incapable of and uninterested in creating new technology OR doing science (and their incapability supports their lack of interest). So, if nerds move to nerdtopia taking some already-deadly technologies with them, the remaining world will never create something AI-like... well, given that newborns with nerds' skills are taken away early. People are generally stupid - not only in the sense of exhibiting specific biases discussed by Eliezer but also in the sense of lack of both curiosity and larger-than-three working memory (or larger-than-120 IQ, whichever you prefer) in the majority (and larger-than-two/larger-than-100 in a big group). Having intelligence - IQ above roughly 120 or any isomorphic measure - is something so rare that from the standard p<0.05 view it's nonexistent (Bell's curve, 100 as mean, 10 as sigma).
1tlhonmey
I would say that the non-nerds can't save the human race either though.  Without nerds our population never exceeds what can be supported by hunting, gathering, and maybe some primitive agriculture. Which isn't much.  We'd be constantly hovering just short of being wiped out by some global cataclysm.  And there's some evidence that we've narrowly missed just that at least once in our history.  If we want to survive long-term we need to get off this rock, and then we need to find at least one other solar system.  After that we can take a breather while we think about finding another galaxy to colonize. Yes, we might destroy ourselves with new technology.  But we're definitely dead without it.  And if you look at how many new technologies have been denounced as being harbingers for the end of the world vs how many times the world has actually ended, I'd have to think that gut feelings about what technologies are the most dangerous and how badly we'll handle them are probably wrong more often than they're right.

I'd be interested in a list of questions you had decided to have a crisis of faith over. If I get round to it I might try and have one over whether a system can recursively self-improve in a powerful way or not.

A lot of truths in EY's post. Though I also agree with Hopefully Anon's observations -- as is so often the case, Eliezer reminds me of Descartes -- brilliant, mathematical, uncowed by dogma, has his finger on the most important problems, is aware of how terrifyingly daunting those problems are, thinks he has a universal method to solve those problems.

Trying to set up an artificial crisis in which one outcome is as likely as another is a very bad idea.

If your belief is rationally unjustifiable, a 'crisis' in which one has only a fifty-fifty chance of rejecting the belief is not an improvement in rationality. Such a crisis is nothing more than picking a multiple-choice answer at random -- and with enough arbitrarily-chosen options, the chance of getting the right one becomes arbitrarily small.

A strategy that actually works is setting your specific beliefs aside and returning to a state of uncertainty, then testing one possibility against the other on down to first principles. Uncertainty != each possibility equally likely.

3tut
I think he meant that each possibility appears equally likely before you look at the evidence. Basically reset your prior, if that were possible.

Thank you for this post, Eliezer. I must painfully question my belief that a positive Singularity is likely to occur in the foreseeable future.

Nazir Ahmad Bhat, you are missing the point. It's not a question of identity, like which ice cream flavor you prefer. It's about truth. I do not believe there is a teapot orbiting around Jupiter, for the various reasons explained on this site (see Absence of evidence is evidence of absence and the posts on Occam's Razor). You may call this a part of my identity. But I don't need people to believe in a teapot. Actually, I want everyone to know as much as possible. Promoting false beliefs is harming people, like slashing their tires. You don't believe in a flying teapot: do you need other people to?

Nazir, must there be atheists in order for you to believe in a god? The "identity" of those who believe that the world is round does not depend on others believing that the world is flat, or vice versa. Truth does not require disagreement.

Matthew C.,

You've been suggesting that for a while:

http://www.overcomingbias.com/2007/01/godless_profess.html#comment-27993437 http://www.overcomingbias.com/2008/09/psychic-powers.html#comment-130445874

Those who have read it (or the hundreds of pages available on Google Books, which I have examined) don't seem to be impressed.

Why do you think it's better than Broderick's book? If you want to promote it more effectively in the face of silence (http://www.overcomingbias.com/2007/02/what_evidence_i.html), why not pay for a respected reviewer's time and a writ...

Do these methods actually work? There were a few posts here on how more evidence and bias awareness don't actually change minds or reduce bias, at least not without further effort. Can a practical "Deduce the Truth in 30 Days" guide be derived from these methods, and change the world?

A fifty-fifty chance of choosing your previous belief does not constitute a reasonable test. If your belief is unreasonable, why would treating it as equally plausible as the alternative be valid?

The trick is to suspend belief and negate the biasing tendencies of belief when you re-evaluate, not to treat all potentials as equal.

Eliezer:

If a belief is true you will be better off believing it, and if it is false you will be better off rejecting it.
I think you should try applying your own advice to this belief of yours. It is usually true, but it is certainly not always true, and reeks of irrational bias.

My experience with my crisis of faith seems quite opposite to your conceptions. I was raised in a fundamentalist family, and I had to "make an extraordinary effort" to keep believing in Christianity from the time I was 4 and started reading through the Bible, and findin...

7[anonymous]
Agreed. Every time I changed my mind about something, it felt like "quitting," like ceasing the struggle to come up with evidence for something I wanted to be true but wasn't. Realizing "It's so much easier to give up and follow the preponderance of the evidence." Examples: taking an economics class made it hard to believe that government interventions are mostly harmless. Learning about archaeology and textual analysis made it hard to believe in the infallibility of the Bible. Hearing cognitive science/philosophy arguments made it hard to believe in Cartesian dualism. Reading more papers made it hard to believe that looking at the spectrum of the Laplacian is a magic bullet for image processing. Extensive conversations with a friend made it hard to believe that I was helping him by advising him against pursuing his risky dreams. When something's getting hard to believe, consider giving up the belief. Just let the weight fall. Be lazy. If you're working hard to justify an idea, you're probably working too hard.
-3JohnH
One of the problems with your examples in both economics and archeology is that less is known on the subject than what you think is known, especially if you have just taken introductory courses on the subject.

From "Twelve virtues of rationality" by Eliezer:

The third virtue is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you.

Eliezer uses almost the same words as you do. (Oh, and this document is from 2006, so he has not copied your lines.) Some posts earlier Eliezer accused you of not reading his writings and just making stuff up regarding his viewpoints...

0Kenny
The posts on making an extraordinary effort didn't explicitly exclude preserving the contents of one's beliefs as an effort worth being made extraordinarily, so you've definitely identified a seeming loophole, and yet you've simultaneously seemed to ignore all of the other posts about epistemic rationality.

MichaelG:

On the other hand, I don't think a human race with nerds can forever avoid inventing a self-destructive technology like AI.

The idea is that if we invent Friendly AI first, it will become powerful enough to keep later, Unfriendly ones in check (either alone, or with several other FAIs working together with humanity). You don't need to avoid inventing one forever: it's enough to avoid inventing one as the first thing that comes up.

If a belief is true you will be better off believing it, and if it is false you will be better off rejecting it.
It is easy to construct at least these 2 kinds of cases where this is false:

  • You have a set of beliefs optimized for co-occurrence, and you are replacing one of these beliefs with a more-true belief. In other words, the new true belief will cause you harm because of other untrue (or less true) beliefs that you still hold.
  • If an entire community can be persuaded to adopt a false belief, it may enable them to overcome a tragedy-of-the-commons or prisoners'-dilemma situation.
...

I was raised in a Christian family, fairly liberal Church of England, and my slide into agnosticism started when I was about 5-7, when I asked if Santa Claus and God were real. I refused to get confirmed and stopped going to church when I was 13ish I think.

In other words, you are advocating a combative, Western approach; I am bringing up a more Eastern approach, which is not to be so attached to anything in the first place, but to bend if the wind blows hard enough.

The trouble is that you cannot break new ground this way. You can't do Einstein-like feats. Y...

If an entire community can be persuaded to adopt a false belief, it may enable them to overcome a tragedy-of-the-commons or prisoners'-dilemma situation.

In a PD, agents hurt each other, not themselves. Obviously false beliefs in my enemy can help me.

Study this deranged rant. Its ardent theism is expressed by its praise of the miracles God can do, if he chooses.

And yet... There is something not quite right here. Isn't it merely cloakatively theistic? Isn't the ringing denunciation of "Crimes against silence" militant atheism at its most strident?

So here is my idea: Don't try to doubt a whole core belief. That is too hard. Probe instead for the boundary. Write a little fiction, perhaps a science fiction of first contact, in which you encounter a curious character from a different culture. Wri...

If a belief is true you will be better off believing it, and if it is false you will be better off rejecting it.
I think evolution facilitated self-delusion precisely because that is not the case.

I was a Fred Phelps style ultra-Calvinist and my transition involved scarcely any effort.

Also, anti-reductionist, that's the first comment you've made I felt was worth reading. You may take it as an insult but I felt compelled to give you kudos.

I suspect that there are many people in this world who are, by their own standards, better off remaining deluded. I am not one if them; but I think you should qualify statements like "if a belief is false, you are better off knowing that it is false".

Of course I deliberately did not qualify it. Frankly, if you're still qualifying the statement, you're not the intended audience for a post about how to make a convulsive effort to be rational using two dozen different principles.

0christopherj
And you seriously believe that, in all circumstances and for all people with any false belief, those people are better off believing the truth concerning that belief? The obvious counterexample is the placebo effect, where a false belief is scientifically proven to have a benefit. The beneficial effects of false beliefs are so powerful, that you can't conduct a pharmaceutical study without accounting for them. And you are no doubt familiar with that effect. Another example would be believing that you're never better off believing a false belief, because then you have more incentive to investigate suspicious beliefs.
5Eliezer Yudkowsky
The difficult epistemic state to get into is justifiably believing that you're better off believing falsely about something without already, in some sense, knowing the truth about it.
0christopherj
It's actually very easy and common to believe that you're better off believing X, whether or not X is true, without knowing the truth about it. This is also well-justified in decision theory, and by your definition of rationality, if believing X will help you win. A common example is choosing to believe that your date has an "average romantic history" and choosing not to investigate. If you think you can't do this, I propose this math problem. Using a random number generator over all American citizens, I have selected Bob (but not identified him to you). If you can guess Bob's IQ (with margin of error +/- 5 points), you get a prize. Do you think it is possible for Bob's IQ to be higher or lower than you expected, and if so, do you believe you're better off not having any expectation at all rather than a potentially false expectation? See, as soon as a question is asked, you fill in the answer with [expected answer range of probabilities] rather than [no data]. It's much easier to believe something and not investigate, than to investigate and try to deceive yourself. And unless you add as an axiom "unlike all other humans for me false beliefs are never beneficial" (which sounds like a severe case of irony), then a rationalist on occasion must be in favor of said false beliefs. Just out of curiosity, why the switch from "rational" to "epistemic"?

Eliezer, what do you mean here? Do you mean:

(A1) Individuals in the reference class really are always better off with the truth, with sufficient probability that the alternative does not bear investigating;

(A2) Humans are so unreliable as judges of what we would and would not benefit from being deceived about that the heuristic "we're always better off with the truth" is more accurate than the available alternatives;

(B) Individuals must adopt the Noble Might-be-truth "I'm always better off with the truth" to have a chance at the Crisis of Faith technique?