Belief in Belief

Carl Sagan once told a parable of a man who comes to us and claims: "There is a dragon in my garage." Fascinating! We reply that we wish to see this dragon—let us set out at once for the garage! "But wait," the claimant says to us, "it is an invisible dragon."

Now as Sagan points out, this doesn't make the hypothesis unfalsifiable. Perhaps we go to the claimant's garage, and although we see no dragon, we hear heavy breathing from no visible source; footprints mysteriously appear on the ground; and instruments show that something in the garage is consuming oxygen and breathing out carbon dioxide.

But now suppose that we say to the claimant, "Okay, we'll visit the garage and see if we can hear heavy breathing," and the claimant quickly says no, it's an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, "The dragon is permeable to flour."

Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse.

Some philosophers have been much confused by such scenarios, asking, "Does the claimant really believe there's a dragon present, or not?" As if the human brain only had enough disk space to represent one belief at a time! Real minds are more tangled than that. As discussed in yesterday's post, there are different types of belief; not all beliefs are direct anticipations. The claimant clearly does not anticipate seeing anything unusual upon opening the garage door; otherwise he wouldn't make advance excuses. It may also be that the claimant's pool of propositional beliefs contains There is a dragon in my garage. It may seem, to a rationalist, that these two beliefs should collide and conflict even though they are of different types. Yet it is a physical fact that you can write "The sky is green!" next to a picture of a blue sky without the paper bursting into flames.

The rationalist virtue of empiricism is supposed to prevent us from this class of mistake. We're supposed to constantly ask our beliefs which experiences they predict, make them pay rent in anticipation. But the dragon-claimant's problem runs deeper, and cannot be cured with such simple advice. It's not exactly difficult to connect belief in a dragon to anticipated experience of the garage. If you believe there's a dragon in your garage, then you can expect to open up the door and see a dragon. If you don't see a dragon, then that means there's no dragon in your garage. This is pretty straightforward. You can even try it with your own garage.

No, this invisibility business is a symptom of something much worse.

Depending on how your childhood went, you may remember a time period when you first began to doubt Santa Claus's existence, but you still believed that you were supposed to believe in Santa Claus, so you tried to deny the doubts. As Daniel Dennett observes, where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it. What does it mean to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green? The statement is confusing; it's not even clear what it would mean to believe it—what exactly would be believed, if you believed. You can much more easily believe that it is proper, that it is good and virtuous and beneficial, to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green.  Dennett calls this "belief in belief".

And here things become complicated, as human minds are wont to do—I think even Dennett oversimplifies how this psychology works in practice. For one thing, if you believe in belief, you cannot admit to yourself that you only believe in belief, because it is virtuous to believe, not to believe in belief, and so if you only believe in belief, instead of believing, you are not virtuous. Nobody will admit to themselves, "I don't believe the Ultimate Cosmic Sky is blue and green, but I believe I ought to believe it"—not unless they are unusually capable of acknowledging their own lack of virtue. People don't believe in belief in belief, they just believe in belief.

(Those who find this confusing may find it helpful to study mathematical logic, which trains one to make very sharp distinctions between the proposition P, a proof of P, and a proof that P is provable.  There are similarly sharp distinctions between P, wanting P, believing P, wanting to believe P, and believing that you believe P.)
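The two ladders of distinctions can be written side by side. This is a toy formalization, not the author's notation; the operators B ("believes") and D ("wants") are hypothetical labels:

```latex
% Three different objects in mathematical logic:
P, \qquad \vdash P, \qquad \vdash \Box P
% the proposition itself; a proof of P; a proof that P is provable.
%
% The analogous doxastic ladder (B = "believes", D = "wants"):
P, \qquad D(P), \qquad B(P), \qquad D(B(P)), \qquad B(B(P))
% the proposition; wanting P; believing P;
% wanting to believe P; believing that you believe P.
```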

There are different kinds of belief in belief. You may believe in belief explicitly; you may recite in your deliberate stream of consciousness the verbal sentence "It is virtuous to believe that the Ultimate Cosmic Sky is perfectly blue and perfectly green." (While also believing that you believe this, unless you are unusually capable of acknowledging your own lack of virtue.) But there are also less explicit forms of belief in belief. Maybe the dragon-claimant fears the public ridicule that he imagines will result if he publicly confesses he was wrong (although, in fact, a rationalist would congratulate him, and others are more likely to ridicule him if he goes on claiming there's a dragon in his garage). Maybe the dragon-claimant flinches away from the prospect of admitting to himself that there is no dragon, because it conflicts with his self-image as the glorious discoverer of the dragon, who saw in his garage what all others had failed to see.

If all our thoughts were deliberate verbal sentences like philosophers manipulate, the human mind would be a great deal easier for humans to understand. Fleeting mental images, unspoken flinches, desires acted upon without acknowledgement—these account for as much of ourselves as words.

While I disagree with Dennett on some details and complications, I still think that Dennett's notion of belief in belief is the key insight necessary to understand the dragon-claimant. But we need a wider concept of belief, not limited to verbal sentences. "Belief" should include unspoken anticipation-controllers.  "Belief in belief" should include unspoken cognitive-behavior-guiders. It is not psychologically realistic to say "The dragon-claimant does not believe there is a dragon in his garage; he believes it is beneficial to believe there is a dragon in his garage."  But it is realistic to say the dragon-claimant anticipates as if there is no dragon in his garage, and makes excuses as if he believed in the belief.

You can possess an ordinary mental picture of your garage, with no dragons in it, which correctly predicts your experiences on opening the door, and never once think the verbal phrase There is no dragon in my garage. I even bet it's happened to you—that when you open your garage door or bedroom door or whatever, and expect to see no dragons, no such verbal phrase runs through your mind.

And to flinch away from giving up your belief in the dragon—or flinch away from giving up your self-image as a person who believes in the dragon—it is not necessary to explicitly think I want to believe there's a dragon in my garage. It is only necessary to flinch away from the prospect of admitting you don't believe.

To correctly anticipate, in advance, which experimental results shall need to be excused, the dragon-claimant must (a) possess an accurate anticipation-controlling model somewhere in his mind, and (b) act cognitively to protect either (b1) his free-floating propositional belief in the dragon or (b2) his self-image of believing in the dragon.

If someone believes in their belief in the dragon, and also believes in the dragon, the problem is much less severe.  They will be willing to stick their neck out on experimental predictions, and perhaps even agree to give up the belief if the experimental prediction is wrong—although belief in belief can still interfere with this, if the belief itself is not absolutely confident.  When someone makes up excuses in advance, it would seem to require that belief, and belief in belief, have become unsynchronized.

 


Oh great, now I'm going to think "There's no dragon in my garage" every time I open my garage door for the next week...

This post helps me understand some of the most infuriating phrases I ever hear (which the title immediately reminded me of): "it doesn't matter what you believe as long as you believe something", "everyone has to believe in something", "faith is a virtue", &c. It makes sense that if a person's second-order belief is stronger than their first-order belief, they would say things like that.

Shocked, it wasn't my first interaction with her.

Anna, this blog is too advanced for you and you should not be commenting on it. Go read The Simple Truth until you understand the relation between a map and the territory.

[EDIT: I deleted an additional comment from Anna in this thread.]

I like Eliezer's essay on belief very much. I've been thinking about the role of belief in religion. (For the sake of full disclosure, my background is Calvinist.) I wonder why Christians say, "We believe in one God," as if that were a particularly strong assertion. Wouldn't it be stronger to say, "We know one God?" What is the difference between belief and knowledge? It seems to me that beliefs are usually based on no data. Most people who believe in a god do so in precisely the same way that they might believe in a dragon in the garage. People are comfortable saying that they know something only when they can refer to supporting data. Believers are valiantly clinging to concepts for which the data is absent.

Regarding the dialogue between the dragon claimant and his challengers, why didn't the challengers simply ask the claimant, "Why do you say that there is an invisible, inaudible, non-respirating, flour-permeable dragon in your garage?"

Knowledge involves more than belief. You know p if all of the following are true:

1) You believe p.
2) p is true.
3) If p were not true, you wouldn't believe it (justified true belief).
4) If p were true, you would believe it (Gettier belief).

And most beliefs, such as the belief that my keys are in my left pocket, are trivial and true, as well as being based on data.

At least in my mind, the processes that generate beliefs like "my keys are in my left pocket" are not perfectly reliable -- at least once, I have thought my keys were in my left pocket when in fact I left them on the dresser.

So #3 is demonstrably false for me; on this account, I don't know where my keys are.

Which is perfectly internally consistent, though it doesn't match up with the colloquial usage of "to know," which seems to indicate that the speaker's confidence in p is above some threshold.

There's nothing wrong with having precisely defined terms of art, in epistemology or any other field. But it can lead to confusion when colloquial words are repurposed in this way.

And also, "How do you know?"

Your question is more helpful, of course. Any person who believes that there is a non-evidentiary dragon in a garage will have some way to answer mine, hopefully without going through too much more stress.

From the post:

While I disagree with Dennett on some details and complications, I still think that Dennett's notion of belief in belief is the key insight necessary to understand the dragon-claimant. But we need a wider concept of belief, not limited to verbal sentences.

If you've read Dennett on beliefs, you'll appreciate that this "wider concept" based on behavior and predictability is really at the heart of things.

I think it is very difficult to attribute a belief in dragons to this "dragon-believer". Only a small subset of his actions - those involving verbal avowals - make sense if you attribute a belief in dragons to him. There is a conflict with the remainder of his beliefs, as can be seen when he nonchalantly enters his garage, or confabulates all sorts of reasons why his dragon can't be demonstrated.

But as you have shown, everything makes sense if you attribute a related, but slightly different belief, namely "I should avow a genuine, heartfelt belief in dragons". Perhaps we can say that this man (and the religious man, since this is the real point) doesn't just believe in belief, but they believe that they believe. He tries to make a second-order belief do the work of a first-order belief.

Belief in disbelief:

One of our neighbors in Tisvilde once fixed a horseshoe over the door to his house. When a mutual acquaintance asked him, 'But are you really superstitious? Do you honestly believe that this horseshoe will bring you luck?' he replied, 'Of course not; but they say it helps even if you don't believe it.'

— Niels Bohr

(Note: This is often retold with Bohr himself as the one with the horseshoe, but this quote appears to be the authentic one.)

I wonder how common that is, believing that you don't believe something but acting in a way that implies more belief in it than you acknowledge. One other example I experienced recently: For whatever reason, my mom had a homeopathic cold remedy lying around. (I think a friend gave it to her.) She and I both had colds recently, so she suggested I try some of it. The thing is, she gives full assent to my explanations of why homeopathy is both experimentally falsified and physical nonsense; she even appeared to believe me when I looked at the ingredients and dilution factors and determined that the bottle essentially contained water, sugar, and purple food colouring. But even after that, she still said we may as well try it because it couldn't hurt. True, it couldn't hurt... but "it can't hurt" doesn't sound like really understanding that the bottle you're holding consists of water, sugar, and purple.

Another instance may be former theists who still act in some ways as though they believe in God (an interesting mirror image of current theists who don't act as though they really believe what they profess to believe), though in my experience many of them consider it to be bad habit they're trying to break, so I'd be less inclined to call it belief in [dis]belief, I'd take that as something more akin to akrasia.

I once took cough drops that really helped with the sore throat from a cold I had, and actually tasted good too. It was only after a day or two that I looked at the packaging and realized they were homeopathic. I didn't think too hard about it and kept taking them, because I wanted the placebo benefits and all the other brands of cough drop I own taste terrible.

The placebo effect is weakened but doesn't disappear if you know it's a placebo.

Here's a study (honestly labeled placebo vs nothing) for irritable bowel syndrome.

I originally got it from a Science et Vie article on a study with four conditions (labeled as placebo vs as treatment; placebo vs treatment), can't remember what for.

I remember this from earlier, see my response in that thread, and my links to Silberman and Lipson.

The study may well be measuring patients' tendency to want to fulfill doctors' expectations rather than any effect on the actual symptoms.

I agree this study is a bit silly. I'll try to dig up the one I saw, but promise nothing.

Agree that the placebo effect may contain lying to doctors. There may also be some regression to the mean - people who are too healthy are excluded from the study, so when everyone moves at random the ones sick enough to be selected get healthier.

My understanding is that the studies establishing a placebo effect were controlled in a way that'd rule out regression to the mean as a cause of the perceived improvements. Lying to doctors does sound plausible, though.

[This citation is a placebo. Pretend it's a real citation.]

What's weird about this is that if this theory works, anything forms an acceptable substitute.

So you don't need to buy any actual homeopathic "medication", you can save lots of money by just eating some sugar. (The homeopathic markup on sugar is just unbelievable.)

Even sugar isn't necessary, since you're stipulating that "what works" isn't any particular mechanism of action but just the action of treating yourself. You could as well choose to believe that taking a deep breath three times in succession is a good remedy against the cold (or whatever else ails you).

When I feel the first signs suggesting an incipient cold, I decide THIS IS NOT GOING TO HAPPEN, and it nearly always goes away.

So far, I've only been able to make this work for colds, not any other malady.

Must remember to try this.

Which incipient signs do you look for?

A roughness in the throat is usually the first thing I notice. Unchecked, it develops into a cough, sore throat, sneezing, and at the peak a couple of days of being completely unable to function.

This happened about once a year on average before I discovered I could banish them by willpower, since when it's been more like once in five years, generally from extreme circumstances like being caught in the rain on a bike ride without adequate clothing.

Just to chuck in a little more anecdotal evidence, my husband applied this belief in the placebo effect, and so long as he can get an early night, he never suffers the little bugs and headaches.

It works in all instances where homeopathy has worked... ;)

The placebo effect rocks!

Tends to work pretty well on my own mental state, but very short term. Complicated (expensive?) impressive rituals help, though.

The real question is not "is there a dragon?", but "why is it having sex with my car?"

Anna, if you're talking about real dragons, the theory that made the most intuitive sense to me (I think I read it in an E.O. Wilson writing?) is that dragons are an amalgamation of things we've been naturally selected to biologically fear: snakes and birds of prey (I think rats may have also been part of the list). Dragons don't incorporate an element of them that looks like a handgun or a piping hot electric stove, probably because those are too new as threats for us to be naturally selected to fear things with those properties.

I enjoyed The Simple Truth, thanks for linking it.

[["If the pebbles didn't do anything," says Autrey, "our ISO 9000 process efficiency auditor would eliminate the procedure from our daily work."]]

This "ISO 9000" hypothesis has not been supported by direct observation, unfortunately...

Wow. So, I'm basically brand new to this site. I've never taken a logic class and I've never read extensively on the subjects discussed here. So if I say something unbearably unsophisticated or naive, please direct me somewhere useful. But I do have a couple comments/questions about this post and some of the replies.

I don't think it's fair to completely discount prayer. When I was a young child, I asked my grandmother why I should bother praying, when God supposedly loved everyone the same and people praying for much more important things didn't get what they wanted all the time.

She told me that the idea is not to pray for things to happen or not happen. If I pray for my basketball team to win our game (or for my son to get well, or to win the lottery, or whatever) then based on how I interpret the results of my prayer I would be holding God accountable for me getting or not getting what I wanted. The point of praying, as she explained it, was to develop a relationship with God so I would be able to handle whatever situation I found myself in with grace. Even though we often structure our prayers as requests for things to happen, the important thing to keep in mind was how Jesus prayed in the garden before he was crucified. Even though he was scared of what was going to happen to him and he didn't want to go through with it, his prayer was "your will, not mine". He didn't pray for things to go his way, although he acknowledged in his prayers that he did have certain things that he wanted. The point of the prayer was not to avoid trials or fix their outcome, but to communicate with God for the strength and courage to hold fast to faith through trials.

Now, I'm certainly not citing my grandmother as a religious or theological expert. But that explanation made sense to me at the time, partially because I think you could probably argue that it would have the same benefit for people regardless of whether or not there was actually a God to correspond to the prayers, which jibes well with how I believe in God.

Maybe I'm misunderstanding the post, but I think I have something like believing that I ought to believe in God, although I've always phrased it as choosing to believe in God. Even though I was raised Catholic, I never felt like I really "believed" it. For as long as I can remember, the idea of "belief" has made me incredibly uncomfortable. Every time a TV show character asked "didn't you ever just believe something" I would cringe and wonder how anyone could possibly find such an experience valid when anyone else could have an alternate experience.

Secretly, I'm glad that I've never felt any kind of religious conviction. If I did, then I would have to prize my subjective experience over someone else's subjective experience. I'm quite aware that there are a multitude of people that have had very profound experiences that make them believe in one doctrine or another to the exclusion of all others, and that's something I can't really understand. Knowing that other people exist that feel equal conviction about different ideas of God with the same objective evidence makes it impossible for me to have any sort of belief in a specific God or scripture, at least at the level of someone who believes with enough conviction not to be perfectly comfortable with the idea that I'm wrong.

That said, I consider myself Catholic. I don't agree with all the doctrine and I don't think I could honestly say I think my religion is correct and other religions are wrong in any way that corresponds to an objective reality. But I choose to believe in this religion because what I do really believe deep down is that there is some higher order that gives meaningfulness to human life.

I consider it to be rather like the way I love my family- I don't objectively think that my family is the best family in the world, the particular subset of people most deserving of my love and affection. But they're my family, and I'll have no other. I can love them while still acknowledging that your love for your family is just as real as mine. Just because they're different experiences doesn't make them more or less valid- and just because it isn't tangible or falsifiable doesn't make it any less potent. Even so, I'm always curious if I'm really an atheist, or maybe an agnostic, since I don't really believe it beyond my conscious choice to believe it (and a bit of emotional attachment to my personal history with this specific religion).

Whew. That was a lot of words. Anyways, I'm sure that I've got plenty of logical and rational flaws and holes. Like I said, I'm basically brand new to all the ideas presented here, so I'm going to try and thrash my way through them and see what beliefs I still hold at the end.

Hey, welcome to Less Wrong! You might want to take a moment to introduce yourself at the welcome thread. Hope you find LW enjoyable and educational!

Hi there, nice to know I'm not the only one absolutely new and quaking in my slippers here.

I don't think you're quite making the mistake of believing in belief. I can't model your brain accurately just by reading a few paragraphs of course, but you don't seem to show much flinching-away from admitting the judeo-christian god and the catholic interpretation of it is wrong. I think you're more identifying the religion of your family and peers as your 'group' (tribe, nation, whatever wording you prefer) and shying away from dropping it as part of your identity for the same reason a strong patriot would hate the feeling of betraying their country.

I remember reading a thing about this by... some famous secularist writer, Dawkins or Harris I think. About a million years ago, for all the good my memory is serving me on the matter. I'll try and find it for you.

As for being attracted to a higher order of things, well.. I agree with you. I just happen to think that higher order is quite physical in nature, hidden from us by the mundanity of its appearance. I think you might really want to read the sequences:

http://wiki.lesswrong.com/wiki/Reductionism_(sequence) and http://wiki.lesswrong.com/wiki/Joy_in_the_Merely_Real

Funny, it's the second time this past week or so that I encounter a Lesswronger that identifies as Catholic.

(Welcome to LessWrong by the way!)

At least in the case of religious people who are actually convinced God exists, I think the difference between belief and knowledge is thus: Belief is when you think something is true but it's controversial. Knowledge is when you think something is true and think everyone would agree with you.

I noticed that I was confused by your dragon analogy. 1) Why did this guy believe in this dragon when there was absolutely no evidence that it exists? 2) Why do I find the analogy so satisfying, when its premise is so absurd?

Observation 1) Religious people have evidence:

The thing about religion is that a given religion's effects on people tend to be predictable. When Christians tell you to accept Jesus into your heart, some of the less effective missionaries talk about heaven, but the better ones talk about positive changes to their emotional states. Often, they will imply that those positive life changes will happen for you if you join, and as a prediction that tends to be a very good one.

As a rationalist, I know the emotional benefits of paying attention when something nice happens, and I recognize that feeling gratitude boosts my altruism. I know I can get high on hypoxia if I ever want to see visions or speak in tongues. I know that spending at least an hour every week building ethical responses into my cached behavior is a good practice for keeping positive people in my life. I recognize the historical edifice of morality that allowed us to build the society we currently live in. This whole suite of tools is built into religion, and the means of achieving the benefits it provides is non-obvious enough that a mystical explanation makes sense. Questioning those beliefs without that additional knowledge means you lose access to the benefits of the beliefs.

Observation 2) We expect people to discard falsifiable parts of their beliefs without discarding all of that belief.

The dragon analogy is nice and uncomplicated. There are no benefits to believing in the dragon, so the person in the analogy can make no predictions with it. I've never seen that happen in the real world. Usually religious people have tested their beliefs, and found that the predictions they've made come true. The fact that those beliefs can't predict things in certain areas doesn't change the fact that they do work in others, and most people don't expect generality from their beliefs. When that guy says that the dragon is permeable to flour, that isn't him making an excuse for the lack of a dragon. That's him indicating a section of reality where he doesn't use the dragon to inform his decisions. Religious people don't apply their belief in their dragon in categories where believing has not provided them with positive results. Disproved hypotheses don't disprove the belief, but rather disprove the belief for that category of experience. And that's pretty normal. The fact that I don't know everything, and the fact that I can be right about some things and wrong about others means that I pretty much have to be categorizing my knowledge.

Thinking about this article has led me to the conclusion that "belief in belief" is more accurately visualized as compartmentalization of belief, that it's common to everyone, and that it indicates that a belief that I have is providing the right answer for the wrong reasons. I predict that if I train myself to react to predicting that the world will behave strangely in order to not violate my hypothesis by saying out loud "this belief is not fully general," I will find that more often than not this statement will be correct.

"Those who find this confusing may find it helpful to study mathematical logic, which trains one to make very sharp distinctions between the proposition P, a proof of P, and a proof that P is provable"

This is a bit of a side question, but wouldn't a proof that P is provable be a proof of P? In fact, it sounds like a particularly elegant form of proof.

If you trust base system B, then a proof that P is provable in B is good as gold to you. But it is not a proof in B.

http://lesswrong.com/lw/t6/the_cartoon_guide_to_l%C3%B6bs_theorem/

Hrm... if the system isn't necessarily trustworthy, then the fact that the system proves that it can prove P doesn't mean that it's actually true that it can prove P, I guess.

EDIT: actually, having it as an explicit axiom "If this proves P, then P" runs you into trouble in any system that has something like Löb's theorem.

("if some specific subset of the rest of this system, (ie, other than this axiom) proves P, then P" can potentially be okay, though)
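The obstacle being circled in this sub-thread is Löb's theorem. A sketch in the standard provability notation, where \Box P abbreviates "P is provable in system B":

```latex
% Löb's theorem: if B proves "if B proves P, then P", then B proves P:
B \vdash (\Box P \rightarrow P) \;\Longrightarrow\; B \vdash P
% So adopting "if this system proves P, then P" as an axiom schema,
% for every P, forces B to prove every P -- i.e., B becomes inconsistent.
%
% Trusting B from the outside is different: if B is sound and B proves
% \Box P, then P really is provable in B, hence true -- but that step
% happens in our metatheory, not inside B itself.
```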

Seconded - this is an interesting question. (And I suspect that there are some interesting cases in which a proof that P is provable does not constitute a proof, but this is mainly because I've seen mathematicians break similarly intuitive propositions before.)

I suspect that there are some interesting cases in which a proof that P is provable does not constitute a proof, but this is mainly because I've seen mathematicians break similarly intuitive propositions before.

It wouldn't surprise me either. However, such cases would have to rely on a precise definition of 'proof' different from the one I use. The result would then be filed under 'fascinating technical example' but not under 'startling revelation', and I would take note of the jargon for use when talking to other mathematicians.

Here's an example of what Doug Hofstadter writes in I Am A Strange Loop. Kurt Goedel discovered that Principia Mathematica by Bertrand Russell can be made to refer to itself. Russell's book yields the propositions and their proofs; Goedel then assigns specific numbers to those proofs, and thereby proves that there is a proof that they are, in fact, provable.

Outside of mathematics, a statement that can be proved can also be disproved. Then it's called a hypothesis.

I'm reminded of the joke where an engineer, a physicist, and a mathematician are going to a job interview. The interviewer has rigged a fire to start in the wastepaper basket, to see how they react in a crisis situation. The engineer sees the fire, sees the water cooler, grabs the water cooler and dumps it on the fire. The physicist sees the fire, sees the water cooler, grabs pencil and paper, calculates the exact amount of water needed to extinguish the fire, then pours that amount of water into the basket, exactly extinguishing the fire. The mathematician sees the fire, sees the water cooler, and says, "Ah! A solution exists!".

Was the reply to Anna serious? That's outrageous.

My main take-away: there is a difference between conscious and subconscious. If you accuse somebody with "You do not believe X," then you will get denial, because he consciously believes it. The problem is that he subconsciously does not believe it, and thus comes up with excuses in advance.

I've always thought that the idea of "believing in" things was very curious. This is a very thought-provoking article. Every time I engage in a debate about this subject (the relevance or usefulness of beliefs), someone is sure to say something about beliefs existing for the benefit of the believer. My feeling is that with most beliefs and most believers, there is an internal acknowledgement of the falsifiability of their belief which is outweighed by the perception that some benefit is derived from the belief. What I interpret from this is that most believers subtly admit their own practice of belief in belief. I also feel that at such an admission, even the question of whether one believes in believing in belief can enter the mind of the mundane thinker.

Do you believe in anything, or is it all feeling and knowing?

"There's something that makes believing and knowledge quite different, and that's truth, which isn't inside one person's head but out there, in reality."

Ehm, let me ask you this: Are you 100% sure that the sun will come up tomorrow?

All evidence points that way, yes. We have a fair idea of what is going on, yes. But that's where the ball stops - we will never know with 100% certainty.

When we stop acknowledging that the science of tomorrow may produce evidence that will turn our whole world-view upside down, is when Science becomes Religion.

I'm not saying that we need to start taking mediums seriously and base our life-altering decisions on numerology. I'm merely saying that the things you take seriously today, the things you'd base your life-altering decisions on today, may be falsified tomorrow, redeemed the week after, only to be shot down again with the latest research come this time next year.

The 'Truth' may be out there, but it needs to be approached empirically, with a clear understanding of the fact that even repeated measurements of the same thing will only ever give us circumstantial evidence that may be influenced by our abilities to measure and reason. I don't think we ever possess true knowledge. Instead we have beliefs that can or cannot stand up to empirical scrutiny. Beliefs that must still be challenged on a regular basis, and acknowledged for what they are.

Anna: Whether you call it science fiction, heuristics, overcoming bias, history, a belief is a belief. You can't prove belief as it's self-subjective. --That only makes for more of a reason it should only be self-affecting; too many people try to influence the actions of others based on their dragons.

You can't tell someone what they feel is wrong. --Yes I can. "There's a dragon in my bathroom"... (careful examination of the bathroom)... "No, there isn't, you're wrong."

Each individual has their own equation when it comes to understanding the "dragon" within themselves. --And it needs to be overcome with logic and reason.

If dragons can't be verified, as they have never been verified based on history, why do people still feel the need to believe in dragons and continue to discuss the subject and be fascinated by it? --Simply because they were raised with it, taught to disbelieve any evidence provided, and shown that those who disagree with it are 'out to get them'.

The parable was original with Antony Flew, whose "Theology and Falsification" can be found here: http://www.stephenjaygould.org/ctrl/flew_falsification.html

I want my wife to read this, but I don't think she'd believe it.

Hi, I'm new to this blog. I haven't read all the essays and whatnot, so if you read something I say that you think is bull, I'd like it if you linked me to the essay.

Anyway... Regarding the dragon and the garage scenario: what would the dude say if I were to ask "how do you know the Dragon is there if it's impossible to know if he's there?"

It's an interesting question. Judging by Eliezer Yudkowsky's story in Is That Your True Rejection?, they would be likely to say something that sounded good even if it's not their real reason.

"My ancestors thousands of years ago were aided by the provably omnibenevolent dragon, who then assured them he would forever live invisibly in my garage."

Just off the cuff

The question would have an answer for some actual believer in belief - not for a hypothetical character in a thought exercise.

Keeping in mind that the thought exercise has limited isomorphism to belief in God. No one believes in an invisible dragon in their garage ... because there isn't any reason to think there is a dragon there. Theists have reasons to believe in God, atheists just don't agree with those reasons.

Chelo: "I don't think we ever possess true knowledge."

I KNOW I went to Tesco's this morning. Am I wrong? Discuss!

Main post "The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse."

I know this is a bit of a side issue, but how do you justify this claim from the example given? You don't need such a model to give the answers he gives. Surely you once engaged in late-night undergraduate pseudo-intellectual discussions where you held an ultimately untenable viewpoint but still fended off such questions on the fly?

Perhaps though this is just a problem arising from the rather simplistic metaphor. A dragonista can postulate a dragon and then, as in your example, refute all challenges by simply denying all interactions with the real world, although then of course he's not really saying anything at all. The religionist has a much more difficult trick to perform. He cannot take the dragonista's line as his god must interact in some way with the world to have any meaning. He is faced with having to reconcile the interactions he needs from his god (e.g. responses to prayer) with the apparent absence of physical evidence for them. This DOES require the building of the consistent framework you propose, so that he can fend off new challenges without falling into a trap which concedes the non-existence of his god. The convolutions exhibited by fundamentalist Christians when trying to construct such a reconciliation between what they need to believe and the contrary evolutionary evidence are a better example of this.

Does the idea that it is a good thing to subject our beliefs (and even our belief in belief) to logical and analytical scrutiny count as belief in itself or is it so justifiable as to count as knowledge? If so, what is the justification?

I don't think it does. Scrutinizing your beliefs is a corollary - it naturally follows if you believe that "Truth is good and valuable and its pursuit is worthwhile." We value truth, we want our maps to match the territory, and so we scrutinize our beliefs. If anything needs to be justified, it's the value placed on truth and knowledge thereof.

And that's actually an interesting problem. Although my intuition shouts TRUTH IS GOOD, there's not much I can say to prove that statement, outside of "It's useful to be able to make accurate predictions." It seems like the goodness of truth ought to be provable some way. But maybe it's just a moral value that our (sub-)culture happens to hold particularly strongly? Perhaps someone better versed than I am in the arts of rationality can give a better answer.

Although my intuition shouts TRUTH IS GOOD, there's not much I can say to prove that statement

Prove in which way? Not to mention that you need to define "good" first.

Would the observation that people who disregard "truth is good" rarely survive for long be considered a kinda-proof? :-)

"Belief in belief" exists as a phenomenon but is neither necessary nor sufficient to explain the claims of the Dragonist (if I may name his espoused metaphysics thus) in Sagan's parable.

My most recent encounter with someone who believed in belief was someone who did not in addition believe. He had believed once, but he lost his faith (in this case, in God, not dragons) and he wished he could have it back. He believed in belief--that it was a good thing--but alas, he did not believe.

In the above article, Eliezer (if I may so call him) was invoking the concept of belief in belief to explain something--that is, it was a hypothesis of a sort. The phenomenon in question was this Dragonist who claimed to believe but gave some evidence that he did not, in that he rejected the most obvious consequences of a dragon being in the garage. Our hypothesis was that he didn't really believe but thought he should, and was, in effect, trying to convince himself and others that it was so, but (in the case of himself) not so overtly that he'd have to admit to himself he wasn't how he hoped he'd be. If our hypothesis were true, what would we anticipate? If we confronted this guy, that he'd break down and admit his lack of belief? Someone whose belief system runs to invisible dragons is too crazy to let that happen so easily. Maybe what we anticipate is that given sufficient anti-psychotic meds and associated treatments and time, he would recant? What if he didn't? Would we so believe in our hypothesis that we would have faith that, given infinite time (say, the amount of time necessary to search all the integers until we identified the last twin prime or the first perfect number that didn't end in 6 or 8), he would recant in principle? Worse still, maybe he would recant to get us off his back but continue to believe in secret.

In short, since our Dragonist's subjective mental state is invisible to us, even were we to sprinkle flour over his head, we are ultimately forced to rely on faith that belief in belief is what is behind this phenomenon.

If his mental state is invisible to us, that means we can't prove what his mental state is, but it should still be possible to have evidence for his mental state and to know it to some degree of certainty that isn't 100%. Which is no different from what science does to "prove" anything else.

It would be difficult to say what this evidence would be. As one who has spent some time with people who would generally be called deluded, I can assure you that finding an understandable explanation for their delusions is non-trivial.

This post taught me a lot, but now "There is no invisible dragon in my garage" will be popping into my head whenever I see a garage.

Very interesting. I have transhumanist beliefs that I claim to hold. My actions imply that I believe that I believe, if I understand this properly.

A prime example would be how I tend to my health. There are simple rational steps I can take to increase my odds of living long enough to hit pay dirt. I take okay care of myself, but could do better. Much better.

Cryonics may be another example. More research is required on my part, but a non-zero last stab is arguably better than nothing. I am not enrolled. It feels a bit like Pascal’s Wager to me. Perhaps it is a more valid form of the argument, though. Hoping for a scientific miracle seems essentially different than hoping for a magical miracle. Scientific miracles abound. Artificial hearts, cochlear implants, understanding our origins, providing succor to imbalanced minds, the list goes on. Magical miracles… not so much.

Heck, I could stop forgetting to floss daily! (There seem to be strong correlations between gum disease and heart disease.)

I anticipate as if there will be no radical life extension available within my life time, but I will argue for the possibility and even likelihood. Do I have this correct as a type of belief in belief?

Do I have this correct as a type of belief in belief?

Pretty much. Though it might just be a case of urges not lining up with goals.

In both cases, you profess "I should floss every day" and do not actually floss every day. If it's belief in belief, you might not even acknowledge the incongruence. If it's merely akrasia, you almost certainly will.

It can be even simpler than that. You can sincerely desire to change such that you floss every day, and express that desire with your mouth, "I should floss every day," and yet find yourself unable to physically establish the new habit in your routine. You know you should, and yet you have human failings that prevent you from achieving what you want. And yet, if you had a button that said "Edit my mind such that I am compelled to floss daily as part of my morning routine unless interrupted by serious emergency, and not simply by mere inconvenience or forgetfulness," you would push that button.

On the other hand, I may or may not want to live forever, depending on how Fun Theory resolves. I am more interested in accruing maximum hedons over my lifespan. Living to 2000 eating gruel as an ascetic and accruing only 50 hedons in those 2000 years is not a gain for me over an Elvis Presley style crash and burn in 50 years ending with 2000 hedons. The only way you can tempt me into immortality is a strong promise of massive hedon payoff, with enough of an acceleration curve to pave the way with tangible returns at each tradeoff you'd have me make. I'm willing to eat healthier if you make the hedons accrue as I do it, rather than only incrementally after the fact. If living increasingly longer requires sacrificing increasingly many hedons, I'm going to have to solve some estimate of integrating for hedons per year over time to see how it pays out. And if I can't see tangible returns on my efforts, I probably won't be willing to put in the work. A local maximum feels satisfying if you can't taste the curve to the higher local maximum, and I'm not all that interested in climbing down the hill while satisfied.

Give me a second order derivative I can feel increasing quickly, and I will climb down that hill though.
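The hedons-per-year comparison above can be sketched numerically. This is my own toy illustration, with invented rate functions and step size, not anything from the thread:

```python
# A toy sketch comparing total hedons under two hypothetical life plans by
# numerically integrating an assumed hedons-per-year rate over the lifespan.

def total_hedons(rate, years, step=0.1):
    """Approximate the integral of rate(t) dt from 0 to `years` (rectangle rule)."""
    t, total = 0.0, 0.0
    while t < years:
        total += rate(t) * step
        t += step
    return total

# Ascetic plan: 2000 years at a flat 0.025 hedons/year (about 50 hedons total).
ascetic = total_hedons(lambda t: 0.025, 2000)

# Crash-and-burn plan: 50 years at 40 hedons/year (about 2000 hedons total).
elvis = total_hedons(lambda t: 40.0, 50)

print(ascetic, elvis)  # roughly 50.0 vs 2000.0
```

With any rate functions you care to assume (including ones where hedons trade off against added years), the same integral lets you compare plans directly.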

That's helpful input, thanks. After reading the link and searching the wiki I suspect that it is more likely an akrasia/urges v. goals sort of thing based upon my reaction to noticing the inconsistency. I felt a need to bring my actions in line with my professed beliefs.

Surprised not to find Pascal's wager linked to this discussion since he faced the same crisis of belief. It's well known he chose to believe because of the enormous (inf?) rewards if that turned out to be right, so he was arguably hedging his bets.

It's less well known that he understood it (coerced belief for expediency's sake) to be something that would be obvious to an omniscient God, so it wasn't enough to choose to believe; rather, he actually Had To. To this end he hoped that practice would make perfect, and I think he died worrying about it. This is described in the Wikipedia article in an evasive third person, but a philosophy podcast I heard attributed the dilemma of insincere belief to Pascal directly.

Fun stuff.

Might belief in belief occasionally be valuable when overcoming bias? It would be better to correct my beliefs, but sometimes those beliefs come from bias. I might be convinced in my head that standing on the glass floor of an airplane and looking down is totally safe - this specially-modified-for-cool-views airplane has flown hundreds of flights - yet in my heart deeply believe that if I step onto it I will fall through. I might then choose to "believe in the belief that it is safe to take a step", while all my instinctual reactions are based on a false model. The cognitive dissonance is due to my inability to integrate something so foreign to the evolutionary environment into my belief structure.

X: There's something that makes believing and knowledge quite different, and that's truth, which isn't inside one person's head but out there, in reality. I'm sure that if we ask this man if he knows there is a dragon in the garage he will reply affirmatively, no doubt about it, but the truth is that there is no dragon and he just thinks he knows it's in there. But the man doesn't know anything; he believes a lie and he is making excuses to protect the lie, and one of those excuses is that he knows it's in there, that it is not a belief.

I think this is one of humanity's greatest weaknesses: the need to detach from reality and defend beliefs that are obviously wrong. I understand the psychological need to do so, but, in my opinion, it is still a sign of weakness. As Eliezer said, we should find joy in what is real.

There is a distinction between Belief and Knowledge. We can believe things that are untrue and disbelieve things that are true.

The parable you refer to by Sagan should, I think, be attributed to Antony Flew, whose "Theology and Falsification" is available online: http://www.stephenjaygould.org/ctrl/flew_falsification.html

The rationalist virtue of empiricism...

I'm not disagreeing with any of the content above, but a note about terminology--

LessWrong keeps using the word "rationalism" to mean something like "reason" or possibly even "scientific methodology". In philosophy, however, "rationalism" is not allied to "empiricism", but diametrically opposed to it. What we call science was a gradual development, over a few centuries, of methodologies that harnessed the powers both of rationalism and empiricism, which had previously been thought to be incompatible.

But if you talk to a modernist or post-modernist today, when they use the term "rational", they mean old-school Greek, Platonic-Aristotelian rationalism. They, like us, think so much in this old Greek way that they may use the term "reason" when they mean "Aristotelian logic". All post-modernism is based on the assumption that scientific methodology is essentially the combination of Platonic essences, Aristotelian physics, and Aristotelian logic, which is rationalism. They are completely ignorant of what science is and how it works. But this is partly our fault, because they hear us talking about science and using the term "rationality" as if science were rationalism!

(Inb4 somebody says Plato was a rationalist and Aristotle was an empiricist: Really, really not. Aristotle couldn't measure things, and very likely couldn't do arithmetic. In any case the most important Aristotelian writings to post-modernists are the Physics, which aren't empirical in the slightest. No time to go into it here, though.)

This is fascinating. When I look back at the thought patterns of my younger self, I can see so much of this belief-in-belief. Despite being raised religious, I came to an agnostic conclusion at around age ten, and it terrified me, because I very much wanted to believe. To my mind, people with faith had a far greater sense of morality than those without, and I didn't want to fall into that latter category.

So I proceeded as if I believed, and eventually came to make justifications along the lines of 'ritual X accomplishes outcome Y' where Y was something psychologically valuable, for example a sense of community. That made X a good idea even if I didn't truly believe the theology involved.

When I was first told I had a second-order relationship to my belief, I was very insulted. It was as if I'd defined a good person as a religious one, and by challenging my belief that person was challenging my intrinsic worth (despite the fact that they were an atheist themselves and clearly thought nothing of the sort.) Cognitive distortion at its finest.

It took a profound shift in my thinking about the role of religion in morality before I could accept that it was alright not to believe. The rest followed nicely.

As for Santa Claus? I pretended I believed (despite knowing the absolute impossibility of it being true) for three whole years. The idea that there was such a great conspiracy that every adult seemed to be complicit in really worried me, at six years old, and made me afraid to speak the truth. I knew they wanted me to believe, so I let them think I did.

If I ever have children of my own, needless to say, Santa will be introduced as an enjoyable fiction and nothing more.

Blarg... okay this one is tripping me up. There are two parts to this comment. The first part is quasi-math; the other is not. It is very much a brain dump and I have not edited it thoroughly.

EDIT: I think I managed to get it cleared up and responded with a second comment. FYI.

Let B(X) mean belief in X where belief is defined as a predictor of reality so that reality contains event X. Using "There is a dragon in my garage" as X we get:

  • B("There is a dragon in my garage.")
  • B("There is not a dragon in my garage.")

I think it is okay to write the latter as:

  • B(~X) where X is "There is a dragon in my garage."

So far okay and both can be verified. The problem comes when X is "There is an unverifiable dragon in my garage."

  • B("There is an unverifiable dragon in my garage.")
  • B("There is not an unverifiable dragon in my garage.")

Both of these are unverifiable, but the latter is okay because it matches reality? As in, we see no unverifiable dragon so the ~X is... what, the default? This confuses me. Perhaps my notation is wrong. Is it better to write:

  • B(X)
  • ~B(X)

If B(X) is belief in X, B(~X) != ~B(X). This way we can throw out the unverifiable belief without creating a second unverifiable belief. All of this makes sense to me. Am I still on track with the intent of the post? This implies that B(X) and B(~X) are equally unverifiable when X is unverifiable.

Next is belief in belief:

  • B(B(X))

Of which I think you are arguing that B(B(X)) does not imply B(X). But are you also saying that B(X) implies B(B(X))? And this is how people can continue to believe in something unverifiable?
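The two distinctions above can be sketched in code. This is my own minimal illustration, not anything from the post: beliefs are modeled as a bare set of propositions an agent holds, so believing a negation, merely lacking a belief, and holding a meta-belief are all distinct states.

```python
# Minimal model: an agent's beliefs are just a set of propositions it holds.

class Agent:
    def __init__(self, beliefs=()):
        self.beliefs = set(beliefs)  # propositions this agent holds

    def believes(self, proposition):
        return proposition in self.beliefs

X = "There is a dragon in my garage."
NOT_X = "There is not a dragon in my garage."
B_X = 'I believe "There is a dragon in my garage."'

skeptic = Agent({NOT_X})      # B(~X): positively believes the negation
agnostic = Agent()            # ~B(X): simply lacks the belief about X
meta_believer = Agent({B_X})  # B(B(X)) without B(X)

# B(~X) != ~B(X): the skeptic asserts the negation; the agnostic asserts nothing.
print(skeptic.believes(NOT_X), agnostic.believes(NOT_X))  # True False

# B(B(X)) does not imply B(X): the meta-believer endorses having the belief
# without holding the object-level belief itself.
print(meta_believer.believes(B_X), meta_believer.believes(X))  # True False
```

On this model, throwing out B(X) when X is unverifiable leaves you in the agnostic's state, not the skeptic's, which is the distinction the notation was after.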

I feel like I am drifting far away from the purpose of this post. Where did I misstep?

Here is my second attempt, this time with no math:

Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse.

Would there be any experimental results that he wouldn't need to excuse? Is there some form of invisiodragonometer that beeps when he goes into his garage? Would the scenario change any if the subject was genuinely surprised when no sounds of breathing were heard and the oxygen levels remained the same and still offered up excuses of inaudible and non-breathing? How would the typical believer in atoms defend their existence if we wandered into the garage and complained about no breathing sounds?

I can think of simple answers to all of these questions, but it makes me think less of the usefulness of your conclusion. When I think of unverifiable beliefs I think of examples where people will spend their whole life looking for physical proof and are constantly disappointed when they do not find it. These people don't have an accurate model of the situation in their mind. The example of invisible dragons still applies to these people while your claim that they dodge in advance does not seem to apply.

So... again, I feel like I am missing some key point here.

[I]f you believe in belief, you cannot admit to yourself that you only believe in belief, because it is virtuous to believe, not to believe in belief, and so if you only believe in belief, instead of believing, you are not virtuous. Nobody will admit to themselves, "I don't believe the Ultimate Cosmic Sky is blue and green, but I believe I ought to believe it" - not unless they are unusually capable of acknowledging their own lack of virtue. People don't believe in belief in belief, they just believe in belief.

I can think of examples where someone fully admits that they believe it would be better to believe X but as hard as they try and as much as they want to, they cannot. These people are often guilt ridden and have horrible, conflicting desires, but it doesn't take much imagination to think of someone who simply states the belief in belief X without emotion but admits to not believing X. At least, I can hear myself saying these words given the right circumstances.

Believing in belief in belief seems like something else entirely, unrelated to dragons in garages or unverifiable beliefs. This, again, makes me feel as if I am missing a crucial piece of understanding throughout all of this. If I had to potshot at the missing pieces I would aim toward the definitions of belief. Specifically, what you are calling beliefs aside from predictors of reality. (And even there, I do not know if I have a correct translation.)

I do not know if you have any desire to discuss this subject with me. Perhaps someone else who knows the material is willing? I sincerely apologize if these types of responses are frustrating. This is how I ask for help. If there is a better way to ask I am all ears.

Would there be any experimental results that he wouldn't need to excuse?

The idea here is that if you really believed you had an invisible dragon in your garage, if somebody proposes a new test (like measuring CO2), your reaction should be "Oh, hey! There's a chance my dragon breathes air, and if so, this would actually show it's there! Of course, if not, I'll need to see it as less likely there's an invisible dragon."

If instead, your instant reaction is always to expect that the CO2 test returns nothing, and to spend your first thoughts (even before the test!) coming up with an excuse why this doesn't disconfirm the dragon... then the part of you that's actually predicting experiences knows there isn't actually a dragon, since it instantly knows that any new test for it will come up null.
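This update logic can be made concrete with a small Bayesian sketch; the probabilities below are invented purely for illustration. The point is that a genuine believer must let a proposed test move his probability in both directions.

```python
# Bayes' theorem: a proposed test (e.g. measuring CO2) should be able to
# raise your probability of the dragon if positive AND lower it if negative.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' theorem."""
    p_e = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / p_e

prior = 0.5  # assumed prior that an air-breathing dragon is present

# Suppose a real dragon would raise CO2 with probability 0.9, while CO2
# rises by chance only 0.05 of the time (both numbers are made up).
posterior_if_positive = bayes_update(prior, 0.9, 0.05)
posterior_if_negative = bayes_update(prior, 1 - 0.9, 1 - 0.05)

print(posterior_if_positive)  # ~0.947: a positive result should raise belief
print(posterior_if_negative)  # ~0.095: a null result should lower it
```

Someone who instantly predicts the null result, and excuses it before the test is even run, is behaving as if his real probability of the negative branch is already near 1.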

Do people actually do that? I couldn't think of anyone I know who would do that. I finally came up with an example of someone I know who has a belief in belief, but it still doesn't translate into someone who acts like you described.

I am not saying it is impossible; I've just never met anyone who acted like this and wasn't blatantly lying (which I am assuming disqualifies them from belief in belief).

Umm... have you met a religious person? As soon as you mention anything about evidence or tests, they'll tell you why they won't/don't work. These sorts of excuses are especially common if you talk about testing the efficacy of prayer.