I am not sure that it is productive to tell certain people that they do not really believe what they claim to believe, and that they only believe they believe it. I have an alternative suggestion that may be more useful.


Binary Beliefs

It seems that human beings have two kinds of beliefs: binary beliefs and quasi-Bayesian beliefs. The binary beliefs are what we usually think of as beliefs, simple statements which are true or false like "Two and two make four," "The sun will rise tomorrow," "The Messiah is coming," and so on. 

Binary beliefs are basically voluntary. We can choose such beliefs much as we can choose to lift our arms and legs. If I say "the sun will rise tomorrow," I am choosing to say this, just as I can choose to lift my arm. I can even choose the internal counterpart: I can choose to say to myself, "the sun will rise tomorrow." And I can also choose to say that the sun will NOT rise. I can choose to say this to others, and I can even choose to say it to myself, within my own head.

Of course, it would be reasonable to respond to this by saying that this does not mean that someone can choose to believe that the sun will not rise. Even if he says this to himself, he still does not act as though the sun is not going to rise. He won't start making preparations for a freezing world, for example. The answer to this is that choosing to believe something is more than choosing to say it to oneself and to others. Rather, it is choosing to conform the whole of one's life to the idea that this is true. And someone could indeed choose to believe that the sun will not rise in this sense, if he thought he had a reason to do so. If he did so choose, he would indeed begin to make preparations for a dark world, because he would be choosing to conform his actions to that opinion. And he would do this voluntarily, just as someone can voluntarily lift his arm.


Quasi-Bayesian Beliefs

At the same time, human beings have quasi-Bayesian beliefs. These are true degrees of belief, like probabilities: one never becomes absolutely certain of the truth or falsity of anything, but sometimes comes very close. These are internal estimates of the mind, and are basically non-voluntary. Instead of depending on choice, they depend on evidence, although they are influenced by other factors as well. A person cannot choose to increase or decrease this estimate, although he can go and look for evidence. On account of the flawed nature of the mind, if someone looks only for confirming evidence and ignores disconfirming evidence, this estimate can in principle go very high even when the objective state of the evidence does not justify it.
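(To illustrate that last point, here is a minimal sketch, not from the original post, of how a one-sided search for evidence can inflate such an estimate. The 2:1 likelihood ratio and the number of observations are made-up figures chosen only for illustration.)

    # Illustrative sketch: Bayesian updating on confirming evidence only.
    # The likelihood ratio and the number of observations are assumptions.
    def update(prior, likelihood_ratio):
        """Update a probability by multiplying the odds by a likelihood ratio."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    p = 0.5  # neutral starting estimate
    for _ in range(10):        # only confirming observations are sought out
        p = update(p, 2.0)     # each one doubles the odds in favor
    print(f"estimate after a one-sided search: {p:.4f}")  # ~0.9990

    # A balanced search would also run into disconfirming evidence
    # (likelihood ratios below 1), keeping the estimate closer to what
    # the objective evidence actually supports.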


Belief in Belief

It seems to me that what we usually call belief in belief basically means that someone holds a binary belief together with a quasi-Bayesian belief which conflicts with it. So he says "The Messiah is coming," saying it to himself and others, and in every way acting as though this is true, even though his internal Bayesian estimate is that after all these thousands of years, the evidence is strongly against this. So he has a positive binary belief while having a very low estimate of the probability of this belief.

The reason why this often happens with religion in particular is that religious beliefs very often do not have huge negative consequences if they are mistaken. In principle, someone can choose to believe that if he jumps from the window of the tenth story of a building, he will be ok. In practice, no one will choose this, on account of his non-voluntary Bayesian estimate that he is very likely to be hurt if he does so. But a person does not notice much harm from believing the Messiah is coming, and so he can choose to believe it even if his internal estimate says that it is likely to be false.

A cautionary note: one might be tempted to think that religious people in general have belief in belief in this sense, that they all really know that their religions are unlikely to be true. This is not the case. There are plenty of ways to distort the internal estimate, even though one cannot directly choose this estimate. I know many very religious people who clearly have an extremely high internal estimate of the truth of their religion. They REALLY BELIEVE it is true, in the fullest possible sense. But on the other hand I also know others, also extremely devout, who clearly have an internal estimate which is extremely low: they are virtually certain that their religion is false, and yet in every way, externally and internally, they act and think as though it were true.


Comments

It looks to me as if this conflates two distinctions that, a priori, seem like separate things.

  • Between voluntarily adopted beliefs on whose basis we choose what to say and do ("basically voluntary"), and ones we simply find ourselves with ("internal estimates of the mind").
  • Between beliefs we treat as binary, and beliefs with degrees of certainty that never get to exactly 0 or 1.

I can see why these might turn out to be related -- e.g., because those voluntarily adopted beliefs are adopted explicitly, using language, which tends to encourage propositional reasoning, whereas implicit unconscious beliefs are handled by fuzzy neural-network things inside our hardware. So maybe the two distinctions do go together.

But the names you've adopted for them are based on the binary-versus-quantitative distinction, whereas (I think) what's important about them -- what determines the role they play in our thought and speech and action -- is the voluntary-versus-automatic distinction. (A person could voluntarily decide to act as though the probability of some important proposition is 75%. That would be better treated as what you're calling a "binary belief", no?)

And it seems to me that if we adopt terminology adapted to the distinction that's actually relevant here -- say, "voluntary" versus "automatic" -- we've gone much of the way back to talking about "belief in belief" again. (Not all the way, but I have another point to make about that which I'll put in another comment because it's logically separate.)

You are probably right about the voluntary or non-voluntary aspects of the behavior being more important here. I can certainly choose to say "the probability of X is 75%" and choose to conform my behavior to that. So maybe the article could be rewritten to emphasize that aspect more. But it would still happen that people have two kinds of assessment of reality which do not necessarily line up completely: a voluntary one and an automatic one. The problem I see with "belief in belief" is that it seems to suggest that people are wrong about what assessment they have. Instead of this, it seems to me that what people are talking about when they say they believe something is precisely the voluntary assessment. When they say "I believe in God", they mean that they are choosing to act -- including in their own minds, insofar as they can control this -- as if God exists. They do not necessarily mean that they have a high automatic assessment of the idea that God exists. I agree that if they did mean the latter, you would sometimes find people who are mistaken about their own assessment.

One thing I did not talk about in the article is the fact that people are also more or less consciously aware of the contrast between their automatic assessment and their voluntary assessment. A religious person I know said he would be happy with a 30% chance that his religion was true, but he did not mean by that that he would act as though it had a 30% chance; he acts as though it is absolutely true. So he realizes that his voluntary assessment and his automatic assessment do not match.

You suggest that what distinguishes the beliefs commonly criticized as "belief in belief" from others is that they are voluntarily, explicitly adopted (and that this goes along with treating them in all-or-nothing fashion rather than as having degrees of assent). I think there's a lot of truth in this. But your taxonomy of beliefs seems to be missing a category. You say:

The answer to this is that choosing to believe something is more than choosing to say it to oneself and to others. Rather, it is choosing to conform the whole of one's life to the idea that this is true.

But a large part of the point of talking about "belief in belief" is that some people say they believe things (voluntarily, black-and-white-ly) but don't act in ways fully consistent with believing them. For instance, some (a small minority, and I'm not intending to insinuate e.g. that most religious people are like this) profess to believe that in the very near future the Second Coming of Christ will occur, putting an end to pretty much all the institutions of present-day human society -- but they aren't taking out enormous loans that will come due after the alleged Second Coming, they are still sending their children to school, they still have pensions, etc., etc., etc. Which suggests that maybe they don't really believe that the world as we know it is going to end in a few years, they just think they're supposed to say they do. (Very similar things are sometimes said about singularitarians. Again, I am not claiming that they are correctly said of more than a small minority.)

In other words, the point is that someone who says "The Messiah is coming" is not

in every way acting as though this is true

and indeed this is how we might diagnose that

his internal Bayesian estimate is that [...] the evidence is strongly against this

and declare him a practitioner of "belief in belief". Unknowns, consider the religious people in the second category you mention in your last paragraph. How do you know that they

clearly have an internal estimate which is extremely low: they are virtually certain that their religion is false

if, as you say,

in every way, externally and internally, they act and think as though it were true?

Ok, you are right that I missed the situation of people who say things but fail to live up to them in obvious ways. That is probably simply because I don't really see this in the religious people I am most acquainted with. My guess is that if we leave out people who are actually not telling the truth about what they believe (i.e. people who are literally lying about what they hold), this happens mainly because of akrasia and other similar situations where their desires are in conflict with their explicit beliefs about what they should do, and they follow their desires instead of their beliefs. If this is right, it wouldn't be necessary to make a distinct category of belief here, just as it isn't necessary to say that I have a "belief in belief" that I should not waste so much time browsing the internet, even if I do it. The desire is a sufficient explanation without having to modify the belief.

About the people virtually certain their religion is false, I should have said "in every way under their control". In other words, it is precisely the voluntary aspects that they are choosing to conform to the belief. But there are many involuntary aspects that reveal the contrary estimate. There are also voluntary actions which are somewhat indifferent, where they could consistently do the action if the thing were true, but they might be somewhat more likely to perform that action if they held the contrary internal estimate. For example, since emotional reactions are somewhat involuntary, they can reveal something about this estimate; someone with a high internal estimate of his religion is more likely to react badly in an emotional way to someone else leaving the religion, while someone with a low internal estimate is likely to be more calm about it. Likewise, as an example of the voluntary case, if someone says "I realize there is plenty of evidence against my religion," this is a somewhat indifferent voluntary action, since it is consistent with the religion being true, but it is more likely to be said by someone who has the low estimate.

someone with a high internal estimate of his religion is more likely to react badly in an emotional way to someone else leaving the religion, while someone with a low internal estimate is likely to be more calm about it.

I don't think there's reason to believe that.

If someone says something I know to be utterly false I laugh. There needs to be a basis of doubt to feel bad emotionally.

If you think your friend is going to hell for eternity, you feel bad. You don't need to doubt.


No, I think concepts like "belief in belief" or "belief in self-deception" have more to do with people who are slightly on the autism spectrum having difficulty understanding neurotypical brains than anything else. Yes, I am specifically thinking of EY here.

Basically, neurotypical brains are way more social. A religion is professed, and that means there are statements repeated aloud, in company, basically like a password for gaining social acceptance. When this is done early enough in childhood, and often enough, you learn to repeat them inwardly as well, inside your brain. It becomes part of your inner voice. It also has a certain emotional effect, maybe reassuring, or frightening.

But at no point in the process does truth play much of a role. You don't believe it in the sense you believe things that require immediate action. You just hear this inner voice saying it. And you repeat it aloud for other people.

The best parallel is probably music. Like song lyrics stuck in your ear. Do you really wonder what the probability is that Rick Astley is never gonna give her up? No, it is just a chain of words heard outside from the radio or inside your brain, with a certain emotional effect.

People who are slightly autistic tend to think beliefs are private: you really, seriously decide whether X is true or false. But for neurotypicals beliefs aren't private, and thus they are not really beliefs in this sense; they do not carry a big stamp saying "this is true". They are passwords, word-memes repeated aloud for a social function and also heard inward. Since they do not require immediate action, they also do not require actually deciding whether they are true.

This is why "I believe people are nicer than they actually are" makes perfect sense. It means "I enjoy hearing an inner voice telling me people are nice. I know it is not really true. But I enjoy this tape so I keep playing it. I will also say it aloud so that others can also enjoy hearing it."

Human intelligence evolved as social intelligence. We are far better equipped to win political debates and get elected than to find truth. This is why Newton-type truth-finders tend to be slightly autistic or have Asperger's: you need to turn off the social in order to be really interested in truth.

Disagree, but upvoted for the Rick Astley reference. And for a plausible and interesting hypothesis.

This is so true! And if you buy into Julian Jaynes's "Bicameral Mind" theory, then ancient religious commandments from god (which were in actuality lessons from parents/chiefs/priests ingrained in one's psyche since childhood but falsely attributed to unseen spiritual forces) literally WERE heard in people's minds like a catchy music tune played over and over.


I never really understood the concept of the individual, i.e. an indivisible person. I always felt divisible, something like a Freudian superego/ego/id; but clearly, when I face the dilemma of sticking to weight loss or eating a cake, it is not one indivisible person making a choice, but two agents arguing with each other inside me. It is more or less literally heard as an internal dialogue. Not for everybody?

I am fighting an ugly case of alcohol addiction. There is the Higher Self, the one with the low time preference, who wants to live in a healthy and rational way until 75, and the Lower Self, who wants to indulge in every impulse now and does not care if it dies at 50. I used to do what I guess is the common case: identify with the Higher Self, as it is far more respectable and feels good to identify with it, and consider the Lower Self an external demon. However, this means always having to fight the demon. And that is tiresome. So I turned it around, identified with the Lower Self, basically accepted that I am a pig with poor impulse control, and turned the Higher Self into an external entity I call The Boss. The advantage is that instead of fighting The Demon, it is now surrendering to The Boss. So it feels like I really want a drink, but cannot, because The Boss forbade it, and there is no use in fighting The Boss. Surrendering to an externalized internal agent is easier than fighting it. Later on I realized I had basically reinvented half of AA's 12-step program, as they too build on surrender to a higher being.

My point is that to me the bicameral mind does not even feel weird; I am doing something like that on a daily basis. I just wonder why not everybody does, and the only thing that feels weird about it is: why only two? It is easy to have 3-4 conflicting ideas generating 3-4 conflicting sub-agents fighting it out. It is like multithreaded programming, where every important function gets its own thread: every important idea gets some amount of selfhood and agency invested in it.

I am not sure that it is productive to tell certain people that they do not really believe what they claim to believe

The following two questions are distinct.

  • Do some people claim to believe things that they don't "really" believe (whatever exactly we mean by that)?
  • Should we tell them so?

The rest of what you say seems mostly to be addressing the first question, so I'm not sure why you framed the issue in terms of the second.

I actually have both concerns. I do not think people are typically making a false statement when they say they believe something, even when they clearly have an automatic assessment that does not give it a high probability. But I also think that even if people did make an actual mistake, e.g. by saying "I have a high automatic assessment of this" when they don't, it probably isn't very useful or productive to tell them that.

For a multicultural and academically theological perspective I highly recommend reading Karen Armstrong's answer to the question of belief in belief.

I am not sure that it is productive to tell certain people that they do not really believe what they claim to believe, and that they only believe they believe it.

Focusing on "other people" isn't useful. When it comes to life planning it's useful to distinguish beliefs with result in anticipations of results from one's one holds.

I don't see the distinctions that you propose helping with those real world applications.

While it may be unwise to tell others that they're just believing in belief, it may well be a good thing for rationalists to check on about themselves. I think believing in belief doesn't just appear in religious matters, but also in a belief that it's necessary to feel sure of one's beliefs in order to express them forcefully enough for other people to notice.

Questioning other people's motives is sufficiently insulting that I suggest doing it quite cautiously if you're actually trying to change people's minds rather than doing a superiority dance.

However, there are people who promote belief in belief explicitly -- for example, saying that it's a very bad thing to shake other people's religious faith, and I don't see a problem with arguing against that sort of thing.

Translating to your terminology, the belief-in-belief concept states that:

  • All beliefs are what you call quasi-Bayesian beliefs
  • When a person "chooses" to believe something, he/she often ends up believing that he/she has that belief.

And it is way more Occam's-Razor-compliant, while explaining the same evidence. "Binary beliefs" is a clearly redundant category; all of its content can be seen as high-confidence quasi-Bayesian beliefs.

As for choosing beliefs: I never chose to believe that the Sun is going to rise tomorrow; in fact, I can't remember a single "choosing to believe" act of mine. You either think that something is true, or not, and it isn't a matter of choice, but of plausibility estimation. One can try to override his belief system, but it's hard, and people usually fail at it, ending up with belief in belief, since convincing yourself that you believe in something is easier than convincing yourself that something is actually true -- it requires less overriding.

I'm not sure you understand my point about choosing to believe something. The point is that there are many elements of your behavior that most people would call part of believing something, which are entirely voluntary. For example, you say that you never choose to believe that the Sun is going to rise, but if you say that the sun will rise, you do indeed choose to say so, and if you wanted you could choose to say it will not.

We have beliefs in our heads without having to speak them, and most people consider it possible to say "The Sun isn't going to rise tomorrow" without having a corresponding belief.

I agree. I pointed this out myself and said that more is required for the belief than just saying the words.

I don't think saying the words is required to have a belief. It's neither sufficient nor necessary.

Then I can't see any significant difference between your model and belief-in-belief model, which you claim to oppose.

It seems to me that what we usually call belief in belief basically means that someone holds a binary belief together with a quasi-Bayesian belief which conflicts with it.

Now it seems to be the definition of belief-in-belief, written in obscure terminology. Replace "quasi-Bayesian belief" with "actual belief", and "holds a binary belief" with "acts as if he had a belief", and that's it.

a person does not notice much harm from believing the Messiah is coming

Say that to the countless martyrs, especially in Roman times, who could have evaded torture and death just by publicly saying "I no longer believe" (even if they continued to believe in secret).

Say that to the countless martyrs

Not so countless as all that...

just by publicly saying "I no longer believe" (even if they continued to believe in secret).

It is precisely the fact that they could escape so easily (and that billions of Jews, Muslims, Christians, etc. have awaited a messiah's coming without much noticeable effect on their life expectancy) which proves that the belief itself is harmless, and that what is harmful is the additional desire for martyrdom or, as we call it these days, suicide-by-cop:

The earliest Christian martyrs, tortured and killed by Roman officials enforcing worship of the emperors, won so much fame among their co-religionists that others wished to imitate them to such an extent that a group presented themselves to the governor of Asia, declaring themselves to be Christians, and calling on him to do his duty and put them to death. He executed a few, but as the rest demanded it as well, he responded, exasperated, "You wretches, if you want to die, you have cliffs to leap from and ropes to hang by."

Saying that they all did it mostly for the show, the fame, and as a convenient suicide-by-cop is a pretty bold statement. Are you sure you aren't just cherry-picking one example of such behavior and applying it to all, just because it would prove your point?

I can imagine one or two lunatics doing such a thing just to make their names immortal -- it even happens today -- but to say that it was the general case, that people did not seriously believe in their faith and just wanted to die because it was a fad, triggers too many of my warning flags (Occam's razor, knowledge of standard human behavior, statistics of suicidal people) against accepting this statement at face value.

Even in these days, plenty of peaceful people are killed for their faith, many of them Christians, and I haven't heard any examples of people just waltzing into a terrorist camp and declaring their beliefs so that they will get killed. Even if such examples existed, would you claim that they are the rule rather than the exception?

Are you sure you aren't just cherry-picking one example of such behavior and applying it to all, just because it would prove your point?

My point is that the belief itself is inert and harmless, and becomes harmful only in conjunction with other desires and beliefs, since, just as you say, anyone who 'merely' believes in the Messiah will take the easy out given to them. Lest you object that choosing martyrdom must somehow indicate that the belief alone is enough, I offer a trenchant example of the curious social dynamics that can drive some rare instances of apparently irrational behavior.

I can imagine one or two lunatics to do such a thing just to make their names immortal

Much more than that. Of the few thousand Christian martyrs during the brief Roman persecutions, how many courted death rather than had it forced on them without any opportunity to engage in a perfunctory face-saving gesture?

I haven't heard any examples of people just waltzing into a terrorist camp, declaring their beliefs so they will get killed.

You haven't? You really need to look into this more. For starters: every Christian missionary into North Korea (South Korean or American), and for that matter, South Korean missionaries in general (the 2007 incident in Afghanistan comes to mind). You would have to have a death-wish to sneak into North Korea and try to evangelize!

Even if such examples existed, would you claim that they are the rule, rather than the exception?

I would say martyrs are by far the extreme exceptions among the billions of people who have held the belief in question; when you look at rare outliers, it usually tends to be the case that the causes are idiosyncratic and themselves rare. It's rare to have a death-wish, rare to be so enamored of religious status one will seek death, rare to have schizophrenia, rare to have a mystical experience which unfortunately ends in the perceived necessity to evangelize.

Similarly, when a passenger jetliner crashes in the USA or Western Europe these days, the cause is often something bizarre or rare like Saudi terrorists or a murder-suicide by the copilot (Germanwings isn't even the only such example), and while one couldn't write off all such cases as murder-suicides or Saudi terrorists, one can predict in general that something weird was going on with a crashed flight and that in general flights don't crash. (In case the analogy isn't clear: martyr : normal believer :: murder-suicide-plane-crash : normal successful flight.)

Of course there are fewer people who were killed for their faith than people who were not killed; I wasn't contesting this. I was only contesting your claim that people who are killed for their faiths don't really believe and are just suicidal. Do you think that all (or most) missionaries who go to a dangerous area do it with the explicit purpose of getting killed, and do not believe in their cause? This seems to be a common bias: because people think they are always right and their ideological opponents are always wrong, they assume their opponents can't possibly really believe in their cause. E.g. many pro-abortion activists say that their opponents simply hate women, and anti-abortion activists say their opponents just hate babies.

About your North Korean example: it's not the example I asked for, as they didn't just go directly to an officer or a border guard to announce their faith; my educated guess is that they would continue their work and take at least some steps to avoid being found out. But let's suppose I was wrong. Even if we assumed that the missionaries going to North Korea don't believe in their cause and are doing it just because they want to commit suicide and are just lazy or afraid to hang themselves, what do you think about people who are not actively seeking danger, and are killed by death squads because of their beliefs? Are all these victims suicidal? Let's not confine ourselves to religion, but consider any belief, be it social or political. Do you think that in oppressive regimes, when people are told "join us or we kill you", the people who don't submit already wanted to commit suicide and just found a good opportunity to do so?

I don't know anything personal about you, but I guess there is at least something you wouldn't do even if forced to do it at gunpoint. If not, then that might explain your opinion on the topic. Now think about yourself, or about anyone who might be in such a situation and not yield: do you think that the only people who would refuse cooperation and risk death in this hypothetical situation are those who already wanted to commit suicide even without this event happening, so that the event was just a convenient way to do it?

Another example. As you said, there are fewer people who are killed because of their beliefs than those who are not killed, just as there are fewer murder-suicide plane crashes than normal successful flights. By your logic, consider this: there are far fewer mountain climbers than people who don't climb mountains, and mountain climbers are much more likely to die a violent death than the average person. Nevertheless, there are fewer mountain climbers who died during their expeditions than those who didn't. Does this mean that mountain climbers climb mountains primarily because they seek death, and that those who died, died because of this? Or that they accept falling from a cliff as a risk, and climb because they love it, not because they want to die?

I'm sorry, but for you to convince me that people who are killed for their beliefs don't really believe but are just suicidal, you would need a lot more proof than just a very extreme example and a statement of your opinion.

Of course there are fewer people who were killed for their faith than people who were not killed; I wasn't contesting this.

You should be. Let's review. Your first comment was http://lesswrong.com/r/discussion/lw/m0l/is_belief_in_belief_a_useful_concept/c89r , in which you quoted the claim

a person does not notice much harm from believing the Messiah is coming

and objected, 'martyrs!':

Say that to the countless martyrs, especially in Roman times, who could have evaded torture and death just by publicly saying "I no longer believe" (even if they continued to believe in secret).

Now. How do martyrs show that believing in the Messiah is harmful? If something is harmful, then it should make one more likely to die compared to someone who doesn't believe in the Messiah, such as an atheist; however, it's a well-known epidemiological result that the relative risks of believers and non-believers tend to go the other way, i.e. religious believers (such as Christians and other groups who believe in the Messiah coming) live longer. This is true on a population level, so however many martyrs there are these days, however many morons go to North Korea with delusions of conversion, they do not move the needle; to the extent we want to make any inference about the effects of believing in a Messiah, we would say that believing in a Messiah is healthy, and if something is healthy, one indeed will not 'notice much harm'. One could ask: if martyrs are dying for their beliefs and this is inherent to believing in a Messiah on its own, why are all the other believers (who also believe in the Messiah) not dying for their beliefs?

OK, maybe current figures are unrepresentative, and in other periods believing in a Messiah would have been correlated with a noticeable decrease in life expectancy. While the records of Christian persecution are light on details and headcounts (unsurprising for a brief and half-hearted persecution, of minimal interest to outsiders and poorly documented due to its unimportance, which has been hyped a great deal by certain parties whose interest is understandable), my understanding is that the most realistic estimates of the 'countless' martyrs, based on Eusebius's count, extrapolate to figures in the thousands, not millions. In an empire with a population of 58 million+, this is not noticeable, and given the described mechanics of the persecution, in which 'victims' could usually trivially escape punishment, it would be astounding if it were noticeable. (How many victims would Stalin's/Mao's gulags and secret police and famines have claimed if one could escape any punishment by simply saying "why yes, I am a communist!"?) Expecting it to matter would be as ridiculous as pointing to the martyrologies (if I may borrow the term) of Trayvon Martin et al. and saying police killings are a major reason why black males have shorter life expectancies in the USA (which of course they are not, as life expectancy is affected much more by issues like increased heart disease rates).
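(For scale, a minimal back-of-the-envelope sketch of the arithmetic above; the martyr count used here is only an assumed order of magnitude, not a figure from this thread.)

    # Rough scale check with illustrative numbers: a few thousand martyrs
    # (assumed order of magnitude) against an imperial population of ~58 million.
    martyrs = 5_000
    population = 58_000_000
    print(f"share of population: {martyrs / population:.5%}")  # ~0.00862%
    # Far too small a share to register in population-level life expectancy.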

Since all that shows belief qua belief is harmless or outright healthy, that resolves your objection as simply wrong. The rest is tangents.

But that does still leave an issue as to why a handful of weirdos chose suicide-by-praetor as exemplified in my anecdote, and for that I suggest a toxic mix of status-seeking, mental illness, dangerous auxiliary beliefs ("I believe in the Messiah and that by dying I ... [hasten his coming / spread the Gospel / whatever]"); none of these may strike you as particularly plausible or likely, and none of these explanations explain all of the martyrs simultaneously, but that's fine, since for extremely rare outliers (as martyrs are), there will not usually be any universal explanation and the true explanations will nevertheless be extremely unlikely a priori. (Since my airplane example apparently didn't convey my point, consider a lottery; the chance of a particular number winning is extremely unlikely and no number wins many times, yet someone will win with some number.)

Do you think that all (or most) missionaries who go to a dangerous area, do it with the explicit purpose of getting killed, and do not believe in their cause?

I think for many of them there is a definite death-seeking component to the psychology which made them seek out that dangerous area when there's an entire world to choose from, and that martyrdom and talk of sacrifice attracts those people in particular. This is visible right down to the rhetoric.

About your North Korean example: it's not the example I asked for, as they didn't just go directly to an officer or a border guard to announce their faith; my educated guess is that they would continue their work and take at least some steps to avoid being found out. But let's suppose I was wrong. Even if we assumed that the missionaries going to North Korea don't believe in their cause and are doing it just because they want to commit suicide and are just lazy or afraid to hang themselves,

The record of missionaries to NK is not good. They are routinely captured and executed. Clearly, whatever precautions they are taking do not work very well. So why do they do it? Are they just too stupid to realize the danger and that their precautions are insufficient? Well, the exact reason will differ from outlier to outlier...

what do you think about people who are not actively seeking danger, and are killed by death squads because of their beliefs?

Confounded by the many reasons for killing people: cultural, economic, ethnic, governmental. Because religion lines up with so many other divisions (religion is not about belief...), I am doubtful there are many clean cases of religion-only genocide. There are few instances where persecution stops immediately upon recanting -- to give some examples, simply converting to Catholicism was not enough to save Jews in Nazi Germany, simply declaring oneself an atheist did not exempt Jews from persecution in the USSR, etc.

By your logic, consider this: there are far fewer mountain climbers than people who don't climb mountains, and mountain climbers are much more likely to die a violent death than the average person. Nevertheless, there are fewer mountain climbers who died during their expeditions than those who didn't. Does this mean that mountain climbers climb mountains primarily because they seek death, and that those who died, died because of this? Or that they accept falling from a cliff as a risk, and climb because they love it, not because they want to die?

I'm glad you chose that example, since that is one of the better ones for illustrating my point. Imagine a group of mountain enthusiasts, some of whom climbed mountains and some of whom expressed their interests in other ways. The handful of climbers frequently die gruesome deaths ("5,656 times with 223 deaths"), and when one looks at life expectancy, the climbing group does indeed live shorter lives, leading to descriptions of key holy sites for these enthusiasts as a "high-altitude lunatic asylum"; one psychology book notes, after discussing various studies correlating mental illness & suicide attempts with risk-taking behavior, that "The person who plays Russian roulette has a one in six chance of dying; the person who climbs Mount Everest has a one in ten chance of dying. Is it suicidal to attempt that climb?" (leading into, amusingly, a mention of early Christian martyrs and the Malay running-amok syndrome).

If we looked at the climber faction of the mountain enthusiast group and asked whether they were 100% psychologically normal, if there was no way we could distinguish them, if they appreciated and liked mountains in the same way as everyone else, we would likely have to answer... no. They are different. What is different probably differs from person to person (to give a LW-relevant example, the CEO of the Intrade prediction market died climbing Mount Everest -- the same day his wife was giving birth, IIRC -- and his death seems to have led to the exposure of substantial embezzlement or other fraud on his part and the shutdown of Intrade; one has to wonder if there was any connection between his hobbies and professional activities), but it would be bizarre to claim that simply liking mountains is harmful when it's clearly more specific than that; I like mountains, but I don't expect to ever die on one.