Making Beliefs Pay Rent (in Anticipated Experiences)

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying "No," and the other saying "Yes," they do not anticipate any different experiences.  The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

It's tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don't see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don't experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?

To answer precisely, you must use beliefs like Earth's gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock's second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
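The arithmetic behind that anticipation is easy to check; a minimal sketch, assuming the stated height of 120 meters and neglecting air resistance:

```python
import math

g = 9.8    # m/s^2, Earth's surface gravity
h = 120.0  # m, approximate height of the building

# For an object dropped from rest: h = (1/2) * g * t^2, so t = sqrt(2h / g).
t = math.sqrt(2 * h / g)
print(round(t, 2))  # 4.95 -- about five seconds, i.e. five ticks of the second hand
```

The two propositional beliefs cash out, via a little algebra, into an anticipated position of the second hand.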

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

The same brain that builds a network of inferred causes behind sensory experience can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could oversimplify their minds by drawing a little node labeled "Phlogiston", and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configured after the experience, rather than constraining the experience in advance. Or suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his books? Nothing. The belief, if you can call it that, doesn't connect to sensory experience at all. But you had better remember the propositional assertion that "Wulky Wilkinsen" has the "post-utopian" attribute, so you can regurgitate it on the upcoming quiz. Likewise if "post-utopians" show "colonial alienation"; if the quiz asks whether Wulky Wilkinsen shows colonial alienation, you'd better answer yes. The beliefs are connected to each other, though still not connected to any anticipated experience.

We can build up whole networks of beliefs that are connected only to each other—call these "floating" beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit.  Do you believe that phlogiston is the cause of fire?  Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a post-utopian? Then what do you expect to see because of that? No, not "colonial alienation"; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?

It is even better to ask: what experience must not happen to you?  Do you believe that elan vital explains the mysterious aliveness of living beings?  Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you.  It floats.

When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can't find the difference of anticipation, you're probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don't know what experiences are implied by Wulky Wilkinsen being a post-utopian, you can go on arguing forever. (You can also publish papers forever.)

Above all, don't ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.


246 comments

You write, “suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a ‘post-utopian’. What does this mean you should expect from his books? Nothing.”

I’m sympathetic to your general argument in this article, but this particular jibe is overstating your case.

There may be nothing particularly profound in the idea of ‘post-utopianism’, but it’s not meaningless. Let me see if I can persuade you.

Utopianism is the belief that an ideal society (or at least one that's much better than ours) can be constructed, for example by the application of a particular political ideology. It’s an idea that has been considered and criticized here on LessWrong. Utopian fiction explores this belief, often by portraying such an ideal society, or the process that leads to one. In utopian fiction one expects to see characters who are perfectible, conflicts resolved successfully or peacefully, and some kind of argument in favour of utopianism. Post-utopian fiction is written in reaction to this, from a skeptical or critical viewpoint about the perfectibility of people and the possibility of improving society. One expects to see irretrievably flawed characters, idealistic projects turn to failure, conflicts that are destructive and unresolved, portrayals of dystopian societies and argument against utopianism (not necessarily all of these at once, of course, but much more often than chance).

Literary categories are vague, of course, and one can argue about their boundaries, but they do make sense. H. G. Wells’ “A Modern Utopia” is a utopian novel, and Aldous Huxley’s “Brave New World” is post-utopian.

Indeed. Some rationalists have a fondness for using straw postmodernists to illustrate irrationality. (Note that Alan Sokal deliberately chose a very poor journal, not even peer-reviewed, to send his fake paper to.) It's really not all incomprehensible Frenchmen. While there may be a small number of postmodernists who literally do not believe objective reality exists, and some more who try to deconstruct actual science and not just the scientists doing it, it remains the case that the human cultural realm is inherently squishy and much more relative than people commonly assume, and postmodernism is a useful critical technique to get through the layers of obfuscation motivating many human cultural activities. Any writer of fiction who is any good, for instance, needs to know postmodernist techniques, whether they call them that or not.


That said, it's not too surprising that postmodernists are often the straw opponent of choice.

The idea that the categories we experience as "in the world" are actually in our heads is something postmodernists share with cognitive scientists; many of the topics discussed here (especially those explicitly concerned with cognitive bias) are part of that same enterprise.

I suspect this leads to a kind of uncanny valley effect, where something similar-but-different creates more revulsion than something genuinely opposed would.

Of course, knowing that does not make me any less frustrated with the sort of soi-disant postmodernist for whom category deconstruction is just a verbal formula, rather than the end result of actual thought.

I also weakly suspect that postmodernists get a particularly bad rap simply because of the oxymoronic name.

That said, it's not too surprising that postmodernists are often the straw opponent of choice.

Oh yeah. While it's far from a worthless field, and straw postmodernists are a sign of lazy thinking, it is also the case that postmodernism contains staggering quantities of complete BS.

Thankfully, these are also susceptible to postmodernist analysis, if not by those who wish to keep their status ...

Would you consider Le Guin's The Dispossessed to be post-utopian? I think she intends her Anarres to be a good place on the whole, and a decent partial attempt at achieving a utopia, but still to have plausible problems.

Not to go off on a tangent, but I'd say it's more utopian than critical of utopia - I don't think we can require utopias to be perfect to deserve the name, and Anarres is pretty (perhaps unrealistically) good, with radical (though not complete) changes in human nature for the better.

Brave New World is definitely dystopian, not post-utopian. Nancy's suggestion for post-utopian is exactly right. I definitely agree that we can meaningfully classify cultural production, though.

I think it's both. "Brave New World" portrays a dystopia (Huxley called it a "negative utopia") but it's also post-utopian because it displays skepticism towards utopian ideals (Huxley wrote it in reaction to H. G. Wells' "Men Like Gods").

I don't claim any expertise on this subject: in fact, I hadn't heard of post-utopianism at all until I read the word in this article. It just seemed to me to be overstating the case to claim that a term like this is meaningless. Vague, certainly. Not very profound, yes. But meaningless, no.

The meaning is easily deducible: in the history of ideas "post-" is often used to mean "after; in consequence of; in reaction to" (and "utopian" is straightforward). I checked my understanding by searching Google Scholar and Books: there seems to be only one book on the subject (The Post-Utopian Imagination: American Culture in the Long 1950s by M. Keith Booker) but from reading the preview it seems to be using the word in the way that I described above.

The fact that the literature on the subject is small makes post-utopianism an easier target for this kind of attack: few people are likely to be familiar with the idea, or motivated to defend it, and it's harder to establish what the consensus on the subject is. By contrast, imagine trying to claim that "hard science fiction" was a meaningless term.

I played a mental game trying to make predictions based on the information that Wulky Wilkinsen is post-utopian and shows colonial alienation - never heard of any of that before :-).

Wulky Wilkinsen is post-utopian ... I expect to find a bunch of critically acclaimed authors who wrote their most famous books before Wulky wrote his most famous books (5 - 15 years ahead?), lived in the same general area as Wulky, and portrayed people who were more altruistic and prone to serve the general good than we normally see in real life. It does not say too much about the actual writing style of Wulky - he could have written in a similar way to "the bunch" (the utopians), or just the opposite - he could have been fed up with the utopians' style and portrayed people more evil than we normally see in everyday life. So my prediction does not tell what Wulky's books feel like, but it is still a prediction, right?

Colonial alienation - the book contains characters who have lived in a colony (e.g. India) for a long time (although they might have just arrived in the "maternal" colonial country, e.g. Britain). These characters are confronted with other characters who have lived in the "maternal" colonial country for a long time (although they might have just arrived in the colony :-) ). There are conflicts between these two groups of people, based on their backgrounds. They have different preferences when they are making decisions, probably involving other people. Thus they are alienated.

Do not tell me this was not the point of Eliezer's post, let me just have some fun!

What good is math if people don't know what to connect it to?

All math pays rent.

For all mathematical theorems can be restated in the form:

If the axioms A, B, and C and the conditions X, Y and Z are satisfied, then the statement Q is also true.

Therefore, in any situation where the statements A, B, C and X, Y, Z are true, you will expect Q to also be verified.

In other words, mathematical statements automatically pay rent in terms of changing what you expect, which is the very thing it was required to show. ■

In practice:

If you demonstrate Pythagoras's Theorem, and you calculate that 3^2+4^2=5^2, you will expect a certain method of getting right angles to work.

If you exhibit the aperiodic Penrose Tiling, you will expect Quasicrystals to exist.

If you demonstrate the impossibility of solving the Halting Problem, you will not expect even a hypothetical hyperintelligence to be able to solve it.

If you understand why you can't trisect an angle with an unmarked ruler and a compass (not both used at the same time), you will know immediately that certain proofs are going to be wrong.

and so on and so forth.

Yes, we might not immediately know where a given mathematical fact will come in handy when observing the world, but by their nature, mathematical facts tell us exactly when to expect them.
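The first of those examples can be checked concretely; a minimal sketch verifying that the 3-4-5 triangle really does contain a right angle:

```python
import math

# Pythagoras: 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
assert 3**2 + 4**2 == 5**2

# So a rope marked into lengths 3, 4, and 5 closes into a triangle whose
# largest angle, by the law of cosines, is exactly 90 degrees.
a, b, c = 3.0, 4.0, 5.0
angle = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))
print(round(angle, 6))  # 90.0
```

This is the anticipated experience behind the ancient rope-stretchers' method of laying out right angles.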

Is it not the purpose of math to tell us "how" to connect things? At the bottom, there are some axioms that we accept as basis of the model, and using another formal model we can infer what to expect from anything whose behavior matches our axioms.

Math makes it very hard to reason about models incorrectly. That's why it's good. Even parts of math that seem particularly outlandish and disconnected just build a higher-level framework on top of more basic concepts that have been successfully utilized over and over again.

That gives us a solid framework on which we can base our reasoning about abstract ideas. Just a few decades ago most people believed the theory of probability was just a useless mathematical game, disconnected from any empirical reality. Now people like you and me use it every day to quantify uncertainty and make better decisions. The connections are not always obvious.
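A minimal example of that everyday use, with made-up numbers: updating on a diagnostic test via Bayes' rule.

```python
# Hypothetical test: 1% base rate, 90% sensitivity, 5% false-positive rate.
prior = 0.01
p_pos_if_present = 0.90
p_pos_if_absent = 0.05

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
p_pos = p_pos_if_present * prior + p_pos_if_absent * (1 - prior)
posterior = p_pos_if_present * prior / p_pos
print(round(posterior, 3))  # 0.154 -- a positive result raises 1% to about 15%
```

The "useless mathematical game" turns out to constrain exactly how much a piece of evidence should move you.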

Is pure math a set of beliefs that should be evicted?

Is pure math a set of beliefs that should be evicted?

No, for reasons expressed above by VKS.

Note the word "pure". By definition, pure maths doesn't pay off in experience. If it did, it would be applied.

IMO the distinction between pure and applied math is artificial, or at least contingent; today's pure math may be tomorrow's applied math. This point was made in VKS's comment referenced above:

Yes, we might not immediately know where a given mathematical fact will come in handy when observing the world, but by their nature, mathematical facts tell us exactly when to expect them

The question is whether anyone should believe pure maths now. If you are allowed to believe things that might possibly pay off, then the criterion excludes nothing.

Metabeliefs! Applied math concepts that once seemed useless have, in the past, become useful. Therefore, the belief that "believing in applied math concepts pays rent in experience" itself pays rent in experience, so you should believe it.

If you believe in applied math, what are the grounds for excluding "pure" math? Most of the time "pure" just means that the mathematician makes no explicit reference to real-world applications and that the theorems are formulated in an abstract setting. Abstraction usually just boils down to figuring out exactly which hypotheses are necessary to get the conclusion you want and then dispensing with the rest.

Let's take the theory of probability as an example. There's nothing in the general theory that contradicts everyday, real-world probability applications. Most of the time the general theory does little other than make precise our intuitive notions and avoid the paradoxes that plague a naive approach. This is an artifact of our insistence on logic. A thorough, logical examination of just about any piece of mathematics will quickly lead into the domain of "pure" math.

I am not making the statement "exclude pure math", I am posing the question "if pure math stays, what else stays?"

Maybe post utopianism is an abstract idealisation that makes certain concepts precise.

There are beliefs that directly pay rent, and then there are beliefs that are logical consequences of rent-paying beliefs. The same basic principles that give you applied math will also lead to pure math. We can justify spending effort on pure math on the grounds that it may pay off in the future. However, our belief in pure math is tied to our belief in logic.

If you asked whether this can be applied to something like astrology, I'd ask whether astrology was a logical consequence of beliefs that do pay rent.

Unlike scientific knowledge or other beliefs about the material world, a mathematical fact (e.g. that z follows from X1, X2,..., Xn), once proven, is beyond dispute; there is no chance that such a fact will be contradicted by future observations. One is allowed to believe mathematical facts (once proven) because they are indisputably true; that these facts pay rent is supported by VKS's argument.

Truths of pure maths don't pay rent in terms of expected experience. EY has put forward a criterion of truth (correspondence) and a criterion of believability (expected experience), and pure maths fits neither. He didn't want that to happen, and the problem remains, here and elsewhere, of how to include abstract maths while still excluding the things you don't like. This is old ground, which the logical positivists went over in the mid 20th century.

I think I see where you are going with this.

My initial interpretation of EY's original post is that he was explicating a scientific standard of belief that would make sense in many situations, including in reasoning about the physical world (EY's initial examples were physical phenomena - trees falling, bowling balls dropping, phlogiston, etc.). I did not really think he was proposing the only standard of belief. This is why I was baffled by your insistence that unless a mathematical fact had made successful predictions about physical, observable phenomena, it should be evicted.

However, later in the original post EY used an example out of literary criticism, and here he appears to be applying the standard to mathematics. So, you may be on to something - perhaps EY did intend the standard to be universally applied.

It seems to me that applying EY's standard too broadly is tantamount to scientism (which I suspect is more-less the point you were making).

Truths of pure maths don't pay rent in terms of expected experience.

Here is a truth of pure mathematics: every positive integer can be expressed as a sum of four squares.

Expected experiences: there will be proofs of this theorem, proofs that I can follow through myself to check their correctness.

Et voilà!
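That theorem (Lagrange's four-square theorem) can even be spot-checked by brute force; a minimal sketch over small integers:

```python
from itertools import product

def four_square(n):
    """Return (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n, or None."""
    limit = int(n ** 0.5) + 1
    for quad in product(range(limit + 1), repeat=4):
        if sum(x * x for x in quad) == n:
            return quad
    return None

# The theorem predicts a representation exists for every positive integer:
assert all(four_square(n) is not None for n in range(1, 100))
print(four_square(7))  # (1, 1, 1, 2): 1 + 1 + 1 + 4 == 7
```

A failure of that assertion for any positive integer would refute the theorem, which is exactly the sense in which it constrains anticipation.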

Truth of astrology: Mars in conjunction with Jupiter is dangerous for Leos.

Expected experience: there will be astrology articles saying Leos are in danger when Mars is in conjunction with Jupiter.

Of course astrological claims pay rent. The problem with astrology is not that it's meaningless but that it's false, and the problem with astrologers is that they don't pay the epistemological rent.

Also, a proof is a different thing from a mathematician saying so. The rent that is being paid there is not merely that the theorem will be asserted but that there will be a proof.

Of course astrological claims pay rent.

Try telling Eliezer

The original post does not mention astrology. If you want to spy out some place where Eliezer has said that astrological claims are meaningless, go right ahead. I am not particularly concerned with whether he has or not.

Here and now, you are talking to me, and as I pointed out, the belief can pay rent, but astrologers are not making it do so. Those who have seriously looked for evidence, have, so I understand, generally found the beliefs false.

From that belief, the expected experience should be Leo people being less fortunate during those days.

That was the point. It's a cheat to expect astrology truths to produce experiences of reading written materials about astrology, so it's a cheat to expect pure maths truths ...

That was the point. It's a cheat to expect astrology truths to produce experiences of reading written materials about astrology, so it's a cheat to expect pure maths truths ...

Let me complete the ellipsis with what I actually said. A mathematical assertion leads me to expect a proof. Not merely experiences of reading written materials repeating the assertion.

And a proof still isn't an experience in the relevant sense. It's not like predicting an eclipse.

I loved this post, but I have to be a worthless pedant.

If you drop a ball off a 120-m tall building, you expect impact in t = sqrt(2H/g) ≈ 5 s. But that would be when the second hand is on the 1 numeral.

Heh. I got this right originally, then reread it just recently while working on the book, saw what I thought was an error (1 numeral? just one second? why?) and "fixed" it.

Eliezer, your post above strikes me, at least, as a restatement of verificationism: roughly, the view that the truth of a claim is the set of observations that it predicts. While this view enjoyed considerable popularity in the first part of the last century (and has notable antecedents going back into the early 18th century), it faces considerable conceptual hurdles, all of which have been extensively discussed in philosophical circles. One of the most prominent (and noteworthy in light of some of your other views) is the conflict between verificationism and scientific realism: that is, the presumption that science is more than mere data-predictive modeling, but the discovery of how the world really is.

It's amazing how many forms of irrationality are caused by failure to see the map-territory distinction, and by the resulting reification of categories (like 'sound') that exist in the mind: stupid arguments, phlogiston, the Mind Projection Fallacy, correspondence bias, and probably also monotheism, substance dualism, the illusion of the self, the use of the correspondence theory of truth in moral questions... how many more?

I think you're being too hard on the English professor, though. I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them. But I've never experienced a college English class; perhaps my innocent fantasies will be shaken then.

Michael V, you could say that mathematical propositions are really predictions about the behavior of physical systems like adding machines and mathematicians. I don't find that view very satisfying, because math seems to so fundamentally underlie everything else - mathematical truths can't be changed by changing anything physical, for instance - but it's one way to make math compatible with anticipation.

I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them

I think Eliezer's point was about the student. "Wulky Wilkinsen is a 'post-utopian'" could be meaningful, if you know what a post-utopian is and is not (I don't, and don't care). The student who learns just the statement, however, has formed a floating belief.

We might even initially use propositional beliefs as indicators of meaningful beliefs about the world. But if we then discuss these highly compressed beliefs without referencing their meaning, we often feel like we are reasoning when really we have ceased to speak about the world. That is, grounded beliefs can become "floaty" and spawn further "floaty" beliefs.

In my sociology class, we talk about how "Man in his natural state has liberty because everyone is equal". "Natural state", "liberty", and "equal" could conceivably be linked to descriptions of social interaction or something. However, class after class we refrain from talking about specific behaviors. Concepts float away from their referents without much resistance - it's all the same to the student, who only needs to make a few unremarkable remarks to get his B+ for class participation. Compare:

"Man in his natural state has liberty because everyone is equal"

"Man in his natural state is equal because everyone has liberty"

"When everyone has liberty and is equal, man is in his natural state"

These statements should express very different beliefs about the world, but to the student they sound equally clever coming out of the professor's mouth.

(Edit for minor grammar and formatting)

Rooney, as discussed in The Simple Truth I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.
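The point about Bayesian confirmation can be made concrete; a minimal sketch with hypothetical numbers, showing that evidence equally likely under a hypothesis and its negation moves the posterior nowhere:

```python
def update(prior, p_e_if_true, p_e_if_false):
    """Posterior P(H | E) by Bayes' rule."""
    p_e = p_e_if_true * prior + p_e_if_false * (1 - prior)
    return p_e_if_true * prior / p_e

# A floating belief: every observation is equally likely whether the belief
# is true or false (likelihood ratio 1), so no evidence ever moves it.
print(round(update(0.3, 0.5, 0.5), 3))  # 0.3

# A belief that pays rent: the observation is far more likely if it is true.
print(round(update(0.3, 0.9, 0.1), 3))  # 0.794
```

A belief with no empirical consequences has likelihood ratio 1 for every possible observation, so it can never receive confirmation.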

I, too, am nervous about having anticipated experience as the only criterion for truth and meaning. It seems to me that a statement can get its meaning either from the class of prior actions which make it true or from the class of future observations which its truth makes inevitable. We can't do quantum mechanics with kets, but no bras. We can't do Gentzen natural deduction with rules of elimination, but no rules of introduction. We can't do Bayesian updating with observations, but no priors. And I claim that you can't have a theory of meaning which deals only with consequences of statements being true but not with what actions put the universe into a state in which the statement becomes true.

This position of mine comes from my interpretation of the dissertation of Noam Zeilberger of CMU (2005, I think). Zeilberger's main concern lies in Logic and Computer Science, but along the way he discusses theories of truth implicit in the work of Martin-Löf and Dummett.

I, too, am nervous about having anticipated experience as the only criterion for truth and meaning. It seems to me that a statement can get its meaning either from the class of prior actions which make it true or from the class of future observations which its truth makes inevitable.

That seems obviously correct. However, unless you pursue knowledge for its own sake, you should probably not be overly concerned with preserving past truths - unless they are going to impact on future decisions.

Of course, the decisions of a future superintelligence might depend on all kinds of historical minutae that we don't regard as important. So maybe we should preserve those truths we regard as insignificant to us for it. However, today, probably relatively few are enslaved to future superintelligences - and even then, it isn't clear that this is what they would want us to do.

An explicit belief that you would not allow yourself to hold under these conditions would be that the tree which falls in the forest makes a sound - because no one heard it, and because we can't sense it afterwards, whether it made a sound or not has no empirical consequence.

Every time I have seen this philosophical question posed on lesswrong, the two sophists that were arguing about it were in agreement that a sound would be produced (under the physical definition of the word), so I'd be really surprised if you could let go of that belief.

Hm, yeah. The trouble is how the doctrine handles deductive logic - for example, the belief that a falling tree makes vibrations in the air when the laws of physics say so is really a direct consequence of part of physics. The correct answer definitely appears to be that you can apply logic, and so the doctrine should be not to believe in something when there is no Bayesian evidence that differentiates it from some alternative.

While I fully agree with the principle of the article, something stuck out to me about your comment:

In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

What I noticed was that you were basically defining a universal prior for beliefs, as much more likely false than true. From what I've read about Bayesian analysis, a universal prior is nearly undefinable, so after thinking about it a while, I came up with this basic counterargument:

You say that true beliefs are vastly outnumbered by false beliefs, but I say, how could you know of the existence of all these false beliefs, unless each one had a converse, a true belief opposing it that you first had some evidence for? For otherwise, you wouldn't know whether it was true or false.

You may then say that most true beliefs don't just have a converse. They also have many related false beliefs opposing them. But I would say, those are merely the converses that spring from the connections of that true belief with its many related true beliefs.

By this, I hope I've offered evidence that a fifty-fifty universal T/F prior is at least as likely as one considering most unconsidered ideas to be false. (And I would describe my further thoughts if I thought they would be useful here, but, silly me, I'm replying to a post from almost 8 years ago.)

If you have an arbitrary proposition -- a random sequence of symbols constrained only by the grammar of whatever language you're using -- then perhaps it's about equally likely to be true or false, since for each proposition p there's a corresponding proposition not p of similar complexity.

But the "beliefs" people are mostly interested in are things like these:

  • There is exactly one god, who created the universe and watches over us; he likes forgiveness, incense-burning, and choral music, and hates murder, atheism and same-sex marriage.
  • Two nearby large objects, whatever they are, will exert an attractive force on one another proportional to the mass of each and inversely proportional to the square of the distance between them.

and the negations of these are much less interesting because they say so much less:

  • Either there is no god or there are multiple gods, or else there is one god but it either didn't create the universe or doesn't watch over us -- or else there is one god, who created the universe and watches over us, but its preferences are not exactly the ones stated above.
  • If you have two nearby objects, whatever force there may be between them is not perfectly accurately described by saying it's proportional to their masses, inversely proportional to the square of the distance, and unaffected by exactly what they're made of.

So: yeah, sure, there are ways to pick a "random" belief and be pretty sure it's correct (just say "it isn't the case that" followed by something very specific) but if what you're picking are things like scientific theories or religious doctrines or political parties then I think it's reasonable to say that the great majority of possible beliefs are wrong, because the only beliefs we're actually interested in are the quite specific ones.