This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This week we discuss the sequence Fake Beliefs, which introduces the concept of belief in belief and demonstrates the phenomenon in a number of contexts, most notably religion. This sequence also foreshadows the mind-killing effects of tribalism and politics, introducing some of the language (e.g. the Greens vs. the Blues) which will be used later.

This post summarizes each article of the sequence, linking to the original LessWrong posting where available, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

Reading: Sequence B: Fake Beliefs (p43-77)


B. Fake Beliefs

11. Making beliefs pay rent (in anticipated experiences). Belief networks which have no connection to anticipated experience are called "floating" beliefs. Floating beliefs provide no benefit, since they do not constrain predictions in any way. Ask of a belief what you expect to see if it is true, or better yet what you expect not to see: what evidence would falsify it. Every belief should flow into a specific anticipated experience, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it. (p45-48)

12. A fable of science and politics. Cautions, through a narrative story, against the dangers that come from emotional attachment to beliefs. Introduces the Greens vs. the Blues, a fictional conflict illustrating the biases which emerge from the tribalism of group politics. (p49-53)

13. Belief in belief. Through the story of someone who claims a dragon lives in their garage, an invisible, inaudible, impermeable dragon which defies all attempts at detection, we are introduced to the concept of belief in belief. The dragon claimant believes that there is a fire-breathing flying animal in his garage, but simultaneously expects to make no observations that would confirm that belief. The belief in belief turns into a form of mental jujutsu in which mental models are transfigured in the face of experiment so as to predict whatever would be expected if the belief were not, in fact, true. (p54-58)

14. Bayesian judo. A humorous story illustrating the inconsistency of belief in belief, and the mental jujutsu required to maintain such beliefs. (p59-60)

15. Pretending to be wise. There's a difference between: (1) passing neutral judgment; (2) declining to invest marginal resources in investigating the sides of a debate; and (3) pretending that either of the above is a mark of deep wisdom, maturity, and a superior vantage point. Propounding neutrality is just as attackable as propounding any particular side. (p61-64)

16. Religion's claim to be non-disprovable. It is only a recent development in Western thought that religion came to be regarded as something which cannot be proven or disproven. Many examples are provided of falsifiable beliefs which were once the domain of religion. (p65-68)

17. Professing and cheering. Much of modern religion can be thought of as communal profession of belief – actions and words which signal your belief to others. (p69-71)

18. Belief as attire. It is very easy for a human being to genuinely, passionately, gut-level belong to a group. Identifying with a tribe is a very strong emotional force. And once you get people to identify with a tribe, the beliefs which are attire of that tribe will be spoken with the full passion of belonging to that tribe. (p72-73)

19. Applause lights. Sometimes statements are made in the form of proposals which themselves present no meaningful suggestion, e.g. "We need to balance the risks and opportunities of AI." Such a statement is not so much a proposition as the equivalent of the "Applause" light that tells a studio audience when to clap. Most applause lights can be detected by a simple reversal test: "We shouldn't balance the risks and opportunities of AI." Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information. (p74-77)

 


This has been a collection of notes on the assigned sequence for this week. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Sequence C: Noticing Confusion (p79-114). The discussion will go live on Wednesday, 20 May 2015 at or around 6pm PDT (hopefully), right here on the discussion forum of LessWrong.

34 comments

I think it would be really useful for everyone to give as many concrete, non-religious examples of belief in belief as possible (religion is the example case, and is too easy).

The one I read somewhere on LW was about someone who believes they believe they are good at chess. They're reluctant to actually play a game, because on some level they anticipate that they might lose, but they'll tell you they're very good.

Would people offer more examples, so that this can become a real, practical tool?

I think the following is a really common example, especially among LWers, and among intelligent perfectionists in particular:

Sometimes when I study a textbook on a technical topic, I skip the exercises even though I should do them. I believe that I believe that I understand enough to skip the exercises, but when I really just believe that I understand, then I do the exercises, because doing them is trivial, or even fun. I'm really anticipating as though I might not understand the topic yet, and if I don't, then trying to do the exercises will confirm that I don't. This threatens me because if I don't understand, then I'll have to re-read, and the fact that I might have to read something more than once to understand it would reflect poorly on my intelligence, and my intelligence is a great source of self-esteem.

You might even say that there are two instances of belief in belief in that example. The first has to do with understanding the particular material. The second has to do with my innate intelligence. I believe that I believe that I'm genius-level intelligent, only having to read things once, but I'm anticipating as though I'm not genius-level intelligent by avoiding tests that might confirm that I'm not.

A non-religion related example that I think Eliezer also talked about is "the power of positive thinking". Suppose someone hears the claim "If you believe you will succeed, then you will." and believes it. However, this person is unable to convince himself that he can succeed at his goals. He believes that believing in his own ability is virtuous (belief in belief), but he doesn't actually hold the belief.

Also, rationalism impresses the ladies.

I think maybe you should remove this line from the summary of Bayesian Judo. I see the function of any humor in R:AZ as that of a purported hook to keep people reading, but that's not the function of these summaries, which is to summarize. I think the line has no benefit while carrying a non-negligible risk of further propagating the LessWrong-as-heterosexual-sausage-fest meme, which is particularly risky in these threads because there may be non-regulars that have started visiting for the reading group, and those non-regulars are also more likely to come from clusters that tend to find comments like that offensive as opposed to a random sample.

[anonymous]

It was exactly because I found that entire article unsavoury that I decided to be snide. I was in fact mildly offended by the implication when I read it.

I've actually always been on the fence about whether Bayesian Judo is a useful concrete example of Belief in Belief that makes the concept more available and easily understood, or masturbatory blogging. Others thought that it was less-than-honest. Likewise, I'm unsure about whether or not it belongs in R:AZ. I was going to say that in the above comment but decided it was outside the scope of my remarks.

Even if I'm more inclined to agree with you than not, maximizing the probability that people stay in the reading group by making your own writing unambiguously politically correct is probably higher impact than acting on impulses to be snide; otherwise, in one sense, you're just repeating Eliezer's mistake. There's also no context to make it clear that you find the implication offensive unless someone reads this comment thread, which won't always be the case.

Also, I'm not against the discussion of opposing viewpoints, as that's part of the purpose of the group, but I do think that they should be made explicit and in good faith, so as to maximize impact. For example, you could have omitted that line in the OP and written a comment down here to the effect of:

I found Bayesian Judo unsavoury, and was mildly offended by the implication that what is construed as rational behavior in the article is a means for men to attract women.

Ostensibly, the people running the show are rationalists. If you actually make the case that Bayesian Judo doesn't belong in there, then they might remove it in later electronic versions and print versions. And if this is in line with your values, then you should, by definition, want this. Furthermore, explication and good faith are instrumental in that regard.

Yes, 'Bayesian Judo' has gotten sufficiently negative reactions that we'll look into how difficult it would be to tone it down or delete it in future editions.

General feedback on things you'd like to see changed in the book -- especially where it doesn't demand large-scale rewriting -- is welcome on the reading group threads. If you don't want to voice your criticisms publicly, you can e-mail them to errata@intelligence.org.

[anonymous]

The summary has been edited with the offending line removed. The rest of this comment is not an excuse so much as self-analysis to figure out how I made that mistake:

Addressing the issue in the comments is of course the obvious thing to do, if I had thought of doing so. At the time I was feeling an editorial obligation to say more than a single sentence about each essay, especially if that first sentence is rather content-free, e.g. "A humorous story." I had faced the same problem with "A fable of science and politics." However, in this case there was very little to work with even after a good-faith effort to pull out some well-intentioned meanings. The post didn't seem to serve any purpose except, at best, as you put it, "masturbatory blogging," and at worst advocating for or advertising techniques of pickup artistry [sic]. So I decided to do my job of accurately summarizing the content of the article without editorializing, knowing that it would (and should) cause debate. But somehow I failed to realize that I could have left out the offending line and sparked the discussion myself. Availability heuristic :\

This seems an uncharitable reading. What's the issue with just having a fun example post?

[anonymous]

What is it an example of?

It is a real-life example of what happens when you point out to someone that their beliefs and anticipations are out of sync. Real-life examples of discussions with people, and of seeing how they think, are useful in a setting that discusses people and how they think.

Let me be a bit trollish so as to establish an actual counter-position (though I actually believe everything I say):

This is where the sequences first turn dumb.

For low-hanging fruit, we first see modern mythology misinterpreted as actual history. In reality, phlogiston was a useful theory at the time, which was rationally arrived at and rationally discarded when evidence turned against it (with some attempts at "adding epicycles", but no more than other scientific theories). And the NOMA thing was made up by Gould when he misunderstood actual religious claims, i.e. it is mostly a straw man.

On a higher level of abstraction, the whole approach of this sequence is discussing other people's alleged rationalizations. This is almost always a terrible idea. For comparison, other examples would include Marxist talk about false consciousness, Christian allegations that atheists are angry at God or want a license to sin, or the Randian portrayal of irrational, death-loving leeches. [Aware of the meta-irony in the following:] Arguments of this type almost always serve to feed the ingroup's sense of security, safely portraying the scariest kinds of irrationality as a purely outgroup thing. And that is the simplest sufficient causal explanation of this entire sequence.

[anonymous]

An area where my actions don't seem to correspond to my stated beliefs? How about luck? I've come to the embarrassing conclusion that although I believe that I disbelieve in luck, I might actually believe in it.

All my life people have constantly told me how lucky I am, and my family jokes that I have two guardian angels.

No matter how many risks I take, bad things just don't seem to happen to me like they do to other people. People constantly warn me not to do things; I acknowledge their warnings have merit, but I do them anyway. I seem to believe that I'm just "lucky" in certain ways.

1) I lived in Guatemala City for 15 months, a place with a high crime rate. The vast majority of my American and Guatemalan friends were robbed on the street at least once during my time there, despite being far, far more cautious than I was. I was never robbed (or murdered, but that fits the probability, since there were only 3 murders among the thousands of passersby on the short street where I lived). Alone, I regularly took the "dangerous" red buses, walked through "dangerous" neighborhoods and alleys even at night (as a white female), rode in "dangerous" white taxis, and shopped at the "dangerous" markets. I left cash unattended in plain sight. I walked around with my iPod visible. Nothing ever happened.

2) I tend to forget it when I drive alone, but I'm an awful driver. I'm impulsive and daily make very obviously illegal u-turns or similar bad driving decisions just to save like 20 seconds. And even when I'm in no hurry, my default highway speed is 18 mph over the limit. I've never gotten a ticket, and I drive quite a lot.

3) My family has an old board game with a "gamble" card that can swing the game 20 points in either direction. When 2 people gamble, they each roll the dice, and the higher number wins. We've played this game dozens and dozens of times over the years, and I have won every gamble. If I get the gamble card, I'll still use it, even if I'm winning and losing the gamble could bump someone else ahead of me. When someone else gets the gamble card, I'll try to cajole them into gambling with me, and in hopes of breaking my streak, they will. But I never lose, and what's worse, I seem to really believe I have a lower than 50-50 chance of losing this one gamble per game.

4) I treat my internal clock as if it's infallible. I might be lying in bed, realize I forgot to set an alarm for the next day (even if I'm about to get only a few hours of sleep) and still not bother to get up to set it, somehow just believing I'll be "lucky" and wake up just in time. And then I always do. Also, if I have to leave the house at 1:40, I might lose track of time and go several hours without bothering to check the time, and then look with shock to see it's 1:35-1:40... so far, I've never happened to look too late, and I've never overslept and missed class or any event.

Most things other people call me lucky for, I think are based on rational risk-reward calculations. For example, I rarely go to the bank. When paid in cash, I'd let it pile up in my dresser for months before bothering to deposit it. When coming back from Guatemala with the several thousand dollars in cash I had saved from the last few months of working, I put it in my baggage rather than paying $25 for a wire transfer.

Stuff like visiting the bank infrequently makes sense to me, but yikes, what's up with the other examples? The risks seem high and the rewards seem so negligible that I would never recommend others make the same decisions, yet I seem to figure I might as well continue to make them until something bad actually happens to me.

So this is a weird example, but it's all I could come up with. Either I'm really hungry for something interesting to happen to me, or my actions don't correspond to my professed disbelief in luck.

As a rationality exercise, maybe you could play a special version of the game where someone takes the gamble against you over and over again until you lose, so that you can really feel on a gut level that there is no Fortune Fairy protecting you from bad luck. (And if you're really patient, maybe by playing a few times a day for a long time, then you could keep track of your win/loss ratio and watch as it tends toward 1:1.) You might also be selectively remembering times that you were fortunate because they're more available and they confirm your Fortune Fairy hypothesis.
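A minimal sketch of the arithmetic behind that suggestion, assuming each gamble is a fair dice-off with ties re-rolled (which is how I read the rules described above); the function and trial counts are made up for illustration:

```python
import random

def gamble():
    """One 'gamble' card: both players roll a die; the higher roll wins.
    Ties are re-rolled, so each side wins with probability exactly 1/2."""
    while True:
        a, b = random.randint(1, 6), random.randint(1, 6)
        if a != b:
            return a > b

# Chance of a streak: winning n independent fair gambles in a row is 0.5 ** n.
for n in (10, 20, 30):
    print(f"P(win {n} straight) = {0.5 ** n:.2e}")

# Long-run check: the win fraction drifts toward 0.5, i.e. a 1:1 win/loss ratio.
trials = 100_000
wins = sum(gamble() for _ in range(trials))
print(f"Win fraction over {trials:,} gambles: {wins / trials:.3f}")
```

Even 20 straight wins by chance alone works out to roughly one in a million, which is rare for any one person but less surprising across all the families playing all the games with cards like this.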

[anonymous]

As a rationality exercise...

You know, I think I might actually describe all of my "risk-taking" behavior as a giant rationality exercise, trying to test my so-called luck.

As for your special version idea, it wouldn't be the same game. It's not like I act like I'll win every luck-related game.

My family would confirm that none of them have ever won a gamble against me in dozens of games. If I lost, it would be pretty memorable, so I don't think it's confirmation bias as much as a case of things with low probability actually occurring sometimes. And despite this game not being my favorite, I always wanted to play it, "just for the gamble" because my win streak was so bizarre to me.

Anyway, this is silly. It's not like I actually admit I think I have a > 50-50 chance of winning. My insistence on gambling every time was probably just out of having nothing to lose in most cases, and an even longer and more fun win streak to gain.

So now, I'd no longer say my actions don't correspond to my professed disbelief. I'd say my actions are attempts to prove my professed disbelief correct.

Most things other people call me lucky for, I think are based on rational risk-reward calculations. For example, I rarely go to the bank. When paid in cash, I'd let it pile up in my dresser for months before bothering to deposit it. When coming back from Guatemala with the several thousand dollars in cash I had saved from the last few months of working, I put it in my baggage rather than paying $25 for a wire transfer.

That seems irrational, because it has a chance of leading to civil forfeiture if you're in the US. (The federal government can still confiscate money directly, even though the practice has recently been cut back for the states.)

Also, you need to count the interest that the money could have earned while in the bank against the cost of the wire transfer, as well as the risk of being burglarized and having your money stolen.

[anonymous]

Civil forfeiture? Yikes, I never knew this could happen. Maybe if I had heard of this happening, it would have influenced my decision.

This wasn't a ton of money, just my checking account balance of $8,000, so it wasn't earning anything in the bank. The only cost I really considered was the risk of being burglarized, which I figured was smaller than 1/320th.
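(For concreteness, the arithmetic behind that figure as I read it: the $25 wire fee breaks even with carrying the cash exactly when the probability of losing the full $8,000 is 25 / 8,000 = 1/320, or roughly 0.3%, ignoring foregone interest and the possibility of partial losses.)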

The only cost I really considered was the risk of being burglarized

The risk of civil forfeiture is the risk of being robbed.

[anonymous]

You could have invested it, or had a CD or high-yield savings account, which would have yielded at least a few hundred dollars.

[anonymous]

In just a few months? Yikes, I really should pay more attention to money matters.

I did invest the money within one week of getting back to the US, but honestly, I hadn't really been expecting much money to have piled up at all. I was in Guatemala mostly for a long, fun vacation, and my small part-time teacher's salary was just a bonus. My bank receipts never listed my current balance, and I was kind of curious about how much I had, but not curious enough to stand in the 20 minute line at the other half of the bank where people inquired about such things (Guatemala isn't known for efficiency). When I finally withdrew, I realized I had spent literally none of it.

Anyway, you've just reminded me that I should get my past four months of untouched nanny salary invested as soon as possible, so thanks!

[anonymous]

I was thinking on the order of a year. Here you can get 4% savings rates with a little bit of trickery, which would be $75 after three months, $320 after a full year.

Even if it's just $50, those $50 add up over time. And if you're disciplined to always add more than you take out from savings, the effects of compounding are huge. You're probably leaving a lot more money on the table than you realize.
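A rough sketch of the compounding point, assuming a flat 4% annual rate and a made-up $200 monthly contribution on top of the $8,000 mentioned above (both numbers are illustrative assumptions, not advice):

```python
# Sketch of compound growth with regular contributions (illustrative numbers only).
balance = 8_000        # starting balance, from the example above
rate = 0.04            # assumed flat annual interest rate
monthly_add = 200      # assumed monthly contribution (made up for illustration)

for year in range(1, 11):
    balance = balance * (1 + rate) + monthly_add * 12
    print(f"End of year {year:2d}: ${balance:,.0f}")
```

Even in year one the interest alone is more than ten times the $25 wire fee, and the gap widens every year the balance keeps compounding.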

[anonymous]

Honestly, kid, I don't see anything in this description which would count as evidence against the hypothesis that this is a simple example of confirmation bias.

There's a saying which describes people like you: "young and invincible." There has been no need to coin the corresponding "old and invincible" saying which you would expect to exist. Ponder that.

[anonymous]

Honestly, adult, I'm not admitting to actually believing in luck. Just saying that my actions make it seem like I do.

And of course there is no evidence against the confirmation bias hypothesis in my description... If normal unlucky stuff happened to me, and I could recognize it or other people could point it out to me, there would have been no description in the first place.

In the previous reading group you mentioned that you hoped to provide opposing viewpoints whenever possible. That was actually the part I was most looking forward to. Does anybody know of opposing viewpoints to anything in this sequence?

[anonymous]

Much of the sequence focused on Daniel Dennett's "belief in belief" criticism of religion. I presume there are some rebuttals by theologians in the literature, and I was hoping to find a well-written critique of Dennett's book. I just didn't have time to review the literature before the self-imposed Wednesday deadline, and planned on adding it in later. If someone else knows a good opposing viewpoint, I'll incorporate it into the OP.

Could you please continuously update the announcement post to have a link to all the reading group pages you've done so far? Thanks.

[anonymous]

What "beliefs in belief" have you observed in your own thinking, or those around you?

Romance and interpersonal relationships seem to be areas where belief in belief and similar phenomena sometimes occur. For example, sometimes people are able to correctly predict a lot of the other person's reactions, yet, at the same time, they seem reluctant to update their beliefs about the "big picture". Or a person might actively avoid finding out anything about the other person's past, because they fear that it might be upsetting to know it, and therefore they prefer maintaining an illusion. Of course, these are not very good examples of belief in belief, because the beliefs in question are rarely clearly articulated, so it is not always clear what exactly the content of a belief is. In addition, even the believer would often admit their uncertainty, so it is more often about avoiding negation of the belief than affirming it. But this phenomenon seems closely related and worth mentioning.

Perhaps some occurrences are partly due to wishful thinking and partly because they are closely related to the concept of trust. On the other hand, trust seems genuinely important in these areas, which leads to an interesting situation.

Given that beliefs are established in our brains, I'm less and less sure whether "If a belief turns deadbeat, evict it" is meaningful advice.

Most people can't just make a decision to change one of their beliefs. People can quite easily switch the answer they give to a question but they can't as easily change the belief on a deeper level.

Akrasia happens when your rational understanding of a situation differs from your emotional one. When a person isn't willing to admit to himself that he believes A, it's very hard to debug and remove A from the brain.

[anonymous]

How can the content of this sequence be made practical? Or, how do you plan to apply it in your day to day life?

My day to day life is populated with many who do not understand the lessons in this section. Interaction with these people is paramount in achieving my own goals; I am facing a situation in which the rational choice is to communicate irrationally. Specifically, my colleagues and other associates seem to prefer "applause lights" and statements which offer no information. Therefore, attaining my personal, rationally selected goals might mean professing irrational beliefs. I don't think this is an explicit paradox, but it is an interesting point. There is a middle ground between "other-optimizing" (pointing out these applause lights for what they are) and changing my actual beliefs to those communicated by "applause lights", but I do not believe it is tenable, and it may represent a conflict of goals (personal success in my field vs. spreading rational thought). Perhaps it is a microcosm of the precarious balance between self-optimization and world-optimization.

[anonymous]

I make this sequence practical by making and addressing claims at Less Wrong, and trying to avoid 'the Less Wrong community.' Claims I can appraise for truth, beauty, or strength. 'Community' may have those things, but it is not my interest. For example, demographics at Less Wrong are not my interest. Claims of members, yes; the male/female ratio, not as much. Another example: the effectiveness of altruism, perhaps; altruism itself, less so. I am aided by downvotes that point out errors, and my time is wasted by downvotes that are community-based ('that's not how we do it here').

It could be that this post is about community and thus self-contradicts. Ah well.

[anonymous]

Can you think of a way to test for yourself the information presented in this sequence?