During the sessions at the 2011 rationality minicamp, we learned that some of our biases can be used constructively, rather than just tolerated and avoided.

For example, in an excellent article discussing intuitions and the way they are formed, psychologist Robin Hogarth recommends that "if people want to shape their intuitions, [they should] make conscious efforts to inhabit environments that expose them to the experiences and information that form the intuitions that they want."

Another example: Carl Shulman remarked that, due to the availability heuristic, we anticipate car crashes at a frequency determined by how many people we know of or have heard about who have been in one. So if you don't fear car crashes but want to acquire a more accurate level of concern about driving, you could seek out news reports or footage of car crashes. Video footage may work best, because experiential data unconsciously inform our intuitions more effectively than, say, written data.

This fact may lie behind many effective strategies for getting your brain to do what you want it to do:

  • Establishing 'pull' motivation works best with strong visualization, and is reinforced upon experiencing the completion of the task.
  • Rejection therapy, which many of us minicampers found effective: ask people for things they will probably deny you, which trains your body to realize that nothing bad happens when you are rejected. Over time, this improves social confidence.
  • As looking-glass self theory states,1 we are shaped by how others see us. This is largely due to the experience of having people react to us in certain ways.

In The Mystery of the Haunted Rationalist we see someone whose stated beliefs don't match her anticipations. Now we can actually use the brain's machinery to get it to do what we want it to: alieve that ghosts aren't real or dangerous. One method would be for our ghost-stricken friend to get people to tell her detailed stories about pleasant nights they spent in haunted houses (complete with spooky details) where nothing bad happened. Alternatively, she could read some books or watch some videos with similar content. Best of all would be if she spent a month living in a 'haunted' house, perhaps after doing some of the other things to soothe her nerves. Many will attest that eventually one 'gets used to' the scary noises and frightening atmosphere of an old house, and ceases to be scared when sleeping in similar houses.

I attribute the effectiveness of these tactics mostly to successful persuasion of the non-conscious brain using experiential data.

So, it seems we have a (potentially very powerful) new technique to add to our rationalist arsenal. To summarize:

  1. Find something you want to alieve.
  2. Determine what experiences that alief should cause you to anticipate.
  3. Have those experiences, by proxy or artificially if necessary.
  4. Test whether you now anticipate what you want to.
  5. If the test reveals progress, but not enough, repeat.

Examples:

  • Want to alieve that boxing is dangerous2? Watch some footage of boxers being punched painfully in the face, and ask a good boxer to win a fight against you in a painful but non-damaging manner. Now are you reluctant to box someone you have a good chance of beating?
  • Want to alieve that driving is dangerous? Watch footage of lots of car crashes, see Red Asphalt, and take a class from professional stunt drivers on how to crash safely. Now are you more reluctant to drive?
  • Want to alieve that flying is not very dangerous? Get a pilot's view of a flight, and pay attention to how boring it is. Sit next to a pilot while they undergo a very realistic flight simulation that covers many possible accidents, and watch them successfully navigate each scenario. Now are you more willing to fly?
  • Want to alieve snakes are generally not dangerous? Watch videos of safe snake interactions. Watch a pet store employee deal with a snake safely. Play with a snake under supervision without incident. Now do you exhibit less fear when encountering a snake?
  • Want to alieve you are part of the Less Wrong community? Interact with other community members as though you are one, attend meetups, make friends in the community. Now do you empathize more strongly with contributors on Less Wrong than with those elsewhere on the internet?

It can be annoying when our unconsciously moderated aliefs don't match our rationality-influenced beliefs, but luckily our aliefs can be trained.


1 Thanks to Hugh Ristik for talking about this at minicamp.

2 Credit for this example goes to Brandon Reinhart.

Special thanks to Luke for all the help.

27 comments

The other side of this is to try to be aware if people are trying to load up your mind with fake experiences to influence your intuition.

This can actually happen unintentionally as well. One of the things that might have caused the original haunted rationalist problem could have been watching/reading too much horror fiction: if most experiences you've seen regarding an old house end up with people tortured and dead, then even if you know they were all fictitious, you will still anticipate, to some degree, bad things happening in old houses. This also makes me wary that my anticipations regarding the future are likely highly influenced by all the science fiction I read, so I know to watch my aliefs in that regard very, very closely.

I'm not sure my aliefs have been affected that strongly, but I've gotten annoyed by stories which consist of a cool idea followed by disaster. It's lazy plotting.

I tried a similar technique a couple of years ago. I had an irrational fear of drains (the ones at the bottom of a pool) after seeing on the news that a woman drowned when her hair was caught in one. After panicking in a public pool once when I realized I was swimming above one, I decided to stand on top of the drain (shallow end of the pool) at my house for a while. It took a while, but it worked over time. The fear diminished a good amount. I still feel a compulsion to avoid them, but don't panic or feel too anxious when around them.

I'm glad this was posted here. This is a good habit to pick up.

Want to alieve snakes are generally not dangerous?

No! Those things can kill you! Perhaps I am safe here in Berkeley for the next month or so, but back home I expect most of the snakes I encounter to be capable of killing me if they bite me. They aren't particularly likely to bite me unless I touch them, corner them, or stand on them - that's where the fear comes in handy. It makes me feel uncomfortable when walking through long grass, particularly when wearing light footwear. That way I at least pay attention to movements and sounds and so give the snake a chance to move out of the way before I step on him.

This example was intended as a possible alief you might want to hold, whether it is accurate to your beliefs or not. There are some people who can reasonably expect to never encounter a dangerous snake in the wild who are nonetheless very afraid of them (and all other snakes as well); while respect and fear for dangerous and potentially poisonous animals is worthwhile for some, for others it can be a handicap.

I should also mention (though I took this part out of the article) that there are some situations where one might want to alieve things entirely counter to one's beliefs. The technique allows for cultivation of these types of aliefs as well, and not fearing snakes might be one of them. Other examples could be the alief that cake is not delicious, or that drinking/being drunk is boring and often painful. Note that I do not personally advocate lying to oneself in an overly convincing manner, as that way lies darkness.

Establishing 'pull' motivation works best with strong visualization, and is reinforced upon experiencing the completion of the task.

To be clear, are you making a general statement here, or describing experimental results from the conference? And if this is from experimental results, could you elaborate on the specific evidence that led to these conclusions?

That is, what specifically do you mean by "strong visualization" and "reinforced", not to mention "experiencing the completion"? Thanks!

The statement about strong visualization (essentially simulating experiences as closely as possible) is taken from the video and from personal (and anecdotal) experience with the method. The reinforcement from actual completion refers to how, once you've completed the task you were motivating yourself to do, you should get the feeling of reward you were imagining to motivate yourself. Actually experiencing the reward makes it easier to simulate if you need to become motivated again later. Additionally, the mental connection you'll make between completing the task and the reward makes it less likely that you'll need to repeat the exercise for that task, unless it has an extremely high activation cost: the next time you go to do the task, one of the first things that comes to mind will likely be the reward you felt the last time(s) you performed it.

Perhaps I wasn't clear; I wasn't asking for your conclusions (which were already stated) or your hypothesized mechanisms for those conclusions, but rather, I was asking for evidence and definitions. Would you be willing to share the evidence that led you to formulate the above hypotheses?

I am particularly concerned because some of what you have said sounds like the sort of thing that one might anticipate about the process, but which is not actually the case at all. For example, I have seen no evidence of a reinforcement process such as you describe. (Quite the opposite in fact.) So, if you have actually measured or demonstrated such a reinforcement effect, I would be most curious to know how.

There are other things you're saying that also appear to me to be contrary to actual fact (as opposed to one's intuitive expectations that are easily confirmation-biased into appearing real), so I would really like to find out what specific evidence you have and what contrary explanations you've tested, because I don't wish the efficacy of the technique to be overstated. (Thereby presenting others with something to criticize, never mind that I wasn't the one who made the overstated claim(s).)

Thanks.

Okay, thanks for clarifying the question. I've essentially already stated all the "evidence" I'm using for the claim, it's almost entirely anecdotal, and there's certainly no actual studies that I've used to support this particular bullet point. So, there is a good chance I may have stated things in a way which seems overconfident, and I may in fact be overconfident regarding this particular claim, especially considering that I've not tested alternate explanations for the efficacy I've had. I'd be more than willing to have a detailed discussion regarding both of our experiences/intuitions with the method, but I feel as though this probably isn't the place (I've already messaged you), though I'd be happy to update the wording of the article afterwards if necessary.

Okay, thanks for clarifying the question. I've essentially already stated all the "evidence" I'm using for the claim, it's almost entirely anecdotal, and there's certainly no actual studies that I've used to support this particular bullet point.

I don't take issue with anecdotal evidence; it's the complete lack of any specifics whatsoever that's a problem. Even well-run studies are routinely misunderstood, misinterpreted and miscommunicated due to lack of relevant detail.

I'd be more than willing to have a detailed discussion regarding both of our experiences/intuitions with the method

I'm curious about the experiences that led you to the claims that you're making. I really don't want the intuitions or the reasoning behind your interpretations, because I don't want to contribute to erasing the information I really want from your brain. i.e., I'm trying to avoid witness tampering, although it may already be too late for that. ;-)

For the same reason, I'm not interested in a "discussion". I just want facts, or at least a reasonably-specific anecdote about them. ;-)

Anyway, if you'd be willing to share the specific experiences that led you to your conclusions -- and only the experiences, not the reasoning or conclusions -- please do so, whether publicly or privately.

Thanks.

Hmm. My brain seems to do something very similar automatically, and I can't think of any clear problems that I have of this type (at the moment; that doesn't necessarily mean there aren't any). There is the possibility that some other, less positive factor causes my abnormally high apparent alief-belief correlation, though. Still, figuring out what I did to acquire this habit might still be useful to others.

Do you read/watch a lot of fiction? I personally end up selecting for fiction which matches my beliefs somewhat closely, and in retrospect that has likely strongly reinforced the connection. This seems like a reasonable candidate for an automatic yet unnoticeable process with those results.

With certain kinds of beliefs, yes, but generally using fictional evidence even for something like this has disadvantages, as does limiting yourself to fiction that reaffirms your beliefs in general.

You mean alieve, not believe. This is a technique to alieve what you already believe.

It's difficult for my brain to parse a sentence with 'alieve'. I guess I've watched too many commercials, and my brain associates 'Aleve' with 'relieve', which has an approximately opposite meaning. I have to mentally substitute 'alieve' with something like 'actually believe' in order to comfortably read the sentence.

I cannot find alieve in any of the dictionaries I have checked. Is it a Lesswrongism? If so, I strongly recommend dropping it, as suggested by Jordan below, as English-speakers parse it as meaning the opposite of how you appear to be using it, and there is no reference for them to turn to.

EDIT: Ah, now it links to alief, which is better. I'm still leery of the word, though, since googling it produces nothing, Wiktionary produces nothing, Wikipedia links to a town in Texas, and it sounds like its opposite.

It might be more readable if 'alieve' were replaced with something like 'subconsciously believe' or 'have an emotional reaction that...'

It's a rather recent neologism (not from LW though). I'm not fanatically attached to the word, but it's pretty important to distinguish beliefs held by System 1 and System 2. Trying to change your beliefs (as analytic evidence processing with explicit correction) using the methods in the post would be awful doublethink. Trying to change your beliefs (as gut feelings you feel impulses to act on, or aliefs) to match the former kind of belief is just dandy.

I think the link to aliefs should go on the first mention. You might also want to remove the extra title at the top and eliminate the extra spacing between paragraphs. (I've had trouble with this; the post is not updated right away when you make a change to the source, and I think you have to wait a few minutes for it to change.)

I like this framing a lot.

As looking-glass self theory states, we are shaped by how others see us. This is largely due to the experience of having people react to us in certain ways.

Could you elaborate on how to use this in a positive way? Presumably, by getting people to act toward you in a certain way, you can motivate your behaviour to change, but it seems practically very difficult to do so in normal settings.

Edit: spelling & elaboration

You don't need to re-state the title in the body of the post.

Want to alieve that boxing is dangerous? Watch some footage of boxers being punched painfully in the face, and ask a good boxer to win a fight against you in a painful but non-damaging manner. Now are you reluctant to box someone you have a good chance of beating?

I don't avoid boxing because pain is unpleasant. I avoid boxing because it's not worth the brain damage.

Sure. But you can easily believe rapidly accelerated/rotated skull -> brain damage, and there are plenty of famously dead or brain-damaged boxers (or NFL players) from brain-rattling. The idea is to anticipate taking many blows if you ever do fight someone, even if you're better.

I misunderstood the word "alieve," parsing it as "relieve," and so the message I thought I was replying to was not the intended one.