by [anonymous]
8 min read · 24th Dec 2010 · 20 comments


 

Imagine you are a sprinter, and your one goal in life is to win the 100m sprint in the Olympics. Naturally, you watch the 100m sprint winners of the past in the hope that you can learn something from them, and it doesn't take you long to spot a pattern.

 

Every one of them can be seen wearing a gold medal around their neck. Not only is there a strong correlation; when you examine the rules of the Olympics, you find that 100% of winners must wear a gold medal at some point. There is no way that someone could win and never wear a gold medal. So, naturally, you go out and buy a gold medal from a shop, put it around your neck, and sit back, satisfied.

 

For another example, imagine that you are now in charge of running a large oil rig. Unfortunately, some of the drilling equipment is old and rusty, and every few hours a siren goes off alerting the divers that they need to go down again and repair the damage. This is clearly not an acceptable state of affairs, so you start looking for solutions.

 

You think back to a few months ago, before things got this bad, and you remember how the siren barely ever went off at all. In fact, from your knowledge of how the equipment works, you know that when there were no problems the siren couldn't go off. Clearly the solution to the problem is to unplug the siren.

 

(I would like to apologise in advance for my total ignorance of how oil rigs actually work; I just wanted an analogy.)

 

Both these stories demonstrate a mistake which I call 'Dressing Like a Winner' (DLAW). The general form of the error is: a person has the goal of X, observes that X reliably leads to Y, attempts to achieve Y directly, and then sits back, satisfied with their work. This mistake is so obviously wrong that it is pretty much non-existent in near mode, which is why the above stories seem utterly ridiculous. However, once we switch into the more abstract far mode, even the most ridiculous errors become dangerous. In the rest of this post I will point out three places where I think this error occurs.

 

Changing our minds

 

In a debate between two people, it is usually the case that whoever is right is unlikely to change their mind. This is not only an empirically observable correlation, it is also intuitively obvious: would you change your mind if you were right?

 

At this point, our fallacy steps in with a simple conclusion: "refusing to change your mind will make you right". As we all know, this could not be further from the truth; changing your mind is the only way to become right, or at any rate less wrong. I do not think this realization is unique to this community, but it is far from universal (and it is a lot harder to practice than to preach, suggesting it might still hold on in the subconscious).

 

At this point a lot of people will probably have noticed that what I am talking about bears a close resemblance to signalling, and some of you are probably thinking that that is all there is to it. While I will admit that DLAW and signalling are easy to confuse, I do think they are separate things, and that there is more than just ordinary signalling going on in the debate.

 

One piece of evidence for this is the fact that my unwillingness to change my mind extends even to opinions I have admitted to nobody. If I were only interested in signalling, surely I would want to change my mind in that case, since it would reduce the risk of being humiliated once I do state my opinion. Another reason to believe that DLAW exists is that not only do debaters rarely change their minds, those that do are often criticised, sometimes quite brutally, for 'flip-flopping', rather than being praised for becoming smarter and for demonstrating that their loyalty to truth is higher than their ego.

 

So I think DLAW is at work here, and since I have chosen a fairly uncontroversially bad thing to start off with, I hope you can now agree with me that it is at least slightly dangerous.

 

Consistency

 

It is an accepted fact that any map which completely fits the territory would be self-consistent. I have not seen many such maps, but I will agree with the argument that they must be consistent. What I disagree with is the claim that this means we should be focusing on making our maps internally consistent, and that once we have done this we can sit back because our work is done.

 

This idea is so widely accepted and so tempting, especially to those with a mathematical bent, that I believed it for years before noticing the fallacy that led to it. Most reasonably intelligent people have gotten over one half of the toxic meme, in that few of them believe consistency is good enough (with the one exception of ethics, where it still seems to apply in full force). However, as with the gold medal, not only is it a mistake to be satisfied with consistency, it is a waste of time to aim for it in the first place.

 

In Robin Hanson's article [Beware Consistency](http://www.overcomingbias.com/2010/11/beware-consistency.html) we see that the consistent subjects actually do worse than the inconsistent ones, because they are consistently impatient or consistently risk-averse. I think this problem is even more general than his article suggests, and represents a serious flaw in our whole epistemology, dating back to the Ancient Greek era.

 

Suppose that one day I notice an inconsistency in my own beliefs. Conventional wisdom would tell me that this is a serious problem, and that I should discard one of the beliefs as quickly as possible. All else being equal, the belief that gets discarded will probably be the one I am less attached to, which will probably be the one I acquired more recently, which is probably the one that is actually correct, since the other may well date back to long before I knew how to think critically about an idea.

 

Richard Dawkins gives a good example of this in his book 'The God Delusion'. Kurt Wise was a brilliant young geologist raised as a fundamentalist Christian. Realising the contradiction between his beliefs, he took a pair of scissors to the Bible and cut out every passage he would have to reject if he accepted the scientific world-view. After realising his Bible was left with so few pages that the poor book could barely hold itself together, he decided to abandon science entirely. Dawkins uses this to make an argument for why religion needs to be removed entirely, and I cannot necessarily say I disagree with him, but I think a second moral can be drawn from this story.

 

How much better off would Kurt have been if he had just shrugged his shoulders at the contradiction and continued to believe both? How much worse off would we be if Robert Aumann had abandoned the study of rationality when he noticed it contradicted Orthodox Judaism? It's easy to say that Kurt was right to abandon one belief and that he just abandoned the wrong one, but from inside Kurt's mind I'm not sure it was obvious which belief was right.

 

I think a better policy for dealing with contradictions is to put both beliefs 'on notice': be cautious before acting upon either of them, and wait for more evidence to decide between them. If nothing else, we should admit more than two possibilities: they could actually be compatible, they could both be wrong, or one or both of them could be badly confused.

 

To put this in one sentence: "don't strive for consistency, strive for accuracy and consistency will follow".

 

Mathematical arguments about rationality

 

In this community, I often see mathematical proofs that a perfect Bayesian would do something. These proofs are interesting from a mathematical perspective, but since I have never met a perfect Bayesian I am sceptical of their relevance to the real world (perhaps they are useful for AI; someone more experienced than me should either confirm or deny that).

 

The problem comes when we are told that since a perfect Bayesian would do X, we imperfect Bayesians should do X as well in order to better ourselves. A good example of this is Aumann's Agreement Theorem, which shows that never agreeing to disagree is a consequence of perfect rationality, being treated as an argument for not agreeing to disagree in our quest for better rationality. The fallacy is hopefully clear by now: we have been given no reason to believe that copying this particular by-product of success will bring us closer to our goal. Indeed, in our world of imperfect rationalists, some of whom are far more imperfect than others, an argument against disagreement seems like a very dangerous thing.
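To make that danger concrete, here is a toy simulation of my own (the numbers and assumptions are invented for illustration and come from neither Aumann's theorem nor anyone's actual argument). Two imperfect estimators guess an unknown quantity: one is noisy but unbiased, the other is systematically off but sounds surer of itself. "Never agreeing to disagree" is crudely modelled as both deferring to the more confident voice.

```python
import random

random.seed(0)

def trial():
    # Hypothetical setup: one honest-but-noisy estimate, one confident-but-biased one.
    truth = random.gauss(0, 1)
    unbiased = truth + random.gauss(0, 1)       # noisy, but centred on the truth
    biased = truth + 2 + random.gauss(0, 0.5)   # systematically off, but "surer"
    forced_agreement = biased                   # both defer to the more confident voice
    return abs(unbiased - truth), abs(forced_agreement - truth)

errors = [trial() for _ in range(10000)]
print("mean error, keeping your own estimate:      %.2f" % (sum(e[0] for e in errors) / len(errors)))
print("mean error, deferring to the confident one: %.2f" % (sum(e[1] for e in errors) / len(errors)))
```

Under these (deliberately unflattering) assumptions, forced agreement comes out worse than simply keeping your own estimate; the point is only that nothing in the theorem tells us which way it goes for reasoners who are not perfect.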

 

Eliezer has [already](http://lesswrong.com/lw/gr/the_modesty_argument/) argued against this specific mistake, but since he went on to [commit it](http://lesswrong.com/lw/i5/bayesian_judo) a few articles later I think it bears mentioning again.

 

Another example of this mistake is [this post](http://lesswrong.com/lw/26y/rationality_quotes_may_2010/36y9) (my apologies to Oscar Cunningham; this is not meant as an attack, you just provided a very good example of what I am talking about). The post provides a mathematical argument (a model rather than a proof) that we should be more sceptical of evidence that goes against our beliefs than of evidence for them. To be more exact, it gives an argument for why a perfect Bayesian, with no human bias and mathematically precise calibration, should be more sceptical of evidence going against its beliefs than of evidence for them.
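For concreteness, here is one toy version of that kind of argument (my own sketch, with my own numbers, not the exact model from the linked comment): if reports can simply be erroneous, then a reasoner with a strong prior should conclude that a report contradicting that prior is more likely to be one of the errors.

```python
# Toy sketch (my own numbers, not the linked comment's exact model):
# a report about hypothesis H is either accurate, or an error that reads
# "H" or "not H" at random.

p_h = 0.95       # strong prior that H is true
p_error = 0.20   # chance that any given report is erroneous

p_says_not_h = p_error * 0.5 + (1 - p_error) * (1 - p_h)
p_says_h = p_error * 0.5 + (1 - p_error) * p_h

# Posterior probability that the report was an error, given what it said
p_error_given_not_h = (p_error * 0.5) / p_says_not_h
p_error_given_h = (p_error * 0.5) / p_says_h

print("P(error | report contradicts my belief) = %.2f" % p_error_given_not_h)  # about 0.71
print("P(error | report supports my belief)    = %.2f" % p_error_given_h)      # about 0.12
```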

 

The argument is, as far as I can tell, mathematically flawless. However, it doesn't seem to apply to me at all, if for no other reason than that I already have a massive bias overdoing that job, and my role is to counteract it.

 

In fact, I would say that in general our willingness to give numerical estimates is an example of this fallacy. The Cox theorems prove that any perfect reasoning system is isomorphic to Bayesian probability, but since my reasoning system is not perfect, I get the feeling that saying "80%" instead of "reasonably confident" is just making a mockery of the whole process.
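If you want to know whether your "80%" is more than decoration, the natural test is calibration: write down the number, record how the claim turned out, and see whether the frequencies match. A minimal sketch of the bookkeeping, with data made up purely for illustration:

```python
# Minimal calibration check (the data here is made up for illustration):
# record (stated probability, whether the claim turned out true) pairs,
# then see how often your "80%" claims actually came true.

predictions = [
    (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, False),
    (0.6, True), (0.6, False), (0.9, True), (0.9, True), (0.9, False),
]

by_bucket = {}
for stated, outcome in predictions:
    by_bucket.setdefault(stated, []).append(outcome)

for stated in sorted(by_bucket):
    outcomes = by_bucket[stated]
    hit_rate = sum(outcomes) / len(outcomes)
    print("stated %.0f%%: correct %.0f%% of the time (n=%d)"
          % (stated * 100, hit_rate * 100, len(outcomes)))
```

If the numbers you quote do not track the frequencies you actually get, then "80%" really is just a dressed-up "reasonably confident"; if they do, the number carries information the vague phrase does not.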

 

This is not to say I totally reject the relevance of mathematical models and proofs to our pursuit. All else being equal, if a perfect Bayesian does X, that is evidence that X is good for an imperfect Bayesian. It's just not overwhelmingly strong evidence, and it shouldn't be treated as if it puts a stop to all debate and decides the issue one way or the other (unlike in other fields, where mathematical arguments can do this).

 

How to avoid it

 

I don't think DLAW is particularly insidious as mistakes go, which is why I called it a fallacy rather than a bias. The only advice I would give is to be careful when operating in far mode (which you should do anyway), and always make sure the causal link between your actions and your goals is pointing in the right direction.

 

Note – When I first started planning this article I was hoping for more down-to-earth examples, but I struggled to find any. My current theory is that this fallacy is too obviously stupid to be committed in near mode, but if someone has a good example of DLAW occurring in their everyday life then please point it out in the comments. Just be careful that it is actually this rather than just signalling.

 

Comments

After realizing hid bible

Minor typo here.

There's another standard example which might be worth using: cargo cult science (although IMO, Feynman was actually too harsh in that regard).

I don't know why you've deleted this post, aside from the easily fixable formatting issues, this post seems good.


And easily recoverable. Would the author consider republishing?

Yes, if someone would tell me how.

You can see the text if you click on the "View original post" link at the top of the page after clicking on the permalink for this very reply (which is the same with any comment and post, deleted or not). Then you can copy and paste. ;)

Any advice on how I can get my formatting right, particularly the links?

It is a GUI. Don't try to do any markup, just click on the buttons. The one for links looks kind of like a chain if I recall. (Select the text you want to make into a link, click the button then paste in the URL to link to.)

Thanks. This site has an annoying resemblance to guessing the teacher's password.

I'd given up on finding a button for links, and have been using the html button.

And it's annoying to need one habit for links for comments and another for links for posts.

Why was this post deleted?

Maybe because the author is having trouble with the formatting problems.

This is correct

Bad stuff:

  • In posts, formatting isn't quite the same; you have to actually use your buttons to tell it to be a hyperlink.

  • I would absolutely love to see the words "correlative fallacy" or "correlation is not causation" somewhere in your post.

Good stuff:

  • It's a good post, thanks. An interesting thing to notice.

I concur. This is a good post, why was it deleted?

I'm not quite sure if this is actually a real-world example, but I often get the feeling that advice for self-improvement, especially regarding social interactions, falls in this area. By this I mean the specific "smile, make eye contact, say your mind" kind of advice, not the vague "just be yourself" kind.

I suspect that smiling and eye contact and the like tend to work socially for people who already are good at socialising, and trying to do them when you're not leads to looking weird (that is, even weirder than you might look if you're not trying).

The reason I'm not sure it's a case is that I'm not sure it never works. It could be that believing the advice works makes you confident enough to pull it off, or maybe it just works even if you fake it.

From information at lesswrong and elsewhere, it seems that the very best social interactors are distinguished from the second-best by "liking a lot of people."

You may be on to something here...

Two areas that DLAW might function in: motivation/akrasia and morality.

Morality I am less sure on; it might overlap with signalling a lot, but people seem to think that wearing winning (moral) beliefs will give them winning actions. Consequentialism gets it round the right way, then.

Motivation, however, I am sure of. People observe motivated people; they note that all motivated people have a method to achieve stuff that they espouse and use. So naturally, people go out and buy these methods and try to wear them, use them. It fails for them because being motivated causes you to develop and use methods to achieve things; people trying to dress like motivated people don't get motivated because motivation doesn't come from the methods. Like you said, the causal link is the wrong way around.

If you think motivation fits your idea and is more down-to-earth, absolutely feel completely free to use it in your post. I would replace the mathematically rational section with a section about motivation; the mathematically rational stuff feels a little bit unlike the other examples.

Your distinction between signalling and DLAW is sound. Your one sentence summary of the consistency section is brilliant. The introduction and the consistency section are particularly excellent.

Basically, you should republish this, even if it means slogging through all the formatting yourself, even if it means submitting it with some formatting errors, and even if it means staying up half the night and leaving on your trip tired. It's that good.

Thanks for that!

As for your morality point, I had noticed a similar thing myself and at one point considered adding a section on virtue-ethics to this post, I eventually decided I didn't have a strong enough case.

You have a good point about motivation, I didn't think of that.

A lot of people have asked why I deleted this post. The main reason was that I couldn't get the formatting right, but there was also an element of my courage failing at the last moment as to whether it was good enough or not.

If anyone is willing to help me format it correctly and re-submit it (fixing some of the errors that have been pointed out) I will be happy to do so. Just please do so quickly, since I am going on a trip for eight days tomorrow and will not have access to the internet.

Yes, this is a very good post, especially the point about consistency.