All of hhadzimu's Comments + Replies

Damon Runyon clearly has not considered point spreads.

"Many beliefs about procedure are exactly the opposite-- take believing that truth can be taken from the Bible. That procedure is self-justifying and there is no way to dispute it from within the assumptions of the procedure."

That's my point about rationality - the way I think about it, it would catch its own contradictions. In essence, a rationalist would recognize that rationalists don't "win." So as a result, committing yourself to rationality doesn't actually commit you to an outcome, as perhaps following a scripture would.

The big... (read more)

"So as a result, committing yourself to rationality doesn't actually commit you to an outcome, as perhaps following a scripture would." Doesn't committing yourself to rationality commit you to the outcome that so and so "will be rational"? I'm not saying that this is the same exact thing as what evangelical christians do, where they actually twist the lines to reason to their preferred conclusion. But it's like Jack said, don't dupe yourself into thinking none of the problems with labeling will apply to you. That's where you get into a tricky place, because you are ignoring a piece of information that does not jibe with your preferred view of yourself.

I'm inclined to agree on your latter point: looking at the results of the survey, it seems like it would be easy to go from 'rationalist' as a procedural label to 'rationalist' as shorthand for 'atheist male computer programmer using bayesian rules.' Of course, that's a common bias, and I think this community is as ready as any to fight it.

As for the former, I tried to address that by pointing out that rationalism means that we've already decided that updating priors is more effective than prayer. That said, I have a perhaps idealistic view of rationality... (read more)

So it might be the case that bayesian updating has some quirky memetic mutation that could lead it to destroy itself if it stopped working. Maybe so-called 'rationalism' is especially bad at absorbing internal contradictions. But this would be a feature of the belief itself-- not a feature of it being a belief about procedure. Many beliefs about procedure are exactly the opposite-- take believing that truth can be taken from the Bible. That procedure is self-justifying and there is no way to dispute it from within the assumptions of the procedure. Mostly, I just don't think the distinction you are trying to make between "procedural" and "substantive" beliefs holds water. Beliefs about political theory and economics, for example, are almost all procedural beliefs (i.e. the right procedure for making a law or stimulating the economy). What about them would make them immune to labeling problems?

I think the danger here is far smaller than people are making it out to be. There is a major difference between the label "rationalist" and most other identities as Paul Graham refers to them. The difference is that "rationalist" is a procedural label; most identities are at least partially substantive, using procedural/substantive in the sense that the legal system does.

"Rationalist," which I agree is an inevitable shorthand that emerges when the topic of overcoming bias is discussed frequently, is exclusively a procedural la... (read more)

Some people here have argued that 'rationalist' refers to someone who wants to act rationally, as opposed to someone who actually puts the necessary techniques into practice. I think the danger is far greater than you suspect.
Why should we think of beliefs about proper procedure as less prone to reifying identity formation than beliefs about things other than procedures? How are beliefs about the best procedure for reasoning or predicting not beliefs about the state of the world? Specifically, are such beliefs not beliefs about the human brain and how it functions? Aren't we all pretty committed to the view that updating priors is a better way of getting things right than praying for the answer? I don't see why beliefs about procedure aren't just as liable to be left standing as unchallenged assumptions as are beliefs about political systems.

Besides, we'd be kidding ourselves if we said that the Less Wrong community has no shared beliefs other than about procedure. Yeah, a rationalist doesn't have to be an atheist... but there aren't a lot of outspoken evangelicals around these parts. It remains very possible that some or most of us could come to associate other beliefs with the rationalist label, even if the label doesn't explicitly include them right now. There are lots of reasons to call ourselves rationalists, but let's try not to dupe ourselves into thinking we're so special that none of the problems with labeling will apply to us.
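For what it's worth, the "updating priors" procedure being contrasted with prayer here is just Bayes' rule. A minimal sketch, with made-up numbers purely for illustration:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a hypothesis via Bayes' rule,
    given the probability of observing the evidence under each case."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Made-up numbers: start at 50% belief, then observe evidence that is
# four times as likely if the hypothesis is true as if it is false.
posterior = update(0.5, 0.8, 0.2)
print(posterior)  # belief rises from 0.5 to about 0.8
```

The point of the procedure is that the same rule applies to every hypothesis, including hypotheses about the procedure itself; whether that makes it immune to labeling problems is exactly what's in dispute above.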

Exposing yourself to any judgments, period, is risky. The OB crowd is perhaps the best-commenting community I've come across: they read previous comments and engage the arguments made there. How many other bloggers are like Robin Hanson and consistently read and reply to comments? Anyway, as a result, any comment is bound to be read and often responded to by others. There may not have been a point value attached, but judgments were made.

Agreed. I have trouble accepting this as a true irrationality. It strikes me as merely a preference. You lose time you could be listening to song A because of your desire to have the same play count for song B, but this is because you prefer the world where playcounts are equal to the world where they are unequal but you hear a specific song more. Is that really an irrational preference?

I also agree with VN's disclaimer: this time spent [wasted?] on equalizing playcounts could probably be used for something else. But at what point does the preference for a... (read more)

I have to echo orthonormal: information, if processed without bias [availability bias, for example], should improve our decisions, and getting information is not always easy. I don't see how this raises any questions about the rational process, or as you say, principled fashion.

"But by what principled fashion should you choose not to eat the fugu?"

This seems like a situation where the simplest expected value calculation would give you the 'right' answer. In this case, the expected value of eating oysters is 1, the expected value of eating the fug... (read more)
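The expected-value comparison I have in mind can be made concrete. The numbers below are purely hypothetical stand-ins for the utilities involved:

```python
def expected_value(outcomes):
    """Expected value of a gamble given (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical utilities: oysters are a known quantity worth 1; fugu is
# worth 2 if prepared correctly, but a 1% chance of poisoning is scored -1000.
ev_oysters = expected_value([(1.0, 1)])
ev_fugu = expected_value([(0.99, 2), (0.01, -1000)])

print(ev_oysters, ev_fugu)  # the small chance of death dominates fugu's calculation
```

Even a crude calculation like this makes the choice mechanical once you've assigned the probabilities and utilities; the hard part, as the reply below notes, is that those inputs are themselves unknown.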

Sorry, I wasn't clear: the expected value of oysters is not 1; that is the value you discover after eating. It is unknown, since you haven't had them before either. You have had other shellfish, which have been dodgy. Whether getting killed by fugu is a failure of rationality or not, it is a failure: it is not hitting a small target in the optimization space. If you want modern examples of these sorts of problems, not solvable by web or phone, consider things like whether we should switch on the LHC, or create AI.

Eliezer Yudkowsky does not sleep. He waits.

Mm. I don't like 'waits'; it sounds like he's wasting his time, and it doesn't have enough LW/OB injokes. Maybe 'He updates priors.'?

You're probably right, but this is working around the question. In law school, they'd accuse you of fighting the hypothetical. You're in the least convenient possible world here: you're wide awake, 100%, for the entire relevant duration.

"I am trying to politely tell you that you have a lot to learn about signaling." That's why I'm here :)

I think you bring up an interesting point here. I agreed with pwno that, once everyone is aware of a signal, it's no longer credible, especially if it's cheap. But I think you're right as well that for the signals you mentioned, it doesn't matter who knows that it's a signal or how long it's been around.

The distinction, I think, is what one is trying to signal. Signals of conformity to a group or cooperativeness to an ally might be affected dif... (read more)

Good point. I should have made the distinction between status signals and "conformity" signals clearer. But I do think that there are very distinct mechanisms at work there, even though the ultimate end [higher status] is probably the same. [That is, we signal conformity to an employer to get a job that will give us higher status.]

My concern was mostly that "higher status is the end goal" has very little explanatory power in itself. Understanding more specifically what certain things signal is far more helpful.

I think you're hitting a different, though related, point. A business suit and a smile are probably not credible signals, though their absence is a credible signal of the opposite. It's easy to wear a business suit and fake a smile: each applicant to a job opening will likely come with both. Those who don't are almost instantaneously downgraded. It seems that the signal becomes a new baseline for behavior, and though it doesn't credibly signal anything, its absence signals something.

I'm not positive on the mechanism here: it's probably related to the fact that the signal is so low-cost, and that anyone failing to display it is either extremely low status, or signals some other defect.

I am trying to politely tell you that you have a lot to learn about signaling. Suits and smiles do credibly signal things. And the larger point is that the ability of a signal to work usually has little to do with how long it has been around or who knows that it is a signal.

Politics or not, I find this to be a great illustration of the real-world consequences of failure of rationality. The interesting question is at what point the mechanism breaks down.

The logical course of action for rich countries is to study the most effective methods of poverty alleviation and development, and apply those. We can see clearly that this is not happening, but it's unclear why:

-Are rich countries wrong about the conditions they're facing, and thus using improper methods? If so, is there a bias that causes them to misperceive conditions? ... (read more)

I actually think it's the marginally different "Have you stopped beating your wife?" which allows for yes/no answers only, except that neither will help you.

That one is the worse variant.
Fixed to this version.

part 2: "So what an expert rationalist should do to avoid this overconfidence trap?"

Apologies for flooding the comments, but I wanted to separate the ideas so they can be discussed separately. The question is how to avoid overconfidence, and bias in general. Picking up from last time:

If we can identify a bias, presumably we can also identify the optimal outcome that would happen in the absence of such bias. If we can do that, can't we also constrain ourselves in such a way that we can achieve the optimal outcome despite giving in to the bias? Fo... (read more)

"So what an expert rationalist should do to avoid this overconfidence trap?"

You mean, how should one overcome bias? Be less wrong, if you will? You've come to the right place. David Balan had a post that didn't receive enough attention over at OB: This comment roughly paraphrases the points I made there.

If we can identify a bias, presumably we can also identify the optimal outcome that would happen in the absence of such bias. There are two ways to achieve this, and I will post them in se... (read more)

Agreed. You just stole that from the banner at OB.

"So what an expert rationalist should do to avoid this overconfidence trap? The seeming answer is that we should rely less on our own reasoning and more on the “wisdom of the crowds."

As Bryan Caplan's "Myth of the Rational Voter" pretty convincingly shows, the wisdom of crowds is of little use when the costs of irrationality are low. It's true in democracy: voting for an irrational policy like tariffs has almost no cost, because a single vote almost never matters. The benefit of feeling good about voting for what you like to believe in ... (read more)

Neither do I, though I'm often tempted to find a reason for why my iPod's shuffle function "chose" a particular song at a particular time. ["Mad World" right now.]

It seems that our mental 'hardware' is very susceptible to agency and causal misfires, leaving an opening for something like religious belief. Robin explained religious activities and beliefs as important in group bonding, but the fact that religion arose may just be a historical accident. It's likely that somethin... (read more)

I think this is a good answer to Eliezer's thought experiment. Teach those budding rationalists about the human desire to conform even in the face of the prima facie ridiculousness of the prevailing beliefs. Teach them about greens and blues; teach them about Easter Islanders building statues with their last failing stock of resources (or is that too close to teaching about religion?). Teach them how common the pattern is: when something is all around you, you are less likely to doubt its wisdom. Human rationality (at least for now) is still built on the blocks and modules provided to us by evolution. They can lead us astray, like the "posit agency" module firing when no agent is there. But they can also be powerful correctives. A pattern-recognizing module is a dangerous thing when we create imaginary patterns... but, oh boy, when there actually is a pattern there, let that module rip!

Excellent description. Reminds me a little of Richard Dawkins in "The God Delusion," explaining how otherwise useful brain hardware 'misfires' and leads to religious belief.

You mention agency detection as one of the potential modules that misfire to bring about religious belief. I think we can generalize that a little more and say fairly conclusively that the ability to discern cause-and-effect was favored by natural selection, and given limited mental resources, it certainly favored errors where cause was perceived even if there was none, rath... (read more)

The conspiracy theory of economics remains prevalent, however, and very difficult to disabuse people of. So I'm not sure this is that helpful a handle to disabuse people of religion.

Agency misfires and causal misfires can help to suggest religion. For that suggestion to get past your filters, the sanity waterline has to be low. I don't invent a new religion every time I see a face in the clouds or three dandelions lined up in a row.

We're forgetting signaling. Robin would never forgive us, because he sees it in a lot of things, and I happen to agree with him that it's far more pervasive than people think.

In fact, the Tversky example gives people two opportunities to signal: not only do they get to demonstrate higher pain tolerance [especially important for men], they also get to "demonstrate" a healthier heart. Both should be boosts in status.

The same goes for Calvinists: though, when you think about it, you truly believe in the elect, you don't think about it most of your ... (read more)

This was exactly my thought, and I now wonder whether it's possible to determine via experiment. How do you give the information to the subjects without having them think that the researchers know it? A confederate who's a subject and just happens to gossip about the thing is one way; if the researchers proceed to deny it, you might be able to split subjects into groups based on a low-status confederate versus a high-status confederate, and a vehement denial vs. a "that study hasn't been verified" vs. a "that was an urban legend." Or provide a status signal that it's better to have a "bad" heart: have a high-status researcher who says "sure, we may live less long, but there are all sorts of other benefits they're not telling us about." It's really hard to separate the information from the humans passing on the information.

I disagree... I think "limited analysis resources" accounts for the very difference you speak of. I think the "rituals of cognition" you mention are themselves subject to rationality analysis: if I'm understanding you correctly, you are talking about someone who knows how to be rational in theory but cannot implement that theory in practice. I think you run into three possibilities there.

One, the person has insufficient analytical resources to translate their theory into action, which Robin accounts for. The person is still rational,... (read more)

I think we're missing a fairly basic definition of rationality, one that I think most people would intuitively come to. It involves the question at what stage evidence enters the decision-making calculus.

Rationality is a process: it involves making decisions after weighing all available evidence and calculating the ideal response. Relevant information is processed consciously [though see Clarification below] before a decision is rendered.

This approach is opposed to a different, less conscious process: our instinctive and emotional responses to situ... (read more)