Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.

What Disagreement Signifies

Let's start by talking about disagreement. There's been a lot of discussion of disagreement on LessWrong, and in particular of Aumann's agreement theorem, often glossed as something like "two rationalists can't agree to disagree." (Or perhaps that we can't foresee to disagree.) Discussion of disagreement, however, tends to focus on what to do about it. I'd rather take a step back, and look at what disagreement tells us about ourselves: namely, that we don't think all that highly of each other's rationality.

This, for me, is the take-away from Tyler Cowen and Robin Hanson's paper Are Disagreements Honest? In the paper, Cowen and Hanson define honest disagreement to mean that "the disputants respect each other’s relevant abilities, and consider each person’s stated opinion to be his best estimate of the truth, given his information and effort," and they argue disagreements aren't honest in this sense.

I don't find this conclusion surprising. In fact, I suspect that while people sometimes do mean it when they talk about respectful disagreement, often they realize this is a polite fiction (which isn't necessarily a bad thing). Deep down, they know that disagreement is disrespect, at least in the sense of not thinking that highly of the other person's rationality. That people know this is shown in the fact that they don't like being told they're wrong—the reason why Dale Carnegie says you can't win an argument.

On LessWrong, people are quick to criticize each other's views, so much so that I've heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect). Furthermore, when people on LessWrong criticize others' views, they very often don't seem to expect to quickly reach agreement. Even people Yvain would classify as "experienced rationalists" sometimes knowingly have persistent disagreements. This suggests that LessWrongers almost never consider each other to be perfect rationalists.

And I actually think this is a sensible stance. For one thing, even if you met a perfect rationalist, it could be hard to figure out that they are one. Furthermore, the problem of knowing what to do about disagreement is made harder when you're faced with other people having persistent disagreements: if you find yourself agreeing with Alice, you'll have to think Bob is being irrational, and vice versa. If you rate them equally rational and adopt an intermediate view, you'll have to think they're both being a bit irrational for not doing likewise.

The situation is similar to Moore's paradox in philosophy—the impossibility of asserting "it's raining, but I don't believe it's raining." Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."

We can find some room for humility in an analog of the preface paradox, the fact that the author of a book can say things like "any errors that remain are mine." We can say this because we might think each individual claim in the book is highly probable, while recognizing that all the little uncertainties add up to it being likely there are still errors. Similarly, we can think each of our beliefs is individually rational, while recognizing we still probably have some irrational beliefs—we just don't know which ones. And just because respectful disagreement is a polite fiction doesn't mean we should abandon it.
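The preface-paradox arithmetic is easy to make concrete. Here's a quick sketch; the claim count and per-claim probability are illustrative assumptions, not figures from the post:

```python
# Preface-paradox arithmetic: even if every individual belief is highly
# probable, the chance that at least one of them is wrong can be large.
# (Illustrative numbers; claims assumed independent.)

n_claims = 100          # number of individual claims in the "book"
p_each = 0.99           # probability each individual claim is true

p_all_true = p_each ** n_claims
p_some_error = 1 - p_all_true

print(f"P(every claim true)   = {p_all_true:.3f}")   # ~0.366
print(f"P(at least one error) = {p_some_error:.3f}") # ~0.634
```

So an author can be 99% confident in every sentence and still be right to expect errors somewhere in the book.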

I don't have a clear sense of how controversial the above will be. Maybe we all already recognize that we don't respect each other's opinions 'round these parts. But I think some features of discussion at LessWrong look odd in light of the above points about disagreement—including some of the things people say about disagreement.

The wiki, for example, says that "Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse. Thus, fostering effective deliberation should be seen as a key goal of Less Wrong." The point of Aumann's agreement theorem, though, is precisely that ideal rationalists shouldn't need to engage in deliberative discourse, as usually conceived, in order to reach agreement.

As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it." But when dealing with real people who may or may not have a rational basis for their beliefs, that's almost always the right stance to take.

Intelligence and Rationality

Intelligence does not equal rationality. Need I say more? Not long ago, I wouldn't have thought so. I would have thought it was a fundamental premise behind LessWrong, indeed behind old-school scientific skepticism. As Michael Shermer once said, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments. When I hear that, I think "whaaat? People on LessWrong make bad arguments all the time!" When this happens, I generally limit myself to trying to point out the flaw in the argument and/or downvoting, and resist the urge to shout "YOUR ARGUMENTS ARE BAD AND YOU SHOULD FEEL BAD." I just think it.

When I reach for an explanation of why terrible arguments from smart people shouldn't surprise anyone, I go to Yvain's Intellectual Hipsters and Meta-Contrarianism, one of my favorite LessWrong posts of all time. Yvain notes that meta-contrarianism often isn't a good thing, but on re-reading the post I noticed what seems like an important oversight:

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death.

The pattern of countersignaling Yvain describes here is real. But it's important not to forget that sometimes, the super-wealthy signal their wealth by buying things even the moderately wealthy can't afford. And sometimes, the very intelligent signal their intelligence by holding opinions even the moderately intelligent have trouble understanding. You also get hybrid status moves: designer versions of normally low-class clothes, complicated justifications for opinions normally found among the uneducated.

Robin Hanson has argued that this leads to biases in academia:

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds. If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis. And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

Robin's post focuses on economics, but I suspect the problem is even worse in my home field of philosophy. As I've written before, the problem is that philosophers never agree on whether a philosopher has solved a problem. Therefore, there can be no rewards for being right, only rewards for showing off your impressive intellect. This often means finding clever ways to be wrong.

I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.

I've found philosophy of religion in particular to be a goldmine of terrible arguments made by smart people. Consider Alvin Plantinga's modal ontological argument. The argument is sufficiently difficult to understand that I won't try to explain it here. If you want to understand it, I'm not sure what to tell you except to maybe read Plantinga's book The Nature of Necessity. In fact, I predict at least one LessWronger will comment on this thread with an incorrect explanation or criticism of the argument. Which is not to say they wouldn't be smart enough to understand it, just that it might take them a few iterations of getting it wrong to finally get it right. And coming up with an argument like that is no mean feat—I'd guess Plantinga's IQ is just as high as the average LessWronger's.

Once you understand the modal ontological argument, though, it quickly becomes obvious that Plantinga's logic works just as well to "prove" that it's a necessary truth that pigs fly. Or that Plantinga's god does not exist. Or even as a general purpose "proof" of any purported mathematical truth you please. The main point is that Plantinga's argument is not stupid in the sense of being something you'd only come up with if you had a low IQ—the opposite is true. But Plantinga's argument is stupid in the sense of being something you'd only come up with while under the influence of some serious motivated reasoning.

The modal ontological argument is admittedly an extreme case. Rarely is the chasm between the difficulty of the concepts underlying an argument, and the argument's actual merits, so vast. Still, beware the temptation to affiliate with smart people by taking everything they say seriously.

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"

The Principle of Charity

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charity is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

More frustrating than this simple disagreement over charity, though, is when people who invoke the principle of charity do so selectively. They apply it to people whose views they're at least somewhat sympathetic to, but when they find someone they want to attack, they have trouble meeting basic standards of fairness. And in the most frustrating cases, this gets explicit justification: "we need to read these people charitably, because they are obviously very intelligent and rational." I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses. Whatever its merits, though, they can't depend on the actual intelligence and rationality of the person making an argument. Not only is intelligence no guarantee against making bad arguments, the whole reason we demand other people tell us their reasons for their opinions in the first place is we fear their reasons might be bad ones.

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

Beware Weirdness for Weirdness' Sake

There's a theory in the psychology and sociology of religion that the purpose of seemingly foolish rituals like circumcision and snake-handling is to provide a costly and therefore hard-to-fake signal of group commitment. I think I've heard it suggested—though I can't find by whom—that crazy religious doctrines could serve a similar purpose. It's easy to say you believe in a god, but being willing to risk ridicule by saying you believe in one god who is three persons, who are all the same god, yet not identical to each other, and you can't explain how that is but it's a mystery you accept on faith... now that takes dedication.

Once you notice the general "signal group commitment in costly ways" strategy, it seems to crop up everywhere. Subcultures often seem to go out of their way to be weird, to do things that will shock people outside the subculture, ranging from tattoos and weird clothing to coming up with reasons why things regarded as normal and innocuous in the broader culture are actually evil. Even something as simple as a large body of jargon and in-jokes can do the trick: if someone takes the time to learn all the jargon and in-jokes, you know they're committed.

This tendency is probably harmless when done with humor and self-awareness, but it's more worrisome when a group becomes convinced its little bits of weirdness for weirdness' sake are a sign of its superiority to other groups. And it's worth being aware of, because it makes sense of signaling moves that aren't straightforwardly plays for higher status.

The LessWrong community has amassed a truly impressive store of jargon and in-jokes over the years, and some of it's quite useful (I reiterate my love for the term "meta-contrarian"). But as with all jargon, LessWrongian jargon is often just a silly way of saying things you could have said without it. For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.

But "blue-green politics," "politics is the mind-killer"... never mind how much content they add, the point is they're obscure enough to work as an excuse to feel superior to anyone whose political views are too mainstream. Outsiders will probably think you're weird, invoking obscure jargon to quickly dismiss ideas that seem plausible to them, but on the upside you'll get to bond with members of your in-group over your feelings of superiority.

A More Humble Rationalism?

I feel like I should wrap up with some advice. Unfortunately, this post was motivated by problems I'd seen, not my having thought of brilliant solutions to them. So I'll limit myself to some fairly boring, non-brilliant advice.

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that.

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy. Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say, the effective altruism meme. It's just a little too easy to forget where "rationality" is supposed to connect with the real world, increasing the temptation for "rationality" to spiral off into signaling games.

Self-Congratulatory Rationalism
395 comments

So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."

I disagree with this, and explained why in Probability Space & Aumann Agreement. To quote the relevant parts:

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeated

... (read more)
JWP (score: 8)
Yes. There are reasons to ask for evidence that have nothing to do with disrespect.

* Even assuming that all parties are perfectly rational and that any disagreement must stem from differing information, it is not always obvious which party has better relevant information. Sharing evidence can clarify whether you know something that I don't, or vice versa.
* Information is a good thing; it refines one's model of the world. Even if you are correct and I am wrong, asking for evidence has the potential to add your information to my model of the world. This is preferable to just taking your word for the conclusion, because that information may well be relevant to more decisions than the topic at hand.
paulfchristiano (score: 7)
There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).

It seems like doubting each other's rationality is a perfectly fine explanation. I don't think most people around here are perfectly rational, nor that they think I'm perfectly rational, and definitely not that they all think that I think they are perfectly rational. So I doubt that they've updated enough on the fact that my views haven't converged towards theirs, and they may be right that I haven’t updated enough on the fact that their views haven’t converged towards mine.

In practice we live in a world where many pairs of people disagree, and you have to disagree with a lot of people. I don’t think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It’s just a really hard condition.
Wei Dai (score: 1)
The point I wanted to make was that AFAIK there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have, even if they trust each other to be as rational as humanly possible. The result by Scott Aaronson may be of theoretical interest (and maybe even of practical use by future AIs that can perform exact computations with the information in their minds), but it seems to have no relevance to humans faced with real-world (as opposed to toy-example) disagreements.

I don't understand this. Can you expand?
Lumifer (score: 1)
Huh? There is currently no practical method for two humans to reliably reach agreement on some topic, full stop. Exchanging all evidence might help, but given that we are talking about humans and not straw Vulcans, it is still not a reliable method.
ChrisHallquist (score: 1)
I won't try to comment on the formal argument (my understanding of that literature is mostly just what Robin Hanson has said about it), but intuitively, this seems wrong. It seems like two people trading probability estimates shouldn't need to deduce exactly what the other has observed; they just need to make inferences along the lines of, "wow, she wasn't swayed as much as I expected by me telling her my opinion, she must think she has some pretty good evidence." At least that's the inference you would make if you both knew you trust each other's rationality. More realistically, of course, the correct inference is usually "she wasn't swayed by me telling her my opinion, because she doesn't just trust me to be rational."

Consider what would have to happen for two rationalists who knowingly trust each other's rationality to have a persistent disagreement. Because of conservation of expected evidence, Alice has to think her probability estimate would on average remain the same after hearing Bob's evidence, and Bob must think the same about hearing Alice's evidence. That seems to suggest they both must think they have better, more relevant evidence to the question at hand. And it might be perfectly reasonable for them to think that at first. But after several rounds of sharing their probability estimates and seeing the other not budge, Alice will have to realize Bob thinks he's better informed about the topic than she is. And Bob will have to realize the same about Alice. And if they both trust each other's rationality, Alice will have to think, "I thought I was better informed than Bob about this, but it looks like Bob thinks he's the one who's better informed, so maybe I'm wrong about being better informed." And Bob will have to have the parallel thought. Eventually, they should converge.
Eugine_Nier (score: 1)
Wei Dai's description is correct, see here for an example where the final estimate is outside the range of the initial two. And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.
Will_Newsome (score: 0)
Wonder if a list of such things can be constructed. Algorithmic information theory is an example where Eliezer drew the wrong implications from the math and unfortunately much of LessWrong inherited that. Group selection (multi-level selection) might be another example, but less clear cut, as that requires computational modeling and not just interpretation of mathematics. I'm sure there are more and better examples.
RobinZ (score: 0)
The argument can even be made more general than that: under many circumstances, it is cheaper for us to discuss the evidence we have than it is for us to try to deduce it from our respective probability estimates.
PeterDonis (score: 0)
I'm not sure this qualifier is necessary. Your argument is sufficient to establish your point (which I agree with) even if you do trust the other's rationality.
ChrisHallquist (score: 4)
Personally, I am entirely in favor of the "I don't trust your rationality either" qualifier.
PeterDonis (score: 1)
Is that because you think it's necessary to Wei_Dai's argument, or just because you would like people to be up front about what they think?
Gunnar_Zarncke (score: -1)
Yes. But it entirely depends on how the request for supportive references is phrased. Good: Bad: The neutral leaves the interpretation of the attitude to the reader/addressee and is bound to be misinterpreted (people misinterpreting tone or meaning of email).
ChrisHallquist (score: 1)
Saying sort of implies you're updating towards the other's position. If you not only disagree but are totally unswayed by hearing the other person's opinion, it becomes polite but empty verbiage (not that polite but empty verbiage is always a bad thing).
Gunnar_Zarncke (score: -2)
But shouldn't you always update toward the other's position? And if the argument isn't convincing, you can truthfully say that you updated only slightly.

But shouldn't you always update toward the other's position?

That's not how Aumann's theorem works. For example, if Alice mildly believes X and Bob strongly believes X, it may be that Alice has weak evidence for X, and Bob has much stronger independent evidence for X. Thus, after exchanging evidence they'll both believe X even more strongly than Bob did initially.
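The arithmetic in that example can be sketched with Bayes' rule in odds form. The specific likelihood ratios and the helper function here are hypothetical, chosen only to illustrate the point:

```python
# Combining independent evidence via Bayes' rule in odds form.
# Hypothetical numbers: Alice's evidence has likelihood ratio 2:1 for X,
# Bob's independent evidence has ratio 9:1; both start from 1:1 prior odds.

def posterior_from_odds(prior_odds, *likelihood_ratios):
    """Multiply prior odds by each likelihood ratio, then convert to probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

alice = posterior_from_odds(1, 2)        # P(X) = 2/3, a mild belief
bob = posterior_from_odds(1, 9)          # P(X) = 0.9, a strong belief
combined = posterior_from_odds(1, 2, 9)  # P(X) = 18/19, about 0.947

print(alice, bob, combined)  # the pooled estimate exceeds Bob's
```

With independent evidence, the pooled posterior lands outside the range of the two initial estimates, which is exactly the behavior described above.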

Yup!

One related use case is when everyone in a meeting prefers policy X to policy Y, although each is a little concerned about one possible problem. Going around the room and asking everyone how likely they think X is to succeed produces estimates of 80%, so, having achieved consensus, they adopt X.

But, if people had mentioned their particular reservations, they would have noticed they were all different, and that, once they'd been acknowledged, Y was preferred.
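One way to sketch why the shared 80% figure can mislead (the group size, the independence assumption, and the numbers are all hypothetical):

```python
# Five people each estimate an 80% chance of success, but each estimate
# is driven by a different worry. If those five failure modes are
# distinct and independent, success requires avoiding all of them.
# (Hypothetical numbers for illustration.)

n_people = 5
p_avoid_one_failure = 0.8   # each person's estimate, tracking one worry

p_success = p_avoid_one_failure ** n_people
print(f"{p_success:.3f}")   # ~0.328, far below the 80% "consensus"
```

Pooling the point estimates hides the fact that the reservations don't overlap, which is exactly why mentioning them would have changed the decision.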

Viliam_Bur (score: 7)
Even if they both equally strongly believe X, it makes sense for them to talk whether they both used the same evidence or different evidence.
Will_Newsome (score: 2)
Obligatory link.
Gunnar_Zarncke (score: 0)
Of course. I agree that doesn't make clear that the other holds another position and that the reply may just address the validity of the evidence. But even then shouldn't you see it at least as weak evidence and thus believe X at least a bit more strongly?

I interpret you as making the following criticisms:

1. People disagree with each other, rather than use Aumann agreement, which proves we don't really believe we're rational

Aside from Wei's comment, I think we also need to keep track of what we're doing.

If we were to choose a specific empirical fact or prediction - like "Russia will invade Ukraine tomorrow" - and everyone on Less Wrong were to go on Prediction Book and make their prediction and we took the average - then I would happily trust that number more than I would trust my own judgment. This is true across a wide variety of different facts.

But this doesn't preclude discussion. Aumann agreement is a way of forcing results if forcing results were our only goal, but we can learn more by trying to disentangle our reasoning processes. Some advantages to talking about things rather than immediately jumping to Aumann:

  • We can both increase our understanding of the issue.

  • We may find a subtler position we can both agree on. If I say "California is hot" and you say "California is cold", instead of immediately jumping to "50% probability either way" we can work out which parts of California are

... (read more)

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm

... (read more)

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

Are ethics supposed to be Aumann-agreeable? I'm not at all sure the original proof extends that far. If it doesn't, that would cover your disagreement with Alicorn as well as a very large number of other disagreements here.

I don't think it would cover Eliezer vs. Robin, but I'm uncertain how "real" that disagreement is. If you forced both of them to come up with probability estimates for an em scenario vs. a foom scenario, then showed them both each other's estimates and put a gun to their heads and asked them whether they wanted to Aumann-update or not, I'm not sure they wouldn't ag... (read more)

ChrisHallquist (score: 2)
I question how objective these objective criteria you're talking about are. Usually when we judge someone's intelligence, we aren't actually looking at the results of an IQ test, so that's subjective. Ditto rationality. And if you were really that concerned about education, you'd stop paying so much attention to Eliezer or people who have a bachelor's degree at best and pay more attention to mainstream academics who actually have PhDs.

FWIW, actual heuristics I use to determine who's worth paying attention to are:

* What I know of an individual's track record of saying reasonable things.
* Status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it).
* Looking for other crackpot warning signs I've picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.

Which may not be great heuristics, but I'll wager that they're better than IQ (wager, in this case, being a figure of speech, because I don't actually know how you'd adjudicate that bet).

It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: "The thing that people forget sometimes, is that even though appearances can be misleading, they're usually not."

You've also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I'm fuzzy on what exactly they think about cryonics (more here).

Your heuristics are, in my opinion, too conservative or not strong enough.

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with. If you're a creationist, you can rule out paying attention to Richard Dawkins, because if he's wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you're anti-transhumanism, you can rule out cryonicists because they tend to say lots of other unreasonable things like that computers will be smarter than humans, or that there can be "intelligence explosions", or that you can upload a human brain.

Status within mainstream academia is a really good heuristic, and this is part of what I mean when I say I use education as a heuristic. Certainly to a first approximation, before investigating a field, you should just automatically believe everything the mainstream academics believe. But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain ... (read more)

2TheAncientGeek
Has anyone noticed that, given that most of the material on this site is essentially about philosophy, "academic philosophy sucks" is a Crackpot Warning Sign, i.e. "don't listen to the hidebound establishment"?
4ChrisHallquist
So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)
9Protagoras
Also, from the same background, it is striking to me that a lot of the criticisms Less Wrong people make of philosophers are the same as the criticisms philosophers make of one another. I can't really think of a case where Less Wrong stakes out positions that are almost universally rejected by mainstream philosophers. And not just because philosophers disagree so much, though that's also true, of course; it seems rather that Less Wrong people greatly exaggerate how different they are and how much they disagree with the philosophical mainstream, to the extent that any such thing exists (again, a respect in which their behavior resembles how philosophers treat one another).
1TheAncientGeek
Since there is no consensus among philosophers, respecting philosophy is about respecting the process. The negative claims LW makes about philosophy are indeed similar to the negative claims philosophy makes about itself. LW also makes the positive claim that it has a better, faster method than philosophy, but in fact it just has a truncated version of the same method. As Hallquist notes elsewhere:

> But Alexander misunderstands me when he says I accuse Yudkowsky "of being against publicizing his work for review or criticism." He's willing to publish it–but only to enlighten us lesser rationalists. He doesn't view it as a necessary part of checking whether his views are actually right. That means rejecting the social process of science. That's a problem.

Or, as I like to put it: if you half-bake your bread, you get your bread quicker... but it's half-baked.
0TheAncientGeek
If what philosophers specialise in is clarifying questions, then they can be trusted to get the question right. A typical failure mode of amateur philosophy is to substitute easier questions for harder ones.
0Vaniver
You might be interested in this article and this sequence (in particular, the first post of that sequence). "Academic philosophy sucks" is a Crackpot Warning Sign because of the implied brevity. A measured, in-depth criticism is one thing; a smear is another.
-1TheAncientGeek
Read them; not generally impressed.
1torekp
Counterexample: your own investigation of natural law theology. Another: your investigation of the Alzheimer's bacterium hypothesis. I'd say your own intellectual history nicely demonstrates just how to pull off the seemingly impossible feat of detecting reasonable people you disagree with.
0ChrisHallquist
With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions, which are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accomplishes anything.

Examples? I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)

Proper logical form comes cheap: just add a premise which says, "if everything I've said so far is true, then my conclusion is true." "Good arguments" is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say "this guy's arguments are terrible," and you say, "you should read those arguments more charitably," it doesn't do much good for you to defend that claim by saying, "well, he has a track record of making good arguments."
2Scott Alexander
I agree that disagreement among philosophers is a red flag that we should be looking for alternative positions. But again, I don't feel like that's strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

Well, take Barry Marshall. He became convinced that ulcers were caused by a stomach bacterium (he was right; he later won the Nobel Prize). No one listened to him. He said that "my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age." So Marshall decided that since he couldn't get anyone to fund a study, he would study it on himself, drank a serum of bacteria, and got really sick. Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission "talked about alien babies being adopted by Nancy Reagan", before they made it into legitimate medical journals.

I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index..."believes the establishment is ignoring him because of their biases", "believes his discovery will instantly solve a centuries-old problem with no side effects", "does his studies on himself", "studies get published in tabloid rather than journal", but these were just things he naturally felt or had to do because the establishment wouldn't take him seriously and he couldn't do things "right".

I think it is much much less than the general public, but I don't
7Jiro
The extent to which science rejected the ulcer bacterium theory has been exaggerated. (And that article also addresses some quotes from Marshall himself which don't exactly match up with the facts.)
2ChrisHallquist
What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going to have good ways to measure them directly, aside from looking for "statements that follow proper logical form and make good arguments." I pointed out that "good arguments" is circular if we're trying to decide who to read charitably, and you had no response to that. That leaves us with "proper logical form," about which you said: In response to this, I'll just point out that this is not an argument in proper logical form. It's a lone assertion followed by a rhetorical question.
2Kawoomba
If they were, uFAI would be a non-issue. (They are not.)
-5pcm
-11TheAncientGeek
0TheAncientGeek
Not being charitable to people isn't a problem, providing you don't mistake your lack of charity for evidence that they are stupid or irrational.
4Solvent
That's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.

Three somewhat disconnected responses —

For a moral realist, moral disagreements are factual disagreements.

I'm not sure that humans can actually have radically different terminal values from one another; but then, I'm also not sure that humans have terminal values.

It seems to me that "deontologist" and "consequentialist" refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. ("Moral responses" are things like approving, disapproving, praising, punishing, feeling pride or guilt, and so on. They are adaptations being executed, not optimized reflections of fundamental values.)

2blacktrance
The danger of this approach is obvious, but it can have benefits as well. You may not know that a particular LessWronger is sane, but you do know that on average LessWrong has higher sanity than the general population. That's a reason to be more charitable.
8[anonymous]
Besides which, we're human beings, not fully-rational Bayesian agents by mathematical construction. Trying to pretend to reason like a computer is a pointless exercise when compared to actually talking things out the human way, and thus ensuring (the human way) that all parties leave better-informed than they arrived.
6elharo
FYI, IQ, whatever it measures, has little to no correlation with either epistemic or instrumental rationality. For extensive discussion of this topic, see Keith Stanovich's What Intelligence Tests Miss. In brief, intelligence (as measured by an IQ test), epistemic rationality (the ability to form correct models of the world), and instrumental rationality (the ability to define and carry out effective plans for achieving one's goals) are three different things. A high score on an IQ test does not correlate with enhanced epistemic or instrumental rationality.

For examples of the lack of correlation between IQ and epistemic rationality, consider the very smart folks you have likely met who have gotten themselves wrapped up in incredibly complex and intellectually challenging belief systems that do not match the world we live in: Objectivism, Larouchism, Scientology, apologetics, etc. For examples of the lack of correlation between IQ and instrumental rationality, consider the very smart folks you have likely met who cannot get out of their parents' basement, and whose impact on the world is limited to posting long threads on Internet forums and playing WoW.
2Kaj_Sotala
LW discussion.
[-]shware370

A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.

SlateStarCodex

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

[-]JWP170

Identifying as a "rationalist" is encouraged by the welcome post.

> We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist

Edited the most recent welcome post and the post of mine that it linked to.

Does anyone have a 1-syllable synonym for 'aspiring'? It seems like we need to impose better discipline on this for official posts.

8somervta
Consider "how you came to aspire to rationality/be a rationalist" instead of "identify as an aspiring rationalist". Or, can the identity language and switch to "how you came to be interested in rationality".
3CCC
Looking at a thesaurus, "would-be" may be a suitable synonym. Other alternatives include 'budding', or maybe 'keen'.
2Bugmaster
FWIW, "aspiring rationalist" always sounded quite similar to "Aspiring Champion" to my ears. That said, why do we need to use any syllables at all to say "aspiring rationalist"? Do we have some sort of secret rite or trial that an aspiring rationalist must pass in order to become a true rationalist? If I have to ask, does that mean I'm not a rationalist? :-/
2wwa
demirationalist: on one hand, something already above average, as in demigod; on the other, it leaves the "not quite there" feeling. My second best was epirationalist. I didn't find anything better, in my opinion, but in case you want to give it a (somewhat cheap) shot yourself... I just looped over this
0brazil84
The only thing I can think of is "na", e.g. in Dune, Feyd-Rautha was the "na-baron," meaning that he had been nominated to succeed the baron. (And in the story he certainly was aspiring to be Baron.) Not quite what you are asking for, but not too far either.
4Oscar_Cunningham
And the phrase "how you came to identify as a rationalist" links to the very page where in the comments Robin Hanson suggests not using the term "rationalist", and the alternative "aspiring rationalist" is suggested!

> People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yourself to be perfectly rational. That's the sense of rationalist that goes back at least as far as Bertrand Russell. In the first paragraph of his "Why I Am A Rationalist", for example, Russell identifies as a rationalist but also says, "We are not yet, and I suppose men and women never will be, completely rational."

This also seems like it would be a futile linguistic fight. A better solution might be to consciously avoid using "rationalist" when talking about Aumann's agreement theorem—use "ideal rationalists" or "perfect rationalists". I also tend to use phrases like "members of the online rationalist community," but that's more to indicate I'm not talking about Russell or Dawkins (much less Descartes).

8Nornagest
The -ist suffix can mean several things in English. There's the sense of "practitioner of [an art or science, or the use of a tool]" (dentist, cellist). There's "[habitual?] perpetrator of" or "participant in [an act]" (duelist, arsonist). And then there's "adherent of [an ideology, doctrine, or teacher]" (theist, Marxist). Seems to me that the problem has to do with equivocation between these senses as much as with the lack of an "aspiring". And personally, I'm a lot more comfortable with the first sense than the others; you can after all be a bad dentist. Perhaps we should distinguish between rationaledores and rationalistas? Spanglish, but you get the picture.
2polymathwannabe
The -dor suffix is only added to verbs. The Spanish word would be razonadores ("ratiocinators").
0Vaniver
"Reasoner" captures this sense of "someone who does an act," but not quite the "practitioner" sense, and it does a poor job of pointing at the cluster we want to point at.
6A1987dM
Recency illusion?

I've recently had to go on (for a few months) some medication which had the side effect of significant cognitive impairment. Let's hand-wavingly equate this side effect to shaving thirty points off my IQ. That's what it felt like from the inside.

While on the medication, I constantly felt the need to idiot-proof my own life, to protect myself from the mistakes that my future self would certainly make. My ability to just trust myself to make good decisions in the future was removed.

This had far more ramifications than I can go into in a brief comment, but I can generalize by saying that I was forced to plan more carefully, to slow down, to double-check my work. Unable to think as deeply into problems in a freewheeling cognitive fashion, I was forced to break them down carefully on paper and understand that anything I didn't write down would be forgotten.

Basically what I'm trying to say is that being stupider probably forced me to be more rational.

When I went off the medication, I felt my old self waking up again, the size of concepts I could manipulate growing until I could once again comprehend and work on programs I had written before starting the drugs in the first place. I... (read more)

7John_Maxwell
What medication?

One thing I hear you saying here is, "We shouldn't build social institutions and norms on the assumption that members of our in-group are unusually rational." This seems right, and obviously so. We should expect people here to be humans and to have the usual human needs for community, assurance, social pleasantries, and so on; as well as the usual human flaws of defensiveness, in-group biases, self-serving biases, motivated skepticism, and so on.

Putting on the "defensive LW phyggist" hat: Eliezer pointed out a long time ago that knowing about biases can hurt people, and the "clever arguer" is a negative trope throughout that swath of the sequences. The concerns you're raising aren't really news here ...

Taking the hat off again: ... but it's a good idea to remind people of them, anyway!


Regarding jargon: I don't think the "jargon as membership signaling" approach can be taken very far. Sure, signaling is one factor, but there are others, such as —

  • Jargon as context marker. By using jargon that we share, I indicate that I will understand references to concepts that we also share. This is distinct from signaling that we are social allies; it
... (read more)