A long blog post explains why the author, a feminist, is not comfortable with the rationalist community despite thinking it is "super cool and interesting". It's directed specifically at Yvain, but it's probably general enough to be of some interest here.


I'm not sure if I can summarize this fairly but the main thrust seems to be that we are overly willing to entertain offensive/taboo/hurtful ideas and this drives off many types of people. Here's a quote:

In other words, prizing discourse without limitations (I tried to find a convenient analogy for said limitations and failed. Fenders? Safety belts?) will result in an environment in which people are more comfortable speaking the more social privilege they hold.

The author perceives a link between LW type open discourse and danger to minority groups. I'm not sure whether that's true or not. Take race. Many LWers are willing to entertain ideas about the existence and possible importance of average group differences in psychological traits. So, maybe LWers are racists. But they're racists who continually obsess over optimizing their philanthropic contributions to African charities. So, maybe not racists in a dangerous way?

An overly rosy view, perhaps, and I don't want to deny the reality of the blogger's experience. Clearly, the person is intelligent and attracted to some aspects of LW discourse while turned off by other aspects.

882 comments

Since it has suddenly become relevant, here are two results from this year's survey (data still being collected):

When asked to rate feminism on a scale of 1 (very unfavorable) to 5 (very favorable), the most common answer was 5 and the least common answer was 1. The mean answer was 3.82, and the median answer was 4.

When asked to rate the social justice movement on a scale of 1 (very unfavorable) to 5 (very favorable), the most common answer was 5 and the least common answer was 1. The mean answer was 3.61, and the median answer was 4.

In Crowder-Meyer (2007), women asked to rate their favorability of feminism on a 1 to 100 scale averaged 52.5, which on my 1 to 5 scale corresponds to a 3.1. So the average Less Wronger is about 33% more favorably disposed towards the feminist movement than the average woman (who herself is slightly more favorably disposed than the average man).
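A quick sanity check of that arithmetic (my own reconstruction — the comment does not give the exact formulas, so the linear rescaling and the measured-above-the-scale-floor comparison are assumptions):

```python
def rescale(x, lo_in=1, hi_in=100, lo_out=1, hi_out=5):
    """Linearly map x from [lo_in, hi_in] onto [lo_out, hi_out]."""
    return lo_out + (x - lo_in) * (hi_out - lo_out) / (hi_in - lo_in)

# 52.5 on a 1-100 scale lands near 3.1 on a 1-5 scale
print(round(rescale(52.5), 1))  # 3.1

# The "about 33% more favorably disposed" figure only works out if
# scores are measured above the scale floor of 1:
print(round((3.82 - 1) / (3.1 - 1), 2))  # 1.34, i.e. ~33-34% higher
```

Measured as raw ratios (3.82 / 3.1 ≈ 1.23) the gap would be closer to 23%, so the floor-relative reading seems to be the intended one.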

I can't find a similar comparison question for social justice favorability, but I expect such a comparison would turn out the same way.

If this surprises you, update your model.

the average Less Wronger is about 33% more favorably disposed towards the feminist movement than the average woman

Maybe that's exactly what makes LW a good target. There are too many targets on the internet, and one has to pick one's battles. The best place is the one where you already have support. If someone wrote a similar article about a website with no feminists, no one on that website would care. Thus, wasted time.

In the same way, it is more strategic to aim this kind of criticism towards you personally than it would be e.g. towards me. Not because you are a worse person (from a feminist point of view). But because such criticism will worry you, while I would just laugh.

There is something extremely irritating about a person who almost agrees with you, and yet refuses to accept everything you say. Sometimes you get angry about them more than about your enemies, whose existence you already learned to accept. At least, the enemies are compatible with the "us versus them" dichotomy, while the almost-allies make it feel like the "us" side is falling apart.

EDIT: Seems like you already know this.

"A heretic is someone who shares almost all of your beliefs. Kill him." - Some card game

There is something extremely irritating about a person who almost agrees with you, and yet refuses to accept everything you say. Sometimes you get angry about them more than about your enemies, whose existence you already learned to accept. At least, the enemies are compatible with the "us versus them" dichotomy, while the almost-allies make it feel like the "us" side is falling apart.

Upvoted for that.


In my experience, groups that want something to attack will attack groups that are generally aligned with them, rather than groups that are further away -- possibly due to the perceived threat of losing members to the similar group.

I've seen so many Communists get called Nazis by other Communist groups -- and those groups never go after people who actually call themselves Nazis.


Perhaps this is obvious already, but the positions people explicitly endorse on surveys are not necessarily those they implicitly endorse in blog comments.

Also, people are free to interpret blog comments as it suits their goals.


Anyone want to set up an implicit association test for LW?


Update: Likely that feminist-inclined LWers are less likely to comment/vote and more likely to take surveys.

Meta-update: This hypothesis is now ruled highly improbable based on more data from Yvain.

Among lurkers, the average feminism score was 3.84. Among people who had posted something - whether a post on Main, a post in Discussion, or a comment - the average feminism score was 3.8. A t-test failed to reveal any significant difference between the two (p = .49). So there is no detectable difference between lurkers and posters in feminism score.

Among people who have never posted a top-level article in Main, the average feminism score is 3.84. Among people who have posted top-level articles in Main, the average feminism score is 3.47. A t-test found a significant difference (p < .01). So top-level posters were slightly less feminist than the Less Wrong average. However, the average feminism of top-level posters (3.47) is still significantly higher than the average feminism among women (3.1).
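For readers curious what the quoted t-tests involve: a minimal sketch of a two-sample comparison of this kind, in pure Python. The scores below are illustrative stand-ins, not the actual survey data, and the normal approximation to the t distribution is only reasonable for large samples.

```python
from statistics import NormalDist, mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic with a large-sample (normal) p-value."""
    # variance() is the sample variance, as wanted here
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    t = (mean(a) - mean(b)) / se
    p = 2 * (1 - NormalDist().cdf(abs(t)))  # two-sided
    return t, p

# Illustrative scores on a 1-5 scale (not real survey data)
group_a = [3, 4, 4, 5, 4, 3, 5, 4]
group_b = [3, 3, 4, 4, 3, 4, 3, 4]
t, p = welch_t(group_a, group_b)
```

With real survey data one would more likely reach for something like `scipy.stats.ttest_ind`, which uses the exact t distribution rather than the normal approximation.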


I update in the direction that the model of people I form based on LW comments is pretty inaccurate.

My conclusion is that most posters in LW have conventionally liberal views (at least on social issues) but many of them refrain from participating in the periodic discussions that erupt touching on these issues. Some possible reasons for this: i) they hold these opinions in a non-passionate way that does not incline them to argue for them; ii) they are more interested in other stuff LW has to offer like logic or futurism and see politics as a distraction; iii) they mistakenly believe their opinions are unpopular and they will suffer a karma hit.
iv) they absorbed these views from their surrounding culture and don't actually have good arguments for them.
I agree that this is a very plausible possibility as well. However, IADBOC for two reasons. First, a large part of views like "feminism" and "social justice" are plausibly terminal values. These terminal values are probably absorbed from the surrounding culture, but it is not clear how they could be argued for against someone who held opposite values. In addition, for the descriptive components of these views, "most people hold them absorbed from general culture and can't argue for them" is not correlated with "unjustified, untrue beliefs". The same description would apply to most ordinary scientific beliefs held by non-experts.
But is, as Yvain has explained on his blog, more likely to be associated with true or at least reasonable beliefs. Reasonable beliefs are more likely to become commonly accepted beliefs, and most people who hold commonly accepted beliefs absorbed them from general culture and have never seen a need to make sound arguments for them.
Observe that this argument applies even more strongly to beliefs that have lasted a long time. In particular it applies much more strongly to religion.
I don't think that that is an important distinction. Most of the effect I was talking about is that it is easier for something reasonable (something with a relatively large probability of being true) to make the jump from controversial belief to generally accepted belief. Once something is generally accepted and people stop arguing about it, there is no strong mechanism rejecting false beliefs. To the contrary, new beliefs can seem more reasonable by being associated with previously accepted beliefs, so beliefs in clusters of strongly held beliefs such as religions and certain ideologies are less likely to be true than the first belief in the cluster to become generally accepted.
Memetic evolution. The fact that a belief has survived for a long time, and survived the rise and fall of civilizations, is evidence in its favor.
Disagree here. Unless your terminal values include things like "everyone believing X regardless of its truth value" or "making everyone as equal as possible even at the cost of making everyone worse off", the SJ policy proposals don't actually promote the terminal values they claim to support. One could equally well claim that opposition to cryonics is based on terminal values. Or, for that matter, religious views held by non-theologian theists.
Your model of Feminism/SJ differs from mine. Most of the cluster of my-model-of-SJ-space consists of the terminal value "people should not face barriers to doing what they want to do on account of factors orthogonal to that goal" (which I endorse). My model of SJ also includes (as a smaller component) the terminal value "no one should believe there are correlations between race/sex/gender and any other attribute or characteristic", which I don't endorse.
What kind of factors count as "orthogonal to that goal"? If my goal is to become a physicist, say, does the fact that I'm not very intelligent count as an "orthogonal factor"? If the answer is no, then this is one form of my claim that they are trying to make everyone as equal as possible even at the cost of making everyone worse off. If the answer is yes, the question arises what their objection is to some disciplines having demographics that differ from the general population. Given that they tend to take this as ipso facto evidence of racism/sexism/etc., this shows that denial of correlations between race/sex and other attributes is in fact much more central to their belief system than you seem to think. BTW, the other form of my claim can be seen in the following situation: You need to choose between three candidates A, B and C for a position. You know that A is qualified and that one of B or C is also qualified (possibly slightly more qualified than A) but the other is extremely unqualified (as it happens B is the qualified one, but you don't know that). However, for reasons beyond either B or C's control it is very hard to check which of them is the qualified one. Does hiring A, even though this is clearly unfair to B, count as "creating a barrier orthogonal to the goal"?
No. If "they" believe that. If you know of a large number of people who believe this, I am not aware of them. Hiring isn't creating the barrier; the barrier - the inability to determine which candidate is qualified - is already there.
Did you mean to say "Yes" and get confused by the double negative? (That would be more consistent with the rest of your comment.) I never said they believed that, at most they alieve that. My claim is that is what you get if you try to steel man their position as based on terminal values rather than factual confusion.
Confused: There doesn't appear to be a double-negative. If you're not very intelligent, that is relevant to your physicist aspirations. It is not orthogonal. I do not understand how your description is a steel man. It may be an attempt to extrapolate instrumental values from a certain set of terminal values, but that doesn't help us in our matter-of-fact disagreement about the terminal values of the SJ cluster. If you want to steel man social justice, substitute the entire works of John Rawls.
Sorry, my mistake. The part of his work that I have read consisted of him making a social contract-type argument saying that since the contract must be made before risk preferences (i.e., whether one is risk averse or risk loving) are assigned, we should treat everyone as maximally risk averse. There was also some talk about utility that mostly consisted of him misunderstanding the concept. This did not leave me particularly inclined to read the rest.
Could you talk a little more about/give an example of what you have in mind here?
In my case it's something similar to (ii)... I often feel that arguing in favor of my views will not be a useful contribution to the discussions that periodically erupt on these issues, so I don't. (Sometimes I do.)

Possible, but I suspect the "Why our kind can't cooperate" both has a stronger effect and is more likely.

Indeed. I weep to imagine what the author of the linked article would think of us if she decided to check out the discussion her piece had inspired.
Would love to see these numbers broken down by gender.

For the sake of simplicity, I used sex rather than gender and ignored nonbinaries. The average man on the site has a feminism approval score of 3.75; the average woman on the site has a score of 4.40. These are significantly different at p < .001.

The average man on the site has a social justice approval score of 3.55; the average woman on the site has a score of 4.21. These are, again, significantly different at p < .001.

Wow, this is exactly opposite of what I expected. Thank you!
Scott Alexander
You expected men to be more feminist than women? Why?
Because the Internet is weird? I've seen conversations in which the only feminists were men and the only MRAs were women. (Myself, I expected the difference to have the same sign but be an order of magnitude smaller.) BTW, FWIW in the survey on your blog men thought that being a woman is 3% worse than being a man and women thought that being a man is 3% better than being a woman, though the exact numbers varied noticeably depending on which question exactly they were answering.
Do you mean that this specific demographic difference is "weird" on the internet relative to real life?
Perhaps what he expected was for men to call themselves more feminist than women, for some sort of signalling reasons (of course anon survey responses aren't much use for signalling, but maybe the idea is that people get into the habit of describing themselves in particular ways and then continue to do so for consistency even in contexts where there's no signalling benefit).
They are if you signal for the group and expect other people do the same.
I'm not sure about that. To my System 1, “50/100” means ‘mediocre’, whereas “3 stars (out of 5)” means ‘decent’.
Offtopic, but ETA on the survey results being published?
Scott Alexander
Probably before the end of this month.
How big is the probability?
Updating to "LW is somehow the inverse of Western populations in general, among which support for feminist policies tends to be far more widespread than support of feminism-as-identity."

I think it's worth noting that we are (yet again) having a self-criticism session because a leftist (someone so far to the left that they consider liberal egalitarian Yvain to be beyond the pale of tolerability) complained that people who disagree with them are occasionally tolerated on LW.

Come on. Politics is rarely discussed here to begin with and something like 65*% of LWers are liberals/socialists. If the occasional non-leftist thought that slips through the cracks of karma-hiding and (more importantly) self-censorship is enough to drive you away, you probably have very little to offer.

*I originally said 80%, but I checked the survey and it's closer to 65%. I think my point still stands. Only 3% of LWers surveyed described themselves as conservatives.

Only 3% of LWers surveyed described themselves as conservatives.

Interesting. I wonder why LW has so few conservatives. Surely, just like there isn't masculine rationality and feminine rationality, there shouldn't be conservative rationality and liberal rationality. It also makes me wonder how valid the objections in the linked post are if the political views of LW skew so far away from conservatism.

Full disclosure: I'm a black male who grew up in the inner city and I don't find anything particularly offensive about topics on LW. There goes my opposing anecdote to the one(s) presented in the linked blog.

At a guess, I'd say this is linked to religion. Once you split out the libertarian faction (as the surveys historically have), it's quite rare for people on the conservative side of the fence (at least in the US) to be irreligious, and LW is nothing if not outspokenly secular.

People in the rationality community tend to believe that there's a lot of low-hanging fruit to be had in thinking rationally, and that the average person and the average society is missing out on this. This is difficult to reconcile with arguments for tradition and being cautious about rapid change, which is the heart of (old school) conservatism.

I think futurism is anti-conservative.

My steelman of the conservative position is 'empirical legislation': do not make new laws until you have decent evidence they achieve the stated policy goals. "Ah, but while you are gathering your proof, the bad thing X is still happening!" "Too bad."

FAI is a conservative position.

To respond to the grandparent, I think in the US conservatives have ceded all intellectual ground, and conservatism is therefore not a sexy position to adopt. (If this is true, I think one should view this as a bad thing regardless of one's political affiliation, because a 'loyal opposition' is needed to sharpen teeth.)

There is a big difference between what sex you are and what beliefs you profess: The first should not be determined by how rational you are, while the second very much should. There should be nothing surprising about the fact that more intelligent and more rational people would have different beliefs about reality than less intelligent and less rational people. Or to put it another way: If you believe that all political affiliations should be represented equally in the sceptic/rationalist community, you are implicitly assuming that political beliefs are merely statements of personal preference instead of seeing them as claims about reality. While personal preference plays a role, I would hope that there's more to it than that.

There is a big difference between what sex you are and what beliefs you profess: The first should not have anything to do with how rational you are...

Why not? Men and women are different in many ways. Why did you decide that a disposition to rationality can't possibly depend on your sex (and so your hormones, etc.)?

It's in reply to Quinton saying that there should be no masculine and feminine types of rationality. In other words, whether you are a man or a woman should not determine what the correct/rational answer is to a particular question (barring obvious exceptions). This is in stark contrast to asking whether or not political affiliation should be determined by how rational you are, which is another question entirely. In other words: Just because correct answers to factual questions should not be determined by gender does not mean that political affiliation should not be determined by correct answers to factual questions.
I think political differences come down to values moreso than beliefs about facts. Rationalism doesn't dictate terminal values.

I think political differences come down to values moreso than beliefs about facts.

Sometimes it is difficult to tell what is a genuinely different value and what is essentially the same value seen through different models.

For example, two people can share the value "it would be bad to destroy humanity", but one of them has a model on which humanity will likely destroy itself through ongoing capitalism, while the other has a model on which humanity would likely be destroyed by some totalitarian movement like communism.

But instead of openly discussing their models and finding the difference, the former will accuse the latter of not caring about human suffering, and the latter will accuse the former of not caring about human suffering. Or they will focus on different applause lights, just to emphasise how different they are.

I probably underestimate the difference of values. Some people are psychopaths; and they might not be the only different group of people. But it seems to me that a lot of political mindkilling is connected with overestimating the difference, instead of admitting that our values in connection with a different model of the world would lead to different decisions. (Because our val...

I agree, though I'll add that what facts people find plausible are shaped by their values.
Perfect information scenarios are useful in clarifying some cases, I suppose (and let's go with the non-humanity-destroying option every time), but I don't find them to map too closely to actual situations. I'm not sure I can aptly articulate my intuition here. By differences in values, I don't really think people will differ so much as to have much difference in terminal values, should they each make a list of everything they would want in a perfect world (barring outliers). But the relative weights that people place on them, while differing only slightly, may end up suggesting quite different policy proposals, especially in a world of imperfect information, even if each is interested in using reason. But I'll concede that some ideologies are much more comfortable with utilitarian analysis versus more rigid imperatives that are more likely to yield consistent results.
I'm always a little suspicious of this line of thinking. Partly because the terminal/instrumental value division isn't very clean in humans -- since more deeply ingrained values are harder to break regardless of their centrality, and we don't have very good introspective access to value relationships, it's remarkably difficult to unambiguously nail down any terminal values in real people. Never mind figuring out where they differ. But more importantly, it's just too convenient: if you and your political enemies have different fundamental values, you've just managed to absolve yourself of any responsibility for argument. That's not connotationally the same as saying the people you disagree with are all evil mutants or hapless dupes, but it's functionally pretty damn close. That doesn't prove it wrong, of course, but I do think it's grounds for caution.
How about different factions (landowners, truck drivers, soldiers, immigrants, etc.) all advocating their own interests? Doesn't that count as "different values"? Or, more simply, I value myself and my family, you value yourself and your family, so we have different values. Ideologies are just a more general and complicated form.
Well, it depends what you mean by values. I was mainly discussing Randy_M's comment that rationalism doesn't dictate terminal values; while different perspectives probably mean the evolution of different value systems even given identical hardwiring, that doesn't necessarily reflect different terminal values. Those don't reflect preferences but rather the algorithm by which preferences evolve; and self-interest is one module of that, not seven billion.
No, I think people can be persuaded on terminal values, although to an extent that modifies my response above; rationality will tell you that certain values are more likely to conflict, and noticing internal contradictions -- pitting two values against each other -- is one way to convince someone to alter, or just adjust the relative worth of, their terminal values. Due to the complexity of social reality I don't think you are going to find too many people with beliefs that are perfectly consistent; that is, any mainstream political affiliation is unlikely to be a shining paragon of coherence and logical progression built upon core principles relative to its competitors. But demonstrate with examples if I'm wrong.
If you can persuade someone to alter (not merely ignore) a value they believe to have been terminal, that's good evidence that it wasn't a terminal value.
This is only true if you think humans actually hold coherent values that are internally designated as "terminal" or "instrumental". Humans only ever even designate statements as terminal values once you introduce them to the concept.
I don't think we disagree. To clarify, I suspect most neurotypical humans may possess features of ethical development which map reasonably well to the notion of terminal values, although we don't know their details (if we did, we'd be most of the way to solving ethics) or the extent to which they're shared. I also believe that almost everyone who professes some particular terminal (fundamental, immutable) value is wrong, as evidenced by the fact that these not infrequently change.
If terminal values are definitionally immutable, then I used the wrong term.
"The first should not have anything to do with how rational you are, while the second very much should. " What does should mean there, and from where do you derive it?
But it might affect how rational you are.
It's possible. Why are you bringing it up, though? As an aspiring rationalist, I believe it should be possible in principle to discuss whether one sex is more rational than the other, on average. However, it makes me feel uncomfortable that a considerable number of people here feel the need to inject the topic into a conversation where it's not really relevant. If I were a woman, I can imagine I would feel more hesitant to participate on Less Wrong as a result of this, and that would be a pity.
It's an interesting topic, the more so because it is taboo, and not exactly tangential to the subject, I think.
Compare with Cosma Shalizi on the heritability of IQ (emphasis mine):

Here, my honest answer would be that I presently have no evidence one way or the other.

At this point I would have to conclude that the guy is either very deliberately blind or is lying through his teeth.

He, of course, knows very well what the consequences for his career and social life would be were he to admit the unspeakable.

You're wrong.

First, about the consequences: the theatrics of the "unspeakable" are getting a little tiresome. Shalizi is a statistics professor at Carnegie Mellon. The "Mainstream Science on Intelligence" statement was signed by 52 professors and included very clear statements about interracial IQ differences, lack of culture bias, and explicit heritability estimates. I would ask you to name the supposedly inescapable and grave "consequences for career and social life" these 52 professors brought on their heads.

Second, about the subject matter: this quote comes at the end of a long post in which Shalizi challenges the accepted estimates of IQ heritability, and criticizes at length the frequent but confused interpretation of heritability as lack of malleability. In his next post on the subject, he criticizes the notion of a single g factor as standing on shaky ground, having been inferred by intelligence researchers on the basis of factor analysis that is known to statisticians to be inadequate for such a conclusion. Basically, Shalizi criticizes the statistical foundations employed by IQ researchers as unsound, and he carries out this critique on a muc...

That flat and unconditional statement seems to be mismatched with your sentence a bit later. Given that you say you lack the capability to "assess it intelligently on my own", and given that I don't see the basis on which you decide I am statistically incompetent, I am rather curious why you decided that I am wrong. Especially given that I was talking about my personal conclusions and not stating a falsifiable fact about reality. P.S. Oh, and the bit about consequences for career? Try Blits, Jan H., "The Silenced Partner: Linda Gottfredson and the University of Delaware".
You're wrong because your conclusion that Shalizi was either blind or lying rested on two premises: one, that heritability in racial IQ differences has been proven, and two, that for Shalizi to admit this fact would be uttering the "unspeakable" and would carry severe social and career-wise consequences. I wrote a detailed explanation about the way Shalizi challenges the first premise on statistical grounds, in the field where he's an expert (and in a way that's neither blind nor dishonest, albeit it could be wrong). I gave an example that illustrates that the second premise is wildly exaggerated, especially when applied to an academic such as Shalizi. That's why you are wrong. Your response was to twist my words into a claim that you are "statistically incompetent", where in fact I emphasized that Shalizi's critique was on a deep technical level, and that I myself lacked knowledge to assess it. That is cheap emotional manipulation. You also cited a paper about Gottfredson that wasn't relevant to what I said. Given this unpromising situation, I'm sure you'll understand if I neglect to address further responses of that kind.
How could you possibly do that for a subject about which you said that "most of this goes over my head"? Short memory, too. Your words: "I doubt, however, that your dismissal of Shalizi's honesty is based on a solid understanding of the arguments in this debate about statistical foundations of IQ research." Oh, I'm the understanding kind :-P
That's a locked-up paper printed in a journal operated by a political advocacy group. Linda Gottfredson doesn't seem to have been "silenced", though. (But I have a libertarian, rather than a left/right partisan, view on that concept. Someone who takes grants from wealthy ideological supporters instead of from government institutions is not thereby silenced; on the contrary, that would seem pretty darn liberating.)

The "Look Inside" button will give you the first two pages. I am not sure why the publisher of the journal is relevant unless you're going to claim the paper is an outright lie.

It's evidence. Are you advising to ignore it? Argument from authority is fallacious but reversed stupidity is not intelligence.

It's evidence.

It's evidence of what? That the paper fits well with the ideological orientation of the journal? Sure, but I'm not interested in that. Is it evidence that the paper incorrectly describes the relevant facts? I don't think so.

Oh, I see. Thanks for the pointer. The paper is from 1991 and seems to be about something that happened between 1988 and Gottfredson receiving a full professorship from U. Delaware in 1990? I'm not clear on the story there. But so far I'm not seeing silencing — just controversy and a question of whether the governors of an institution would choose to associate with a particular wealthy donor. But again, I'll admit I'm coming from a libertarian background — I see a big difference between what I'd call silencing (e.g. violence or threats of violence to get someone to stop speaking their views) and withdrawing association (e.g. choosing not to cooperate with someone on account of their views). The former is really scarily common, especially in online discourse today, so I'm kinda sensitive on that. :( That's all complicated again by it being a government university involved, but except in really politicized cases that usually doesn't affect the way the institution operates internally all that much.
Not quite. My reading is that Gottfredson was explicitly prohibited from accepting funding coming from the Pioneer Fund. I agree that this is not true silencing, but I do not wish to defend the title of the article, anyway. It's just a result of a quick Google search for "consequences" to holding, um, non-mainstream views on race and intelligence. Here is another example.
What you & Anatoly_Vorobey have quoted is talking about heritable IQ differences between individuals ("who do not have significant developmental disorders"). Is it possible you're conflating that with talking about heritable IQ differences between races or sexes? That you use the word "unspeakable" suggests you are, as does the fact that your two cases of scientists suffering career consequences (Gottfredson & Cattell) are cases where they suggested genetic racial differences as well as genetic individual differences. (In fact, if I remember rightly, both went further and inferred likely policy implications of genetic racial differences.)
That's a good point; I think the two issues got a bit conflated in the discussion here. However, I can't help but see it as reinforcing my scepticism. My impression is that the partial heritability of IQ in individuals is well established. At most you can talk about doubting the evidence or not believing it or something like that. Shalizi says he "has no evidence", which is not credible at all.
Yes, I think it supports your dim view of what Shalizi wrote. I also think it detracts from your implication that he's simply evading saying the "unspeakable", since heritable IQ differences between individuals are a much less contentious topic than heritable racial (or sexual) IQ differences.
As reasonable as that person sounds, I feel the need to point out that IQ differences between races have little or nothing to do with IQ differences between sexes (and even less with rationality, but I guess we gravitated away from that). Even if there is a "stupid gene", to phrase it very dumbly, there is still no reason to believe that someone with 2 X chromosomes would inherit this gene while someone with the same parents but with a Y chromosome would not. If you (or anyone) want to argue that women naturally have lower IQ than men, I would go with an argument based on hormones instead. Sounds much more plausible to me.
Where do you think the differences in hormone levels come from?
Food, genes, certain types of activity such as sports and competitiveness in general, the environment you grow up in, being in a position of authority, to name some factors that influence hormone production. It's certainly not just the gender divide. If you think that testosterone makes men smarter than women on average, you would also have to accept the conclusion that women with more testosterone than men will be smarter than men on average. All other things being equal, of course.

Testosterone levels in men and women are in completely different ballparks, and there is no overlap in healthy individuals of the different sexes beyond puberty. This would make me think the difference is mainly genetic.

I'm not arguing for anything beyond this point, so we don't have to go there.

I stand corrected on the testosterone levels: The difference is indeed greater than I thought. I will accept that the difference is mainly, but certainly not solely, genetic.
You are absolutely correct on the facts, and in a saner world I could leave it at that, but you seem to have missed an unspoken part of the argument: the common factor isn't genetics per se but rather an appeal to inherent nature. Whether that nature is the genetic legacy of selection for vastly different ancestral environments or due to the epigenetics of sexual dimorphism is very important in a scientific sense, but not in the metaphysical sense of presenting a challenge to the ideals of "equality" or the "psychic unity of mankind." When Dr Shalizi writes the rhetorical question "why it is so important to you that IQ be heritable and unchangeable?" in the context of "'human equality' and 'genetic identity'", his tone is not that of scientific skepticism of an unproven claim but rather an apologetic defense of an embattled creed. Really, why is it so important to you what the truth is? After all, we don't have any evidence to suggest that the doctrines are wrong, so why not just repeat the cant like everyone else? Who else but a heretic would feel the need to ask uncomfortable questions? For the most part, scientists writing against the hereditarian position don't bother debating the facts anymore; now that actual genetic evidence is starting to come out, they know it'll just make them look foolish in a few years, and the psychometric evidence has survived four decades of concentrated attack already. It's all about implications and responsibility now, or in other words the lie is too big to fail. It's hardly important to them whether the truth at hand is a genetic or a hormonal inequality; they just want it to go away.
I think you misinterpret Dr Shalizi, and do him a disservice. I think his answer is perfectly reasonable from a Bayesian point of view. Basically, I see three common reasons to spend time researching differences between races:

A) People who are genuinely interested in the answer, for pragmatic or intellectual reasons

B) People who are racists and want to hear a particular answer that fits their preconceived views

C) People who are trying to be controversial/contrarian/want to provoke people

Certainly there are people who are genuinely curious about the answer, purely for intellectual reasons (A). I am somewhat interested myself. However, the fact of the matter is that many others are interested purely for racist reasons (B). Many racists aren't open in their racism, and as such mask their racism as honest scientific inquiry, making B indistinguishable from A. Showing interest in the subject is therefore Bayesian evidence for B as much as it is for A. Even worse is the fact that everyone knows that everyone realizes this on an intuitive level, which causes most As to shut up for fear of being identified as Bs, while Bs continue what they are doing. This serves to compound the effect. Meanwhile, Cs arise expressly because it is a hot-button topic.

As a result it is entirely rational to conclude that someone who is constantly yelling about race and inserting the subject into other conversations is more likely to be a racist on average than others. And of course, it's incredibly frustrating if you are an A and just want an honest conversation about the subject, which is now impossible (thanks, politics!). I think Shalizi deals with this messed-up situation admirably: making clear what he believes while doing everything to avoid sounding controversial or giving fuel to racists. Of course this doesn't work very well, because people who call others racist fall into two categories themselves:

D) People who are genuinely worried about the dangerous effects of racist claims …

I think that the fact that there is a debate, and that the "good guys" use name-calling instead of scientific arguments, also increases the number of people in group A.

It's a bit like telling people not to think of an elephant, and then justifying it by saying that elephant-haters are most obsessed with elephants, therefore thinking of an elephant is evidence of being an evil person. Well, as soon as you told everyone not to think of an elephant, this stopped being true.

Actually, it is more like not being allowed to talk about the elephant (...in the room. See what I did there?). Not talking about a subject is much easier than not thinking about it. And because everybody knows that talking about the elephant will cause you to be called an elephant-hater, and nothing good whatsoever will come of it in 95% of cases, the only people who continue to talk about elephants are people who care so strongly about the subject that they are willing to be called elephant-haters just so that they can be heard. So that leaves people who really hate elephants and people who really can't stand being told that they're not allowed to say something (and super-dedicated elephant scientists, I guess, but there aren't very many of those).
The most difficult part of not talking about the elephant is when someone suddenly says: "There is no elephant in this room, and we all know it, don't we?", interpreting the rule as forbidding talk about the elephant, but not about the absence of the elephant. Specifically, if there is a rule against mentioning genetic differences -- and the goal is to avoid the discussion about genetics, not to assert that there are no differences -- the rule should equally forbid saying that there are genetic differences and that there aren't genetic differences. The rule should make very clear whether its intent is to 1) stop both sides of the debate, or 2) stop only one side of the debate, letting the other side win. Both options make sense, but it is difficult to follow the rule when it is not clear which of these two options was meant.
In the same sense that showing interest in medicine is Bayesian evidence for me wanting to poison my neighbors.
I'd say that the percentage of people showing interest in medicine who want to poison their neighbours is rather lower than the percentage of people talking about genetic differences between races who are racists.
That depends on the definition of "racist" used.
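The "interest as Bayesian evidence" argument above can be made concrete with a toy Bayes'-rule calculation. All the numbers below are invented purely for illustration (they are not from the thread or from any survey); the point is only the structure: interest updates P(B) in proportion to the likelihood ratio, and the medicine analogy corresponds to a likelihood ratio close to 1.

```python
# Toy sketch of the "interest as evidence" argument. The prior and the
# likelihoods are made-up numbers chosen only to illustrate the mechanism.

def posterior(prior_b, p_interest_given_b, p_interest_given_not_b):
    """P(B | shows interest), computed via Bayes' rule."""
    p_interest = (prior_b * p_interest_given_b
                  + (1 - prior_b) * p_interest_given_not_b)
    return prior_b * p_interest_given_b / p_interest

# Race case: suppose 5% of people are Bs, Bs nearly always show interest
# in the topic, and non-Bs rarely do. The posterior jumps from 5% to ~49%.
race_case = posterior(0.05, 0.9, 0.05)

# Medicine analogy: interest in medicine is about equally likely whether
# or not you want to poison your neighbours, so the update is negligible.
medicine_case = posterior(0.05, 0.5, 0.5)
```

With a likelihood ratio of 18 the evidence is strong; with a ratio of 1 there is no update at all, which is the disanalogy the reply above is pointing at.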
I read Shalizi differently, as asking something like, "Really, is it because you care about the truth qua truth that you find this particular alleged truth so important?" Far from apologetic, he is — cautiously, because there is a counterfactual gun to his head — going on the offensive, hinting that the people insistently disagreeing with him are motivated by more than unalloyed curiosity. It is not, of course, dispassionate scientific scepticism, but nor is it a defensive crouch. My interpretation could be wrong. Shalizi isn't spelling things out in explicit, objective detail there. But my interpretation rings truer to my gut, and fits better with the fact that his peroration rounds off ten thousand words of blunt and occasionally snarky statistical critique.
Yes, Shalizi was talking about something completely different, but his attitude was similar to yours. He was saying: "sure, I could imagine that it might be so (that there might be a heritable difference), but why are you so invested in believing in that? Why do you fight for it so much?". I meant for my quotation to bolster your case.
Ahhhh, you're right, I completely misunderstood your intent. In that case we are in agreement.
It affects your argument that there is something wrong with having a skewed gender balance here.
Would you predict that the average IQ among LW census responders who self label as conservatives is lower? If so, how strong would you predict the effect to be?
See the penultimate paragraph of this comment, take a look at this, and try to guess whether US::conservatives have higher or lower Openness on average than US::liberals.
LW is a US-centric site. When I saw the option, I assumed it meant the US interpretation of the "conservative" label, which (from Europe) seems impossible to distinguish from batshit crazy. I like to see myself as somewhat conservative, but I even more like to see myself as not batshit crazy.
The definition given in the survey was “Conservative, for example the US Republican Party and UK Tories: traditional values, low taxes, low redistribution of wealth”.
As a US conservative, I can assure you the feeling is mutual, BTW.
Not sure what you mean by that. You feel European conservativism is crazy? You feel the interpretation of US conservatism is crazy? You feel US conservatives are functionally identical to crazy, if not actually so?
I meant that all the mainstream European parties seem crazy.

something like 80% of LWers are liberals/socialists

60%. But yes, it was funny to find out who the evil person was.

Actually, no, it was quite sad. I mean, when reading Yvain's articles, I often feel a deep envy of the peaceful way he can write. I am more likely to jump in and say something aggressive. I would be really proud of myself if I could someday learn to write the way Yvain does. ... Which still would make me just another bad guy. Holy Xenu, what's the point of even trying?


Politics is rarely discussed here to begin with, and something like 65% of LWers are liberals/socialists.

Yes, but people on the far right are disproportionately active in political discussions here, probably because it is one of the very few internet venues where they can air their views to a diverse and intelligent readership without being immediately shouted down as evil. If you actually measured political comments, I suspect you'd find that the explicitly liberal/social ones represent much less than 65%.

I did not know that, thanks!

Turns out I was wrong, according to the 2012 survey only like 65% of LWers are socialist/liberals.

Ok, that sounds much more reasonable.

Apposite criticism. Most worrying excerpt:

...these environments are also self-selecting. In other words, even when the people speaking loudest or most eloquently don’t intentionally discourage participation from people who are not like them / who may be uncomfortable with the terms of the discussion, entertaining ‘politically incorrect’ or potentially harmful ideas out loud, in public (so to speak) signals people who would be impacted by said ideas that they are not welcome.

Self-selection in LessWrong favors people who enjoy speaking dispassionately about sensitive issues, and disfavors people affected by those issues. We risk being an echo-chamber of people who aren't hurt by the problems we discuss.

That said, I have no idea what could be done about it.

I'm not sure that anything should be done about it, at least if we look at it from whole society's perspective. (Or rather, we should try to avoid the echo chamber effect if possible, but not at the cost of reducing dispassionate discussion.) If some places discuss sensitive issues dispassionately, then those places risk becoming echo chambers; but if no place does so, then there won't be any place for dispassionate discussion of those issues. I have a hard time believing that a policy that led to some issue only being discussed in emotionally charged terms would be a net good for society.

Yes, the complaint strikes me as "Stop saying things we don't like, it might lead to disapproved opinions being silenced!"


Wouldn't it be possible to minimize signaling given the same level of dispassionate discussion? That is, discourage use of highly emotionally charged/exosemantically heavy words/phrases if a less charged equivalent exists or can be coined and defined.

Say if you have a word X that means Y plus emotional connotation α and thede/memeplex/identity signaling effect β (not that emotional connotation is detached from the thedish/political/identity-wise context of the reader, of course), there's really no reason to use X instead of Y in dispassionate discussion. To give a concrete example, there's no reason to use 'sluttiness' (denotatively equivalent to 'sexual promiscuity' but carrying a generally negative connotational load, signaling against certain memeplexes/political positions/identities (though ideally readers here would read past the signaling load/repress the negative emotional response), and signaling identification with other positions/identities) instead of 'sexual promiscuity', which means the same thing but sheds all the emotional and thedish/tribal/whatever baggage.

(That shouldn't be read as an endorsement of the reasoning toward the same conclusion in the post, of course.)


I don't believe this is feasible. My impression is that emotional connotations inhere in things, not in words.

Over the decades, society has gone through a whole string of synonyms for "limited intelligence" -- none of which are emotionally neutral. Changing terms from "imbecile" to "retarded" to "developmentally disabled" to "special needs" has just resulted in a steady turnover of playground insults. You can't make an insulting concept emotionally neutral, I think.


The two aren't contradictory: emotional connotations can inhere in things and words.

The euphemism treadmill is what you get when the emotional connotation inheres in a thing. But what emotional connotation inheres in 'sexual promiscuity'? Even if it is there (and its recommendation by someone sensitive enough to emotional connotations that inhere in words [from the perspective of a specific thede/tribe] seems to suggest that it isn't), certainly there's less negative connotation there than in 'sluttiness'.

Similarly, it's possible to find loaded equivalents, or at least approximations, for most (all?) of Mencius Moldbug's caste terms. (UR is a good place to mine for these sorts of pairs, since he coins emotionally neutral terms to replace, or at least approximate, emotionally loaded terms. Of course, if you use them, you're signaling that you've read Moldbug, but...)

I get the impression that we're already pretty much mostly discussing issues in a "less emotionally laden" way, avoiding shocking words, etc., no?
But you're also a white man and have an obvious lack of experience in this situation that functions as an unknown unknown. You'd be wise to be conservative in your conclusions. As a white man myself, I feel it's entirely reasonable to refuse to dispassionately discuss the matter of a boot on one's own face. There are some situations in which it is entirely appropriate to react with the deepest of passions.
As a white man (according to your own beliefs) you can't understand how women or non-whites feel, so please stop appropriating their cause and speaking for them. There are people on LW who aren't white or male, so (according to your own beliefs) you should let them talk, instead of talking from your ignorant position of white male privilege about what you think is better for them. That's mansplaining, right?
This is a hot iron approaching my face. YOU ARE TELLING ME MY THOUGHTS AND FEELINGS ARE ILLEGITIMATE. That is literally the first step to dehumanizing and murdering me. I can either follow your advice and tell you to fuck off, or I can try to address this disagreement in a reasonable way. Which do you think will go better for me? Which do you think will go better for you? I for one don't think the adversarial approach of many feminist and pro queer writers is sane. You really should not declare the people you think are extremely powerful and controlling the world to be your sworn enemies. Feminism literally cannot win any victories without the consent of men.
I've got a lot of sympathy for your situation-- I spent a lot of time freaking out about the complex emotional abuse that anti-racists/certain kinds of feminists go in for. Still, I found it useful to learn something about assessing the current risk level of an attack just so I don't go crazy-- they've spread a lot of misery and they may eventually be politically dangerous, but they aren't imposing the sort of immediate visceral threat you're reacting to. We haven't begun to see the next stage of the fight (or at least, I haven't seen anything I'd call effective opposition to the emotional abuse), but I recommend steadying yourself as much as possible.

I agree that this is by far the most interesting part of the piece. IIRC this site is pretty much all white men. Part of it is almost certainly that white men are into this sort of thing but I can't help but imagine that if I was not a white man, especially if I was still in the process of becoming a rationalist, I would be turned off and made to feel unwelcome by the open dialogue of taboo issues on this website. This has the obvious effect of artificially shifting the site's demographics, and more worryingly, artificially shifting the site's demographics to include a large number of people who are the type of person to be unconcerned with political correctness and offending people. I think while that trait in and of itself is good, it is probably correlated with certain warped views of the world. Browse 4chan for a while if you want examples.

I think that between the extremes of the SJW Tumblr view of "When a POC talks to you, shut the fuck up and listen, you are privileged and you know nothing" and the view of "What does it matter if most of us aren't affected by the problems we talk about, we can just imagine and extrapolate, we're rationalist, right?" is where the truth probably lies.

Like you said, I have no idea what to do about this. There are already a lot of communities where standard societal taboos of political correctness are enforced, and I think it's worthwhile to have at least one where these taboos don't exist, so maybe nothing.

I'm a white man who's done handsomely in the privilege lottery and I find quite a lot of LW utterly offputting and repellent (as I've noted at length previously). I'm still here of course, but in fairness I couldn't call someone unreasonable for looking at its worst and never wanting to go near the place.


If all you show a person is the worst of lesswrong, then yes, I could see them not wanting to have anything to do with it. However, this doesn't tell us anything; the same argument could be made of virtually all public boards. You could say the same thing about hallmark greeting cards.


This is roughly how I feel. There is a lot of good stuff here, and a lot of lot of horrible, horrible stuff that I never, ever want to be associated with. I do not recommend LessWrong to friends.


I'm at a loss regarding what you must consider 'horrible'. About the worst example I can think of is the JoshElders saga of pedophilia posts, and it only took two days to downvote everything he posted into oblivion and get it removed from the lists - and even that contained a lot of good discussion in the comments.

If you truly see that much horrible stuff here, perhaps your bar is too low, or perhaps mine is too high. Can you provide examples that haven't been downvoted, that are actually considered mainstream opinion here?


Most of these are not dominant on LW, but come up often enough to make me twitchy. I am not interested in debating or discussing the merits of these points here because that's a one-way track to a flamewar this thread doesn't need.

  • The stronger forms of evolutionary psychology and human-diversity stuff. High confidence that most/all demographic disparities are down to genes. The belief that LessWrong being dominated by white male technophiles is more indicative of the superior rationality of white male technophiles than any shortcomings of the LW community or society-at-large.

  • Any and all neoreactionary stuff.

  • High-confidence predictions about the medium-to-far-future (especially ones that suggest sending money)

  • Throwing the term "eugenics" around cavalierly and assuming that everyone knows you're talking about benevolent genetic engineering and not forcibly-sterilizing-people-who-don't-look-like-me.

There should be a place to discuss these things, but it probably shouldn't be on a message board dedicated to spreading and refining the art of human rationality. LessWrong could easily be three communities:

  • a rationality forum (based on the sequences and similar, fo

... (read more)

High confidence that most/all demographic disparities are down to genes. The belief that LessWrong being dominated by white male technophiles is more indicative of the superior rationality of white male technophiles than any shortcomings of the LW community or society-at-large.

I am not sure how much these opinions are really that extreme, and how much it's just a reflection of how political debates push people into "all or nothing" positions. Like, if you admit that genes have any influence on a population, you are automatically misinterpreted as believing that every aspect of a population is caused by genes. Because, you know, there are just two camps, the "genes, boo" camp and the "genes, yay" camp, and you have already proved you don't belong to the former camp, therefore...

At least this is how I often feel in similar debates. Like there is no "genes affect 50% of something" position. There is a "genes don't influence anything significant, ever" camp where all the good guys are; and there is the "other" camp, with everyone else, including me and Hitler. If we divide a continuous scale into "zero" and "nonzero" sub... (read more)


I even don't think that having a white male majority at this moment is some failure of a LW community

There are other options. I think there exist possible worlds where LW is less-offputting to people outside of the uppermiddleclasstechnophilewhitemaleosphere with demographics that are closer to, but probably not identical to, the broader population. Like you said, there's no reason for us to split the world into all-or-nothing sides: It's entirely possible (and I think likely) that statistical differences do exist between demographics and that we have a suboptimal community/broader-culture which skews those differences more than would otherwise be the case.

Edit: I had only skimmed your comment when writing this reply; On a reread, I think we mostly agree.

I've definitely experienced strong adverse reactions to discussing eugenics 'cavalierly' if you don't spend at least ten to fifteen minutes covering the inferential steps and sanitising the perceived later uses of the concept. Good point about the possible three communities. I haven't posted here much, as I found myself standing too far outside the concepts whilst I worked my way through the sequences. Regardless of that, the more I read, the more I feel I have to learn, especially about patterned thinking and reframes. To a certain extent I see this community as a more scientifically minded Maybe Logic group, when thinking about priors and updating on information. A lot of the transhumanist material has garnered very strong responses from friends, though; I've stocked up on Istvan paperbacks to hopefully disseminate soon.
I can't see this as part of the problem. You don't have to discuss it, but I'm bewildered that it's on the list.

I should probably have generalized this to "community-accepted norms that trigger absurdity heuristic alarms in the general population".

Again, there should be a place to discuss that, but it shouldn't be the same place that's trying to raise the sanity waterline.

I don't mind #3, in fact the discussions of futurism are a big draw of LessWrong for me (though I suppose there are general reasons for being cautious about your confidence about the future). But I would be very happy to see #1, #2, and #4 go away.

I find stuff like “if you don't sign up your kids for cryonics then you are a lousy parent” more problematic than a sizeable fraction of what reactionaries say.

What if you qualified it, "If you believe the claims of cryonicists, are signed up for cryonics yourself, but don't sign your kids up, then you are a lousy parent"?
I would agree with it, but that's a horse of a different colour.
In discussing vaccinations, how many people choose to say something as conditional as "if you believe the claims of doctors, have had your own vaccinations, but don't let your kids be vaccinated, then you are a lousy parent"? No, the argument is that you should believe the value of vaccinations, and that disbelieving the value of vaccinations itself makes your parenting lousy. Well, I think Eliezer feels the same about cryonics as pretty much all the rest of us feel about vaccines -- they help protect your kids from several possible causes of death.
Which is pretty much the same argument as saying that you should baptize your children and that disbelieving the value of baptism itself makes your parenting lousy.
If the belief-set you're subtly implying is involved were accurate, then it would be. However, I think we have a "sound" vs "sound" tree-falling-in-the-woods issue here. Is "lousy parenting" a virtue-ethics style moral judgement, or a judgement of your effectiveness as a parent? Taboo "lousy", people. We're supposed to be rationalists.
Exactly, it all depends on the actual value of the thing in question. I believe baptism has zero value, I believe vaccines have lots of value, and I'm highly uncertain about the value of cryonics (compared to other things the money could be going to). A person is expected to say such things about X if they believe X has lots of value. So why is it so very problematic for Eliezer to say it about cryonics when he believes cryonics has lots of value? It's impolitic, and I don't know how effective it is in changing minds. But then again, it's the same thing we say about vaccinations, so who knows: perhaps shaming parents does work in convincing them. I'd like to see research about that.
My prior is that the results will be bi-modal: some parents can be shamed into adjusting their ways, while for others it will only force them into the bunker mindset and make them more resistant to change.
I don't think this hypothesis is supported by the evidence, specifically past LW discussions.
My vague recollections of LW-past disagreements, but I don't have any readily available examples. It's possible my model is drawing too much on the-rest-of-the-Internet experiences and I should upgrade my assessment of LW accordingly.
Yes, I am specifically talking about LW. With respect to the usual 'net forums I agree with you.
I'm not sure that would work. After all, Bayes's rule has fairly obvious unPC consequences when applied to race or gender, and thinking seriously about transhumanism will require dealing with eugenics-like issues.
“rather than applying to particular issues”
That would simply result in people treating Bayesianism as if it's a separate magisterium from everyday life.
Think of it as the no-politics rule turned up to 11. The point is not that these things can't be reasoned about, but that the strong (negative/positive) affect attached to certain things makes them ill-suited to rationalist pedagogy. Lowering the barrier to entry doesn't mean you can't have other things further up the incline, though.

Datapoint: I find that I spend more time reading the politically-charged threads and subthreads than other content, but get much less out of them. They're like junk food; interesting but not useful. On the other hand, just about anywhere other than LW, they're not even interesting.

(on running a memory-check, I find that observation applies mostly to comment threads. There's been a couple of top-level political articles that I genuinely learned something from)


a lot of lot of horrible, horrible stuff that I never, ever want to be associated with.

As a lurker and relatively new person to this community I've now seen this sentiment expressed multiple places but without any specific examples. Could you (or anyone else) please provide some? I'd really like to know more about this before I start talking about Less Wrong to my friends/family/coworkers/etc.

Feel free to PM me if you don't want to discuss it publicly.

A lot of this content is concentrated among the users who eventually created MoreRight. Check out that site for a concentrated dose of what also pops up here.
Politics, eh? I'm confused.
This guy was a pretty big poster on LW, I think. Best example I can come up with, I'm sure there are better ones. http://www.youtube.com/watch?v=cq5vRKiQlUQ

But but ... he posted a link to that (or some other video of him ranting at the camera), and then was downvoted to oblivion and demolished in the comments, while whining about how he was being oppressed.

Things like that don't seem remotely mainstream on LW, do they? (I don't read all the big comment threads ...)

Oh, okay. For some reason I thought he was fairly respected here.

A lie repeated a hundred times becomes available.

If we keep telling ourselves that LW is full of horrible stuff, we start believing it. Then any negative example, even if it happens once in a while and is quickly downvoted, becomes a confirmation of the model.

This is a website with hundreds of thousands of comments. Just because a few dozen of the comments are about X, it doesn't prove much.

EDIT: And I think threads like this contribute heavily to the availability bias. It's like an exercise in making all the bad things more available. If you use this strategy as an individual, it's called depression.

Just imagine that once in a while someone would accuse you of being a horrible human being, and (assuming they had a record of everything you ever did) would show you a compilation of the worst things you have ever done (ignoring completely anything good you did, because that's somehow irrelevant to the debate) and told you: this is you, this is why you are a horrible person! Well, that's pretty much what we are doing here.

That was awesome!
The dark secrets thread like a year ago was one of my favorite threads to read
Any key words I should use to find that one?
This one too, and maybe some other one I can't think of at the moment.

A pretty minor poster, but there was someone who was a fan of his who posted a lot of links to him for a while. I think he's gotten worse.

And thus, more entertaining.
That guy is funny. Definitely not someone who would be well respected here. His model of the world is broken and he's trying to make the world fit his model, instead of the other way around.
In one of his videos there's a part where he argues that cigarettes are actually good for you. LOL
Can you provide some links? I haven't followed what you've said previously about this.
Most of the previous threads on the topic, every time one of these posts comes around. You could find them by much the same process as I could. The HBD fans put me off for a few months.
My impression is that the HBD fans are a pretty small minority here. What were your impressions?
Small but noisy. They add their special flavour to the tone though, as one of the few places outside their circle of blogs that gives them airtime (much like the neoreactionaries they cross over with). I wonder if the people in the subthread below going "we may be racists, but let's be the right sort of racists" understand that this doesn't actually help much.
Rather, we support our beliefs with rational arguments; the HBD-deniers don't bother presenting counterarguments (and when they do, they tend to be laughably bad), but instead try to argue that it's somehow immoral to say and/or believe these things regardless of their truth value.
I've not really followed you, but I've never once seen you make an argument or even explain what you want. If you tell me something y'all want that you could plausibly achieve without the aid of low-status racists, perhaps I'll try to put y'all in a separate category.
I'd like people to stop trying to suppress science because of nothing but ideological principles, like the creationists, and let the scientists get on with stuff like finding a cure for Alzheimer's.
I'll give you two-to-one odds that Derbyshire has not found a promising line of research for an Alzheimer's cure.
This could do with some clarification - doesn't help whom with what? And, by contrast, what would help?
Let's see the results of the survey when they come out.
"Fan" is a funny word in this context. It brings to mind people who go around shouting "Yea, Diversity!" non-ironically. Except, there are people who more or less do that, it isn't the HBD crowd, and in fact diversity boosters don't even really believe in it. Edit: Sorry, missed the correct comment to reply to.
Why? If the answer is, as appears to be the case from context, that we say true things that make you feel uncomfortable, well I recommend treating your feeling of discomfort with the truth rather than the people saying it as the problem. This is a community devoted to rationality, not to making you feel comfortable.
Truth isn't enough.

Continuing the argument though, I just don't think including actual people on the receiving end into the debate would help determine true beliefs about the best way to solve whatever problem it is. It'd fall prey to the usual suspects like scope insensitivity, emotional pleading, and the like. Someone joins the debate and says "Your plan to wipe out malaria diverted funding away from charities that research the cure to my cute puppy's rare illness, how could you do that?" - how do you respond to that truthfully while maintaining basic social standards of politeness?

Someone affected by the issue might bring up something that nobody else had thought of, something that the science and statistics and studies missed - but other than that, what marginal value are they adding to the discussion?

Someone affected by the issue might bring up something that nobody else had thought of, something that the science and statistics and studies missed

Aye!

but other than that, what marginal value are they adding to the discussion?

Is that not enough for you? Especially in some discussions, which are repetitive on LW?


Someone affected by the issue might bring up something that nobody else had thought of, something that the science and statistics and studies missed - but other than that, what marginal value are they adding to the discussion?

Thinkers - including such naive, starry-eyed liberal idealists as Friedrich Hayek or Niccolo Machiavelli - have long touched on the utter indispensability of subjective, individual knowledge and its advantages over the authoritarian dictates of an ostensibly all-seeing "pure reason". Then along comes a brave young LW user and suggests that enlightened technocrats like him should tell people what's really important in their lives.

I'm grateful to David for pointing out this comment, it's really a good summary of what's wrong with the typical LW approach to policy.

(I'm a repentant ex/authoritarian myself, BTW.)

I'm having trouble wrapping my head around that. Could you give an example?
I hesitate to suggest this, but I've noticed most of the "sensitive but discussed anyway" issues have been in areas where socially weaker groups might feel threatened by the discussion. Criticism of socially strong groups is conspicuously absent, given that LW demographics are actually far-left leaning according to polls.

If the requirement to be dispassionate cut in multiple directions simultaneously (rather than selectively cutting in the direction of socially marginalized groups), then we'd select for "willing to deal intellectually with emotional things" rather than for "emotionally un-reactive to social problems" (which is a heterogeneous class containing both people who are willing to deal intellectually with things that are emotionally threatening and people who happen not to often fall on the pointy end of sensitive issues).

The reason I hesitate to suggest it is that while I do want an arena where sensitive issues can be discussed intellectually without driving people away, people consciously following the suggestion would probably result in a green-blue battleground for social issues.
There's lots of talk about religion which is almost the definition of a socially strong group.

Well sure, but that doesn't count because we're pretty much all atheists here. Atheism is the default position in this social circle, and the only one which is really given respect.

I'm talking about criticisms of demographics and identities of non-marginalized groups that actually frequent Lesswrong.

If we're allowed to discuss genetically mediated differences with respect to race and behavior, then we're also allowed to discuss empirical studies of racism, its effects, which groups are demonstrated to engage in it, and how to avoid it if we so wish. If we're allowed to empirically discuss findings about female hypergamy, we're also allowed to discuss findings about male proclivities towards sexual and non-sexual violence.

But for all these things, there's no point in discussing them in Main unless there's an instrumental goal being serviced or a broader philosophical point being made about ideas...and even in Discussion, for any of this to deserve an upvote it would need to be really data driven and/or bringing attention to novel ideas rather than just storytelling, rhetoric, or the latest political drama.

Reactionary views, being obscure and meta-contrarian, have a natural edge in the "novel ideas" department, which is probably why it has come up so often here (and why there is a perception of LW as more right-wing than surveys show).

If we're allowed to discuss genetically mediated differences with respect to race and behavior, then we're also allowed to discuss empirical studies of racism, its effects, which groups are demonstrated to engage in it, and how to avoid it if we so wish. If we're allowed to empirically discuss findings about female hypergamy, we're also allowed to discuss findings about male proclivities towards sexual and non-sexual violence.

Speaking for myself, I would be happy to see a rational article discussing racism, sexism, violence, etc.

For example, I would be happy to see someone explaining feminism rationally, by which I mean: 1) not assuming that everyone already agrees with your whole teaching or else they are a very bad person; 2) actually providing definitions of what is and isn't meant by the terms used, in a way that really "carves reality at its joints" instead of torturing definitions to say what you want, such as defining sexism as "doing X while male"; 3) focusing on those parts that can be reasonably defended, and ignoring or even being willing to criticize those parts that can't.

(What I hate is someone just throwing around an applause light and saying: "therefore you must agree with me or you are an evil person". Or telling me to go and find a definition elsewhere without even giving me a pointer, when the problem is that almost everyone uses the word without defining it, or that there are different contradictory definitions. Etc.)

Some of my favorite feminist articles are the ones demonstrating actual statistical effects of irrational biases against women, such as http://www.catalyst.org/file/139/bottom%20line%202.pdf talking about women being undervalued as board members, or the ones talking about how gender-blind audition processes result in far more women orchestra members.