Cross-posted from Putanumonit. A bit more culture-warlike and snarky than the LW standard. Won't make as much sense to people outside the US. Previously on the subject of the IDW and outgroups.


Ezra Klein’s latest essay on the YouTube right / Intellectual Dark Web is perhaps the voxiest Vox article ever. The Vox specialty is walking 90% of the way towards truth in careful, reasoned steps, and then taking a left turn off a steep cliff.

Klein’s article responds to a challenge by Dave Rubin, wondering how he (Rubin) suddenly became a conservative reactionary:

Group Definitions

Klein claims that Dave Rubin is part of some group, and they both agree that it’s a bad group to be part of. The question is: who is Dave Rubin in a group with? And what makes them part of the same group?

You could ask Dave Rubin which groups he identifies with, and he’ll name examples like liberals, gays, Jews, and the Intellectual Dark Web. But Klein notes that even the IDW will “draw a circle around the most respectable subgroup of this world, use their preferred term for themselves, and worry over the unsavory characters they were associating with”. If you trust Rubin to define his own groups, there will be suspiciously few Nazis in them.

Another way to identify this group is via a network graph, a network whose nodes are people Ezra Klein is suspicious of and whose edges are podcast appearances and YouTube interviews. Lucky for Klein, Rebecca Lewis of the Data & Society research institute created a chart of this network using an algorithm, and we all know that algorithms are objective and trustworthy.

(Of course, after the algorithm had run they still had to manually remove all the inconvenient data points, like Noam Chomsky appearing on Stefan Molyneux's show.)

Here’s how it works: Nazis have a Nazi number of 0. If you interact with someone whose Nazi number is k, your number is at most k+1. I interviewed Geoffrey Miller, who did an event with Sam Harris, who recorded a podcast with Ezra Klein, who interviewed W. Kamau Bell, who interviewed Richard Spencer, who’s a known Nazi. So my Nazi number is 5, and yours is at most 6 by virtue of reading Putanumonit. Congratulations! You are within 6 degrees of separation from a Nazi, and so you’re practically one yourself.
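The rule above is just shortest-path distance in the appearance graph, computable by breadth-first search. A toy sketch (the names and edges are only the chain mentioned in this paragraph, not Lewis's actual data):

```python
from collections import deque

def nazi_numbers(edges, sources):
    """Distance 0 for the sources; each shared appearance adds at most 1 (BFS)."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# The chain from the text: edges are podcast appearances and interviews.
edges = [
    ("Jacob", "Geoffrey Miller"),
    ("Geoffrey Miller", "Sam Harris"),
    ("Sam Harris", "Ezra Klein"),
    ("Ezra Klein", "W. Kamau Bell"),
    ("W. Kamau Bell", "Richard Spencer"),
]
numbers = nazi_numbers(edges, sources=["Richard Spencer"])
# numbers["Ezra Klein"] == 2, numbers["Jacob"] == 5
```

Note what the upper bound actually guarantees: nothing. Every node in a connected graph gets a finite number, which is the whole joke.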

Klein notices that his own Nazi number is an unflattering 2, and reminds us that "it's worth being careful with these diagrams". He immediately disregards his own warning and claims that Rubin is fair game for the chart because:

He’s part of a social and algorithmic network in which he’s cross-pollinating audiences both intentionally, in terms of whom he has on and what shows he goes on, and unintentionally, in terms of what the algorithm learns to show his followers.

Get it? My link to Klein’s article is “intentionally cross-pollinating audiences” to Vox, and so it turns out that I’m a social justice ally and not a Nazi after all. Whew, I was really worried there for a minute.

Agree not to Disagree

What defines this group besides the number of lines you can draw on a chart? Klein comes up with an important insight:

Ideological coalitions depend on the agreements you emphasize and the disagreements you live with. Sen. Elizabeth Warren supports single-payer health care and Sen. Mark Warner doesn’t, but they’re still both Democrats. If Warner were anti-abortion, had signed Grover Norquist’s tax pledge, and had endorsed Donald Trump, he’d be out of the party. When you’re trying to understand an ideological coalition, you’re looking for those lines.

The ideology that Dave Rubin and his friends can't agree to disagree on is what defines them as a group. According to Klein, this ideology is reaction, or antagonism, to the progressive project:

Their reactionary politics and connections to traditional modes of power show that what they are most often fighting for is actually the status quo—a return to traditional gender and racial norms, or a belief in the individual over an understanding of group oppression.

What is the right-wing status quo that Rubin and the rest of the “reactionary right” are defending? It’s a world of legalized drugs, reformed prisons, gay rights, and abortion access. Klein explicitly excludes all of the above from the pursuit of social justice, and doubles down on “a return to traditional gender and racial norms”, even though all four of Rubin’s stated political stances have to do with race and gender.

Since it's hard to catch Rubin himself saying anything problematic about women and racial minorities, Klein posts clips of Milo Yiannopoulos and Jordan Peterson, two Rubin guests, hating on feminism. The clips intend to show that hating on feminism is one thing the "reactionary right" agrees not to disagree on. What do the self-styled IDWs have to say in their defense?

The person who first styled himself IDW is Eric Weinstein, a five-time guest on the Rubin Report. Here's what Eric had to say on a show with "reactionary right" superstar Joe Rogan:

  • Women should be paid more by STEM companies as a policy, to account for the fact that men are more aggressive in salary negotiations and that women are better at many non-STEM things than men.
  • Men are overrepresented among Silicon Valley founders because they “overpromise”, a euphemism for “bullshit and hope it doesn’t catch up with you”.
  • Fields like economics and physics are losing out on valuable contributions by women because of their overcompetitive macho culture.
  • The greatest “oilfield” of untapped potential in the world is the minds of Asian women.

I don’t think that Weinstein, who’s married to an Indian economist, will find much agreement with Jordan Peterson on any of those. And yet the two have appeared on stage together multiple times. So what do the two of them, and everyone else in the group, have in common? I can think of two things:

  1. They’re both committed to the free expression of ideas, and so are their interviewers. This is why Rubin and Rogan rarely push back hard against any opinions expressed on their shows, even opinions diametrically opposed to each other.
  2. They’re both sick of Vox’s shit.

I already expressed my wish, on the most IDW-aligned magazine, that they focus more on free speech and bold ideas and less on booing the outgroup. I explicitly identified Vox and Ezra Klein as symbols of the outgroup.

Klein also couldn't help noticing that a lot of people kinda hate Vox, ranging from the "reactionary right" to leftist podcasters Chapo Trap House. But while Chapo are probably just allies joining the good fight with a few misguided assumptions, the fact that the right "can't be in sympathy with the SJWs" means that they're against race and gender equality.

And this is where Klein falls off a cliff – thinking that hating a certain tribe means that you oppose the stated goals of their ideology, even when the two have very little to do with each other.

Small Minds

In The Ideology is not the Movement, Scott Alexander lays out the following model of how tribes form:

1. Let’s get together to do X
2. Let’s get together to do X, and have drinks afterwards
3. Let’s get together to discuss things from an X-informed perspective
4. Let’s get together to discuss the sorts of things that interest people who do X
5. Let’s get together to discuss how the sort of people who do X are much better than the sort of people who do Y.
6. Dating site for the sort of people who do X
7. Oh god, it was so annoying, she spent the whole date talking about X.
8. X? What X?

In Scott’s main example, he claims that it’s not very useful to understand “Shia Muslims” in 2018 as “the people who believe that Ali ibn Abi Talib was appointed the rightful Caliph by Muhammad at Ghadir Khumm”. Shia are the people who do Shia stuff, who date other Shias, and who outgroup Sunnis. Very few of them devote any mindspace at all to debating the Caliphate ascension.

It was said that "great minds discuss ideas, average minds discuss events, small minds discuss people". I don't think that's entirely fair – we are social monkeys with elephants in our brains; discussing people and events is what we're made for. I would perhaps grant that small minds discuss ideas 0.5% of the time, and great minds do so 5% of the time, but all of us spend at least 95% of our thoughts on people and events. Muslim history is complicated, and people got shit to do.

Tribes drift away from the flags they originally rallied around, especially political tribes since politics is about tribes. Imagine any of the political leaders from 50 years ago trying to fit in with the modern incarnations of their movements. And if people spend so little time pondering the central ideas of their own tribe, how much less effort do they spend on the ideology of their outgroup?

The great mind of Bryan Caplan coined the concept of the Ideological Turing Test – being able to represent your outgroup's ideology so faithfully that they could think you are one of them. Passing the ITT is galaxy mind level: it is hard, it is rare, and a lot of people aren't even aware that it's a thing.

For example, Ezra Klein:

The unbridgeable divides today, the ones that seem to define which side you’re really on, revolve around issues of race, gender, identity, and equality.

I don’t know how to put this delicately: this sentence is written from a position so deep up one’s own ass that a proctologist wouldn’t dare venture near. By Klein’s logic, everyone who dislikes SJWs only does so because they perfectly understand SJW ideology, in SJWs’ own language, and then chooses to just do the opposite. Klein’s ingroup certainly defines itself by what they say on race, gender, identity, and equality. But Klein’s outgroup mostly defines itself by despising Klein’s ingroup.

Not only is Klein incapable of passing the IDW's Ideological Turing Test, he also seems unaware that someone can fail to pass his own. The only way I can imagine this happening is that Klein is so absorbed in his ideology that he can't fathom other minds being different.

(I guess I can imagine another option: Klein is knowingly lying for what he thinks is a worthy cause.)

As for the IDW: some of them are fighting for progress on gender and race equality but disagree with Klein on the way to get there (e.g., the Weinsteins), some are in fact opposed to the progressive fight (e.g., Ben Shapiro), and some do their best to ignore this entire topic and focus on other things (e.g., Sam Harris). Very few of them I think would get a high score on a Social Justice ITT, and neither would I. On the topic of whatever “post-modern cultural neo-Marxism” is, I’m more inclined to trust a transgender YouTuber in a wig than Jordan Peterson, who uses it as a catch-all term for his outgroup.

Allow me to make it simple: Jihadis don’t hate “American freedom and democracy”, they just hate Americans. How many of them do you think have read The Declaration of Independence and the Bill of Rights? Neoconservatives don’t hate Islam, they hate Islamists. How many of them do you think have read the Qur’an? Do you think that the Nazis had theological quibbles with Judaism or did they just need outgroups? 

Quick segue away from Hitler. I was at a party on Saturday, and a man was introduced to me as: “a really smart person who hates Rationality”. And yes, this is a humblebrag – I’m very proud of having cultivated a reputation such that friends will introduce me to someone who deplores my ingroup and expect a fun and curious conversation.

What did this gentleman hate about Rationality? He couldn’t stand the insufferable arrogance of Eliezer, who in his opinion isn’t only wrong on quantum theory but has the gall to denigrate physicists for disagreeing with him. Also, a rationalist was being a dick to him at Burning Man and wasn’t condemned quickly enough by other rationalists around them.

I tried to push for disagreements on core issues, and those turned out to be either minor quibbles or misunderstandings. For example, he thought that LessWrong espouses a strict and wrong definition of “intelligence” (most of us don’t). He said we’re unaware of the tortured history of “rationality projects” (some of us are). We don’t even disagree much on the individuals: I agree that Eliezer is arrogant (which doesn’t change the fact that he’s usually right), and I agree that the other rationalist is a dick (although I see our community’s tolerance of weirdos as a positive, not a negative).

There are whole online communities dedicated to detesting rationalists, but they’re not doing it because they think sunk cost bias doesn’t exist or that Bayes’ Theorem is wrong or even that Scott’s model of tribe formation is misguided. They’re certainly not doing it because they hate “clear thinking and true beliefs”. I hang out in those spaces occasionally for the perverse pleasure it gives me, and most people are there because they met or read a few rationalists and had a strong personal aversion to them.

If you enjoyed reading this essay because Ezra Klein is your outgroup, make sure not to make the same mistake he does. I dislike Vox because what they do has the effect of suppressing individual expression of ideas in favor of anti-cooperative narratives. But I don’t think that Klein wakes up in the morning thinking: How can I best empower Moloch and promote intergroup conflict by suppressing the exchange of ideas outside of gated institutions? Not because he’s on my side in the fight against Moloch, but because he doesn’t even know what this framework means.

But that means he’s also not diametrically opposed to everything I believe in. This is heartening because it leaves room for cooperation and agreement; we are not merely fighting in a zero-sum game. But cooperation requires learning to speak the same language and passing each other’s Ideological Turing Test, and that is not easy to do.

36 comments

Jacob, you're a nice person, and I'm as interested in culture war as the next guy, but it will take over LW if we let it.

I think that the issue here is a more general one, about the structure, purpose, and norms of the new LW.

This is not a LW post, it's a Putanumonit post. I rarely write about culture war on Putanumonit, but I also don't censor myself. Now when it comes to cross-posting on LW, it seems that there are two possible goals for LW that are in conflict:

1. LW being a central hub and one-stop-shop for the entire rationalist diaspora, and all rationalist bloggers are encouraged to cross-post everything to it.

2. LW being CW-free, and all members are encouraged to keep politics and culture war entirely out of it.

In terms of what *will* happen, I am happy to follow the will of the LW leadership team. If they ask me to remove this post or change it to a link post, I will.

In terms of what *should* happen, I strongly support #1. There can be a norm that CWish content is kept out of Frontpage and Curated, but I think there's a big benefit to having everything on LW. I write to engage smart people in my ideas, and that's exactly what is happening here. Despite the touchiness of the post's subject, the comments so far are mostly civil and relevant. Rationalists should be able to write non-mind-killed things about CW-related subjects (I hope this one qualifies) and have non-mind-killed conversations about them.

Ultimately, this is a big part of what rationalists *are* talking about: in private blogs, on Reddit, in real life. If LW is our home, there should be a place for those topics on LW, even if with higher standards and stricter norms of discussion.

Even if it's civil and non-mind-killed, I don't want 90% of recent posts and comments to be about the culture war. And I don't see why it would stop short of that. Some people will contribute less due to CW noise; some new people will register just to discuss CW; CW posts are easier to write than math posts, they just flow off the fingers once you feel angry enough; we all know what happened with /r/ssc, where the CW thread dominates everything else. Sorry about the tone, I don't want to offend here, but to me all the arguments point in one direction.

A bit more culture-warlike and snarky than the LW standard.

I appreciate that the post started with this.

Voting is the worst mechanism for this decision, but I vote for #2. LW is for reflections on and advice for having true beliefs (being less wrong), and POSSIBLY for raising the sanity waterline, in helping others to have true beliefs.

It's a place to be rational, not to support rationalism.

This post seems like it's assuming methodological individualism when discussing the formation and perpetuation of tribes. This is an unfortunate limitation. Societies are often complex enough to respond to circumstances with articulated changes in their memeplexes and approved behaviors, even if most individuals in that society or even participating in that response couldn't tell you the decision tree. For instance:

Jihadis don’t hate “American freedom and democracy”, they just hate Americans.

This fails to explain the ways Jihadis tend to attack Americans - which are quite different from the ways in which, for instance, Nazis attacked Jews (or, for that matter, Americans attack Nazis or Communists). By contrast, "immune response to the much more powerful American state's interventions in predominantly Muslim countries" is a start - for one thing, it predicts (when combined with a more granular model of the relevant strengths and weaknesses) that a lot of the violence will ultimately be targeted for domestic propaganda, to create more in-group cohesion. Individuals who just hated Americans would have killed quite a bit more of us, in ways much less traceable to radicalized Muslims.

I agree, "Jihadis act as if they hate American intervention in Muslim countries" or "Jihadis want to consolidate power and support in their communities" is a better model than "Jihadis act as if they hate Americans". My point was that all three are way better models than "Jihadis hate our freedom (tm)".

I think I'm trying to say something else too - that "what do Jihadis want" can be decomposed into "what does the culture that produced Jihad want out of it" and "what are the motivations of individual Jihadis" (where the latter is very often going to be, simply, to fulfill the expectations of the people they see as their tribe's legitimate authorities, or climb one of their tribe's internal credit-allocation gradients).

I don’t know how to put this delicately: this sentence is written from a position so deep up one’s own ass that a proctologist wouldn’t dare venture near.

This is a satisfying quip and also, as others have expressed, not something I'd like to see on LW. I'm not saying you shouldn't be allowed to post snark like this - I'll leave decisions like that to people who are more active in the community and have thought about the details more than I have - but I expect this kind of post does substantial harm to the discourse norms here.

But I also think my opposition to this kind of snark depends on how much I agree with it. Following the rule that a post should satisfy at least two of the three criteria {true, kind, relevant}, I see your comment above as neither true nor kind. The lack of kindness isn't really debatable, but clearly people disagree on its truth value. I think posting things that a substantial fraction of the community will deem as failing the two-out-of-three test is a bad idea if we want to avoid demon threads and related discourse disasters (although the civility in the comments I've read so far suggests that demon threads are not as inevitable for posts like these as I thought).

I don't want to get into a point by point discussion of everything I disagreed with in this post (because demon threads) but I would like to ask you one question: if you think Ezra is up his own ass claiming that social justice issues (race, gender, identity, etc.) are the most important dividing lines in 2018 US politics, then what do you think the most important ones are? Obviously there is not one topic that will cleanly divide everyone in the US into two categories, but if you wanted to partition the population into two clusters as cleanly as possible, what would that topic or set of topics be? Or do you think the set would have to be so large that it wouldn't be useful to even consider? Or, and I suspect this might be the case, do you think the question is not precise/coherent enough to have a meaningful answer?

not something I'd like to see on LW

And also not something I would write on the old LW. But it seems weird to censor out snark when cross-posting things from Putanumonit, although I'm beginning to think that I should have done so.

demon threads are not as inevitable

That's actually part of why I wanted to write this. I've written some moderately controversial stuff on Putanumonit, and yet I hadn't had to moderate any demon threads. And the barrier to participation on LW is even higher than on Putanumonit, with the karma system and the prevailing norms here. My post on Jordan Peterson has >150 comments on LW, and almost all of them are good. It's important to remember that sometimes you can mention a demon by name and the demon *isn't* summoned.

if you wanted to partition the population into two clusters as cleanly as possible

I wouldn't really want to do that, because the US is made up of dozens of clusters who end up in tenuous alliances, not two groups. Some people mostly care about abortion, some about the economy of their small town, some about preserving their Christian/hippie/Pakistani/furry subculture, some about X-risk and climate change, and many don't really care about any policy topic at all, just about fitting in with their friends and neighbors.

A plausible model of Dave Rubin is that he cares about gay rights and free speech, so 10 years ago he was a leftist (because gay rights issues were salient and leftists were his allies), and today he's an anti-leftist (because gay marriage is a done deal, and leftists are now his enemies on free speech). The issues didn't change, but the clusters of alliances did.

Obviously there is not one topic that will cleanly divide everyone in the US into two categories, but if you wanted to partition the population into two clusters as cleanly as possible, what would that topic or set of topics be?

This seems like a type error to me. The two groups have an importance disagreement: one group feels that X is kinda important, but Y is Holy Shit Very Important, so Y should override X; the other group feels the opposite. (For example, X = "the marketplace of ideas" and Y = "the plight of people at the bottom of power relations".) So to one group, the dividing line is about Y, while to the other it's about X. The war isn't a factual disagreement, it's a bunch of people shouting at each other "X is important!" - "No, Y is important!" and then retreating to their bubbles to reassure each other that "X is important" - "Yes, yes, X is important" and then back into the fray again.

Following the rule that a post should satisfy at least two of the three criteria {true, kind, relevant}

Hold on, hold on. Since when is this a rule? Here?

I’m aware that this is a rule on Slate Star Codex.[1] But this isn’t SSC.

I don’t necessarily even disagree with it (I mean, I might; I’d have to give it more thought), but I strongly object to tacitly assuming that rules formulated by, and for, other forums, should bind our behavior here.

If a rule applies here, let it be stated here. Otherwise, I am absolutely opposed to any attempts to police the behavior of users here according to the rules of other places.

[1] Actually, the rule on SSC is “true, kind, necessary”—not “relevant”.

FYI, we have a less rigid but still fairly explicit mention of the thing here.

https://www.lesswrong.com/posts/tKTcrnKn2YSdxkxKG/frontpage-posting-and-commenting-guidelines

(This is for frontpage posts. Right now moderators move posts to the frontpage if they seem to fit the frontpage guidelines, with some discretion about "will the comments probably end up meeting the frontpage guidelines too?". I had left this post as a personal blogpost since we don't want this particular style of political discussion on frontpage.

There's nothing wrong with it being here as a personal blogpost, although I can imagine versions of Jacob that would be interested in approaching the topic in a fashion that meets the frontpage criteria, or perhaps leaving it as a personal blogpost but writing it in a such a way that frontpage readers might find less jarring. That depends on Jacob's goals though)

[note: this comment ended up covering a few different topics, some of which might have not been directly relevant to the conversation at hand, but felt like they naturally flowed together]

Thanks for the clarification. That was my impression as well, and I didn't expect this post to be on the frontpage.

With that said: if you remember off the top of your head, can you link to some content that dealt with CW/political issues but still met frontpage standards? Would my Quillette piece meet the standards? The less lazy version of Jacob that you envisioned is discouraged partly because he doesn't know what the standards are for touchy subjects and is worried that even if he tried it won't be on the frontpage.

(To be clear, the real version of Jacob is just too lazy to do major rewrites of posts for LW, it has nothing to do with standard ambiguity).

I do think it's fairly hard to write a current-political-hotbutton piece that we'd promote to Frontpage.

Rather than having hard and fast rules here, we basically ask ourselves "does this piece a) feel like it's helping us to understand something, vs trying to persuade us of something? and b) does this piece feel like it'd start moving LW in a direction where discourse would degrade [likely via attracting people to LW who are specifically interested in culture war stuff for its own sake]."

The Quillette piece did feel substantially better to me than What the Haters Hate. But it does still feel like it's... orienting itself within the culture war frame, in a way that would attract more culture warriors. It sort of feels like it's trying to persuade me of something.

A piece that tackled a similar issue but was promoted to frontpage was Decouplers vs Contextualizers. This abstracted away the CW stuff, in a way that helped you understand the culture war without pulling you into it. I do think most culture-war-related topics would need to be similarly abstracted.

That piece did rely somewhat on other pieces (including one by you) to help provide the background. I think those background pieces were good to have as part of the rationalist discourse, but still probably weren't something we'd want front-and-center on frontpage. (Frontpage isn't supposed to mean "the important stuff", it's meant to be "the stuff that we're comfortable having as people's first/primary experience of LW)

Yes, to be clear, I was commenting based on this being a personal-blog post.

Perhaps "rule" was the wrong word - I meant "norm" or "adage". To reiterate, I'm not suggesting any kind of moderation or censorship. Let me rephrase my point: if your post is clearly unkind and not something you think people absolutely need to know/discuss, then the only thing going for it is its truth value, something a large fraction of people will dispute if the topic prods a partisan imbroglio.

My point is not that SSC rules should bind people's behavior here - and certainly not if those rules are implicit! I totally agree with you there. What I'm saying is that, using what I think is a good decomposition of a post's value (the "rule" of three), posts like these will systematically produce little value that isn't conditional on partisan preferences.

If a large enough fraction of posts here have high value to tribe X but low value to tribe Y then tribe Y is going to find LW to hold less value and may not stick around. Worse, it might not be obvious why this is happening from X's perspective because to them, the quality of posts has remained high.

Hold on. If you had asked me how this post fares on the rule of three, I would have said "two".

True - I honestly think that the pattern of "someone is against my group so they must be against our stated principles" is both a mistake that people make in their own head, and a rhetorical device they use against the outgroup.

Relevant - Recognizing this pattern can allow people to overcome bias and better understand arguments, outgrouping, and tribal fights in general. Also, I haven't seen this exact idea formulated.

Kind - Nope. I could have chosen to write a bloodless post full of generalities, or a snarky post using Ezra Klein as a salient example. I chose the second option on purpose.

You seem to disagree with the "true" claim. Do you disagree with what I wrote above about the pattern? Or do you think that this wasn't the central point of the article, and that something else is both false and central (such as whether identity/gender/race cleave Americans into two tribes)?

I would say there are two central points of the article: one, the general/meta point that there is a cognitive pattern that leads people to incorrect conclusions about their outgroup, and two, that this explains Klein's response to Rubin in this particular scenario. I would agree that the first point is true in the sense that it's a plausible hypothesis that we should keep in mind when trying to understand ingroup/outgroup dynamics. I disagree that this is going on in the example you've provided - that part doesn't seem true to me.

My general point is that if you choose a controversial current event as your example, you will reliably polarize the response in a way that wouldn't happen if you chose almost any other kind of example.

But why is this even a norm? If merely an adage, why should we follow this particular adage?

Let me be clear: I do not agree with this norm / adage / standard / whatever. (Of course, if it’s actually a rule of some space, officially—as it is on SSC, and on the front page of LW—then I will follow it without hesitation!)

You say:

if your post is clearly unkind and not something you think people absolutely need to know/​discuss, then the only thing going for it is its truth value

And I say that’s false. It is not a “good decomposition of a post’s value”.

We may be bound by rules without thinking that we’re bound by them. This is the danger. And while there’s nothing wrong with an arbitrary or unjustified rule, unjustified norms—especially ones that we tacitly agree to treat as if they’re justified—are poison.

And while there’s nothing wrong with an arbitrary or unjustified rule

Why is there nothing wrong with an unjustified rule? If a rule is unjustified, then surely there's something wrong with it? I don't think I understand what you're saying.

I agree that blindly following norms without ever considering their value is bad but that's not what I did. I brought up that norm because, having spent a little time thinking about it and evaluating its usefulness, I've concluded that it is useful. I think it's a great decomposition. You may disagree and I would like to hear your thoughts on it (if you're interested in sharing them) since you may have noticed a problem with it I haven't.

But I don't understand why, when I presented a template for why I thought a post was not good for LW, you concluded that my use of that template must mean that I'm bound by rules/norms that I'm unaware of. I'm definitely aware of the norm and the fact that it's a norm - that's why I brought it up!

Why is there nothing wrong with an unjustified rule? If a rule is unjustified, then surely there’s something wrong with it? I don’t think I understand what you’re saying.

For it to be right and proper to follow a rule, it is not necessary that all, or even most, or even anyone, of those who are subject to this rule, agree that the rule is justified.

In this sense, there is nothing wrong with an unjustified rule.

Conversely, for it to be right and proper to follow a norm, it is necessary that most of those who are subject to this norm, agree that the norm is justified.

I brought up that norm because, having spent a little time thinking about it and evaluating its usefulness, I’ve concluded that it is useful. I think it’s a great decomposition.

My objection was not to this, but to the implication, as I perceived it, that this was in fact already a norm, i.e. had been accepted by this community as binding upon us. I am aware of no such state of affairs.

You may disagree and I would like to hear your thoughts on it (if you’re interested in sharing them) since you may have noticed a problem with it I haven’t.

I will try to do that in a sibling comment, when I have a bit of time to properly formulate my view on the matter.

But I don’t understand why, when I presented a template for why I thought a post was not good for LW, you concluded that my use of that template must mean that I’m bound by rules/norms that I’m unaware of. I’m definitely aware of the norm and the fact that it’s a norm—that’s why I brought it up!

It goes like this:

Scott instituted this rule on Slate Star Codex. That is his right, as SSC is his personal blog, created and operated entirely by him. He owes his readers no justifications for the rules of his blogs, and has no obligation whatever to gain anyone’s acquiescence or approval when instituting rules. So far, so good.

Then, however, certain readers of SSC (it’s not just you, by any means; I’ve noticed this pattern numerous times), apparently judging this rule to be excellent, adopt it as a general norm of behavior, and—crucially—come to have an impression that it has been accepted, by others, as a general norm of behavior. They begin to police the behavior of others, in other online spaces, according to this norm.

The critical step that is missing is the part where the communities at those other spaces ever agree to adopt this norm, or discuss the matter at all, or even are informed that they are now expected to abide by it.

Indeed, the norm may be a good one (or it may not). But where did it come from? How, in other words, was this norm, in particular, picked out of the vast space of possible community / discussion norms? Well, it came from Slate Star Codex, clearly. But because the rules of SSC are arbitrarily imposed by Scott (as, again, is his unquestionable right), that means that this norm originated as an un-discussed, un-questioned, un-justified rule. (Any claim that this norm was arrived at by considering, from first principles, what norms may be good for a community to adopt, would not be even slightly credible.)

Insofar as those SSC readers who behave in this way are not fully cognizant of this dynamic (as I believe most are not), what can we say about the reasons for their behavior? This: that they are bound by the rules of Slate Star Codex, without knowing it.

For it to be right and proper to follow a rule, it is not necessary that all, or even most, or even anyone, of those who are subject to this rule, agree that the rule is justified.

Thanks for the clarification. I strongly disagree with this, but I have no interest in, or expectation of, resolving it by talking here, since it probably comes down to fundamental values concerning authority.

It goes like this:

Aha! Now I understand what you're saying. From your perspective, I criticized the post based on a norm that is not accepted as widely as those who use it seem to think it is. I agree this is bad because it lends undue weight to an argument - it comes across as "everyone agrees you should do X" when really it's "some people from some community agree you should do X", which is obviously less persuasive/relevant.

But I was not trying to make an argument from authority. Apparently I should be clearer in the future since that's how it came across. I was trying to make a purely logic-based (as in, not evidence-based or authority-based, etc.) argument and cited the SSC rule as a shortcut to what I was trying to explain. Rather than write a long post explaining exactly why I felt the post was unjustified given its expected impact, I thought that citing something most people here are familiar with would be an easier/faster explanation.

I meant my original point to read something like: I expect this post to be harmful to the discourse here for reasons most easily summarized by saying that it fails the SSC 2-out-of-3 rule. I was not suggesting that everyone does or should follow that rule.

As a loose analogy, it would be like bringing up a fake framework not because everyone does or should think through that lens but merely because the point being made is most easily expressed through that lens. It would be fine, and actually encouraged, for people who don't like that fake framework to translate the point into other frameworks.

But I was not trying to make an argument from authority. Apparently I should be clearer in the future since that’s how it came across. I was trying to make a purely logic-based (as in, not evidence-based or authority-based, etc.) argument and cited the SSC rule as a shortcut to what I was trying to explain. Rather than write a long post explaining exactly why I felt the post was unjustified given its expected impact, I thought that citing something most people here are familiar with would be an easier/faster explanation.

Indeed. This clarifies things, thanks. I agree that it would be helpful to be more clear about this in the future—though I question the wisdom of making reference to such rules at all, due to what I said about “privileging the hypothesis”.

I meant my original point to read something like: I expect this post to be harmful to the discourse here for reasons most easily summarized by saying that it fails the SSC 2-out-of-3 rule.

But you see, this is precisely the problem. What are the reasons (for why a post might be harmful to the discourse) that are summarized by saying that something fails the SSC rule? It’s not clear that we know—certainly I don’t!

Or, put it this way: since you assumed that your interlocutor (and, possibly, other readers of Less Wrong) would know what those reasons are (since you expected that your shorthand would be decoded properly), it must therefore be true (in your view) that these reasons have been discussed and elucidated, prominently and publicly. Could you provide some links to such prominent and public discussions?

… it would be like bringing up a fake framework …

It will probably not surprise you to learn that I take rather a dim view of “fake frameworks”.

it must therefore be true (in your view) that these reasons have been discussed and elucidated, prominently and publicly

If I wanted to ensure that every interlocutor understood (or could easily understand given a small amount of time) my point, then this would have to be true. You can't decode the message if you don't have the key, so if you want everyone to decode the message then the key must be as public and prominent as possible.

But I didn't want to ensure that because doing so would take a lot more effort than I spent on that first comment. I am totally OK with only some people decoding the message if it reduces what would have been a significant writing effort into a quick comment. Please correct me if I'm mistaken, but you seem to think that sending messages like this - messages that are easier to decode for some than others - harms the discourse. But couldn't that argument be made for any message that requires investment to understand?

For instance, LW contains many posts that assume a solid understanding of linear algebra, something that very few people (out of, say, all people who can read English and access the internet) have. To those unfamiliar with linear algebra, most of those LA-heavy posts are unintelligible. Should we avoid posting LA-heavy posts?

It's actually a little funny because thinking through this has made me realize that a math-heavy post creates a polarized response (those who understand the math enough to get something out of the post vs those who don't) in the same way that a political post does. And by your argument, referencing a framework that not everyone understands / agrees with / believes is useful also polarizes the response in a similar way.

Given that I see no problem with referencing math or disputed frameworks as a means of communicating with some people at the expense of losing others, this makes me much less confident that posts expected to produce politically polarized responses are problematic. If people unfamiliar with LA can skip LA-heavy posts, people with partisan biases/feelings can skip posts that touch on those.

Please correct me if I’m mistaken, but you seem to think that sending messages like this—messages that are easier to decode for some than others—harms the discourse.

Well… no, I don’t think I’d say that. It’s just that, if what you say is “hard to decode” in a certain way that will result in some people misunderstanding you as saying something which you are not actually saying, then, if you say something that—interpreted as intended—does not “harm the discourse”, nevertheless some people may misinterpret you as saying something different—something which does “harm the discourse”.

Which seems to be what happened here. (I think? There seem to have been several layers of misinterpretation, and I confess to having somewhat lost track. Which is, of course, itself a problem…)

I… think… I am not sure, but I think that, with all the corrections and clearings-up-of-misinterpretations that we’ve managed to muddle through, I no longer have any problem with your intended meaning. (My confidence in this is not high at all, but I’d judge it as more likely than the opposite.)

For instance, LW contains many posts that assume a solid understanding of linear algebra, something that very few people (out of, say, all people who can read English and access the internet) have. To those unfamiliar with linear algebra, most of those LA-heavy posts are unintelligible. Should we avoid posting LA-heavy posts?

This analogy is fundamentally flawed, and here is why.

As I’ve said before, the problem is not opacity, or the possibility of misunderstanding, per se; the problem is the double illusion of transparency. Consider that if I read a post filled with talk of linear algebra, then I know perfectly well that I don’t understand it. I don’t know much of linear algebra, so if you start saying things about vector spaces, then it’s clear to me that I don’t know what you’re talking about. And so it’s clear enough that a post like this one is simply not aimed at me.

Whereas when I read a comment that is not math-heavy, and is phrased in what seems to be perfectly ordinary plain English, and makes reference to ideas and words and phrases with which I am familiar, and does not seem confusing, then… how am I to know that the comment is actually not aimed at me? Why would I assume that it isn’t?

What’s more, consider the consequence of a norm that approved of communicating in this way. It would no longer be required of participants in Less Wrong, that they make their writing comprehensible. Why would they? Should anyone question them, they simply respond that their comment was aimed only and exclusively at those who already understand them. You would turn us all into Zizek, or perhaps Nasruddin! (Admirable figures, the both of them, each in his own way… but hardly role models for the Less Wrong commenter!)

Whereas when I read a comment that is not math-heavy, and is phrased in what seems to be perfectly ordinary plain English, and makes reference to ideas and words and phrases with which I am familiar, and does not seem confusing, then… how am I to know that the comment is actually not aimed at me? Why would I assume that it isn’t?

Is that what happened here, though? I posted a comment referencing the SSC rule and you objected to its use in that context. We both knew what I was referring to. The confusion seems to have arisen because I was intending the reference as a shortcut through my reasoning and you interpreted it as me smuggling a foreign norm into the discourse as if it were already widely accepted.

If I had been clearer about how I was using the reference, would there be any illusion of transparency, much less a double illusion? I didn't expect anyone unfamiliar with, or not on board with, the reference to understand my logic, and you didn't think you understood my point while actually misunderstanding it - in fact, you very clearly expressed that you didn't understand my point. So both of us were aware of the lack of transparency from the get-go, I think.

It would no longer be required of participants in Less Wrong, that they make their writing comprehensible.

I mean this in the least sarcastic way possible: to 99% of people I talk to, LW writing is incomprehensible. I have tried many times to introduce LW-related concepts to people unfamiliar with LW and, in my experience, it's insanely difficult to export anything from here to the rest of the world. Obviously my success also depends on how well I explain things, but the only subjects I have similar difficulty explaining to people are very technical things from my own field.

To be clear, I'm not saying "well everything's already too hard to explain so let's go full Zizek!" It is always better to be more comprehensible, all else equal. But all else is not equal - unwrapping explanations to the extent that they are understandable to someone with no familiarity with the subject comes at a great cost. It's good to develop jargon and shorthand to expedite communication between people in the know, and when the jargon is explicitly jargon (e.g., "the 2-out-of-3 rule from SSC"), I don't think there is any illusion of transparency.

Sorry for the wall of text - I'm trying to keep these responses as short as possible but I also want to be clear. One more thing:

Should anyone question them, they simply respond that their comment was aimed only and exclusively at those who already understand them

If I'm trying to explain X but end up only explaining it to people who understand X, then yes, this is pointless and silly. But if I'm trying to explain Y and end up only explaining it to people who understand X, that is useful, especially when many people understand X.

[Brief mod note: I don't think there's anything wrong with the thread so far, but it's approaching a point where I'm somewhat worried, mostly for high-context reasons relating to past threads involving site norms. I don't have a specific ask at this point but am trying to err in the direction of publicly flagging when I feel worried earlier than I have in the past]

Klein not only is incapable of passing the IDW’s Ideological Turing Test, but he also seems unaware of the fact that someone can fail to pass his own. The only way I can imagine this happening is that Klein is so absorbed in his ideology that he can’t fathom other minds being different.

Even for an article dedicated to bashing the outgroup, this is a particularly ironic passage.

I read this comment five times and I have no idea what you mean. Does "an article" refer to this one, and the irony is that I'm too absorbed in my own ideology? If so, what am I missing: the fact that Klein can pass the IDW's ITT, or the irrelevance of ITTs to the subject, or something else?

"Article" refers to your post, the irony is that you are accusing Klein of being unable to imagine other minds working in different ways, because you are unable to imagine his mind working in any different way.

In the paragraph directly before the one I quoted, you pointed out that it's silly for SJWs to assume that everyone thinks in terms of identity, race, gender, etc. But the blind spot that you're accusing Klein of is one which implicitly assumes that he thinks in terms of ITTs, tribes which are distinct from ideologies, etc. Klein's framework leads him to make a slightly dubious statement about unbridgeable divides. Your framework leads you to badly strawman his statement and throw around ad hominem attacks.

I don't think this is particularly worth arguing about, since I predict it'll become an argument about the post as a whole. In hindsight, I shouldn't have given in to the temptation to post a snarky comment. I did so because I consider the quoted paragraph (and the one above it) both rude and incorrect, a particularly irksome combination. As a more meta note, if culture war posts are accepted on Less Wrong, I think we should strongly encourage them to have a much more civil tone.

But the blind spot that you're accusing Klein of is one which implicitly assumes that he thinks in terms of ITTs, tribes which are distinct from ideologies, etc.

Oh, wow. The main point I tried to get across is in fact that Klein *doesn't* think in terms of ITTs and ideology-is-not-movement, and that's why he's led into thinking that Rubin is a reactionary. My fault here isn't falling into my own trap (which the last two paragraphs explicitly address), but unclear writing.

Some of the IDW is part of what I'd call the liberal reaction to progressivism. It rallies around entertaining all points of view in the marketplace of ideas, in reaction to the left's attempt to stop left-liberal institutions from feeding the trolls - an attempt that more traditional liberals interpret as a dangerous effort to suppress discourse. And naturally, the left-liberals who buy the progressive argument will tend to read efforts to introduce maximally presentable versions of a set of seemingly objectionable ideas as an attempt to move discourse in that direction.

"The unbridgeable divides today, the ones that seem to define which side you’re really on, revolve around issues of race, gender, identity, and equality" - I actually agree with this. There is a very prominent strand of social justice thought that sees all opposition to social justice ideas as oppressive. And the influence of this strand has grown to the point that, outside of this sphere, the other differences people might have seem less relevant, since at least they can have a conversation that doesn't start from completely different assumptions.

That quote from Klein seems to me to be paralleled by a Catholic saying: "The divides today that define which side you're really on revolve around issues of the Trinity, the Eucharist, and Papal infallibility". Such a Catholic will not see much difference between a Jew and a Hindu because both disagree with her statements about the Pope.

I was at a party with several rationalists on Saturday and met people who did not know who Kavanaugh was, let alone Damore or Sage Sharp or Molyneux. And those are all millennials - how many 60-year-olds spare zero thought to "race, gender, identity, and equality"? It seems strange to claim that the world is divided into pro-X and anti-X when a huge number of people don't know about X, don't want to think about X, or just want everyone else to shut up about X. And if you divide the world into "care about X" vs. "don't care about X", then Klein and Richard Spencer are going to be in the "care about what color Americans are" group, and I'll be in the other.
