Book Review: Charles Murray's Human Diversity: The Biology of Gender, Race, and Class

This is a pretty good book about things we know about some ways in which people are different from each other, particularly differences in cognitive repertoires (Murray's choice of phrase for shaving nine syllables off "personality, abilities, and social behavior"). In my last book review, I mentioned that I had been thinking about broadening the topic scope of this blog, and this book review seems like an okay place to start!

Honestly, I feel like I already knew most of this stuff?—sex differences in particular are kind of my bag—but many of the details were new to me, and it's nice to have it all bundled together in a paper book with lots of citations that I can chase down later when I'm skeptical or want more details about a specific thing! The main text is littered with pleonastic constructions like "The first author was Jane Thisand-Such" (when discussing the results of a multi-author paper) or "Details are given in the note[n]", which feel clunky to read, but are so much better than the all-too-common alternative of authors not "showing their work".

In the first part of this blog post, I'm going to summarize what I learned from (or thought about, or was reminded of by) Human Diversity, but it would be kind of unhealthy for you to rely too much on tertiary blog-post summaries of secondary semi-grown-up-book literature summaries, so if these topics happen to strike your scientific curiosity, you should probably skip this post and go buy the source material—or maybe even a grown-up textbook!

The second part of this blog post is irrelevant.


Human Diversity is divided into three parts corresponding to the topics in the subtitle! (Plus another part if you want some wrapping-up commentary from Murray.) So the first part is about things we know about some ways in which female people and male people are different from each other!

The first (short) chapter is mostly about explaining Cohen's d effect sizes, which I think are solving a very important problem! When people say "Men are taller than women" you know they don't mean all men are taller than all women (because you know that they know that that's obviously not true), but that just raises the question of what they do mean. Saying they mean it "generally", "on average", or "statistically" doesn't really solve the problem, because that covers everything between-but-not-including "No difference" and "Yes, literally all women and all men". Cohen's d—the difference between two groups' means in terms of their pooled standard deviation—lets us give a quantitative answer to how much men are taller than women: I've seen reports of d ≈ 1.4–1.7 depending on the source, a lot smaller than the sex difference in murder rates (d ≈ 2.5), but much bigger than the difference in verbal skills (d ≈ 0.3, favoring women).
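
For concreteness, here's a minimal sketch (in Python, with made-up height parameters chosen to land near the d ≈ 1.7 end of that range, not measurements from the book) of how Cohen's d falls out of two samples:

```python
import numpy as np

def cohens_d(x, y):
    """Difference between the two group means, in units of the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Made-up illustrative parameters:
rng = np.random.default_rng(0)
men = rng.normal(175.0, 7.0, size=10_000)    # heights in cm
women = rng.normal(163.0, 7.0, size=10_000)
print(round(cohens_d(men, women), 2))        # ≈ 1.7 with these parameters
```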

Once you have a quantitative effect size, then you can visualize the overlapping distributions, and the question of whether the reality of the data should be summarized in English as a "large difference" or a "small difference" becomes much less interesting, bordering on meaningless.

Murray also addresses the issue of aggregating effect sizes—something I've been meaning to get around to blogging about more exhaustively in this context of group differences (although at least, um, my favorite author on Less Wrong covered it in the purely abstract setting): small effect sizes in any single measurement (whatever "small" means) can amount to a big difference when you're considering many measurements at once. That's how people can distinguish female and male faces at 96% accuracy, even though no single measurement (like "eye width" or "nose height") offers that much predictive power.
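
To see the arithmetic, here's a toy calculation of my own (not the actual face-classification model): assume a bunch of independent, equal-variance measurements that each differ by only a modest d. Under those assumptions the multivariate separation is the root-sum-of-squares of the per-dimension d's (the Mahalanobis D), and the accuracy of the optimal classifier on two equal-sized normal groups is Φ(D/2). Real facial measurements are correlated, which shrinks the effective D, so treat this as the principle behind the 96% figure rather than a derivation of it.

```python
import numpy as np
from scipy.stats import norm

ds = np.full(50, 0.5)            # fifty hypothetical measurements, each a "small" d = 0.5

# Assuming independent, equal-variance dimensions, the multivariate separation is
# the root-sum-of-squares of the univariate effect sizes:
D = np.sqrt(np.sum(ds**2))       # ≈ 3.54

# Optimal-classifier accuracy for two equal-sized normal groups separated by D:
print(round(norm.cdf(D / 2), 2)) # ≈ 0.96, even though each dimension alone gives only ≈ 0.60
```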

Subsequent chapters address sex differences in personality, cognition, interests, and the brain. It turns out that women are more warm, empathetic, æsthetically discerning, and cooperative than men are! They're also more into the Conventional, Artistic, and Social dimensions of the Holland occupational-interests model.

You might think that this is all due to socialization, but then it's hard to explain why the same differences show up in different cultures—and why (counterintuitively) the differences seem larger in richer, more feminist countries. (Although as evolutionary anthropologist William Buckner points out in his social-media criticism of Human Diversity, W.E.I.R.D. samples from different countries aren't capturing the full range of human cultures.) You might think that the "larger differences in rich countries" result is an artifact: maybe people in less-feminist countries implicitly make within-sex comparisons when answering personality questions (e.g., "I'm competitive for a woman") whereas people in more-feminist countries use a less sexist standard of comparison, construing ratings as compared to people-in-general. Murray points out that this explanation still posits the existence of large sex differences in rich countries (while explaining away the unexpected cross-cultural difference-in-differences). Another possibility is that sexual dimorphism in general increases with wealth, including, e.g., in height and blood pressure, not just in personality. (I notice that this is consilient with the view that agriculture was a mistake that suppresses humans' natural tendencies, and that people revert to forager-like lifestyles in many ways as the riches of the industrial revolution let them afford it.)

Women do better on verbal ability and social cognition, whereas men do better on visuospatial skills. The sexes achieve similar levels of overall performance via somewhat different mental "toolkits." Murray devotes a section to a 2007 result of Johnson and Bouchard, who report that general intelligence "masks the dimensions on which [sex differences in mental abilities] lie": people's overall skill in using tools from the metaphorical mental toolbox leads to underestimates of differences in toolkits (that is, nonmetaphorically, the effect sizes of sex differences in specific mental abilities), which you want to statistically correct for. This result in particular is super gratifying to me personally, because I independently had a very similar idea a few months back—it's super validating as an amateur to find that the pros have been thinking along the same track!

The second part of the book is about some ways in which people with different ancestries are different from each other! Obviously, there are no "distinct" "races" (that would be dumb), but it turns out (as found by endeavors such as Li et al. 2008) that when you throw clustering and dimensionality-reduction algorithms at SNP data (single nucleotide polymorphisms, places in the genome where more than one allele has non-negligible frequency), you get groupings that are a pretty good match to classical or self-identified "races".

Ask the computer to assume that an individual's ancestry came from K fictive ancestral populations where K := 2, and it'll infer that sub-Saharan Africans are descended entirely from one, East Asians and some Native Americans are descended entirely from the other, and everyone else is an admixture. But if you set K := 3, populations from Europe and the Near East (which were construed as admixtures in the K := 2 model) split off as a new inferred population cluster. And so on.
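
The actual tools here are dedicated programs like STRUCTURE or ADMIXTURE, which model the admixture proportions explicitly. As a rough stand-in for the "pick a K, get clusters" workflow, here's a toy sketch on simulated genotypes (everything below, including the three hidden source populations, is made up for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical data: rows are individuals, columns are SNPs coded as 0/1/2 copies of one allele.
rng = np.random.default_rng(0)
freqs = rng.uniform(0.05, 0.95, size=(3, 200))   # allele frequencies in three hidden populations
source = rng.integers(0, 3, size=300)            # which hidden population each individual came from
genotypes = rng.binomial(2, freqs[source])       # 300 individuals x 200 SNPs

coords = PCA(n_components=2).fit_transform(genotypes)   # dimensionality reduction
for K in (2, 3):
    clusters = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(coords)
    print(K, clusters[:10])   # with K = 3 the recovered clusters track the hidden populations
```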

These ancestry groupings are a "construct" in the sense that the groupings aren't "ordained by God"—the algorithm can find K groupings for your choice of K—but where it draws those category boundaries is a function of the data. The construct is doing cognitive work, concisely summarizing statistical regularities in the dataset (which is too large for humans to hold in their heads all at once): a map that reflects a territory.

Twentieth-century theorists like Fisher and Haldane and whatshisface-the-guinea-pig-guy had already figured out a lot about how evolution works (stuff like, a mutation that confers a fitness advantage of s has a probability of about 2s of sweeping to fixation), but a lot of hypotheses about recent human evolution weren't easy to test or even formulate until the genome was sequenced!
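
That "about 2s" rule (due to Haldane) is easy to check with a crude simulation, if you're willing to pretend a Wright–Fisher population is a good enough model; the numbers below are mine and purely illustrative:

```python
import numpy as np

def fixation_probability(s, N=1_000, trials=5_000, seed=0):
    """Fraction of single new beneficial mutants that sweep to fixation in a Wright-Fisher population."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        count = 1                          # one new mutant copy among 2N gene copies
        while 0 < count < 2 * N:
            p = count / (2 * N)
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection tilts the sampling frequency
            count = rng.binomial(2 * N, p_sel)
        fixed += (count == 2 * N)
    return fixed / trials

print(fixation_probability(s=0.05))        # ≈ 0.1, close to Haldane's 2s approximation
```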

You might think that there wasn't enough time in the 2–5k generations since we came forth out of Africa for much human evolution to take place: a new mutation needs to confer an unusually large benefit to sweep to fixation that fast. But what if you didn't actually need any new mutations? Natural selection on polygenic traits can also act on "standing variation": variation already present in the population that was mostly neutral in previous environments, but is fitness-relevant to new selection pressures. The rapid response to selective breeding observed in domesticated plants and animals mostly doesn't depend on new mutations.

Another mechanism of recent human evolution is introgression: early humans interbred with our Neanderthal and Denisovan "cousins", giving our lineage the chance to "steal" all their good alleles! In contrast to new mutations, which usually die out even when they're beneficial (that 2s rule again), alleles "flowing" from another population keep getting reintroduced, giving them more chances to sweep!

Population differences are important when working with genome-wide association studies, because a model "trained on" one population won't perform as well against the "test set" of a different population. Suppose you do a big study and find a bunch of SNPs that correlate with a trait, like schizophrenia or liking opera. The frequencies of those SNPs for two populations from the same continent (like Japanese and Chinese) will hugely correlate (Pearson's r ≈ 0.97), but for more genetically-distant populations from different continents, the correlation will still be big but not huge (like r ≈ 0.8 or whatever).

What do these differences in SNP frequencies mean in practice?? We ... don't know yet. At least some population differences are fairly well-understood: I'd tell you about sickle-cell and lactase persistence, except then I would have to scream. There are some cases where we see populations independently evolve different adaptations that solve the same problem: people living on the plateaus of both Tibet and Peru have both adapted to high altitudes, but the Tibetans did it by breathing faster and the Peruvians did it with more hemoglobin!

Sorry, "the Tibetans did it with ..." is sloppy phrasing on my part; what I actually mean is that the Tibetans who weren't genetically predisposed to breathe faster were more likely to die without leaving children behind. That's how evolution works!

The third part of the book is about genetic influences on class structure! Untangling the true causes of human variation is a really hard technical philosophy problem, but behavioral geneticists have at least gotten started with their simple ACE model. It works like this: first, assume (that is, "pretend") that the genetic variation for a trait is additive (if you have the appropriate SNP, you get more of the trait), rather than exhibiting epistasis (where the effects of different loci interfere with each other) or Mendelian dominance (where the presence of just one copy of an allele (of two) determines the phenotype, and it doesn't matter whether you heterozygously have a different allele as your second version of that gene). Then we pretend that we can partition the variance in phenotypes as the sum of the "additive" genetic variance A, plus the environmental variance "common" within a family C, plus "everything else" (including measurement "error" and the not-shared-within-families "environment") E. Briefly (albeit at the risk of being cliché): nature, nurture, and noise.

Then we can estimate the sizes of the A, C, and E components by studying fraternal and identical twins. (If you hear people talking about "twin studies", this is what they mean—not case studies of identical twins raised apart, which are really cool but don't happen very often.) Both kinds of twins have the same family environment C at the same time (parents, socioeconomic status, schools, &c.), but identical twins are twice as genetically related to each other as fraternal twins, so the extent to which the identical twins are more similar is going to pretty much be because of their genes. "Pretty much" in the sense that while there are ways in which the assumptions of the model aren't quite true (assortative mating makes fraternal twins more similar in the ways their parents were already similar before mating, identical twins might get treated more similarly by "the environment" on account of their appearance), Murray assures us that the experts assure us that the quantitative effects of these deviations are probably pretty small!
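
The back-of-the-envelope version of this is Falconer's formulas (which come up again later): given the trait correlations for identical and fraternal twin pairs, A ≈ 2(r_MZ − r_DZ), C ≈ 2r_DZ − r_MZ, and E ≈ 1 − r_MZ. A quick sketch, with hypothetical correlations loosely in the ballpark reported for adult IQ:

```python
def falconer_ace(r_mz, r_dz):
    """Classic Falconer estimates from identical- and fraternal-twin correlations."""
    a = 2 * (r_mz - r_dz)   # additive genetic variance: MZ twins share ~2x the relatedness of DZ twins
    c = 2 * r_dz - r_mz     # shared ("common") family environment
    e = 1 - r_mz            # everything else: nonshared environment and measurement error
    return a, c, e

# Hypothetical twin correlations (illustrative only):
print(falconer_ace(r_mz=0.75, r_dz=0.40))   # (0.70, 0.05, 0.25): big A, tiny C
```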

Anyway, it turns out that the effect of the shared environment C for most outcomes is smaller than most people intuitively expect—actually close to zero for personality and adult intelligence specifically! Sometimes sloppy popularizers summarize this as "parenting doesn't matter" in full generality, but it depends on the trait or outcome you're measuring: for example, the shared environment component gets up to 25% for years-of-schooling ("educational attainment") and 36% for "basic interpersonal interactions." Culture obviously exists, but for underlying psychological traits, the part of the environment that matters is mostly not shared by siblings in the same family—not the part of the environment we know how to control. Thus, a lot of economic and class stratification actually ends up being along genetic lines: the nepotism of family wealth can buy opportunities and second chances, but it doesn't actually live your life for you.

It's important not to overinterpret the heritability results; there are a bunch of standard caveats that go here that everyone's treatment of the topic needs to include! Heritability is about the variance in phenotypes that can be predicted by variance in genes. This is not the same concept as "controlled by genes." To see this, notice that the trait "number of heads" has a heritability of zero because the variance is zero: all living people have exactly one head. (Siamese twins are two people.) Heritability estimates are also necessarily bound to a particular population in a particular place and time, which can face constraints shaped solely by the environment. If you plant half of a batch of seeds in the shade and half in the sun, the variance in the heights of the resulting plants will be associated with variance in genes within each group, but the difference between the groups is solely determined by the sunniness of their environments. Likewise, in a Society with a cruel caste system under which children with red hair are denied internet access, part of the heritability of intellectual achievement is going to come from alleles that code for red hair. Even though (ex hypothesi) redheads have the same inherent intellectual potential as everyone else, the heritability computation can't see into worlds that are not our own, which might have vastly different gene–environment correlations.

(I speculate that heritability calculations being so Society-bound might help make sense of the "small role of the shared environment" results that many still balk at. If the population you're studying goes to public schools—or schools at all, as contrasted to other ways of living and learning—that could suppress a lot of the variance that might otherwise occur in families.)

Old-timey geneticists used to think that they would find a small number of "genes for" something, but it turns out that we live in an omnigenic, pleiotropic world where lots and lots of SNPs each exert a tiny effect on potentially lots and lots of things. I feel like this probably shouldn't have been surprising (genes code for amino-acid sequences, variation in what proteins get made from those amino-acid sequences is going to affect high-level behaviors, but high-level behaviors involve lots of proteins in a super-complicated unpredictable way), but I guess it was.

Murray's penultimate chapter summarizes the state of a debate between a "Robert Plomin school" and an "Eric Turkheimer school" on the impact and import of polygenic scores, where we tally up all the SNPs someone has that are associated with a trait of interest.
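
Mechanically, a polygenic score is (to a first approximation) just a weighted sum: for each trait-associated SNP, multiply how many copies of the effect allele you carry by the effect size estimated in the GWAS, and add it all up. A toy sketch with made-up numbers:

```python
import numpy as np

# Hypothetical GWAS output: per-SNP effect estimates for some trait.
effect_sizes = np.array([0.03, -0.01, 0.02, 0.05, -0.02])

# One person's genotype at those SNPs: 0, 1, or 2 copies of the effect allele.
genotype = np.array([2, 0, 1, 1, 2])

# The polygenic score is the weighted sum of allele counts and estimated effects.
score = np.dot(genotype, effect_sizes)
print(round(score, 2))   # 0.09 in this toy example; real scores sum over hundreds of thousands of SNPs
```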

The starry-eyed view epitomized by Plomin says that polygenic scores are super great and everyone and her dog should be excited about them: they're causal in only one direction (the trait can't cause the score) and they let us assess risks in individuals before they happen. Clinical psychology will enter a new era of "positive genomics", where we understand how to work with the underlying dimensions along which people vary (including positively), rather than focusing on treating "diagnoses" that people allegedly "have".

The curmudgeonly view epitomized by Turkheimer says that Science is about understanding the causal structure of phenomena, and that polygenic scores don't fucking tell us anything. Marital status is heritable in the same way that intelligence is heritable, not because there are "divorce genes" in any meaningful biological sense, but because of a "universal, nonspecific genetic pull on everything": on average, people with more similar genes will make more similar proteins from those similar genes, and therefore end up with more similar phenotypes that interact with the environment in a more similar way, and eventually (the causality flowing "upwards" through many hierarchical levels of organization) this shows up in the divorce statistics of a particular Society in a particular place and time. But this is opaque and banal; the real work of Science is in figuring out what all the particular gene variations actually do.

Notably, Plomin and Turkheimer aren't actually disagreeing here: it's a difference in emphasis rather than facts. Polygenic scores don't explain mechanisms—but might they end up being useful, and used, anyway? Murray's vision of social science is content to make predictions and "explain variance" while remaining ignorant of ultimate causality. (Murray compares polygenic scores to "economic indexes predicting GDP growth", which is not necessarily a reassuring analogy to those who doubt how much of GDP represents real production rather than the "exhaust heat" of zero-sum contests in an environment of manufactured scarcity and artificial demand.) Meanwhile, my cursory understanding (while kicking myself for still not having put in the hours to get much farther into Probabilistic Graphical Models: Principles and Techniques) was that you need to understand causality in order to predict what interventions will have what effects: variance in rain may be statistically "explained by" variance in mud puddles, but you can't make it rain by turning the hose on. Maybe our feeble state of knowledge is why we don't know how to find reliable large-effect environmental interventions that might yet exist in the vastness of the space of possible interventions.

There are also some appendices at the back of the book! Appendix 1 (reproduced from, um, one of Murray's earlier books with a coauthor) explains some basic statistics concepts. Appendix 2 ("Sexual Dimorphism in Humans") goes over the prevalence of intersex conditions and gays, and then—so much for this post broadening the topic scope of this blog—transgender typology! Murray presents the Blanchard–Bailey–Lawrence–Littman view as fact, which I think is basically correct, but a more comprehensive treatment (which I concede may be too much to hope for from a mere Appendix) would have at least mentioned alternative views (Serano? Veale?), if only to explain why they're worth dismissing. (Contrast to the eight pages in the main text explaining why "But, but, epigenetics!" is worth dismissing.) Then Appendix 3 ("Sex Differences in Brain Volumes and Variance") has tables of brain-size data, and an explanation of the greater-male-variance hypothesis. Cool!


... and that's the book review that I would prefer to write. A science review of a science book, for science nerds: the kind of thing that would have no reason to draw your attention if you're not genuinely interested in Mahalanobis D effect sizes or adaptive introgression or Falconer's formulas, for their own sake, or (better) for the sake of compressing the length of the message needed to encode your observations.

But that's not why you're reading this. That's not why Murray wrote the book. That's not even why I'm writing this. We should hope—emphasis on the should—for a discipline of Actual Social Science, whose practitioners strive to report the truth, the whole truth, and nothing but the truth, with the same passionately dispassionate objectivity they might bring to the study of beetles, or algebraic topology—or that an alien superintelligence might bring to the study of humans.

We do not have a discipline of Actual Social Science. Possibly because we're not smart enough to do it, but perhaps more so because we're not smart enough to want to do it. No one has an incentive to lie about the homotopy groups of an n-sphere. If you're asking questions about homotopy groups at all, you almost certainly care about getting the right answer for the right reasons. At most, you might be biased towards believing your own conjectures in the optimistic hope of achieving eternal algebraic-topology fame and glory, like Ruth Lawrence. But nothing about algebraic topology is going to be morally threatening in a way that will leave you fearing that your ideological enemies have seized control of the publishing-houses to plant lies in the textbooks to fuck with your head, or sobbing that a malicious God created the universe as a place of evil.

Okay, maybe that was a bad example; topology in general really is the kind of mindfuck that might be the design of an adversarial agency. (Remind me to tell you about the long line, which is like the line of real numbers, except much longer.)

In any case, as soon as we start to ask questions about humans—and far more so identifiable groups of humans—we end up entering the domain of politics.

We really shouldn't. Everyone should perceive a common interest in true beliefs—maps that reflect the territory, simple theories that predict our observations—because beliefs that make accurate predictions are useful for making good decisions. That's what "beliefs" are for, evolutionarily speaking: my analogues in humanity's environment of evolutionary adaptedness were better off believing that (say) the berries from some bush were good to eat if and only if the berries were actually good to eat. If my analogues unduly-optimistically thought the berries were good when they actually weren't, they'd get sick (and lose fitness), but if they unduly-pessimistically thought the berries were not good when they actually were, they'd miss out on valuable calories (and fitness).

(Okay, this story is actually somewhat complicated by the fact that evolution didn't "figure out" how to build brains that keep track of probability and utility separately: my analogues in the environment of evolutionary adaptedness might also have been better off assuming that a rustling in the bush was a tiger, even if it usually wasn't a tiger, because failing to detect actual tigers was so much more costly (in terms of fitness) than erroneously "detecting" an imaginary tiger. But let this pass.)

The problem is that, while any individual should always want true beliefs for themselves in order to navigate the world, you might want others to have false beliefs in order to trick them into mis-navigating the world in a way that benefits you. If I'm trying to sell you a used car, then—counterintuitively—I might not want you to have accurate beliefs about the car, if that would reduce the sale price or result in no deal. If our analogues in the environment of evolutionary adaptedness regularly faced structurally similar situations, and if it's expensive to maintain two sets of beliefs (the real map for ourselves, and a fake map for our victims), we might end up with a tendency not just to be lying motherfuckers who deceive others, but also to self-deceive in situations where the payoffs (in fitness) of tricking others outweighed those of being clear-sighted ourselves.

That's why we're not smart enough to want a discipline of Actual Social Science. The benefits of having a collective understanding of human behavior—a shared map that reflects the territory that we are—could be enormous, but beliefs about our own qualities, and those of socially-salient groups to which we belong (e.g., sex, race, and class) are exactly those for which we face the largest incentive to deceive and self-deceive. Counterintuitively, I might not want you to have accurate beliefs about the value of my friendship (or the disutility of my animosity), for the same reason that I might not want you to have accurate beliefs about the value of my used car. That makes it a lot harder not just to get the right answer for the right reasons, but also to trust that your fellow so-called "scholars" are trying to get the right answer, rather than trying to sneak self-aggrandizing lies into the shared map in order to fuck you over. You can't just write a friendly science book for oblivious science nerds about "things we know about some ways in which people are different from each other", because almost no one is that oblivious. To write and be understood, you have to do some sort of positioning of how your work fits into the war over the shared map.

Murray positions Human Diversity as a corrective to a "blank slate" orthodoxy that refuses to entertain any possibility of biological influences on psychological group differences. The three parts of the book are pitched not simply as "stuff we know about biologically-mediated group differences" (the oblivious-science-nerd approach that I would prefer), but as a rebuttal to "Gender Is a Social Construct", "Race Is a Social Construct", and "Class Is a Function of Privilege." At the same time, however, Murray is careful to position his work as nonthreatening: "there are no monsters in the closet," he writes, "no dread doors that we must fear opening." He likewise "state[s] explicitly that [he] reject[s] claims that groups of people, be they sexes or races or classes, can be ranked from superior to inferior [or] that differences among groups have any relevance to human worth or dignity."

I think this strategy is sympathetic but ultimately ineffective. Murray is trying to have it both ways: challenging the orthodoxy, while denying the possibility of any unfortunate implications of the orthodoxy being false. It's like ... theistic evolution: satisfactory as long as you don't think about it too hard, but among those with a high need for cognition, who know what it's like to truly believe (as I once believed), it's not going to convince anyone who hasn't already broken from the orthodoxy.

Murray concludes, "Above all, nothing we learn will threaten human equality properly understood." I strongly agree with the moral sentiment, the underlying axiology that makes this seem like a good and wise thing to say.

And yet I have been ... trained. Trained to instinctively apply my full powers of analytical rigor and skepticism to even that which is most sacred. Because my true loyalty is to the axiology—to the process underlying my current best guess as to that which is most sacred. If that which was believed to be most sacred turns out to not be entirely coherent ... then we might have some philosophical work to do, to reformulate the sacred moral ideal in a way that's actually coherent.

"Nothing we learn will threaten X properly understood." When you elide the specific assignment X := "human equality", the form of this statement is kind of suspicious, right? Why "properly understood"? It would be weird to say, "Nothing we learn will threaten the homotopy groups of an n-sphere properly understood."

This kind of claim to be non-disprovable seems like the kind of thing you would only invent if you were secretly worried about X being threatened by new discoveries, and wanted to protect your ability to backtrack and re-gerrymander your definition of X to protect what you (think that you) currently believe.

If being an oblivious science nerd isn't an option, half-measures won't suffice. I think we can do better by going meta and analyzing the functions being served by the constraints on our discourse and seeking out clever self-aware strategies for satisfying those functions without lying about everything. We mustn't fear opening the dread meta-door in front of whether there actually are dread doors that we must fear opening.

Why is the blank slate doctrine so compelling, that so many feel the need to protect it at all costs? (As I once felt the need.) It's not ... if you've read this far, I assume you will forgive me—it's not scientifically compelling. If you were studying humans the way an alien superintelligence would, trying to get the right answer for the right reasons (which can include conditional answers: if what humans are like depends on choices about what we teach our children, then there will still be a fact of the matter as to what choices lead to what outcomes), you wouldn't put a whole lot of prior probability on the hypothesis "Both sexes and all ancestry-groupings of humans have the same distribution of psychological predispositions; any observed differences in behavior are solely attributable to differences in their environments." Why would that be true? We know that sexual dimorphism exists. We know that reproductively isolated populations evolve different traits to adapt to their environments, like those birds with differently-shaped beaks that Darwin saw on his boat trip. We could certainly imagine that none of the relevant selection pressures on humans happened to touch the brain—but why? Wouldn't that be kind of a weird coincidence?

If the blank slate doctrine isn't scientifically compelling—it's not something you would invent while trying to build shared maps that reflect the territory—then its appeal must have something to do with some function it plays in conflicts over the shared map, where no one trusts each other to be doing Actual Social Science rather than lying to fuck everyone else over.

And that's where the blank slate doctrine absolutely shines—it's the Schelling point for preventing group conflicts! (A Schelling point is a choice that's salient as a focus for mutual expectations: what I think that you think that I think ... &c. we'll choose.) If you admit that there could be differences between groups, you open up the questions of in what exact traits and of what exact magnitudes, which people have an incentive to lie about to divert resources and power to their group by establishing unfair conventions and then misrepresenting those contingent bargaining equilibria as some "inevitable" natural order.

If you're afraid of purported answers being used as a pretext for oppression, you might hope to make the question un-askable. Can't oppress people on the basis of race if race doesn't exist! Denying the existence of sex is harder—which doesn't stop people from occasionally trying. "I realize I am writing in an LGBT era when some argue that 63 distinct genders have been identified," Murray notes at the beginning of Appendix 2. But this oblique acerbity fails to pass the Ideological Turing Test. The language of "has been identified" suggests an attempt at scientific taxonomy—a project, which I share with Murray, of fitting categories to describe a preexisting objective reality. But I don't think the people making 63-item typeahead select "Gender" fields for websites are thinking in such terms to begin with. The specific number 63 is ridiculous and can't exist; it might as well be, and often is, a fill-in-the-blank free text field. This is insanely evil (where I mean the adjective literally rather than as a generic intensifier—evil in a way that is of or related to insanity), but I must acknowledge that it is at least good game theory. If you don't trust taxonomists to be acting in good faith—if you think we're trying to bulldoze the territory to fit a preconceived map—then destroying the language that would be used to build oppressive maps is a smart move.

The taboo mostly only applies to psychological trait differences, both because those are a sensitive subject, and because it's easier to motivatedly see what you want to see in them: whereas things like height or skin tone can be directly seen and uncontroversially measured with well-understood physical instruments (like a meterstick or digital photo pixel values), psychological assessments are much more complicated and therefore hard to detach from the eye of the beholder. (If I describe Mary as "warm, compassionate, and agreeable", the words mean something in the sense that they change what experiences you anticipate—if you believed my report, you would be surprised if Mary were to kick your dog and make fun of your nose job—but the things that they mean are a high-level statistical signal in behavior for which we don't have a simple measurement device like a meterstick to appeal to if you and I don't trust each other's character assessments of Mary.)

Notice how the "not allowing sex and race differences in psychological traits to appear on shared maps is the Schelling point for resistance to sex- and race-based oppression" actually gives us an explanation for why one might reasonably have a sense that there are dread doors that we must not open. Undermining the "everyone is Actually Equal" Schelling point could catalyze a preference cascade—a slide down the slippery slope to the the next Schelling point, which might be a lot worse than the status quo on the "amount of rape and genocide" metric, even if it does slightly better on "estimating heritability coefficients." The orthodoxy isn't just being dumb for no reason. In analogy, Galileo and Darwin weren't trying to undermine Christianity—they had much more interesting things to think about—but religious authorities were right to fear heliocentrism and evolution: if the prevailing coordination equilibrium depends on lies, then telling the truth is a threat and it is disloyal. And if the prevailing coordination equilibrium is basically good, then you can see why purported truth-tellers striking at the heart of the faith might be believed to be evil.

Murray opens the parts of the book about sex and race with acknowledgments of the injustice of historical patriarchy ("When the first wave of feminism in the United States got its start [...] women were rebelling not against mere inequality, but against near-total legal subservience to men") and racial oppression ("slavery experienced by Africans in the New World went far beyond legal constraints [...] The freedom granted by emancipation in America was only marginally better in practice and the situation improved only slowly through the first half of the twentieth century"). It feels ... defensive? (To his credit, Murray is generally pretty forthcoming about how the need to write "defensively" shaped the book, as in a sidebar in the introduction that says that he'd prefer to say a lot more about evopsych, but he chose to just focus on empirical findings in order to avoid the charge of telling just-so stories.)

But this kind of defensive half-measure satisfies no one. From the oblivious-science-nerd perspective—the view that agrees with Murray that "everyone should calm down"—you shouldn't need to genuflect to the memory of some historical injustice before you're allowed to talk about Science. But from the perspective that cares about Justice and not just Truth, an insincere gesture or a strategic concession is all the more dangerous insofar as it could function as camouflage for a nefarious hidden agenda. If your work is explicitly aimed at destroying the anti-oppression Schelling-point belief, a few hand-wringing historical interludes and bromides about human equality having no testable implications (!!) aren't going to clear you of the suspicion that you're doing it on purpose—trying to destroy the anti-oppression Schelling point in order to oppress, and not because anything that can be destroyed by the truth, should be.

And sufficient suspicion makes communication nearly impossible. (If you know someone is lying, their words mean nothing, not even as the opposite of the truth.) As far as many of Murray's detractors are concerned, it almost doesn't matter what the text of Human Diversity says, how meticulously researched of a psychology/neuroscience/genetics lit review it is. From their perspective, Murray is "hiding the ball": they're not mad about this book; they're mad about specifically chapters 13 and 14 of a book Murray coauthored twenty-five years ago. (I don't think I'm claiming to be a mind-reader here; the first 20% of The New York Times's review of Human Diversity is pretty explicit and representative.)

In 1994's The Bell Curve: Intelligence and Class Structure in American Life, Murray and coauthor Richard J. Herrnstein argued that a lot of variation in life outcomes is explained by variation in intelligence. Some people think that folk concepts of "intelligence" or being "smart" are ill-defined and therefore not a proper object of scientific study. But that hasn't stopped some psychologists from trying to construct tests purporting to measure an "intelligence quotient" (or IQ for short). It turns out that if you give people a bunch of different mental tests, the results all positively correlate with each other: people who are good at one mental task, like listening to a list of numbers and repeating them backwards ("reverse digit span"), are also good at others, like knowing what words mean ("vocabulary"). There's a lot of fancy linear algebra involved, but basically, you can visualize people's test results as a hyperellipsoid in some high-dimensional space where the dimensions are the different tests. (I rely on this "configuration space" visual metaphor so much for so many things that when I started my secret ("secret") gender blog, it felt right to put it under a .space TLD.) The longest axis of the hyperellipsoid corresponds to the "g factor" of "general" intelligence—the choice of axis that cuts through the most variance in mental abilities.
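
For intuition about "the longest axis of the hyperellipsoid", here's a toy simulation of my own (with made-up loadings, not real test data): generate a battery of positively correlated test scores from a single latent factor, and the first principal component recovers an axis on which every test loads in the same direction.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 2_000
g = rng.normal(size=n)                              # latent "general" factor (hypothetical)
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])      # how strongly each of five tests taps g
scores = g[:, None] * loadings + rng.normal(size=(n, 5)) * np.sqrt(1 - loadings**2)

pca = PCA().fit(scores)
pc1 = pca.components_[0]
pc1 = pc1 * np.sign(pc1.sum())                      # fix the arbitrary overall sign
print(np.round(pc1, 2))                             # all positive: the "positive manifold"
print(round(pca.explained_variance_ratio_[0], 2))   # share of variance along the longest axis
```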

It's important not to overinterpret the g factor as some unitary essence of intelligence rather than the length of a hyperellipsoid. It seems likely that if you gave people a bunch of physical tests, they would positively correlate with each other, such that you could extract a "general factor of athleticism". (It would be really interesting if anyone's actually done this using the same methodology used to construct IQ tests!) But athleticism is going to be a very "coarse" construct for which the tails come apart: for example, world champion 100-meter sprinter Usain Bolt's best time in the 800 meters is reportedly only around 2:10 or 2:07! (For comparison, I ran a 2:08.3 in high school once!)

Anyway, so Murray and Herrnstein talk about this "intelligence" construct, and how it's heritable, and how it predicts income, school success, not being a criminal, &c., and how Society is becoming increasingly stratified by cognitive abilities, as school credentials become the ticket to the new upper class.

This should just be more social-science nerd stuff, the sort of thing that would only draw your attention if, like me, you feel bad about not being smart enough to do algebraic topology and want to console yourself by at least knowing about the Science of not being smart enough to do algebraic topology. The reason everyone and her dog is still mad at Charles Murray a quarter of a century later is Chapter 13, "Ethnic Differences in Cognitive Ability", and Chapter 14, "Ethnic Inequalities in Relation to IQ". So, apparently, different ethnic/"racial" groups have different average scores on IQ tests. Ashkenazi Jews do the best, which is why I sometimes privately joke that the fact that I'm only 85% Ashkenazi (according to 23andMe) explains my low IQ. (I got a 131 on the WISC-III at age 10, but that's pretty dumb compared to some of my robot-cult friends.) East Asians do a little better than Europeans/"whites". And—this is the part that no one is happy about—the difference between U.S. whites and U.S. blacks is about Cohen's d ≈ 1. (If two groups differ by d = 1 on some measurement that's normally distributed within each group, that means that the mean of the group with the lower average measurement is at the 16th percentile of the group with the higher average measurement, or that a uniformly-randomly selected member of the group with the higher average measurement has a probability of about 0.76 of having a higher measurement than a uniformly-randomly selected member of the group with the lower average measurement.)
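
Those two numbers in the parenthetical are just properties of the normal distribution, easy to check: with equal spreads, the lower group's mean sits at the Φ(−d) quantile of the higher group's distribution, and the probability that a random member of the higher-scoring group outscores a random member of the lower-scoring group is Φ(d/√2).

```python
from math import sqrt
from scipy.stats import norm

d = 1.0
print(round(norm.cdf(-d), 3))           # 0.159: lower group's mean is at about the 16th percentile of the higher group
print(round(norm.cdf(d / sqrt(2)), 3))  # 0.760: chance a random higher-group member outscores a random lower-group member
```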

Given the tendency for people to distort shared maps for political reasons, you can see why this is a hotly contentious line of research. Even if you take the test numbers at face value, racists trying to secure unjust privileges for groups that score well have an incentive to "play up" group IQ differences in bad faith even when they shouldn't be relevant. As economist Glenn C. Loury points out in The Anatomy of Racial Inequality, cognitive abilities decline with age, and yet we don't see a moral panic about the consequences of an aging workforce, because older people are construed by the white majority as an "us"—our mothers and fathers—rather than an outgroup. Individual differences in intelligence are also presumably less politically threatening because "smart people" as a group aren't construed as a natural political coalition—although Murray's work on cognitive class stratification would seem to suggest this intuition is mistaken.

It's important not to overinterpret the IQ-scores-by-race results; there are a bunch of standard caveats that go here that everyone's treatment of the topic needs to include. Again, just because variance in a trait is statistically associated with variance in genes within a population, does not mean that differences in that trait between populations are caused by genes: remember the illustrations about sun-deprived plants and internet-deprived red-haired children. Group differences in observed tested IQs are entirely compatible with a world in which those differences are entirely due to the environment imposed by an overtly or structurally racist society. Maybe the tests are culturally biased. Maybe people with higher socioeconomic status get more opportunities to develop their intellect, and racism impedes socio-economic mobility. And so on.

The problem is, a lot of the blank-slatey environmentally-caused-differences-only hypotheses for group IQ differences start to look less compelling when you look into the details. "Maybe the tests are biased", for example, isn't an insurmountable defeater to the entire endeavor of IQ testing—it is itself a falsifiable hypothesis, or can become one if you specify what you mean by "bias" in detail. One idea of what it would mean for a test to be biased is if it's partially measuring something other than what it purports to be measuring: if your test measures a combination of "intelligence" and "submission to the hegemonic cultural dictates of the test-maker", then individuals and groups that submit less to your cultural hegemony are going to score worse, and if you market your test as unbiasedly measuring intelligence, then people who believe your marketing copy will be misled into thinking that those who don't submit are dumber than they really are. But if so, and if not all of your individual test questions are equally loaded on intelligence and cultural-hegemony, then the cultural bias should show up in the statistics. If some questions are more "fair" and others are relatively more culture-biased, then you would expect the order of item difficulties to differ by culture: the "item characteristic curve" plotting the probability of getting a biased question "right" as a function of overall test score should differ by culture, with the hegemonic group finding it "easier" and others finding it "harder". Conversely, if the questions that discriminate most between differently-scoring cultural/ethnic/"racial" groups were the same as the questions that discriminate between (say) younger and older children within each group, that would be the kind of statistical clue you would expect to see if the test was unbiased and the group difference was real.
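
Here's what that "item characteristic curve" idea looks like in the simplest (two-parameter logistic) item-response model. The curves and the difficulty shift below are hypothetical, just to illustrate what differential item functioning would look like if a question were culturally loaded:

```python
import numpy as np

def icc(theta, difficulty, discrimination=1.0):
    """Two-parameter logistic item characteristic curve: P(correct | ability theta)."""
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - difficulty)))

theta = np.linspace(-3, 3, 7)          # overall ability / test score (standardized)

# A "fair" item has the same curve for every group at a given ability level...
print(np.round(icc(theta, difficulty=0.0), 2))

# ...whereas a culturally loaded item is effectively "harder" for the non-hegemonic group
# even at the same ability level (hypothetical difficulty shift of +0.8):
print(np.round(icc(theta, difficulty=0.8), 2))
```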

Hypotheses that accept IQ test results as unbiased, but attribute group differences in IQ to the environment, also make statistical predictions that could be falsified. Controlling for parental socioeconomic status only cuts the black–white gap by a third. (And note, on the hereditarian model, some of the correlation between parental SES and child outcomes is due to both being causally downstream of genes.) The mathematical relationship between between-group and within-group heritability means that the conjunction of wholly-environmentally-caused group differences, and the within-group heritability, makes quantitative predictions about how much the environments of the groups differ. Skin color is actually only controlled by a small number of alleles, so if you think Society's discrimination on skin color causes IQ differences, you could maybe design a clever study that measures both overall-ancestry and skin color, and does statistics on what happens when they diverge. And so on.

In mentioning these arguments in passing, I'm not trying to provide a comprehensive lit review on the causality of group IQ differences. (That's someone else's blog.) I'm not (that?) interested in this particular topic, and without having mastered the technical literature, my assessment would be of little value. Rather, I am ... doing some context-setting for the problem I am interested in, of fixing public discourse. The reason we can't have an intellectually-honest public discussion about human biodiversity is because good people want to respect the anti-oppression Schelling point and are afraid of giving ammunition to racists and sexists in the war over the shared map. "Black people are, on average, genetically less intelligent than white people" is the kind of sentence that pretty much only racists would feel good about saying out loud, independently of its actual truth value. In a world where most speech is about manipulating shared maps for political advantage rather than getting the right answer for the right reasons, it is rational to infer that anyone who entertains such hypotheses is either motivated by racial malice, or is at least complicit with it—and that rational expectation isn't easily canceled with a pro forma "But, but, civil discourse" or "But, but, the true meaning of Equality is unfalsifiable" disclaimer.

To speak to those who aren't already oblivious science nerds—or are committed to emulating such, as it is scientifically dubious whether anyone is really that oblivious—you need to put more effort into your excuse for why you're interested in these topics. Here's mine, and it's from the heart, though it's up to the reader to judge for herself how credible I am when I say this—

I don't want to be complicit with hatred or oppression. I want to stay loyal to the underlying egalitarian–individualist axiology that makes the blank slate doctrine sound like a good idea. But I also want to understand reality, to make sense of things. I want a world that's not lying to me. Having to believe false things—or even just not being able to say certain true things when they would otherwise be relevant—exacts a dire cost on our ability to make sense of the world, because you can't just censor a few forbidden hypotheses—you have to censor everything that implies them, and everything that implies the things that imply them: the more adept you are at making logical connections, the more of your mind you need to excise to stay in compliance.

We can't talk about group differences, for fear that anyone arguing that differences exist is just trying to shore up oppression. But ... structural oppression and actual group differences can both exist at the same time. They're not contradicting each other! Like, the fact that men are physically stronger than women (on average, but the effect size is enormous, like d ≈ 2.6 for total muscle mass) is not unrelated to the persistence of patriarchy! (The ability to credibly threaten to physically overpower someone gives the more powerful party a bargaining advantage, even if the threat is typically unrealized.) That doesn't mean patriarchy is good; to think so would be to commit the naturalistic fallacy of attempting to derive an ought from an is. No one would say that famine and plague are good just because they, too, are subject to scientific explanation. This is pretty obvious, really? But similarly, genetically-mediated differences in cognitive repertoires between ancestral populations are probably going to be part of the explanation for why we see the particular forms of inequality and oppression that we do, just as a brute fact of history devoid of any particular moral significance, like how part of the explanation for why European conquest of the Americas happened earlier and went more smoothly for the invaders than the colonization of Africa had to do with the disease burden going the other way (Native Americans were particularly vulnerable to smallpox, but Europeans were particularly vulnerable to malaria).

Again—obviously—is does not imply ought. In deference to the historically well-justified egalitarian fear that such hypotheses will primarily be abused by bad actors to portray their own group as "superior", I suspect it's helpful to dwell on science-fictional scenarios in which the boot of history is on one's own neck, if the boot does not happen to be on one's own neck in real life. If a race of lavender humans from an alternate dimension were to come through a wormhole and invade our Earth and cruelly subjugate your people, you would probably be pretty angry, and maybe join a paramilitary group aimed at overthrowing lavender supremacy and re-instantiating civil rights. The possibility of a partially-biological explanation for why the purple bastards discovered wormhole generators when we didn't (maybe they have d ≈ 1.8 on us in visuospatial skills, enabling their population to be first to "roll" a lucky genius (probably male) who could discover the wormhole field equations) would not make the conquest somehow justified.

I don't know how to build a better world, but it seems like there are quite general grounds on which we should expect that it would be helpful to be able to talk about social problems in the language of cause and effect, with the austere objectivity of an engineering discipline. If you want to build a bridge (that will actually stay up), you need to study "the careful textbooks [that] measure [...] the load, the shock, the pressure [that] material can bear." If you want to build a just Society (that will actually stay up), you need a discipline of Actual Social Science that can publish textbooks, and to get that, you need the ability to talk about basic facts about human existence and make simple logical and statistical inferences between them.

And no one can do it! ("Well for us, if even we, even for a moment, can get free our heart, and have our lips unchained—for that which seals them hath been deep-ordained!") Individual scientists can get results in their respective narrow disciplines; Charles Murray can just barely summarize the science to a semi-popular audience without coming off as too overtly evil to modern egalitarian moral sensibilities. (At least, the smarter egalitarians? Or, maybe I'm just old.) But at least a couple aspects of reality are even worse (with respect to naïve, non-renormalized egalitarian moral sensibilities) than the ball-hiders like Murray can admit, having already blown their entire Overton budget explaining the relevant empirical findings.

Murray approvingly quotes Steven Pinker (a fellow ball-hider, though Pinker is better at it): "Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group."

A fine sentiment. I emphatically agree with the underlying moral intuition that makes "Individuals should not be judged by group membership" sound like a correct moral principle—one cries out at the monstrous injustice of the individual being oppressed on the basis of mere stereotypes of what other people who look like them might statistically be like.

But can I take this literally as the exact statement of a moral principle? Technically?—no! That's actually not how epistemology works! The proposed principle derives its moral force from the case of complete information: if you know for a fact that I have moral property P, then it would be monstrously unjust to treat me differently just because other people who look like me mostly don't have moral property P. But in the real world, we often—usually—don't have complete information about people, or even about ourselves.

Bayes's theorem (just a few inferential steps away from the definition of conditional probability itself, barely worthy of being called a "theorem") states that for hypothesis H and evidence E, P(H|E) = P(E|H)P(H)/P(E). This is the fundamental equation that governs all thought. When you think you see a tree, that's really just your brain computing a high value for the probability of your sensory experiences given the hypothesis that there is a tree, multiplied by the prior probability that there is a tree, as a fraction of all the possible worlds that could be generating your sensory experiences.

What goes for seeing trees, goes the same for "treating individuals as individuals": the process of getting to know someone as an individual, involves your brain exploiting the statistical relationships between what you observe, and what you're trying to learn about. If you see someone wearing an Emacs tee-shirt, you're going to assume that they probably use Emacs, and asking them about their dot-emacs file is going to seem like a better casual conversation-starter compared to the base rate of people wearing non-Emacs shirts. Not with certainty—maybe they just found the shirt in a thrift store and thought it looked cool—but the shirt shifts the probabilities implied by your decisionmaking.
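
To make the shirt example concrete, here's the Bayes's-theorem arithmetic with numbers I've invented purely for illustration:

```python
# Toy numbers (made up) for the Emacs-shirt update: P(H|E) = P(E|H)P(H) / P(E).
p_emacs = 0.02                      # prior: fraction of people who use Emacs
p_shirt_given_emacs = 0.10          # likelihood: Emacs users who'd wear the shirt
p_shirt_given_not = 0.001           # non-users who found it in a thrift store

p_shirt = p_shirt_given_emacs * p_emacs + p_shirt_given_not * (1 - p_emacs)   # law of total probability
posterior = p_shirt_given_emacs * p_emacs / p_shirt
print(round(posterior, 2))          # ≈ 0.67: far above the 2% base rate, but not certainty
```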

The problem that Bayesian reasoning poses for naïve egalitarian moral intuitions, is that, as far as I can tell, there's no philosophically principled reason for "probabilistic update about someone's psychology on the evidence that they're wearing an Emacs shirt" to be treated fundamentally differently from "probabilistic update about someone's psychology on the evidence that she's female". These are of course different questions, but to a Bayesian reasoner (an inhuman mathematical abstraction for getting the right answer and nothing else), they're the same kind of question: the correct update to make is an empirical matter that depends on the actual distribution of psychological traits among Emacs-shirt-wearers and among women. (In the possible world where most people wear tee-shirts from the thrift store that looked cool without knowing what they mean, the "Emacs shirt → Emacs user" inference would usually be wrong.) But to a naïve egalitarian, judging someone on their expressed affinity for Emacs is good, but judging someone on their sex is bad and wrong.

I used to be a naïve egalitarian. I was very passionate about it. I was eighteen years old. I am—again—still fond of the moral sentiment, and eager to renormalize it into something that makes sense. (Some egalitarian anxieties do translate perfectly well into the Bayesian setting, as I'll explain in a moment.) But the abject horror I felt at eighteen at the mere suggestion of making generalizations about people just—doesn't make sense. It's not even that it shouldn't be practiced (it's not that my heart wasn't in the right place), but that it can't be practiced—that the people who think they're practicing it are just confused about how their own minds work.

Give people photographs of various women and men and ask them to judge how tall the people in the photos are, as Nelson et al. 1990 did, and people's guesses reflect both the photo-subjects' actual heights and (to a lesser degree) their sex. Unless you expect people to be perfect at assessing height from photographs (when they don't know how far away the cameraperson was standing, aren't "trigonometrically omniscient", &c.), this behavior is just correct: men really are taller than women on average, so P(true-height|apparent-height, sex) ≠ P(true-height|apparent-height) because of regression to the mean (and women and men regress to different means). But this all happens subconsciously: in the same study, when the authors tried height-matching the photographs (for every photo of a woman of a given height, there was another photo in the set of a man of the same height) and telling the participants about the height-matching and offering a cash reward to the best height-judge, more than half of the stereotyping effect remained. It would seem that people can't consciously readjust their learned priors in reaction to verbal instructions pertaining to an artificial context.
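
Here's the regression-to-the-mean logic as a quick normal-normal shrinkage sketch (my numbers are made up; the point is the direction of the effect): the best guess of someone's true height is a precision-weighted average of the noisy apparent height and the group mean, so the same apparent height yields different guesses for a man and a woman, and a noisier view pushes the guess further toward the group mean.

```python
def posterior_height(apparent, group_mean, prior_sd=7.0, noise_sd=5.0):
    """Normal-normal shrinkage: best guess of true height given a noisy estimate and a group prior."""
    w = prior_sd**2 / (prior_sd**2 + noise_sd**2)    # weight on the observation
    return w * apparent + (1 - w) * group_mean

apparent = 170.0                                      # same apparent height (cm) in both photos
print(posterior_height(apparent, group_mean=176.0))   # man: pulled up toward the male mean
print(posterior_height(apparent, group_mean=163.0))   # woman: pulled down toward the female mean
print(posterior_height(apparent, group_mean=163.0, noise_sd=12.0))  # noisier view: more shrinkage to the prior
```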

Once you understand at a technical level that probabilistic reasoning about demographic features is both epistemically justified and implicitly implemented as part of the way your brain processes information anyway, then a moral theory that forbids this starts to look less compelling? Of course, statistical discrimination on demographic features is only epistemically justified to exactly the extent that it helps get the right answer. Renormalized egalitarians can still be properly outraged by the monstrous tragedies in which I have moral property P but can't prove it to you, so you guess incorrectly that I don't, just because other people who look like me mostly don't and you don't have any better information to go on—or by tragedies in which a feedback loop between predictions and social norms creates or amplifies group differences that wouldn't exist under some other social equilibrium.

Nelson et al. also found that when the people in the photographs were pictured sitting down, judgments of height depended much more on sex than when the photo-subjects were standing. This too makes Bayesian sense: if it's harder to tell how tall an individual is when they're sitting down, you rely more on your demographic prior. To reduce injustice to people who are outliers for their group, one could argue that there is a moral imperative to seek out interventions that give us more fine-grained information about individuals, so that we don't need to rely on the coarse, vague information embodied in demographic stereotypes. The moral spirit of egalitarian–individualism mostly survives in our efforts to hug the query and get specific information with which to discriminate amongst individuals. (And discriminate—to distinguish, to make distinctions—is the correct word.) If you care about someone's height, it is better to precisely measure it using a meterstick than to just look at them standing up, and it is better to look at them standing up than to look at them sitting down. If you care about someone's skills as a potential employee, it is better to give them a work-sample test that assesses the specific skills you're interested in than to rely on a general IQ test, and it's far better to use an IQ test than to use mere stereotypes. If our means of measuring individuals aren't reliable or cheap enough, such that we still end up using prior information from immutable demographic categories, that's a problem of grave moral seriousness—but in light of the mathematical laws governing reasoning under uncertainty, it's a problem that realistically needs to be solved with better tests and better signals, not by pretending not to have a prior.
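
The sitting-down result falls out of the same toy model: the weight the posterior puts on the group prior grows with the observation noise, and better measurement drives it toward zero. (Again, the specific numbers are illustrative assumptions, not data.)

```python
# Same toy model as above, now asking: what fraction of the posterior mean
# comes from the group prior rather than the observation?

def prior_weight(prior_sd_cm, obs_noise_sd_cm):
    """Fraction of the posterior mean contributed by the prior (group) mean."""
    return obs_noise_sd_cm**2 / (prior_sd_cm**2 + obs_noise_sd_cm**2)

print(prior_weight(prior_sd_cm=7.0, obs_noise_sd_cm=3.0))   # standing, clear view:   ≈ 0.16
print(prior_weight(prior_sd_cm=7.0, obs_noise_sd_cm=10.0))  # sitting, harder to judge: ≈ 0.67
# Sharper measurement (a meterstick, a work-sample test) pushes this weight
# toward zero, which is the Bayesian translation of "treat them as an individual".
```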

This could take the form of finer-grained stereotypes. If someone says of me, "Taylor Saotome-Westlake? Oh, he's a man, you know what they're like," I would be offended—I mean, I would if I still believed that getting offended ever helps with anything. (It never helps.) I'm not like typical men, and I don't want to be confused with them. But if someone says, "Taylor Saotome-Westlake? Oh, he's one of those IQ 130, mid-to-low Conscientiousness and Agreeableness, high Openness, left-libertarian American Jewish atheist autogynephilic male computer programmers; you know what they're like," my response is to nod and say, "Yeah, pretty much." I'm not exactly like the others, but I don't mind being confused with them.

The other place where I think Murray is hiding the ball (even from himself) is in the section on "reconstructing a moral vocabulary for discussing human differences." (I agree that this is a very important project!) Murray writes—

I think at the root [of the reluctance to discuss immutable human differences] is the new upper class's conflation of intellectual ability and the professions it enables with human worth. Few admit it, of course. But the evolving zeitgeist of the new upper class has led to a misbegotten hierarchy whereby being a surgeon is better in some sense of human worth than being an insurance salesman, being an executive in a high-tech firm is better than being a housewife, and a neighborhood of people with advanced degrees is better than a neighborhood of high-school graduates. To put it so baldly makes it obvious how senseless it is. There shouldn't be any relationship between these things and human worth.

I take strong issue with Murray's specific examples here—as an incredibly bitter autodidact, I care not at all for formal school degrees, and as my fellow nobody pseudonymous blogger Harold Lee points out, many of those stuck in the technology rat race aspire to escape to a more domestic- and community-focused life not unlike that of a housewife. But after quibbling with the specific illustrations, I think I'm just going to bite the bullet here?

Yes, intellectual ability is a component of human worth! Maybe that's putting it baldly, but I think the alternative is obviously senseless. The fact that I have the ability and motivation to (for example, among many other things I do) write this cool science–philosophy blog about my delusional paraphilia where I do things like summarize and critique the new Charles Murray book is a big part of what makes my life valuable—both to me, and to the people who interact with me. If I were to catch COVID-19 next month and lose 40 IQ points due to oxygen-deprivation-induced brain damage and not be able to write blog posts like this one anymore, that would be extremely terrible for me—it would make my life less worth living. (And this kind of judgment is reflected in health and economic policymaking in the form of quality-adjusted life years.) And my friends who love me, love me not as an irreplaceably-unique-but-otherwise-featureless atom of person-ness, but because my specific array of cognitive repertoires makes me a specific person who provides a specific kind of company. There can't be such a thing as literally unconditional love, because to love someone in particular implicitly imposes a condition: you're only committed to love those configurations of matter that constitute an implementation of your beloved, rather than someone or something else.

Murray continues—

The conflation of intellectual ability with human worth helps to explain the new upper class's insistence that inequalities of intellectual ability must be the product of environmental disadvantage. Many people with high IQs really do feel sorry for people with low IQs. If the environment is to blame, then those unfortunates can be helped, and that makes people who want to help them feel good. If genes are to blame, it makes people who want to help them feel bad. People prefer feeling good to feeling bad, so they engage in confirmation bias when it comes to the evidence about the causes of human differences.

I agree with Murray that this kind of psychology explains a lot of the resistance to hereditarian explanations. But as long as we're accusing people of motivated reasoning, I think Murray's solution engages in a similar kind of denial, just putting it in a different place. The idea that people are unequal in ways that matter is legitimately too horrifying to contemplate, so liberals deny the inequality, and conservatives deny that it matters. But I think if you really understand the fact–value distinction, see that the naturalistic fallacy is, in fact, a fallacy (and not even a tempting one), and recognize that the progress of humankind has consisted of using our wits to impose our will on an indifferent universe, then the very concept of "too horrifying to contemplate" becomes a grave error. The map is not the territory: contemplating doesn't make things worse; not-contemplating that which is already there can't make things better—and can blind you to opportunities to make things better.

Recently, Richard Dawkins drew a lot of criticism on social media for pointing out that selective breeding would work on humans (that is, succeed at increasing the value of the traits selected for in subsequent generations), for the same reasons it works on domesticated nonhuman animals—while stressing, of course, that he deplores the idea: it's just that our moral commitments can't constrain the facts. Intellectuals with the reading-comprehension skill, including Murray, leapt to defend Dawkins and concur on both points—that eugenics would work, and that it would obviously be terribly immoral. And yet no one seems to bother explaining or arguing why it would be immoral. Yes, obviously murdering and sterilizing people is bad. But if the human race is to continue and people are going to have children anyway, those children are going to be born with some distribution of genotypes. There are probably going to be human decisions that don't involve murdering or sterilizing anyone but that would nonetheless affect that distribution—perhaps the selection of in vitro fertilized embryos. If the distribution of genotypes were to change in a way that made the next generation grow up happier, and healthier, and smarter, that would be good for those children, and it wouldn't hurt anyone else! Life is not a zero-sum game! This is pretty obvious, really? But if no one except nobody pseudonymous bloggers can even say it, how are we to start the work?

The author of the Xenosystems blog mischievously posits five stages of knowledge of human biodiversity (in analogy to the famous, albeit reportedly lacking in empirical support, five-stage Kübler-Ross model of grief), passing through Stage 4: Depression ("Who could possibly have imagined that reality was so evil?") and culminating in Stage 5: Acceptance ("Blank slate liberalism really has been a mountain of dishonest garbage, hasn't it? Guess it's time for it to die ...").

I think I got stuck halfway between Stage 4 and 5? It can simultaneously be the case that reality is evil and that blank slate liberalism contains a mountain of dishonest garbage. That doesn't mean the whole thing is garbage. You can't brainwash a human with random bits; they need to be specific bits with something good in them. I would still be with the program, except that the current coordination equilibrium is really not working out for me. So it is with respect for the good works enabled by the anti-oppression Schelling-point belief that I set my sights on reorganizing at the other Schelling point of just tell the goddamned truth—not in spite of the consequences, but because of the consequences of what good people can do when we're fully informed. Each of us in her own way.
