"—but if one hundred thousand [normies] can turn up, to show their support for the [rationalist] community, why can't you?"

I said wearily, "Because every time I hear the word community, I know I'm being manipulated. If there is such a thing as the [rationalist] community, I'm certainly not a part of it. As it happens, I don't want to spend my life watching [rationalist and effective altruist] television channels, using [rationalist and effective altruist] news systems ... or going to [rationalist and effective altruist] street parades. It's all so ... proprietary. You'd think there was a multinational corporation who had the franchise rights on [truth and goodness]. And if you don't market the product their way, you're some kind of second-class, inferior, bootleg, unauthorized [nerd]."

—"Cocoon" by Greg Egan (paraphrased)[1]

Recapping my Whole Dumb Story so far: in a previous post, "Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems", I told you about how I've always (since puberty) had this obsessive erotic fantasy about being magically transformed into a woman and how I used to think it was immoral to believe in psychological sex differences, until I read these great Sequences of blog posts by Eliezer Yudkowsky which incidentally pointed out how absurdly impossible my obsessive fantasy was ...

—none of which gooey private psychological minutiæ would be in the public interest to blog about, except that, as I explained in a subsequent post, "Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer", around 2016, everyone in the community that formed around the Sequences suddenly decided that guys like me might actually be women in some unspecified metaphysical sense, and the cognitive dissonance from having to rebut all this nonsense coming from everyone I used to trust drove me temporarily insane from stress and sleep deprivation ...

—which would have been the end of the story, except that, as I explained in a subsequent–subsequent post, "A Hill of Validity in Defense of Meaning", in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that suggested that people were philosophically confused if they disputed that men could be women in some unspecified metaphysical sense.

Anyone else being wrong on the internet like that wouldn't have seemed like a big deal, but Scott Alexander had semi-jokingly written that rationalism is the belief that Eliezer Yudkowsky is the rightful caliph. After extensive attempts by me and allies to get clarification from Yudkowsky amounted to nothing, we felt justified in concluding that he and his Caliphate of so-called "rationalists" were corrupt.

Origins of the Rationalist Civil War (April–May 2019)

Anyway, given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing.

I had been hyperfocused on prosecuting my Category War, but the reason Michael Vassar and Ben Hoffman and Jessica Taylor[2] were willing to help me out was not because they particularly cared about the gender and categories example but because it seemed like a manifestation of a more general problem of epistemic rot in "the community."

Ben had previously worked at GiveWell and had written a lot about problems with the Effective Altruism (EA) movement; in particular, he argued that EA-branded institutions were making incoherent decisions under the influence of incentives to distort information in order to seek power.

Jessica had previously worked at MIRI, where she was unnerved by what she saw as under-evidenced paranoia about information hazards and short AI timelines. (As Jack Gallagher, who was also at MIRI at the time, later put it, "A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work.")

To what extent were my gender and categories thing, and Ben's EA thing, and Jessica's MIRI thing, manifestations of the same underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit?

If there was a real problem, I didn't have a good grasp on it. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people and get a correction on that specific point. (Although as we had just discovered, even that might be too much to hope for.) But culture is the sum of lots and lots of little micro-actions by lots and lots of people. If your entire culture has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who are acting like they don't remember the old Way, or like they don't think anything has changed, or like they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!" Any ideologue or religious person would agree with that. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a gestalt.

Ben called the gestalt he saw the Blight, after the rogue superintelligence in Vernor Vinge's A Fire Upon the Deep. The problem wasn't that people were getting dumber; it was that they were increasingly behaving in a way that was better explained by their political incentives than by coherent beliefs about the world; they were using and construing facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game.

When I asked Ben for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space", despite the fact that no one who had been following the AI risk discourse thought that OpenAI as originally announced was a good idea. Nate had privately clarified that the word "excited" wasn't necessarily meant positively—and in this case meant something more like "terrified."

This seemed to me like the sort of thing where a particularly principled (naïve?) person might say, "That's lying for political reasons! That's contrary to the moral law!" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works."

I thought explaining the Blight to an ordinary grown-up was going to need either lots of specific examples that were more egregious than this (and more egregious than the examples in Sarah Constantin's "EA Has a Lying Problem" or Ben's "Effective Altruism Is Self-Recommending"), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world with unusually high standards.

The schism introduced new pressures on my social life. I told Michael that I still wanted to be friends with people on both sides of the factional schism. Michael said that we should unambiguously regard Yudkowsky and CfAR president (and my personal friend of ten years) Anna Salamon as criminals or enemy combatants who could claim no rights in regard to me or him.

I don't think I got the framing at this time. War metaphors sounded scary and mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soldiers on the other side of a war aren't necessarily morally blameworthy as individuals:[3] their actions are being directed by the Power they're embedded in.

I wrote to Anna (Subject: "Re: the end of the Category War (we lost?!?!?!)"):

I was just trying to publicly settle a very straightforward philosophy thing that seemed really solid to me

if, in the process, I accidentally ended up being an unusually useful pawn in Michael Vassar's deranged four-dimensional hyperchess political scheming

that's ... arguably not my fault


I may have subconsciously pulled off an interesting political maneuver. In my final email to Yudkowsky on 20 April 2019 (Subject: "closing thoughts from me"), I had written—

If we can't even get a public consensus from our de facto leadership on something so basic as "concepts need to carve reality at the joints in order to make probabilistic predictions about reality", then, in my view, there's no point in pretending to have a rationalist community, and I need to leave and go find something else to do (perhaps whatever Michael's newest scheme turns out to be). I don't think I'm setting my price for joining particularly high here?[4]

And as it happened, on 4 May 2019, Yudkowsky retweeted Colin Wright on the "univariate fallacy"—the point that group differences aren't a matter of any single variable—which was thematically similar to the clarification I had been asking for. (Empirically, it made me feel less aggrieved.) Was I wrong to interpret this as another "concession" to me? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.)

Separately, one evening in April, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[5] I said, essentially, "Oh man oh jeez, Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're the only ones backing me up on this incredibly basic philosophy thing and I don't feel like I have anywhere else to go." This culminated in a group conversation with the entire house, which I found unsettling. (Unfortunately, I didn't take notes and don't remember the details except that I had a sense of everyone else seeming to agree on things that I thought were clearly contrary to the spirit of the Sequences.)

The two-year-old son of Mike and "Meredith" was reportedly saying the next day that Kelsey doesn't like his daddy, which was confusing until it was figured out that he had heard Kelsey talking about why she doesn't like Michael Vassar.[6]

And as it happened, on 7 May 2019, Kelsey wrote a Facebook comment displaying evidence of understanding my thesis.

These two datapoints led me to a psychological hypothesis: when people see someone wavering between their coalition and a rival coalition, they're intuitively motivated to offer a few concessions to keep the wavering person on their side. Kelsey could afford to speak as if she didn't understand the thing about sex being a natural category when it was just me freaking out alone, but visibly got it almost as soon as I could credibly threaten to walk (defect to a coalition of people she dislikes). Maybe my "closing thoughts" email had a similar effect on Yudkowsky, assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later? This probably wouldn't work if you repeated it, or tried to do it consciously?

Exit Wounds (May 2019)

I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" memoir-post. Ben said that the target audience to aim for was sympathetic but naïve people like I had been a few years ago, who hadn't yet had the experiences I'd had. This way, they wouldn't have to freak out to the point of being imprisoned and demand help from community leaders and not get it; they could just learn from me.

I didn't know how to continue it. I was too psychologically constrained; I didn't know how to tell the Whole Dumb Story without escalating personal conflicts or leaking info from private conversations.

I decided to take a break from the religious civil war and from this blog. I declared May 2019 as Math and Wellness Month.

My dayjob performance had been suffering for months. The psychology of the workplace is ... subtle. There's a phenomenon where some people are vastly more productive than others and everyone knows it, but no one is cruel enough to make it common knowledge. This is awkward for people who simultaneously benefit from the culture of common-knowledge-prevention allowing them to collect the status and money rents of being a $150K/year software engineer without actually performing at that level, and who read enough Ayn Rand as a teenager to be ideologically opposed to subsisting on unjustly-acquired rents rather than value creation. I didn't think the company would fire me, but I was worried that they should.

I asked my boss to temporarily assign me some easier tasks that I could make steady progress on. (We had a lot of LaTeX templating of insurance policy amendments that needed to get done.) If I was going to be psychologically impaired, it was better to be up-front about how I could best serve the company given that impairment, rather than hoping the boss wouldn't notice.

My intended break from the religious war didn't take. I met with Anna on the UC Berkeley campus and read her excerpts from Ben's and Jessica's emails. (She had not provided a comment on "Where to Draw the Boundaries?" despite my requests, including in the form of two paper postcards that I stayed up until 2 a.m. on 14 April 2019 writing; spamming people with hysterical and somewhat demanding postcards felt more distinctive than spamming people with hysterical and somewhat demanding emails.)

I complained that I had believed our own marketing material about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies outside the laboratory. Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about "the thing" was never actually part of the original vision. She kept repeating that she had tried to warn me, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay "What You Can't Say" to people, summarizing Graham's moral that you should figure out the things you can't say in your culture and then not say them, in order to avoid getting drawn into pointless conflicts.)

It was true that she had tried to warn me for years, and (not yet having gotten over my teenage ideological fever dream), I hadn't known how to listen. But this seemed fundamentally unresponsive to how I kept repeating that I only expected consensus on the basic philosophy of language and categorization (not my object-level special interest in sex and gender). Why was it so unrealistic to imagine that the smart people could enforce standards in our own tiny little bubble?

My frustration bubbled out into follow-up emails:

I'm also still pretty angry about how your response to my "I believed our own propaganda" complaint is (my possibly-unfair paraphrase) "what you call 'propaganda' was all in your head; we were never actually going to do the unrestricted truthseeking thing when it was politically inconvenient." But ... no! I didn't just make up the propaganda! The hyperlinks still work! I didn't imagine them! They were real! You can still click on them: "A Sense That More Is Possible", "Raising the Sanity Waterline"

I added:

Can you please acknowledge that I didn't just make this up? Happy to pay you $200 for a reply to this email within the next 72 hours

Anna said she didn't want to receive cheerful price offers from me anymore; previously, she had regarded my occasionally throwing money at her to bid for her scarce attention[7] as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her using me. She agreed that someone could have gotten the ideals I had gotten out of those posts, but there was also evidence from that time pointing the other way (e.g., "Politics Is the Mind-Killer") and it shouldn't be surprising if people steered clear of controversy.

I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether I should cut my dick off would become a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit an epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, and they couldn't correct the mistake even after it was pointed out, then the "rationalists" were worse than useless to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I were part of a reference class that included AI researchers).

Fundamentally, I was skeptical that you could do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in "Entangled Truths, Contagious Lies" and "Dark Side Epistemology": the need to lie about lying and cover up cover-ups propagates recursively. Anna was unusually skillful at thinking things without saying them; I thought people facing similar speech restrictions generally just get worse at thinking (plausibly[8] including Yudkowsky), and the problem gets worse as the group effort scales. (It's less risky to recommend "What You Can't Say" to your housemates than to put it on your 501(c)(3) organization's canonical reading list.) You can't optimize your group's culture for not talking about atheism without also optimizing against understanding Occam's razor; you can't optimize for not questioning gender self-identity without also optimizing against understanding the 37 ways that words can be wrong.

Squabbling On and With lesswrong.com (May–July 2019)

Despite Math and Wellness Month and my intent to take a break from the religious civil war, I kept reading Less Wrong during May 2019, and ended up scoring a couple of victories in the civil war (at some cost to Wellness).

MIRI researcher Scott Garrabrant wrote a post about how "Yes Requires the Possibility of No". Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if you believed that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and mentioned it in a comment: if you want to believe that x belongs to category C, you might try redefining C in order to make the question "Is x a C?" come out "Yes", but you can only do so at the expense of making C less useful. Meaningful category-membership (Yes) requires the possibility of non-membership (No).
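(To spell out the information-theoretic point with a toy calculation of my own, not anything from Garrabrant's post: the entropy of a Yes/No answer falls to zero as the probability of hearing "Yes" approaches one.)

    import math

    def answer_entropy(p_yes):
        """Shannon entropy (in bits) of a Yes/No answer sent with probability p_yes."""
        if p_yes in (0.0, 1.0):
            return 0.0  # a certain answer carries no information
        p_no = 1.0 - p_yes
        return -(p_yes * math.log2(p_yes) + p_no * math.log2(p_no))

    for p in (0.5, 0.9, 0.99, 1.0):
        print(f"P(Yes) = {p}: hearing the answer conveys {answer_entropy(p):.3f} bits")
    # P(Yes) = 1.0 conveys 0.000 bits: a "Yes" you were guaranteed to hear tells you nothing.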

Someone objected that she found it "unpleasant that [I] always bring [my] hobbyhorse in, but in an 'abstract' way that doesn't allow discussing the actual object level question"; it made her feel "attacked in a way that allow[ed] for no legal recourse to defend [herself]." I replied that that was understandable, but that I found it unpleasant that our standard Bayesian philosophy of language somehow got politicized, such that my attempts to do correct epistemology were perceived as attacking people. Such a trainwreck ensued that the mods manually moved the comments to their own post. Based on the karma scores and what was said,[9] I count it as a victory.

On 31 May 2019, a draft of a new Less Wrong FAQ included a link to "The Categories Were Made for Man, Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite almost literally any other Slate Star Codex post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or I could call it a blatant lie because no rule of rationality says I shouldn't draw the category boundaries of "blatant lie" that way. Ruby Bloom, the new moderator who wrote the draft, was persuaded, and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."

But "victories" weren't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" to my side of the civil war). In "What You Can't Say", Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what your real work is. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except in this one meta-level essay) was probably the right choice. But someone whose goal is to improve Society's collective ability to reason should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improve our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software."

I said I didn't know if either of them had caught the "Yes Requires the Possibility" trainwreck, but wasn't it terrifying that the person who objected to my innocuous philosophy comment was a MIRI research associate? Not to demonize that commenter, because I was just as bad (if not worse) in 2008. The difference was that in 2008, we had a culture that could beat it out of me.

Steven objected that tractability and side effects matter, not just effect on the mission considered in isolation. For example, the Earth's gravitational field directly impedes NASA's mission, and doesn't hurt Paul Graham, but both NASA and Paul Graham should spend the same amount of effort (viz., zero) trying to reduce the Earth's gravity.

I agreed that tractability needed to be addressed, but the situation felt analogous to being in a coal mine in which my favorite of our canaries had just died. Caliphate officials (Eliezer, Scott, Anna) and loyalists (Steven) were patronizingly consoling me: sorry, I know you were really attached to that canary, but it's just a bird; it's not critical to the coal-mining mission. I agreed that I was unreasonably attached to that particular bird, but that's not why I expected them to care. The problem was what the dead canary was evidence of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question doesn't matter. (The causal graph is the fork "canary death ← mine gas → danger" rather than the direct link "canary death → danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue?
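(To put some made-up numbers of my own on the fork: a dead canary can be strong evidence of danger even though the death doesn't cause the danger, because both are downstream of the gas.)

    # Toy numbers (purely illustrative) for the fork: canary death <- mine gas -> danger.
    p_gas = 0.01                # prior probability of dangerous gas in the mine
    p_dead_given_gas = 0.95     # canaries are sensitive detectors
    p_dead_given_no_gas = 0.02  # occasional deaths from unrelated causes

    p_dead = p_dead_given_gas * p_gas + p_dead_given_no_gas * (1 - p_gas)
    p_gas_given_dead = p_dead_given_gas * p_gas / p_dead  # Bayes's theorem

    print(f"P(gas) = {p_gas:.3f}")                            # 0.010
    print(f"P(gas | canary dead) = {p_gas_given_dead:.3f}")   # 0.324
    # Observing the canary's death raises the probability of danger by a factor of ~32,
    # even though the causal graph contains no arrow from canary death to danger.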

Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was a fragment of group theory and some probability theory that later turned out to be deeply relevant to understanding sex differences. So much for taking a break.

In June 2019, I made a linkpost on Less Wrong to Tal Yarkoni's "No, It's Not The Incentives—It's you", about how professional scientists should stop using career incentives as an excuse for doing poor science. It generated a lot of discussion.

In an email (Subject: "LessWrong.com is dead to me"), Jessica identified Less Wrong moderator Raymond Arnold's comments as her last straw. Jessica wrote:

LessWrong.com is a place where, if the value of truth conflicts with the value of protecting elites' feelings and covering their asses, the second value will win.

Trying to get LessWrong.com to adopt high-integrity norms is going to fail, hard, without a lot of conflict. (Enforcing high-integrity norms is like violence; if it doesn't work, you're not doing enough of it). People who think being exposed as fraudulent (or having their friends exposed as fraudulent) is a terrible outcome, are going to actively resist high-integrity discussion norms.

Posting on Less Wrong made sense as harm-reduction, but the only way to get people to stick up for truth would be to convert them to a whole new worldview, which would require a lot of in-person discussions. She brought up the idea of starting a new forum to replace Less Wrong.

Ben said that trying to discuss with the Less Wrong mod team would be a good intermediate step, after we clarified to ourselves what was going on; it might be "good practice in the same way that the Eliezer initiative was good practice." The premise should be, "If this is within the Overton window for Less Wrong moderators, there's a serious confusion on the conditions required for discourse"—scapegoating individuals wasn't part of it. He was less optimistic about harm reduction; participating on the site was implicitly endorsing it by submitting to the rule of the karma and curation systems.

"Riley" expressed sadness about how the discussion on "The Incentives" demonstrated that the community they loved—including dear friends—was in a bad way. Michael (in a separate private discussion) had said he was glad to hear about the belief-update. "Riley" said that Michael saying that also made them sad, because it seemed discordant to be happy about sad news. Michael wrote:

I['m] sorry it made you sad. From my perspective, the question is no[t] "can we still be friends with such people", but "how can we still be friends with such people" and I am pretty certain that understanding their perspective [is] an important part of the answer. If clarity seems like death to them and like life to us, and we don't know this, IMHO that's an unpromising basis for friendship.


I got into a scuffle with Ruby Bloom on his post on "Causal Reality vs. Social Reality". I wrote what I thought was a substantive critique, but Ruby complained that my tone was too combative, and asked for more charity and collaborative truth-seeking[10] in any future comments.

(My previous interaction with Ruby had been my challenge to "... Not Man for the Categories" appearing on the Less Wrong FAQ. Maybe he couldn't let me "win" again so quickly?)

I emailed the posse about the thread, on the grounds that gauging the psychology of the mod team was relevant to our upcoming Voice vs. Exit choices. Meanwhile on Less Wrong, Ruby kept doubling down:

[I]f the goal is everyone being less wrong, I think some means of communicating are going to be more effective than others. I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive. It makes it hard to update and it makes it easy to dislike the one who called me out. [...]

[...]

Even if you wish to express that someone is wrong, I think this is done more effectively if one simultaneously continues to implicitly express "I think there is still some prior that you are correct and I curious to hear your thoughts", or failing that "You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with." [...] There's an icky thing here I feel like for there to be productive and healthy discussion you have to act as though at least one of the above statements is true, even if it isn't.

"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email. I would later complain to Anna that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from veteran CfAR participants, what was CfAR for?

I replied to Ruby that you could just directly respond to your interlocutor's arguments. Whether you respect them as a thinker is off-topic. "You said X, but this is wrong because of Y" isn't a personal attack! I thought it was ironic that this happened on a post that was explicitly about causal vs. social reality; it's possible that I wouldn't have been so rigid about this if it weren't for that prompt.

(On reviewing the present post prior to publication, Ruby writes that he regrets his behavior during this exchange.)

Jessica ended up writing a post, "Self-Consciousness Wants Everything to Be About Itself", arguing that tone arguments are mainly about people silencing discussion of actual problems in order to protect their feelings. She used as a central example a case study of a college official crying and saying that she "felt attacked" in response to complaints about her office being insufficiently supportive of a racial community.

Jessica was surprised by how well it worked, judging by Ruby mentioning silencing in a subsequent comment to me (plausibly influenced by Jessica's post) and by an exchange between Ray and Ruby that she thought was "surprisingly okay".

From this, Jessica derived the moral that when people are doing something that seems obviously terrible and in bad faith, it can help to publicly explain why the abstract thing is bad, without accusing anyone. This made sense because people didn't want to be held to standards that other people weren't being held to: a call-out directed at oneself personally could be selective enforcement, but a call-out of the abstract pattern invited changing one's behavior if the new equilibrium looked better.

Michael said that part of the reason this worked was because it represented a clear threat of scapegoating without actually scapegoating and without surrendering the option to do so later; it was significant that Jessica's choice of example positioned her on the side of the powerful social-justice coalition.


On 4 July 2019, Scott Alexander published "Some Clarifications on Rationalist Blogging", disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [Slate Star Codex] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it.


Jessica published "The AI Timelines Scam", arguing that the recent prominence of "short" (e.g., 2030) timelines to transformative AI was better explained by political factors than by technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project.

(Remember, this was 2019. After seeing what GPT-3, DALL-E, PaLM, &c. could do during the "long May 2020", it now looks to me that the short-timelines people had better intuitions than Jessica gave them credit for.)

I still sympathized with the pushback from Caliphate supporters against using "scam"/"fraud"/"lie"/&c. language to include motivated elephant-in-the-brain-like distortions. I conceded that this was a boring semantic argument, but I feared that until we invented better linguistic technology, the boring semantic argument was going to continue sucking up discussion bandwidth with others.

"Am I being too tone-policey here?" I asked the posse. "Is it better if I explicitly disclaim, 'This is marketing advice; I'm not claiming to be making a substantive argument'?" (Subject: "Re: reception of 'The AI Timelines Scam' is better than expected!")

Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you?" He argued that investigations of financial fraud focus on false promises about money, rather than the psychological minutiæ of the perp's motives.

I replied that the concept of mens rea did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish between accidentally hitting a pedestrian in one's car ("manslaughter") and premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is taking on some nonzero risk of committing vehicular manslaughter.) The manslaughter example was simpler than misinformation-that-moves-resources,[11] and it might not be easy for the court to determine "intent", but I didn't see what would reverse the weak principle that intent sometimes matters.

Ben replied that what mattered in the determination of manslaughter vs. murder was whether there was long-horizon optimization power toward the outcome of someone's death, not what sentiments the killer rehearsed in their working memory.

On a phone call later, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope had to know on some level that it wasn't true. (I agreed that this usage of "fraud" made sense to me.) In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they had to know at some level that they weren't doing that.


Ruby wrote a document about ways in which one's speech could harm people, which was discussed in the comments of a draft Less Wrong post by some of our posse members and some of the Less Wrong mods.[12]

Ben wrote:

What I see as under threat is the ability to say in a way that's actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction. Frequently, attempts to say this are construed primarily as moves to attack some person or institution, pushing them into the outgroup. Frequently, people suggest to me an "equivalent" wording with a softer tone, which in fact omits important substantive criticisms I mean to make, while claiming to understand what's at issue.

Ray Arnold replied:

My core claim is: "right now, this isn't possible, without a) it being heard by many people as an attack, b) without people having to worry that other people will see it as an attack, even if they don't."

It seems like you see this something as "there's a precious thing that might be destroyed" and I see it as "a precious thing does not exist and must be created, and the circumstances in which it can exist are fragile." It might have existed in the very early days of LessWrong. But the landscape now is very different than it was then. With billions of dollars available and at stake, what worked then can't be the same thing as what works now.

(!!)[13]

Jessica pointed this out as a step towards discussing the real problem (Subject: "progress towards discussing the real thing??"). She elaborated in the secret thread: now that the "EA" scene was adjacent to real-world money and power, people were incentivized to protect their reputations (and beliefs related to their reputations) in anti-epistemic ways, in a way that they wouldn't if the scene were still just a philosophy club. This was catalyzing a shift of norms from "that which can be destroyed by the truth, should be" towards protecting feelings—where "protecting feelings" was actually about protecting power. The fact that the scene was allocating billions of dollars made it more important for public discussions to reach the truth, compared to philosophy club—but it also increased the likelihood of obfuscatory behavior that philosophy-club norms (like "assume good faith") didn't account for. We might need to extend philosophy-club norms to take into account the possibility of adversarial action: there's a reason that courts of law don't assume good faith. We didn't want to disproportionately punish people for getting caught up in obfuscatory patterns; that would just increase the incentive to obfuscate. But we did need some way to reveal what was going on.

In email, Jessica acknowledged that Ray had a point that it was confusing to use court-inspired language if we didn't intend to blame and punish people. Michael said that court language was our way to communicate "You don't have the option of non-engagement with the complaints that are being made." (Courts can summon people; you can't ignore a court summons the way you can ignore ordinary critics.)

Michael said that we should also develop skill in using social-justicey blame language, as was used against us, harder, while we still thought of ourselves as trying to correct people's mistakes rather than being in a conflict against the Blight. "Riley" said that this was a terrifying you-have-become-the-abyss suggestion; Ben thought it was obviously a good idea.

I was horrified by the extent to which Less Wrong moderators (!) seemed to be explicitly defending "protect feelings" norms. Previously, I had mostly been seeing the present struggle through the lens of my idiosyncratic Something to Protect as a simple matter of Bay Area political correctness. I was happy to have Michael, Ben, and Jessica as allies, but I hadn't been seeing the Blight as a unified problem. Now I was seeing something.

An in-person meeting was arranged for 23 July 2019 at the Less Wrong office, with Ben, Jessica, me, and most of the Less Wrong team (Ray, Ruby, Oliver Habryka, Vaniver, Jim Babcock). I don't have notes and don't really remember what was discussed in enough detail to faithfully recount it.[14] I ended up crying at one point and left the room for a while.

The next day, I asked Ben and Jessica for their takeaways via email (Subject: "peace talks outcome?"). Jessica said that I was a "helpful emotionally expressive and articulate victim" and that there seemed to be a consensus that people like me should be warned somehow that Less Wrong wasn't doing fully general sanity-maximization anymore. (Because community leaders were willing to sacrifice, for example, ability to discuss non-AI heresies in order to focus on sanity about AI in particular while maintaining enough mainstream acceptability and power.)

I said that for me and my selfish perspective, the main outcome was finally shattering my "rationalist" social identity. I needed to exhaust all possible avenues of appeal before it became real to me. The morning after was the first for which "rationalists ... them" felt more natural than "rationalists ... us".

A Beleaguered Ally Under Fire (July–August 2019)

Michael's reputation in the community, already not what it once was, was being debased even further.

The local community center, the Berkeley REACH,[15] was conducting an investigation as to whether to exclude Michael (which was mostly moot, as he didn't live in the Bay Area). When I heard that the committee conducting the investigation was "very close to releasing a statement", I wrote to them:

I've been collaborating with Michael a lot recently, and I'm happy to contribute whatever information I can to make the report more accurate. What are the charges?

They replied:

To be clear, we are not a court of law addressing specific "charges." We're a subcommittee of the Berkeley REACH Panel tasked with making decisions that help keep the space and the community safe.

I replied:

Allow me to rephrase my question about charges. What are the reasons that the safety of the space and the community require you to write a report about Michael? To be clear, a community that excludes Michael on inadequate evidence is one where I feel unsafe.

We arranged a call, during which I angrily testified that Michael was no threat to the safety of the space and the community. This would have been a bad idea if it were the cops, but in this context, I figured my political advocacy couldn't hurt.

Concurrently, I got into an argument with Kelsey Piper about Michael after she wrote on Discord that her "impression of Vassar's threatening schism is that it's fundamentally about Vassar threatening to stir shit up until people stop socially excluding him for his bad behavior." I didn't think that was what the schism was about (Subject: "Michael Vassar and the theory of optimal gossip").

In the course of litigating Michael's motivations (the details of which are not interesting enough to summarize here), Kelsey mentioned that she thought Michael had done immense harm to me—that my models of the world and ability to reason were worse than they were a year ago. I thanked her for the concern, and asked if she could be more specific.

She said she was referring to my ability to predict consensus and what other people believe. I expected people to be convinced by arguments that they found not only unconvincing, but so unconvincing they didn't see why I would bother. I believed things to be in obvious violation of widespread agreement that everyone else thought were not. My shocked indignation at other people's behavior indicated a poor model of social reality.

I considered this an insightful observation about a way in which I'm socially retarded. I had had similar problems with school. We're told that the purpose of school is education (to the extent that most people think of school and education as synonyms), but the consensus behavior is "sit in lectures and trade assignments for grades." Faced with what I saw as a contradiction between the consensus narrative and the consensus behavior, I would assume that the narrative was the "correct" version, and so I spent a lot of time trying to start conversations about math with everyone and then getting indignant when they'd say, "What class is this for?" Math isn't for classes; it's the other way around, right?

Empirically, no! But I had to resolve the contradiction between narrative and reality somehow, and if my choices were "People are mistakenly failing to live up to the narrative" and "Everybody knows the narrative is a lie; it would be crazy to expect people to live up to it", the former had been more appealing.

It was the same thing here. Kelsey said that it was predictable that Yudkowsky wouldn't make a public statement, even one as basic as "category boundaries should be drawn for epistemic and not instrumental reasons," because his experience of public statements was that they'd be taken out of context and used against MIRI by the likes of /r/SneerClub. This wasn't an update at all. (Everyone at "Arcadia" had agreed, in the house discussion in April.) Vassar's insistence that Eliezer be expected to do something that he obviously was never going to do had caused me to be confused and surprised by reality.[16]

Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks from people who hate him anyway, and not the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it was obvious. But if so, I had no reason to care what Eliezer Yudkowsky said, because not provoking SneerClub isn't truth-tracking, and careful arguments are. This was a huge surprise to me, even if Kelsey knew better.

What Kelsey saw as "Zack is losing his ability to model other people and I'm worried about him," I thought Ben and Jessica would see as "Zack is angry about living in simulacrum level 3 and we're worried about everyone else."

I did think that Kelsey was mistaken about how much causality to attribute to Michael's influence, rather than to me already being socially retarded. From my perspective, validation from Michael was merely the catalyst that excited me from confused-and-sad to confused-and-socially-aggressive-about-it. The latter phase revealed a lot of information, and not just to me. Now I was ready to be less confused—after I was done grieving.

Later, talking in person at "Arcadia", Kelsey told me that the REACH was delaying its release of its report about Michael because someone whose identity she could not disclose had threatened to sue. As far as my interest in defending Michael went, I counted this as short-term good news (because the report wasn't being published for now) but longer-term bad news (because the report must be a hit piece if Michael's mysterious ally was trying to hush it).

When I mentioned this to Michael on Signal on 3 August 2019, he replied:

The person is me, the whole process is a hit piece, literally, the investigation process and not the content. Happy to share the latter with you. You can talk with Ben about appropriate ethical standards.

In retrospect, I feel dumb for not guessing that Michael's mysterious ally was Michael himself. This kind of situation is an example of how norms protecting confidentiality distort information; Kelsey felt obligated to obfuscate any names connected to potential litigation, which led me to infer the existence of a nonexistent person. I can't say I never introduce this kind of distortion myself (for I, too, am bound by norms), but when I do, I feel dirty about it.

As far as appropriate ethical standards go, I didn't approve of silencing critics with lawsuit threats, even while I agreed with Michael that "the process is the punishment." I imagine that if the REACH wanted to publish a report about me, I would expect to defend myself in public, having faith that the beautiful weapon of my Speech would carry the day against a corrupt community center—or for that matter, against /r/SneerClub.

This is arguably one of my more religious traits. Michael and Kelsey are domain experts and probably know better.

A Poignant-to-Me Anecdote That Fits Here Chronologically But Doesn't Particularly Foreshadow Anything (August 2019)

While I was visiting "Arcadia", "Meredith" and Mike's son (age 2¾ years) asked me, "Why are you a boy?"

After a long pause, I said, "Yes," as if I had misheard the question as "Are you a boy?" I think it was a motivated mishearing: it was only after I answered that I consciously realized that's not what the kid asked.

I think I would have preferred to say, "Because I have a penis, like you." But it didn't seem appropriate.

Philosophy Blogging Interlude! (August–October 2019)

I wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise the "rationalists" collectively, and—more philosophy blogging!

In August 2019's "Schelling Categories, and Simple Membership Tests", I explained a nuance that had only merited a passing mention in "Where to Draw the Boundaries?": sometimes you might want categories for different agents to coordinate on, even at the cost of some statistical "fit." (This was generalized from a "pro-trans" argument that had occurred to me, that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as.)

In September 2019's "Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists", I presented a toy mathematical model of how censorship distorts group beliefs. I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft. (Jessica said that it was obvious even to ten-year-olds that partisan politics distorts impressions by filtering evidence. "[D]o you think we could get a ten-year-old to explain it to Eliezer Yudkowsky?" I asked.)
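(The post has its own setup; what follows is just a minimal sketch of mine in the same spirit, showing how a reader who treats a censored record as an unbiased sample ends up with a skewed estimate even though every individual published report is true.)

    import random

    random.seed(0)
    TRUE_P_HEADS = 0.5       # the coin is actually fair
    N_FLIPS = 10_000
    P_PUBLISH_TAILS = 0.2    # "unflattering" tails-results usually get filtered out

    published = []
    for _ in range(N_FLIPS):
        heads = random.random() < TRUE_P_HEADS
        if heads or random.random() < P_PUBLISH_TAILS:
            published.append(heads)

    naive_estimate = sum(published) / len(published)
    print(f"true P(heads) = {TRUE_P_HEADS}; naive estimate from the published record = {naive_estimate:.3f}")
    # With only a fifth of tails-results surviving the filter, the naive estimate comes out
    # around 0.83, even though no individual published report is false.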

In October 2019's "Algorithms of Deception!", I exhibited some toy Python code modeling different kinds of deception. If a function faithfully passes its observations as input to another function, the second function can construct a well-calibrated probability distribution. But if the first function outright fabricates evidence, or selectively omits some evidence, or gerrymanders the categories by which it interprets its observations as evidence, the second function computes a worse probability distribution.
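(This isn't the post's actual code, but a sketch in the same spirit: a receiver who naively trusts the reporting function recovers the true parameter only under honest reporting.)

    import random

    random.seed(0)
    TRUE_BIAS = 0.3   # true probability that an observation comes up "heads"

    def world(n):
        return [random.random() < TRUE_BIAS for _ in range(n)]

    def honest(obs):
        return obs                              # faithful reporting
    def selective_omission(obs):
        return [o for o in obs if o]            # only pass along the heads
    def fabrication(obs):
        return obs + [True] * len(obs)          # pad the record with invented heads
    def gerrymandered_categories(obs):
        # report under a redefined "heads" category that swallows half the tails
        return [o or random.random() < 0.5 for o in obs]

    def receiver_estimate(reported):
        # The receiver naively treats the report as a faithful record of the observations.
        return sum(reported) / len(reported)

    obs = world(10_000)
    for reporter in (honest, selective_omission, fabrication, gerrymandered_categories):
        print(f"{reporter.__name__:>24}: receiver estimates bias ~ {receiver_estimate(reporter(obs)):.3f}")
    # Only the honest reporter lets the receiver recover something near the true 0.3.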

Also in October 2019, in "Maybe Lying Doesn't Exist", I replied to Scott Alexander's "Against Lie Inflation", which was itself a generalized rebuke of Jessica's "The AI Timelines Scam". Scott thought Jessica was wrong to use language like "lie", "scam", &c. to describe someone being (purportedly) motivatedly wrong, but not necessarily consciously lying.

I was furious when "Against Lie Inflation" came out. (Furious at what I perceived as hypocrisy, not because I particularly cared about defending Jessica's usage.) Oh, so now Scott agreed that making language less useful is a problem?! But on further consideration, I realized he was actually being consistent in admitting appeals to consequences as legitimate. In objecting to the expanded definition of "lying", Alexander was counting "everyone is angrier" (because of more frequent accusations of lying) as a cost. In my philosophy, that wasn't a legitimate cost. (If everyone is lying, maybe people should be angry!)

The Caliph's Madness (August and November 2019)

I continued to note signs of contemporary Yudkowsky not being the same author who wrote the Sequences. In August 2019, he Tweeted:

I am actively hostile to neoreaction and the alt-right, routinely block such people from commenting on my Twitter feed, and make it clear that I do not welcome support from those quarters. Anyone insinuating otherwise is uninformed, or deceptive.

I argued that the people who smear him as a right-wing Bad Guy do so in order to extract these kinds of statements of political alignment as concessions; his own timeless decision theory would seem to recommend ignoring them rather than paying even this small Danegeld.

When I emailed the posse about it begging for Likes (Subject: "can't leave well enough alone"), Jessica said she didn't get my point. If people are falsely accusing you of something (in this case, of being a right-wing Bad Guy), isn't it helpful to point out that the accusation is false? It seemed like I was advocating for self-censorship on the grounds that speaking up helps the false accusers. But it also helps bystanders (by correcting the misapprehension) and hurts the false accusers (by demonstrating to bystanders that the accusers are making things up). By linking to "Kolmogorov Complicity and the Parable of Lightning" in my replies, I seemed to be insinuating that Yudkowsky was under some sort of duress, but this wasn't spelled out: if Yudkowsky would face social punishment for advancing right-wing opinions, did that mean he was under such duress that saying anything at all would be helping the oppressors?

The paragraph from "Kolmogorov Complicity" that I was thinking of was (bolding mine):

Some other beliefs will be found to correlate heavily with lightning-heresy. Maybe atheists are more often lightning-heretics; maybe believers in global warming are too. The enemies of these groups will have a new cudgel to beat them with, "If you believers in global warming are so smart and scientific, how come so many of you believe in lightning, huh?" Even the savvy Kolmogorovs within the global warming community will be forced to admit that their theory just seems to attract uniquely crappy people. It won't be very convincing. Any position correlated with being truth-seeking and intelligent will be always on the retreat, having to forever apologize that so many members of their movement screw up the lightning question so badly.

I perceived a pattern where people who are in trouble with the orthodoxy buy their own safety by denouncing other heretics: not just disagreeing with the other heretics because they are mistaken, which would be right and proper Discourse, but denouncing them ("actively hostile to") as a way of paying Danegeld.

Suppose there are five true heresies, but anyone who's on the record as believing more than one gets burned as a witch. Then it's impossible to have a unified rationalist community, because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy of categorization right in full generality, even though his writings revealed an implicit understanding of the correct way,[17] and he and I had a common enemy in the social-justice egregore. He couldn't afford to. He'd already spent his Overton budget on anti-feminism.

Alexander (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech,[18] and it was tempting to try to jump there. (It would probably be better if there were a way to organize just the good witches, and exclude all the Actually Bad witches, but the Sorites problem on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.)

Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" facts.) I agreed that conflating political positions with facts would be bad. I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had learned stuff from reading far-right authors (most notably Mencius Moldbug) and from talking with "Thomas". I was starting to appreciate what Michael had said about "Less precise is more violent" back in April when I was talking about criticizing "rationalists".

Jessica asked if my opinion would change depending on whether Yudkowsky thought neoreaction was intellectually worth engaging with. (Yudkowsky had said years ago that Moldbug was low quality.)

I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this author." I wasn't accusing Yudkowsky of being insincere.

What I did think was that the need to keep up appearances of not being a right-wing Bad Guy was a serious distortion of people's beliefs, because there are at least a few questions of fact where believing the correct answer can, in the political environment of the current year, be used to paint one as a right-wing Bad Guy. I would have hoped for Yudkowsky to notice that this is a rationality problem and to not actively make the problem worse. I was counting "I do not welcome support from those quarters" as making the problem worse insofar as it would seem to imply that if I thought I'd learned valuable things from Moldbug, that made me less welcome in Yudkowsky's fiefdom.

Yudkowsky certainly wouldn't endorse "Even learning things from these people makes you unwelcome" as stated, but "I do not welcome support from those quarters" still seemed like a pointlessly partisan silencing/shunning attempt, when one could just as easily say, "I'm not a neoreactionary, and if some people who read me are, that's obviously not my fault."

Jessica asked if Yudkowsky denouncing neoreaction and the alt-right would still seem harmful if he were also to acknowledge, e.g., racial IQ differences.

I agreed that that would be better, but realistically, I didn't see why Yudkowsky should want to poke that hornet's nest. This was the tragedy of recursive silencing: if you can't afford to engage with heterodox ideas, either you become an evidence-filtering clever arguer, or you're not allowed to talk about anything except math. (Not even the relationship between math and human natural language, as we had found out recently.)

It was as if there was a "Say Everything" attractor and a "Say Nothing" attractor, and my incentives were pushing me towards the "Say Everything" attractor—but that was only because I had Something to Protect in the forbidden zone and I was a decent programmer (who could therefore expect to be employable somewhere, just as James Damore eventually found another job). Anyone in less extreme circumstances would find themselves pushed toward the "Say Nothing" attractor.

It was instructive to compare Yudkowsky's new disavowal of neoreaction with one from 2013, in response to a TechCrunch article citing former MIRI employee Michael Anissimov's neoreactionary blog More Right:[19]

"More Right" is not any kind of acknowledged offspring of Less Wrong nor is it so much as linked to by the Less Wrong site. We are not part of a neoreactionary conspiracy. We are and have been explicitly pro-Enlightenment, as such, under that name. Should it be the case that any neoreactionary is citing me as a supporter of their ideas, I was never asked and never gave my consent. [...]

Also to be clear: I try not to dismiss ideas out of hand due to fear of public unpopularity. However I found Scott Alexander's takedown of neoreaction convincing and thus I shrugged and didn't bother to investigate further.

My criticism regarding negotiating with terrorists did not apply to the 2013 disavowal. More Right was brand encroachment on Anissimov's part that Yudkowsky had a legitimate interest in policing, and the "I try not to dismiss ideas out of hand" disclaimer importantly avoided legitimizing McCarthyist persecution.

The question was, what had specifically happened in the last six years to shift Yudkowsky's opinion on neoreaction from (paraphrased) "Scott says it's wrong, so I stopped reading" to (verbatim) "actively hostile"? Note especially the inversion from (both paraphrased) "I don't support neoreaction" (fine, of course) to "I don't even want them supporting me" (which was bizarre; humans with very different views on politics nevertheless have a common interest in not being transformed into paperclips).

Did Yudkowsky get new information about neoreaction's hidden Badness parameter sometime between 2013 and 2019, or did moral coercion from the left intensify (because Trump and because Berkeley)? My bet was on the latter.


However it happened, it didn't seem like the brain damage was limited to "political" topics, either. In November 2019, we saw another example of Yudkowsky destroying language for the sake of politeness, this time in the context of him trying to wirehead his fiction subreddit by suppressing criticism-in-general.

That's my characterization, of course: the post itself talks about "reducing negativity". In a followup comment, Yudkowsky wrote (bolding mine):

On discussion threads for a work's particular chapter, people may debate the well-executedness of some particular feature of that work's particular chapter. Comments saying that nobody should enjoy this whole work are still verboten. Replies here should still follow the etiquette of saying "Mileage varied: I thought character X seemed stupid to me" rather than saying "No, character X was actually quite stupid."

But ... "I thought X seemed Y to me"[20] and "X is Y" do not mean the same thing! The map is not the territory. The quotation is not the referent. The planning algorithm that maximizes the probability of doing a thing is different from the algorithm that maximizes the probability of having "tried" to do the thing. If my character is actually quite stupid, I want to believe that my character is actually quite stupid.

It might seem like a little thing of no significance—requiring "I" statements is commonplace in therapy groups and corporate sensitivity training—but this little thing coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space made a pattern click. If everyone is forced to only make claims about their map ("I think", "I feel") and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because disagreement is disrespect), that's great for reducing social conflict but not for the kind of collective information processing that accomplishes cognitive work,[21] like good literary criticism. A rationalist space needs to be able to talk about the territory.

To be fair, the same comment I quoted also lists "Being able to consider and optimize literary qualities" as one of the major considerations to be balanced. But I think (I think) it's also fair to note that (as we had seen on Less Wrong earlier that year), lip service is cheap. It's easy to say, "Of course I don't think politeness is more important than truth," while systematically behaving as if you did.

"Broadcast criticism is adversely selected for critic errors," Yudkowsky wrote in the post on reducing negativity, correctly pointing out that if a work's true level of mistakenness is M, the i-th commenter's estimate of mistakenness has an error term of , and commenters leave a negative comment when their estimate M + is greater than their threshold for commenting , then the comments that get posted will have been selected for erroneous criticism (high ) and commenter chattiness (low ).

I can imagine some young person who liked Harry Potter and the Methods being intimidated by the math notation and indiscriminately accepting this wisdom from the great Eliezer Yudkowsky as a reason to be less critical, specifically. But a somewhat less young person who isn't intimidated by math should notice that this is just regression to the mean. The same argument applies to praise!
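
(A minimal Monte Carlo sketch, with noise and threshold distributions of my own invention rather than anything from Yudkowsky's post, shows the symmetry: the same filter that inflates posted criticism inflates posted praise.)

```python
import random

# Toy sketch (my own made-up numbers): each potential commenter sees a noisy
# estimate M + E of the work's true mistakenness M and posts criticism only
# when that estimate exceeds their personal threshold T. Praise works
# symmetrically, on the goodness estimate -(M + E).
random.seed(0)
M = 0.0          # true mistakenness, in arbitrary units
N = 100_000      # number of potential commenters

criticism_errors, praise_errors = [], []
for _ in range(N):
    E = random.gauss(0, 1)    # this commenter's estimation error
    T = random.gauss(1, 0.5)  # this commenter's threshold for bothering to post
    if M + E > T:             # estimate of badness clears the bar: post criticism
        criticism_errors.append(E)
    if -(M + E) > T:          # estimate of goodness clears the bar: post praise
        praise_errors.append(E)

print("mean error among posted criticism:", sum(criticism_errors) / len(criticism_errors))
print("mean error among posted praise:   ", sum(praise_errors) / len(praise_errors))
# Posted criticism is selected for high E (the work looks worse than it is),
# and posted praise is selected for low E (the work looks better than it is).
```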

What I would hope for from a rationality teacher and a rationality community, would be efforts to instill the general skill of modeling things like regression to the mean and selection effects, as part of the general project of having a discourse that does collective information-processing.

And from the way Yudkowsky writes these days, it looks like he's ... not interested in collective information-processing? Or that he doesn't actually believe that's a real thing? "Credibly helpful unsolicited criticism should be delivered in private," he writes! I agree that the positive purpose of public criticism isn't solely to help the author. (If it were, there would be no reason for anyone but the author to read it.) But readers do benefit from insightful critical commentary. (If they didn't, why would they read the comments section?) When I read a story and am interested in the comments, it's because I'm interested in the thoughts of other readers, who might have picked up subtleties I missed. I don't want other people to self-censor comments on any plot holes or Fridge Logic they noticed for fear of dampening someone else's enjoyment or hurting the author's feelings.

Yudkowsky claims that criticism should be given in private because then the target "may find it much more credible that you meant only to help them, and weren't trying to gain status by pushing them down in public." I'll buy this as a reason why credibly altruistic unsolicited criticism should be delivered in private.[22] Indeed, meaning only to help the target just doesn't seem like a plausible critic motivation in most cases. But the fact that critics typically have non-altruistic motives doesn't mean criticism isn't helpful. In order to incentivize good criticism, you want people to be rewarded with status for making good criticisms. You'd have to be some sort of communist to disagree with this![23]

There's a striking contrast between the Yudkowsky of 2019 who wrote the "Reducing Negativity" post, and an earlier Yudkowsky (from even before the Sequences) who maintained a page on Crocker's rules: if you declare that you operate under Crocker's rules, you're consenting to other people optimizing their speech for conveying information rather than being nice to you. If someone calls you an idiot, that's not an "insult"; they're just informing you about the fact that you're an idiot, and you should probably thank them for the tip. (If you were an idiot, wouldn't you be better off knowing that?)

It's of course important to stress that Crocker's rules are opt-in on the part of the receiver; it's not a license to unilaterally be rude to other people. Adopting Crocker's rules as a community-level norm on an open web forum does not seem like it would end well.

Still, there's something precious about a culture where people appreciate the obvious normative ideal underlying Crocker's rules, even if social animals can't reliably live up to the normative ideal. Speech is for conveying information. People can say things—even things about me or my work—not as a command, or as a reward or punishment, but just to establish a correspondence between words and the world: a map that reflects a territory.

Appreciation of this obvious normative ideal seems strikingly absent from Yudkowsky's modern work—as if he's given up on the idea that reasoning in public is useful or possible. His Less Wrong commenting guidelines declare, "If it looks like it would be unhedonic to spend time interacting with you, I will ban you from commenting on my posts." The idea that people who are unhedonic to interact with might have intellectually substantive criticisms that the author has a duty to address does not seem to have crossed his mind.

The "Reducing Negativity" post also warns against the failure mode of attempted "author telepathy": attributing bad motives to authors and treating those attributions as fact without accounting for uncertainty or distinguishing observations from inferences. I should be explicit, then: when I say negative things about Yudkowsky's state of mind, like it's "as if he's given up on the idea that reasoning in public is useful or possible", that's a probabilistic inference, not a certain observation.

But I think making probabilistic inferences is ... fine? The sentence "Credibly helpful unsolicited criticism should be delivered in private" sure does look to me like text generated by a state of mind that doesn't believe that reasoning in public is useful or possible. I think that someone who did believe in public reason would have noticed that criticism has information content whose public benefits might outweigh its potential to harm an author's reputation or feelings. If you think I'm getting this inference wrong, feel free to let me and other readers know why in the comments.

A Worthy Critic At Last (November 2019)

I received an interesting email comment on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, prey animals need to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.

I had thought of the "false positives are better than false negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for consequences to motivate what variables you paid attention to. But given the subspace that's relevant to your interests, you want to run an "epistemically legitimate" clustering algorithm on the data you see there, which depends on the data, not your values. Ideal probabilistic beliefs shouldn't depend on consequences.

Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is because probabilistic reasoning is broadly useful: epistemology can be derived from instrumental concerns. He agreed that severe wireheading issues potentially arise if you allow consequentialist concerns to affect your epistemics.

But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's two agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't propagate itself and that wireheading was inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places.

I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me in the way that almost everyone else in Berkeley was trying to mess with me.)

Writer's Block (November 2019)

I wrote to Ben about how I was still stuck on writing the grief-memoir. My plan had been to tell the story of the Category War while Glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-frenemies warning when you're about to publicly call them intellectually dishonest), then share it to Less Wrong and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water).

The reason it should have been safe to write was because it's good to explain things. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to tell the true story about why I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, &c."

So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because ... it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was the nuclear option. If you're slowly but surely gaining territory in a conventional war, suddenly escalating to nukes would be pointlessly destructive. This metaphor was horribly non-normative (arguing is not a punishment; carefully telling a true story about an argument is not a nuke), but I didn't know how to make it stably go away.

A more motivationally-stable compromise would be to split off whatever generalizable insights that would have been part of the story into their own posts. "Heads I Win, Tails?—Never Heard of Her" had been a huge success as far as I was concerned, and I could do more of that kind of thing, analyzing the social stuff without making it personal, even if, secretly ("secretly"), it was personal.

Ben replied that it didn't seem like it was clear to me that I was a victim of systemic abuse, and that I was trying to figure out whether I was being fair to my abusers. He thought if I could internalize that, I would be able to forgive myself a lot of messiness, which would make the problem less daunting.

I said I would bite that bullet: Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right! "Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have extenuating circumstances" would be a lame excuse.

This seemed correlated with the recurring stalemated disagreement within our posse, where Michael/Ben/Jessica would say, "Fraud, if the word ever meant anything", and while I agreed that they were pointing to an important pattern of false representations optimized to move resources, I was still sympathetic to the Caliphate-defender's perspective that this usage of "fraud" was motte-and-baileying between different senses of the word. (Most people would say that the things we were alleging MIRI and CfAR had done wrong were qualitatively different from the things Enron and Bernie Madoff had done wrong.[24]) I wanted to do more work to formulate a more precise theory of the psychology of deception to describe exactly how things were messed up in a way that wouldn't be susceptible to the motte-and-bailey charge.

Interactions With a Different Rationalist Splinter Group (November–December 2019)

On 12 and 13 November 2019, Ziz published several blog posts laying out her grievances against MIRI and CfAR. On the fifteenth, Ziz and three collaborators staged a protest at the CfAR reunion being held at a retreat center in the North Bay near Camp Meeker. A call to the police falsely alleged that the protesters had a gun, resulting in a dramatic police reaction (SWAT team called, highway closure, children's group a mile away being evacuated—the works).

I was tempted to email links to Ziz's blog posts to the Santa Rosa Press-Democrat reporter covering the incident (as part of my information-sharing-is-good virtue ethics), but decided to refrain because I predicted that Anna would prefer I didn't.

The main relevance of this incident to my Whole Dumb Story is that Ziz's memoir–manifesto posts included a 5500 word section about me. Ziz portrays me as a slave to social reality, throwing trans women under the bus to appease the forces of cissexism. I don't think that's what's going on with me, but I can see why the theory was appealing.


On 12 December 2019 I had an interesting exchange with Somni, one of the "Meeker Four"—presumably out on bail at this time?—on Discord.

I told her it was surprising that she spent so much time complaining about CfAR, Anna Salamon, Kelsey Piper, &c., but I seemed to get along fine with her—because naïvely, one would think that my views were so much worse. Was I getting a pity pass because she thought false consciousness was causing me to act against my own transfem class interests? Or what?

In order to be absolutely clear about my terrible views, I said that I was privately modeling a lot of transmisogyny complaints as something like—a certain neurotype-cluster of non-dominant male is latching onto locally ascendant social-justice ideology in which claims to victimhood can be leveraged into claims to power. Traditionally, men are moral agents, but not patients; women are moral patients, but not agents. If weird non-dominant men aren't respected if identified as such (because low-ranking males aren't valuable allies, and don't have the intrinsic moral patiency of women), but can get victimhood/moral-patiency points for identifying as oppressed transfems, that creates an incentive gradient for them to do so. No one was allowed to notice this except me, because everybody who's anybody prefers to stay on the good side of social-justice ideology unless they have Something to Protect that requires defying it.

Somni said we got along because I was being victimized by the same forces of gaslighting as her and wasn't lying about my agenda. Maybe she should be complaining about me?—but I seemed to be following a somewhat earnest epistemic process, whereas Kelsey, Scott, and Anna were not. If I were to start going, "Here's my rationality org; rule #1: no transfems (except me); rule #2, no telling people about rule #1", then she would talk about it.

I would later remark to Anna that Somni and Ziz saw themselves as being oppressed by people's hypocritical and manipulative social perceptions and behavior. Merely using the appropriate language ("Somni ... she", &c.), as Anna did, protected her against threats from the Political Correctness police, but it actually didn't protect against threats from the Zizians. The mere fact that I wasn't optimizing for PR (lying about my agenda, as Somni said) was what made me not a direct enemy (although still a collaborator) in their eyes.

Philosophy Blogging Interlude 2! (December 2019)

I had a pretty productive blogging spree in December 2019. In addition to a number of more minor posts on this blog and on Less Wrong, I also got out some more significant posts bearing on my agenda.

On this blog, in "Reply to Ozymandias on Fully Consensual Gender", I finally got out at least a partial reply to Ozy Brennan's June 2018 reply to "The Categories Were Made for Man to Make Predictions", affirming the relevance of an analogy Ozy had made between the socially-constructed natures of money and social gender, while denying that the analogy supported gender by self-identification. (I had been working on a more exhaustive reply, but hadn't managed to finish whittling it into a shape that I was totally happy with.)

I also polished and pulled the trigger on "On the Argumentative Form 'Super-Proton Things Tend to Come In Varieties'", my reply to Yudkowsky's implicit political concession to me back in March. I had been reluctant to post it based on an intuition of, "My childhood hero was trying to do me a favor; it would be a betrayal to reject the gift." The post itself explained why that intuition was crazy, but that just brought up more anxieties about whether the explanation constituted leaking information from private conversations—but I had chosen my words carefully such that it wasn't. ("Even if Yudkowsky doesn't know you exist [...] he's effectively doing your cause a favor" was something I could have plausibly written in the possible world where the antecedent was true.) Jessica said the post seemed good.

On Less Wrong, the mods had just announced a new end-of-year Review event, in which the best posts from the year before would be reviewed and voted on, to see which had stood the test of time and deserved to be part of our canon of cumulative knowledge. (That is, this Review period starting in late 2019 would cover posts published in 2018.)

This provided me with an affordance to write some posts critiquing posts that had been nominated for the Best-of-2018 collection that I didn't think deserved such glory. In response to "Decoupling vs. Contextualizing Norms" (which had been cited in a way that I thought obfuscatory during the "Yes Requires the Possibility of No" trainwreck), I wrote "Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary", appealing to our academically standard theory of how context affects meaning to explain why "decoupling vs. contextualizing norms" is a false dichotomy.

More significantly, in reaction to Yudkowsky's "Meta-Honesty: Firming Up Honesty Around Its Edge Cases", I published "Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think",[25] explaining why I thought "Meta-Honesty" was relying on an unproductively narrow sense of "honesty", because the ambiguity of natural language makes it easy to deceive people without technically lying.

I thought that one cut to the heart of the shocking behavior that we had seen from Yudkowsky lately. The "hill of meaning in defense of validity" affair had been driven by Yudkowsky's obsession with not technically lying, on two levels: he had proclaimed that asking for new pronouns "Is. Not. Lying." (as if that were the matter that anyone cared about—as if conservatives and gender-critical feminists should just pack up and go home after it had been demonstrated that trans people aren't lying), and he had seen no interest in clarifying his position on the philosophy of language, because he wasn't lying when he said that preferred pronouns weren't lies (as if that were the matter my posse cared about—as if I should keep honoring him as my caliph after it had been demonstrated that he hadn't lied). But his Sequences had articulated a higher standard than merely not-lying. If he didn't remember, I could at least hope to remind everyone else.

I also wrote a little post, "Free Speech and Triskadekaphobic Calculators", arguing that it should be easier to have a rationality/alignment community that just does systematically correct reasoning than a politically savvy community that does systematically correct reasoning except when that would taint AI safety with political drama, analogous to how it's easier to build a calculator that just does correct arithmetic, than a calculator that does correct arithmetic except that it never displays the result 13. In order to build a "triskadekaphobic calculator", you would need to "solve arithmetic" anyway, and the resulting product would be limited not only in its ability to correctly compute 6 + 7 but also the infinite family of calculations that include 13 as an intermediate result: if you can't count on (6 + 7) + 1 being the same as 6 + (7 + 1), you lose the associativity of addition.

A Newtonmas Party (December 2019)

On 20 December 2019, Scott Alexander messaged me on Discord—that I shouldn't answer if it would be unpleasant, but that he was thinking of asking about autogynephilia on the next Slate Star Codex survey, and wanted to know if I had any suggestions about question design, or if I could suggest any "intelligent and friendly opponents" to consult. After reassuring him that he shouldn't worry about answering being unpleasant ("I am actively at war with the socio-psychological forces that make people erroneously think that talking is painful!"), I referred him to my friend Tailcalled, who had a lot of experience conducting surveys and ran a "Hobbyist Sexologists" Discord server, which seemed likely to have some friendly opponents.

The next day (I assume while I still happened to be on his mind), Scott also commented on "Maybe Lying Doesn't Exist", my post from back in October replying to his "Against Lie Inflation."

I was frustrated with his reply, which I felt was not taking into account points that I had already covered in detail. A few days later, on the twenty-fourth, I succumbed to the temptation to blow up at him in the comments.

After commenting, I noticed what day it was and added a few more messages to our Discord chat—

okay, maybe speech is sometimes painful
the Less Wrong comment I just left you is really mean
and you know it's not because I don't like you
you know it's because I'm genuinely at my wit's end
after I posted it, I was like, "Wait, if I'm going to be this mean to Scott, maybe Christmas Eve isn't the best time?"
it's like the elephant in my brain is gambling that by being socially aggressive, it can force you to actually process information about philosophy which you otherwise would not have an incentive to
I hope you have a merry Christmas

And then, as an afterthought—

oh, I guess we're Jewish
that attenuates the "is a hugely inappropriately socially-aggressive blog comment going to ruin someone's Christmas" fear somewhat

Scott messaged back at 11:08 the next morning, Christmas Day. He explained that the thought process behind his comment was that he still wasn't sure where we disagreed and didn't know how to proceed except to dump his understanding of the philosophy (which would include things I already knew) and hope that I could point to the step I didn't like. He didn't know how to convince me of his sincerity and rebut my accusations of him motivatedly playing dumb (which he was inclined to attribute to the malign influence of Michael Vassar's gang).

I explained that the reason for those accusations was that I knew he knew about strategic equivocation, because he taught everyone else about it (as in his famous posts about the motte-and-bailey doctrine and the noncentral fallacy). And so when he acted like he didn't get it when I pointed out that this also applied to "trans women are women", that just seemed implausible.

He asked for a specific example. ("Trans women are women, therefore trans women have uteruses" being a bad example, because no one was claiming that.) I quoted an article from The Nation: "There is another argument against allowing trans athletes to compete with cis-gender athletes that suggests that their presence hurts cis-women and cis-girls. But this line of thought doesn't acknowledge that trans women are in fact women." Scott agreed that this was stupid and wrong and a natural consequence of letting people use language the way he was suggesting (!).

I didn't think it was fair to ordinary people to expect them to go as deep into the philosophy-of-language weeds as I could before being allowed to object to this kind of chicanery. I thought "pragmatic" reasons to not just use the natural clustering that you would get by impartially running a clustering algorithm on the subspace of configuration space relevant to your goals, basically amounted to "wireheading" (optimizing someone's map for looking good rather than reflecting the territory) or "war" (optimizing someone's map to not reflect the territory in order to manipulate them). If I were to transition today and didn't pass as well as Jessica, and everyone felt obligated to call me a woman, they would be wireheading me: making me think my transition was successful, even though it wasn't. That's not a nice thing to do to a rationalist.

Scott thought that trans people had some weird thing going on in their brains such that being referred to as their natal sex was intrinsically painful, like an electric shock. The thing wasn't an agent, so the injunction to refuse to give in to extortion didn't apply. Having to use a word other than the one you would normally use in order to avoid subjecting someone to painful electric shocks was worth it.

I thought I knew things about the etiology of transness such that I didn't think the electric shock was inevitable, but I didn't want the conversation to go there if it didn't have to. I didn't have to ragequit the so-called "rationalist" community over a complicated empirical question, only over bad philosophy. Scott said he might agree with me if he thought the tradeoff between clarity and utilitarian benefit were unfavorable—or if he thought it had the chance of snowballing like in his "Kolmogorov Complicity and the Parable of Lightning".

I pointed out that what sex people are is more relevant to human social life than whether lightning comes before thunder. He said that the problem in his parable was that people were being made ignorant of things, whereas in the transgender case, no one was being kept ignorant; their thoughts were just following a longer path.

I was skeptical of the claim that no one was "really" being kept ignorant. If you're sufficiently clever and careful and you remember how language worked when Airstrip One was still Britain, then you can still think, internally, and express yourself as best you can in Newspeak. But a culture in which Newspeak is mandatory, and in which all of Oceania's best philosophers have clever arguments for why Newspeak doesn't distort people's beliefs, doesn't seem like a culture that could solve AI alignment.

I linked to Zvi Mowshowitz's post about how the claim that "everybody knows" something gets used to silence people trying to point out the thing: in this case, basically, "'Everybody knows' our kind of trans women are sampled from (part of) the male multivariate trait distribution rather than the female multivariate trait distribution, why are you being a jerk and pointing this out?" But I didn't think that everyone knew.[26] I thought the people who sort-of knew were being intimidated into doublethinking around it.

At this point, it was almost 2 p.m. (the paragraphs above summarizing a larger volume of typing), and Scott mentioned that he wanted to go to the Event Horizon Christmas party, and asked if I wanted to come and continue the discussion there. I assented, and thanked him for his time; it would be really exciting if we could avoid a rationalist civil war.

When I arrived at the party, people were doing a reading of the "Hero Licensing" dialogue epilogue to Inadequate Equilibria, with Yudkowsky himself playing the Mysterious Stranger. At some point, Scott and I retreated upstairs to continue our discussion. By the end of it, I was feeling more assured of Scott's sincerity, if not his competence. Scott said he would edit in a disclaimer note at the end of "... Not Man for the Categories".

It would have been interesting if I also got the chance to talk to Yudkowsky for a few minutes, but if I did, I wouldn't be allowed to recount any details of that here due to the privacy rules I'm following.

The rest of the party was nice. People were reading funny GPT-2 quotes from their phones. At one point, conversation happened to zag in a way that let me show off the probability fact I had learned during Math and Wellness Month. A MIRI researcher sympathetically told me that it would be sad if I had to leave the Bay Area, which I thought was nice. There was nothing about the immediate conversational context to suggest that I might have to leave the Bay, but I guess by this point, my existence had become a context.

All in all, I was feeling less ragequitty about the rationalists[27] after the party—as if by credibly threatening to ragequit, the elephant in my brain had managed to extort more bandwidth from our leadership. The note Scott added to the end of "... Not Man for the Categories" still betrayed some philosophical confusion, but I now felt hopeful about addressing that in a future blog post explaining my thesis that unnatural category boundaries were for "wireheading" or "war".

It was around this time that someone told me that I wasn't adequately taking into account that Yudkowsky was "playing on a different chessboard" than me. (A public figure focused on reducing existential risk from artificial general intelligence is going to sense different trade-offs around Kolmogorov complicity strategies than an ordinary programmer or mere worm focused on things that don't matter.) No doubt. But at the same time, I thought Yudkowsky wasn't adequately taking into account the extent to which some of his longtime supporters (like Michael or Jessica) were, or had been, counting on him to uphold certain standards of discourse (rather than chess)?

Another effect of my feeling better after the party was that my motivation to keep working on my memoir of the Category War vanished—as if I was still putting weight on a zero-sum frame in which the memoir was a nuke that I only wanted to use as an absolute last resort.

Ben wrote (Subject: "Re: state of Church leadership"):

It seems to [me] that according to Zack's own account, even writing the memoir privately feels like an act of war that he'd rather avoid, not just using his own territory as he sees fit to create internal clarity around a thing.

I think this has to mean either
(a) that Zack isn't on the side of clarity except pragmatically where that helps him get his particular story around gender and rationalism validated
or
(b) that Zack has ceded the territory of the interior of his own mind to the forces of anticlarity, not for reasons, but just because he's let the anticlaritarians dominate his frame.

Or, I pointed out, (c) I had ceded the territory of the interior of my own mind to Eliezer Yudkowsky in particular, and while I had made a lot of progress unwinding this, I was still, still not done, and seeing him at the Newtonmas party set me back a bit.

"Riley" reassured me that finishing the memoir privately would be clarifying and cathartic for me. If people in the Caliphate came to their senses, I could either not publish it, or give it a happy ending where everyone comes to their senses.

(It does not have a happy ending where everyone comes to their senses.)

Further Discourses on What the Categories Were Made For (January–February 2020)

Michael told me he had changed his mind about gender and the philosophy of language. We talked about it on the phone. He said that the philosophy articulated in "A Human's Guide to Words" was inadequate for politicized environments where our choice of ontology is constrained. If we didn't know how to coin a new third gender, or teach everyone the language of "clusters in high-dimensional configuration space," our actual choices for how to think about trans women were basically three: creepy men (the TERF narrative), crazy men (the medical model), or a protected class of actual woman.[28]

According to Michael, while "trans women are real women" was a lie (in the sense that he agreed that me and Jessica and Ziz were not part of the natural cluster of biological females), it was also the case that "trans women are not real women" was a lie (in the sense that the "creepy men" and "crazy men" stories were wrong). "Trans women are women" could be true in the sense that truth is about processes that create true maps, such that we can choose the concepts that allow discourse and information flow. If the "creepy men" and "crazy men" stories are a cause of silencing, then—under present conditions—we had to choose the "protected class" story in order for people like Ziz to not be silenced.

My response (more vehemently when thinking on it a few hours later) was that this was a garbage bullshit appeal to consequences. If I wasn't going to let Ray Arnold get away with "we are better at seeking truth when people feel safe," I shouldn't let Michael get away with "we are better at seeking truth when people aren't oppressed." Maybe the wider world was ontology-constrained to those three choices, but I was aspiring to higher nuance in my writing.

"Thanks for being principled," he replied.


On 10 February 2020, Scott Alexander published "Autogenderphilia Is Common and Not Especially Related to Transgender", an analysis of the results of the autogynephilia/autoandrophilia questions on the recent Slate Star Codex survey. Based on eyeballing the survey data, Alexander proposed "if you identify as a gender, and you're attracted to that gender, it's a natural leap to be attracted to yourself being that gender" as a "very boring" theory.

I appreciated the endeavor of getting real data, but I was unimpressed with Alexander's analysis for reasons that I found difficult to write up in a timely manner; I've only just recently gotten around to polishing my draft and throwing it up as a standalone post. Briefly, I can see how it looks like a natural leap if you're verbally reasoning about "gender", but on my worldview, a hypothesis that puts "gay people (cis and trans)" in the antecedent is not boring and takes on a big complexity penalty, because that group is heterogeneous with respect to the underlying mechanisms of sexuality. I already don't have much use for "if you are a sex, and you're attracted to that sex" as a category of analytical interest, because I think gay men and lesbians are different things that need to be studied separately. Given that, "if you identify as a gender, and you're attracted to that gender" (with respect to "gender", not sex) comes off even worse: it's grouping together lesbians, and gay men, and heterosexual males with a female gender identity, and heterosexual females with a male gender identity. What causal mechanism could that correspond to?

(I do like the hypernym autogenderphilia.)

A Private Document About a Disturbing Hypothesis (early 2020)

There's another extremely important part of the story that would fit around here chronologically, but I again find myself constrained by privacy norms: everyone's common sense of decency (this time, even including my own) screams that it's not my story to tell.

Adherence to norms is fundamentally fraught for the same reason AI alignment is. In rich domains, attempts to regulate behavior with explicit constraints face a lot of adversarial pressure from optimizers bumping up against the constraint and finding the nearest unblocked strategies that circumvent it. The intent of privacy norms is to conceal information. But information in Shannon's sense is about what states of the world can be inferred given the states of communication signals; it's much more expansive than the denotative meaning of a text.

If norms can only regulate the denotative meaning of a text (because trying to regulate subtext is too subjective for a norm-enforcing coalition to coordinate on), someone who would prefer to reveal private information but also wants to comply with privacy norms has an incentive to leak everything they possibly can as subtext—to imply it, and hope to escape punishment on grounds of not having "really said it." And if there's some sufficiently egregious letter-complying-but-spirit-violating evasion of the norm that a coalition can coordinate on enforcing, the whistleblower has an incentive to stay only just shy of being that egregious.

Thus, it's unclear how much mere adherence to norms helps, when people's wills are actually misaligned. If I'm furious at Yudkowsky for prevaricating about my Something to Protect, and am in fact more furious rather than less that he managed to do it without violating the norm against lying, I should not be so foolish as to think myself innocent and beyond reproach for not having "really said it."

Having considered all this, I want to tell you about how I spent a number of hours from early May 2020 to early July 2020 working on a private Document about a disturbing hypothesis that had occurred to me earlier that year.

Previously, I had already thought it was nuts that trans ideology was exerting influence on the rearing of gender-non-conforming children—that is, children who are far outside the typical norm of behavior for their sex: very tomboyish girls and very effeminate boys.

Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: the difference in sex-atypical childhood behavior between gay and straight adults has been meta-analyzed at Cohen's d ≈ 1.31 standard deviations for men and d ≈ 0.96 for women.) A solid majority of children diagnosed with gender dysphoria ended up growing out of it by puberty. In the culture of the current year, it seemed likely that a lot of those kids would instead get affirmed into a cross-sex identity at a young age, even though most of them would have otherwise (under a "watchful waiting" protocol) grown up to be ordinary gay men and lesbians.

What made this shift in norms crazy, in my view, was not just that transitioning younger children is a dubious treatment decision, but that it's a dubious treatment decision that was being made on the basis of the obvious falsehood that "trans" was one thing: the cultural phenomenon of "trans kids" was being used to legitimize trans adults, even though a supermajority of trans adults were in the late-onset taxon and therefore had never resembled these HSTS-taxon kids. That is: pre-gay kids in our Society are being sterilized in order to affirm the narcissistic delusions[29] of guys like me.

That much was obvious to anyone who's had their Blanchardian enlightenment, and wouldn't have been worth the effort of writing a special private Document about. The disturbing hypothesis that occurred to me in early 2020 was that, in the culture of the current year, affirmation of a cross-sex identity might happen to kids who weren't HSTS-taxon at all.

Very small children who are just learning what words mean say a lot of things that aren't true (I'm a grown-up; I'm a cat; I'm a dragon), and grownups tend to play along in the moment as a fantasy game, but they don't coordinate to make that the permanent new social reality.

But if the grown-ups have been trained to believe that "trans kids know who they are"—if they're emotionally eager at the prospect of having a transgender child, or fearful of the damage they might do by not affirming—they might selectively attend to confirming evidence that the child "is trans", selectively ignore contrary evidence that the child "is cis", and end up reinforcing a cross-sex identity that would not have existed if not for their belief in it—a belief that the same people raising the same child ten years ago wouldn't have held. (A September 2013 article in The Atlantic by the father of a male child with stereotypically feminine interests was titled "My Son Wears Dresses; Get Over It", not "My Daughter Is Trans; Get Over It".)

Crucially, if gender identity isn't an innate feature of toddler psychology, the child has no way to know anything is "wrong." If none of the grown-ups can say, "You're a boy because boys are the ones with penises" (because that's not what nice smart liberal people are supposed to believe in the current year), how is the child supposed to figure that out independently? Toddlers are not very sexually dimorphic, but sex differences in play style and social behavior tend to emerge within a few years. There were no cars in the environment of evolutionary adaptedness, and yet the effect size of the sex difference in preference for toy vehicles is a massive d ≈ 2.44, about one and a half times the size of the sex difference in adult height.
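
(As a back-of-the-envelope illustration of what a standardized difference that large means, assuming normal distributions with equal variance, and taking the commonly cited d ≈ 1.6 for adult height implied by the "one and a half times" comparison above:)

```python
# Back-of-the-envelope sketch (my arithmetic, under the stated normality and
# equal-variance assumptions; the d ≈ 1.6 height figure is an assumption, not
# from the text's sources).
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def p_random_male_exceeds_random_female(d):
    """P(a random male draw exceeds a random female draw) at effect size d."""
    return phi(d / sqrt(2))

for label, d in [("toy-vehicle preference", 2.44), ("adult height", 1.6)]:
    print(f"{label}: d = {d}, P(random boy > random girl) = {p_random_male_exceeds_random_female(d):.2f}")
# d ≈ 2.44 means a randomly chosen boy out-prefers toy vehicles relative to a
# randomly chosen girl about 96% of the time; d ≈ 1.6 for height gives ~87%.
```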

(I'm going with the MtF case without too much loss of generality; I don't think the egregore is quite as eager to transition females at this age, but the dynamics are probably similar.)

What happens when the kid develops a self-identity as a girl, only to find out, potentially years later, that she noticeably doesn't fit in with the (cis) girls on the many occasions, never explicitly spelled out in advance, where people use "gender" (perceived sex) to make a prediction or decision?

Some might protest, "But what's the harm? She can always change her mind later if she decides she's actually a boy." I don't doubt that if the child were to clearly and distinctly insist, "I'm definitely a boy," the nice smart liberal grown-ups would unhesitatingly accept that.

But the harm I'm theorizing is not that the child has an intrinsic male identity that requires recognition. (What is an "identity", apart from the ordinary factual belief that one is of a particular sex?) Rather, the concern is that social transition prompts everyone, including the child themself, to use their mental models of girls (juvenile female humans) to make (mostly subconscious rather than deliberative) predictions and decisions about the child, which will be a systematically worse statistical fit than their models of boys (juvenile male humans), because the child is, in fact, a boy (juvenile male human), and those miscalibrated predictions and decisions will make the child's life worse in a complicated, illegible way that doesn't necessarily result in the child spontaneously asserting, "I prefer that you call me a boy" against the current of everyone in the child's life having accepted otherwise for as long as the kid can remember.

Scott Alexander has written about how concept-shaped holes can be impossible to notice. In a culture whose civic religion celebrates being trans and denies that gender has truth conditions other than the individual's say-so, there are concept-shaped holes that would make it hard for a kid to notice the hypothesis "I'm having a systematically worse childhood than I otherwise would have because all the grown-ups in my life have agreed I was a girl since I was three years old, even though all of my actual traits are sampled from the joint distribution for juvenile male humans, not juvenile female humans."

The epistemic difficulties extend to the grown-ups as well. I think people who are familiar with the relevant scientific literature or come from an older generation will find the story I've laid out above pretty compelling, but the parents are likely to be unmoved. They know they didn't coach the child to claim to be a girl. On what grounds could a stranger who wasn't there (or a skeptical family friend who sees the kid maybe once a month) assert that subconscious influence must be at work?

In the early twentieth century, a German schoolteacher named Wilhelm von Osten claimed to have taught his horse, Clever Hans, to do arithmetic and other intellectual feats. One could ask, "How much is 2/5 plus 1/2?" and the stallion would first stomp his hoof nine times, and then ten times—representing 9/10ths, the correct answer. An investigation concluded that no deliberate trickery was involved: Hans could often give the correct answer when questioned by a stranger, demonstrating that von Osten couldn't be secretly signaling the horse when to stop stomping. But further careful experiments by Oskar Pfungst revealed that Hans was picking up on unconscious cues "leaked" by the questioner's body language as the number of stomps approached the correct answer: for instance, Hans couldn't answer questions that the questioner themself didn't know.[30]

Notably, von Osten didn't accept Pfungst's explanation, continuing to believe that his intensive tutoring had succeeded in teaching the horse arithmetic.

It's hard to blame him, really. He had spent more time with Hans than anyone else. Hans observably could stomp out the correct answers to questions. Absent an irrational prejudice against the idea that a horse could learn arithmetic, why should he trust Pfungst's nitpicky experiments over the plain facts of his own intimately lived experience?

But what was in question wasn't the observations of Hans's performance, only the interpretation of what those observations implied about Hans's psychology. As Pfungst put it: "that was looked for in the animal which should have been sought in the man."

Similarly, in the case of a reputedly transgender three-year-old, a skeptical family friend isn't questioning observations of what the child said, only the interpretation of what those observations imply about the child's psychology. From the family's perspective, the evidence is clear: the child claimed to be a girl on many occasions over a period of months, and expressed sadness about being a boy. Absent an irrational prejudice against the idea that a child could be transgender, what could make them doubt the obvious interpretation of their own intimately lived experience?

From the skeptical family friend's perspective, there are a number of anomalies that cast serious doubt on what the family thinks is the obvious interpretation.

(Or so I'm imagining how this might go, hypothetically. The following illustrative vignettes may not reflect real events.)

For one thing, there may be clues that the child's information environment did not provide instruction on some of the relevant facts. Suppose that, six months before the child's social transition went down, another family friend had explained to the child that "Some people don't have penises." (Nice smart liberal grown-ups in the current year don't feel the need to be more specific.) Growing up in such a culture, the child's initial gender statements may reflect mere confusion rather than a deep-set need—and later statements may reflect social reinforcement of earlier confusion. Suppose that after social transition, the same friend explained to the child, "When you were little, you couldn't talk, so your parents had to guess whether you were a boy or a girl based on your parts." While this claim does convey the lesson that there's a customary default relationship between gender and genitals (in case that hadn't been clear before), it also reinforces the idea that the child is transgender.

For another thing, from the skeptical family friend's perspective, it's striking how the family and other grown-ups in the child's life seem to treat the child's statements about gender starkly differently than the child's statements about everything else.

Imagine that, around the time of the social transition, the child responded to "Hey kiddo, I love you" with, "I'm a girl and I'm a vegetarian." In the skeptic's view, both halves of that sentence were probably generated by the same cognitive algorithm—something like, "practice language and be cute to caregivers, making use of themes from the local cultural environment" (of nice smart liberal grown-ups who talk a lot about gender and animal welfare). In the skeptic's view, if you're not going to change the kid's diet on the basis of the second part, you shouldn't social transition the kid on the basis of the first part.

Perhaps even more striking is the way that the grown-ups seem to interpret the child's conflicting or ambiguous statements about gender. Imagine that, around the time social transition was being considered, a parent asked the child whether the child would prefer to be addressed as "my son" or "my daughter."

Suppose the child replied, "My son. Or you can call me she. Everyone should call me she or her or my son."

The grown-ups seem to mostly interpret exchanges like this as indicating that while the child is trans, she's confused about the gender of the words "son" and "daughter". They don't seem to pay much attention to the competing hypothesis that the child knows he's his parents' "son", but is confused about the implications of she/her pronouns.

It's not hard to imagine how differential treatment by grown-ups of gender-related utterances could unintentionally shape outcomes. This may be clearer if we imagine a non-gender case. Suppose the child's father's name is John Smith, and that after "Sr."/"Jr." generational suffixes happen to come up in fiction and a grown-up explains them, the child declares that his name is John Smith, Jr. now. Caregivers are likely to treat this as just a cute thing that the kid said, quickly forgotten by all. But if caregivers feared causing psychological harm by denying a declared name change, one could imagine them taking the child's statement as a prompt to ask followup questions. ("Oh, would you like me to call you John or John Jr., or just Junior?") With enough followup, it seems plausible that a name change to "John Jr." would meet with the child's assent and "stick" socially. The initial suggestion would have come from the child, but most of the optimization—the selection that this particular statement should be taken literally and reinforced as a social identity, while others are just treated as a cute but not overly meaningful thing the kid said—would have come from the adults.

Finally, there is the matter of the child's behavior and personality. Suppose that, around the same time that the child's social transition was going down, a parent reported the child being captivated by seeing a forklift at Costco. A few months later, another family friend remarked that maybe the child is very competitive, and that "she likes fighting so much because it's the main thing she knows of that you can win."

I think people who are familiar with the relevant scientific literature or come from an older generation would look at observations like these and say, Well, yes, he's a boy; boys like vehicles (d ≈ 2.44!) and boys like fighting. Some of them might suggest that these observations should be counterindicators for transition—that the cross-gender verbal self-reports are less decision-relevant than the fact of a male child behaving in male-typical ways. But nice smart liberal grown-ups in the current year don't think that way.

One might imagine that the inferential distance between nice smart liberal grown-ups and people from an older generation (or a skeptical family friend) might be crossed by talking about it, but it turns out that talking doesn't help much when people have radically different priors and interpret the same evidence differently.

Imagine a skeptical family friend wondering (about four months after the social transition) what "being a girl" means to the child. How did the kid know?

A parent obliges to ask the child: "Hey kiddo, somebody wants to know how you know that you are a girl."

"Why?"

"He's interested in that kind of thing."

"I know that I'm a girl because girls like specific things like rainbows and I like rainbows so I'm a girl."

"Is that how you knew in the first place?"

"Yeah."

"You know there are a lot of boys who like rainbows."

"I don't think boys like rainbows so well—oh hey! Here this ball is!"

(When recounting this conversation, the parent adds that rainbows hadn't come up before, and that the child was looking at a rainbow-patterned item at the time of answering.)

It would seem that the interpretation of this kind of evidence depends on one's prior convictions. If you think that transition is a radical intervention that might pass a cost–benefit analysis for treating rare cases of intractable sex dysphoria, answers like "because girls like specific things like rainbows" are disqualifying. (A fourteen-year-old who could read an informed-consent form would be able to give a more compelling explanation than that, but a three-year-old just isn't ready to make this kind of decision.) Whereas if you think that some children have a gender that doesn't match their assigned sex at birth, you might expect them to express that affinity at age three, without yet having the cognitive or verbal abilities to explain it. Teasing apart where these two views make different predictions seems like it should be possible, but might be beside the point, if the real crux is over what categories are made for. (Is sex an objective fact that sometimes merits social recognition, or is it better to live in a Society where people are free to choose the gender that suits them?)

Anyway, that's just a hypothesis that occurred to me in early 2020, about something that could happen in the culture of the current year, hypothetically, as far as I know. I'm not a parent and I'm not an expert on child development. And even if the "Clever Hans" etiological pathway I conjectured is real, the extent to which it might apply to any particular case is complex; you could imagine a kid who was "actually trans" whose social transition merely happened earlier than it otherwise would have due to these dynamics.

For some reason, it seemed important that I draft a Document about it with lots of citations to send to a few friends. I thought about cleaning it up and publishing it as a public blog post (working title: "Trans Kids on the Margin; and, Harms from Misleading Training Data"), but for some reason, that didn't seem as pressing.

I put an epigraph at the top:

If you love someone, tell them the truth.

—Anonymous

Given that I spent so many hours on this little research and writing project in May–July 2020, I think it makes sense for me to mention it at this point in my memoir, where it fits in chronologically. I have an inalienable right to talk about my own research interests, and talking about my own research interests obviously doesn't violate any norm against leaking private information about someone else's family, or criticizing someone else's parenting decisions.

The New York Times Pounces (June 2020)

On 1 June 2020, I received a Twitter DM from New York Times reporter Cade Metz, who said he was "exploring a story about the intersection of the rationality community and Silicon Valley." I sent him an email saying that I would be happy to talk, but that I had been pretty disappointed with the community lately: I was worried that the social pressures of trying to be a "community" and protect the group's status (e.g., from New York Times reporters who might portray us in an unflattering light?) might incentivize people to compromise on the ideals of systematically correct reasoning that made the community valuable in the first place.

He never got back to me. Three weeks later, all existing Slate Star Codex posts were taken down. A lone post on the main page explained that the New York Times piece was going to reveal Alexander's real last name and he was taking his posts down as a defensive measure. (No blog, no story?) I wrote a script (slate_starchive.py) to replace the Slate Star Codex links on this blog with links to the most recent Internet Archive copy.
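
I'm not reproducing the actual script here, but a minimal sketch of the general approach might look like the following (the regex, file handling, and helper names are illustrative assumptions rather than the real slate_starchive.py; the Wayback Machine's public availability API is what it queries):

    # Hypothetical sketch, not the actual slate_starchive.py: rewrite links to
    # slatestarcodex.com so that they point at the most recent Internet Archive
    # copy, via the Wayback Machine's public availability API.
    import re
    import sys
    import requests

    SSC_LINK = re.compile(r"""https?://slatestarcodex\.com/[^\s"'<>)\]]+""")

    def archived_url(url: str) -> str:
        """Return the closest archived copy of `url`, or `url` itself if none exists."""
        resp = requests.get("https://archive.org/wayback/available", params={"url": url})
        resp.raise_for_status()
        closest = resp.json().get("archived_snapshots", {}).get("closest", {})
        return closest["url"] if closest.get("available") else url

    def archive_links(text: str) -> str:
        return SSC_LINK.sub(lambda match: archived_url(match.group(0)), text)

    if __name__ == "__main__":
        for path in sys.argv[1:]:  # e.g., the source files of the blog
            with open(path, encoding="utf-8") as f:
                original = f.read()
            rewritten = archive_links(original)
            if rewritten != original:
                with open(path, "w", encoding="utf-8") as f:
                    f.write(rewritten)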

Philosophy Blogging Interlude 3! (mid-2020)

I continued my philosophy of language work, looking into the academic literature on formal models of communication and deception. I wrote a couple posts encapsulating what I learned from that—and I continued work on my "advanced" philosophy of categorization thesis, the sequel to "Where to Draw the Boundaries?"

The disclaimer note that Scott Alexander had appended to "... Not Man for the Categories" after our Christmas 2019 discussion had said:

I had hoped that the Israel/Palestine example above made it clear that you have to deal with the consequences of your definitions, which can include confusion, muddling communication, and leaving openings for deceptive rhetorical strategies.

This is certainly an improvement over the original text without the note, but I took the use of the national borders metaphor to mean that Scott still hadn't gotten my point about there being laws of thought underlying categorization: mathematical principles governing how choices of definition can muddle communication or be deceptive. (But that wasn't surprising; by Scott's own admission, he's not a math guy.)

Category "boundaries" are a useful visual metaphor for explaining the cognitive function of categorization: you imagine a "boundary" in configuration space containing all the things that belong to the category.

If you have the visual metaphor, but you don't have the math, you might think that there's nothing intrinsically wrong with squiggly or discontinuous category "boundaries", just as there's nothing intrinsically wrong with Alaska not being part of the contiguous United States. It may be inconvenient that you can't drive from Alaska to Washington without going through Canada, but it's not wrong that the borders are drawn that way: Alaska really is governed by the United States.

But if you do have the math, a moment of introspection will convince you that the analogy between category "boundaries" and national borders is shallow.

A two-dimensional political map tells you which areas of the Earth's surface are under the jurisdiction of which government. In contrast, category "boundaries" tell you which regions of very high-dimensional configuration space correspond to a word/concept, which is useful because that structure can be used to make probabilistic inferences. You can use your observations of some aspects of an entity (some of the coordinates of a point in configuration space) to infer category-membership, and then use category membership to make predictions about aspects that you haven't yet observed.

But the trick only works to the extent that the category is a regular, non-squiggly region of configuration space: if you know that egg-shaped objects tend to be blue, and you see a black-and-white photo of an egg-shaped object, you can get close to picking out its color on a color wheel. But if egg-shaped objects tend to be blue or green or red or gray, you wouldn't know where to point to on the color wheel.

The analogous algorithm applied to national borders on a political map would be to observe the longitude of a place, use that to guess what country the place is in, and then use the country to guess the latitude—which isn't typically what people do with maps. Category "boundaries" and national borders might both be illustrated similarly in a two-dimensional diagram, but philosophically, they're different entities. The fact that Scott Alexander was appealing to national borders to defend gerrymandered categories suggested that he didn't understand this.
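
To make the trick concrete, here is a toy sketch of the inference loop (observe some coordinates, infer the category, predict the unobserved coordinates); the objects, feature names, and numbers are invented purely for illustration:

    # Toy illustration with invented numbers: categories as summaries of compact
    # regions of configuration space, used to predict unobserved features.
    CATEGORIES = {
        "egg":  {"shape_score": 0.9, "hue": 240},  # egg-shaped things tend to be blue
        "cube": {"shape_score": 0.1, "hue": 0},    # cube-shaped things tend to be red
    }

    def infer_category(observed_shape_score: float) -> str:
        # Pick the category whose typical shape is closest to the observed shape.
        return min(
            CATEGORIES,
            key=lambda name: abs(CATEGORIES[name]["shape_score"] - observed_shape_score),
        )

    def predict_hue(observed_shape_score: float) -> float:
        # Use category membership to guess the feature we haven't observed.
        return CATEGORIES[infer_category(observed_shape_score)]["hue"]

    # Seeing a black-and-white photo of an egg-shaped object (shape_score = 0.85),
    # we guess its hue is near 240 (blue). The guess is only good insofar as the
    # "egg" category occupies a compact, non-squiggly region of configuration space.
    print(predict_hue(0.85))  # -> 240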

I still had some deeper philosophical problems to resolve, though. If squiggly categories were less useful for inference, why would someone want a squiggly category boundary? Someone who said, "Ah, but I assign higher utility to doing it this way" had to be messing with you. Squiggly boundaries were less useful for inference; the only reason you would realistically want to use them would be to commit fraud, to pass off pyrite as gold by redefining the word "gold".

That was my intuition. To formalize it, I wanted some sensible numerical quantity that would be maximized by using "nice" categories and get trashed by gerrymandering. Mutual information was the obvious first guess, but that wasn't it, because mutual information lacks a "topology", a notion of "closeness" that would make some false predictions better than others by virtue of being "close".

Suppose the outcome space of X is {H, T} and the outcome space of Y is {1, 2, 3, 4, 5, 6, 7, 8}. I wanted to say that if observing X=H concentrates Y's probability mass on {1, 2, 3}, that's more useful than if it concentrates Y on {1, 5, 8}. But that would require the numerals in Y to be numbers rather than opaque labels; as far as elementary information theory was concerned, mapping eight states to three states reduced the entropy from lg 8 = 3 to lg 3 ≈ 1.58 no matter which three states they were.

How could I make this rigorous? Did I want to be talking about the variance of my features conditional on category membership? Was "connectedness" what I wanted, or was it only important because it cut down the number of possibilities? (There are 8!/(6!2!) = 28 ways to choose two elements from {1..8}, but only 7 ways to choose two contiguous elements.) I thought connectedness was intrinsically important: we didn't just want a small number of possibilities; we wanted possibilities similar enough that we could make similar decisions about them.

I put the question to a few friends in July 2020 (Subject: "rubber duck philosophy"), and Jessica said that my identification of the variance as the key quantity sounded right: it amounted to the expected squared error of someone trying to guess the values of the features given the category. It was okay that this wasn't a purely information-theoretic criterion, because for problems involving guessing a numeric quantity, bits that get you closer to the right answer were more valuable than bits that didn't.
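
A toy calculation along these lines (the uniform distributions are invented for illustration): entropy can't tell a contiguous concentration like {1, 2, 3} apart from a scattered one like {1, 5, 8}, but the expected squared error of the best single guess can.

    # Toy check with invented uniform distributions: information-theoretic
    # measures can't distinguish concentrating Y on {1, 2, 3} from concentrating
    # it on {1, 5, 8}, but the expected squared error of the best guess can.
    from math import log2
    from statistics import mean

    def entropy(support):
        # Entropy (in bits) of a uniform distribution over `support`.
        p = 1 / len(support)
        return -sum(p * log2(p) for _ in support)

    def expected_squared_error(support):
        # The best single guess under squared-error loss is the mean;
        # report the expected loss of that guess.
        guess = mean(support)
        return mean((y - guess) ** 2 for y in support)

    for support in [(1, 2, 3), (1, 5, 8)]:
        print(support, round(entropy(support), 3), round(expected_squared_error(support), 3))
    # Both supports have entropy lg 3 ≈ 1.585 bits, but the expected squared error
    # is about 0.667 for (1, 2, 3) versus about 8.222 for (1, 5, 8): "closeness" matters.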

A Couple of Impulsive Emails (September 2020)

I decided on "Unnatural Categories Are Optimized for Deception" as the title for my advanced categorization thesis. Writing it up was a major undertaking. There were a lot of nuances to address and potential objections to preëmpt, and I felt that I had to cover everything. (A reasonable person who wanted to understand the main ideas wouldn't need so much detail, but I wasn't up against reasonable people who wanted to understand.)

In September 2020, Yudkowsky Tweeted something about social media incentives prompting people to make nonsense arguments, and something in me boiled over. The Tweets were fine in isolation, but they rankled given the absurdly disproportionate efforts I was undertaking to unwind his incentive-driven nonsense. I left a snarky, pleading reply and vented on my own timeline (with preview images from the draft of "Unnatural Categories Are Optimized for Deception"):

Who would have thought getting @ESYudkowsky's robot cult to stop trying to trick me into cutting my dick off (independently of the empirical facts determining whether or not I should cut my dick off) would involve so much math?? OK, I guess the math part isn't surprising, but—[31]

My rage-boil continued into staying up late writing him an angry email, which I mostly reproduce below (with a few redactions for either brevity or compliance with privacy norms, but I'm not going to clarify which).

To: Eliezer Yudkowsky <[redacted]>
Cc: Anna Salamon <[redacted]>
Date: Sunday 13 September 2020 2:24 a.m.
Subject: out of patience

"I could beg you to do it in order to save me. I could beg you to do it in order to avert a national disaster. But I won't. These may not be valid reasons. There is only one reason: you must say it, because it is true."
Atlas Shrugged by Ayn Rand

Dear Eliezer (cc Anna as mediator):

Sorry, I'm getting really really impatient (maybe you saw my impulsive Tweet-replies today; and I impulsively called Anna today; and I've spent the last few hours drafting an even more impulsive hysterical-and-shouty potential Less Wrong post; but now I'm impulsively deciding to email you in the hopes that I can withhold the hysterical-and-shouty post in favor of a lower-drama option of your choice): is there any way we can resolve the categories dispute in public?! Not any object-level gender stuff which you don't and shouldn't care about, just the philosophy-of-language part.

My grievance against you is very simple. You are on the public record claiming that:

you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.

I claim that this is false. I think I am standing in defense of truth when I insist on a word, brought explicitly into question, being used with some particular meaning, when I have an argument for why my preferred usage does a better job of "carving reality at the joints" and the one bringing my usage into question doesn't have such an argument. And in particular, "This word usage makes me sad" doesn't count as a relevant argument. I agree that words don't have intrinsic ontologically-basic meanings, but precisely because words don't have intrinsic ontologically-basic meanings, there's no reason to challenge someone's word usage except because of the hidden probabilistic inference it embodies.

Imagine one day David Gerard of /r/SneerClub said, "Eliezer Yudkowsky is a white supremacist!" And you replied: "No, I'm not! That's a lie." And imagine E.T. Jaynes was still alive and piped up, "You are ontologically confused if you think that's a false assertion. You're not standing in defense of truth if you insist on words, such as white supremacist, brought explicitly into question, being used with some particular meaning." Suppose you emailed Jaynes about it, and he brushed you off with, "But I didn't say you were a white supremacist; I was only targeting a narrow ontology error." In this hypothetical situation, I think you might be pretty upset—perhaps upset enough to form a twenty-one month grudge against someone whom you used to idolize?

I agree that pronouns don't have the same function as ordinary nouns. However, in the English language as actually spoken by native speakers, I think that gender pronouns do have effective "truth conditions" as a matter of cognitive science. If someone said, "Come meet me and my friend at the mall; she's really cool and you'll like her", and then that friend turned out to look like me, you would be surprised.

I don't see the substantive difference between "You're not standing in defense of truth (...)" and "I can define a word any way I want." [...]

[...]

As far as your public output is concerned, it looks like you either changed your mind about how the philosophy of language works, or you think gender is somehow an exception. If you didn't change your mind, and you don't think gender is somehow an exception, is there some way we can get that on the public record somewhere?!

As an example of such a "somewhere", I had asked you for a comment on my explanation, "Where to Draw the Boundaries?" (with non-politically-hazardous examples about dolphins and job titles) [...] I asked for a comment from Anna, and at first she said that she would need to "red team" it first (because of the political context), and later she said that she was having difficulty for other reasons. Okay, the clarification doesn't have to be on my post. I don't care about credit! I don't care whether or not anyone is sorry! I just need this trivial thing settled in public so that I can stop being in pain and move on with my life.

As I mentioned in my Tweets today, I have a longer and better explanation than "... Boundaries?" mostly drafted. (It's actually somewhat interesting; the logarithmic score doesn't work as a measure of category-system goodness because it can only reward you for the probability you assign to the exact answer, but we want "partial credit" for almost-right answers, so the expected squared error is actually better here, contrary to what you said in the "Technical Explanation" about what Bayesian statisticians do). [...]

The only thing I've been trying to do for the past twenty-one months is make this simple thing established "rationalist" knowledge:

(1) For all nouns N, you can't define N any way you want, for at least 37 reasons.

(2) Woman is such a noun.

(3) Therefore, you can't define the word woman any way you want.

(Note, this is totally compatible with the claim that trans women are women, and trans men are men, and nonbinary people are nonbinary! It's just that you have to argue for why those categorizations make sense in the context you're using the word, rather than merely asserting it with an appeal to arbitrariness.)

This is literally modus ponens. I don't understand how you expect people to trust you to save the world with a research community that literally cannot perform modus ponens.

[...] See, I thought you were playing on the chessboard of being correct about rationality. Such that, if you accidentally mislead people about your own philosophy of language, you could just ... issue a clarification? I and Michael and Ben and Sarah and ["Riley"] and Jessica wrote to you about this and explained the problem in painstaking detail, and you stonewalled us. Why? Why is this so hard?!

[...]

No. The thing that's been driving me nuts for twenty-one months is that I expected Eliezer Yudkowsky to tell the truth. I remain,

Your heartbroken student,
Zack M. Davis

I followed it with another email after I woke up the next morning:

To: Eliezer Yudkowsky <[redacted]>
Cc: Anna Salamon <[redacted]>
Date: Sunday 13 September 2020 11:02 a.m.
Subject: Re: out of patience

[...] The sinful and corrupted part wasn't the initial Tweets; the sinful and corrupted part is this bullshit stonewalling when your Twitter followers and me and Michael and Ben and Sarah and ["Riley"] and Jessica tried to point out the problem. I've never been arguing against your private universe [...]; the thing I'm arguing against in "Where to Draw the Boundaries?" (and my unfinished draft sequel, although that's more focused on what Scott wrote) is the actual text you actually published, not your private universe.

[...] you could just publicly clarify your position on the philosophy of language the way an intellectually-honest person would do if they wanted their followers to have correct beliefs about the philosophy of language?!

You wrote:

Using language in a way you dislike, openly and explicitly and with public focus on the language and its meaning, is not lying.

Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to.

The problem with "it's a policy debate about how to use language" is that it completely elides the issue that some ways of using language perform better at communicating information, such that attempts to define new words or new senses of existing words should come with a justification for why the new sense is useful for conveying information, and that is a matter of Truth. Without such a justification, it's hard to see why you would want to redefine a word except to mislead people with strategic equivocation.

It is literally true that Eliezer Yudkowsky is a white supremacist (if I'm allowed to define "white supremacist" to include "someone who once linked to the 'Race and intelligence' Wikipedia page in a context that implied that it's an empirical question").

It is literally true that 2 + 2 = 6 (if I'm allowed to define '2' as •••-many).

You wrote:

The more technology advances, the further we can move people towards where they say they want to be in sexspace. Having said this we've said all the facts.

That's kind of like defining Solomonoff induction, and then saying, "Having said this, we've built AGI." No, you haven't said all the facts! Configuration space is very high-dimensional; we don't have access to the individual points. Trying to specify the individual points ("say all the facts") would be like what you wrote about in "Empty Labels"—"not just that I can vary the label, but that I can get along just fine without any label at all." Since that's not possible, we need to group points in the space together so that we can use observations from the coordinates that we have observed to make probabilistic inferences about the coordinates we haven't. But there are mathematical laws governing how well different groupings perform, and those laws are a matter of Truth, not a mere policy debate.

[...]

But if behavior at equilibrium isn't deceptive, there's just no such thing as deception; I wrote about this on Less Wrong in "Maybe Lying Can't Exist?!" (drawing on the academic literature about sender–receiver games). I don't think you actually want to bite that bullet?

In terms of information transfer, there is an isomorphism between saying "I reserve the right to lie 5% of the time about whether something is a member of category C" and adopting a new definition of C that misclassifies 5% of instances with respect to the old definition.

Like, I get that you're ostensibly supposed to be saving the world and you don't want randos yelling at you in your email about philosophy. But I thought the idea was that we were going to save the world by means of doing unusually clear thinking?

Scott wrote (with an irrelevant object-level example redacted): "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life." (Okay, he added a clarification after I spent Christmas yelling at him; but I think he's still substantially confused in ways that I address in my forthcoming draft post.)

You wrote: "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning."

I think I've argued pretty extensively this is wrong! I'm eager to hear counterarguments if you think I'm getting the philosophy wrong. But ... "people live in different private universes" is not a counterargument.

It makes sense that you don't want to get involved in gender politics. That's why I wrote "... Boundaries?" using examples about dolphins and job titles, and why my forthcoming post has examples about bleggs and artificial meat. This shouldn't be expensive to clear up?! This should take like, five minutes? (I've spent twenty-one months of my life on this.) Just one little ex cathedra comment on Less Wrong or somewhere (it doesn't have to be my post, if it's too long or I don't deserve credit or whatever; I just think the right answer needs to be public) affirming that you haven't changed your mind about 37 Ways Words Can Be Wrong? Unless you have changed your mind, of course?

I can imagine someone observing this conversation objecting, "[...] why are you being so greedy? We all know the real reason you want to clear up this philosophy thing in public is because it impinges on your gender agenda, but Eliezer already threw you a bone with the 'there's probably more than one type of dysphoria' thing. That was already a huge political concession to you! That makes you more than even; you should stop being greedy and leave Eliezer alone."

But as I explained in my reply, the reason I think that argument is wrong is that the whole mindset of public-arguments-as-political-favors is crazy. The fact that we're having this backroom email conversation at all (instead of just being correct about the philosophy of language on Twitter) is corrupt! I don't want to strike a deal in a political negotiation; I want shared maps that reflect the territory. I thought that's what this "rationalist community" thing was supposed to do? Is that not a thing anymore? If we can't do the shared-maps thing when there's any hint of political context (such that now you can't clarify the categories thing, even as an abstract philosophy issue about bleggs, because someone would construe that as taking a side on whether trans people are Good or Bad), that seems really bad for our collective sanity?! (Where collective sanity is potentially useful for saving the world, but is at least a quality-of-life improver if we're just doomed to die in 15 years no matter what.)

I really used to look up to you. In my previous interactions with you, I've been tightly cognitively constrained by hero-worship. I was already so starstruck that Eliezer Yudkowsky knows who I am, that the possibility that Eliezer Yudkowsky might disapprove of me, was too terrifying to bear. I really need to get over that, because it's bad for me, and it's really bad for you. I remain,

Your heartbroken student,
Zack M. Davis

These emails were pretty reckless by my usual standards. (If I was entertaining some hope of serving as a mediator between the Caliphate and Vassar's splinter group after the COVID lockdowns were over, this outburst wasn't speaking well to my sobriety.) But as the subject line indicates, I was just—out of patience. I had spent years making all the careful arguments I could make. What was there left for me to do but scream?

The result of this recklessness was ... success! Without disclosing anything from any private conversations that may or may not have occurred, Yudkowsky did publish a clarification on Facebook, that he had meant to criticize only the naïve essentialism of asserting that a word Just Means something and that anyone questioning it is Just Lying, and not the more sophisticated class of arguments that I had been making.

In particular, the post contained this line:

you are being the bad guy if you try to shut down that conversation by saying that "I can define the word 'woman' any way I want"

There it is! A clear ex cathedra statement that gender categories are not an exception to the general rule that categories aren't arbitrary. (Only 1 year and 8 months after asking for it.) I could quibble with some of Yudkowsky's exact writing choices, which I thought still bore the signature of political squirming,[32] but it would be petty to dwell on quibbles when the core problem had been addressed.

I wrote to Michael, Ben, Jessica, Sarah, and "Riley", thanking them for their support. After successfully bullying Scott and Eliezer into clarifying, I was no longer at war with the robot cult and feeling a lot better (Subject: "thank-you note (the end of the Category War)").

I had a feeling, I added, that Ben might be disappointed with the thank-you note insofar as it could be read as me having been "bought off" rather than being fully on the side of clarity-creation. But I contended that not being at war actually made it emotionally easier to do clarity-creation writing. Now I would be able to do it in a contemplative spirit of "Here's what I think the thing is actually doing" rather than in hatred with flames on the side of my face.

A Private Catastrophe (December 2020)

There's a dramatic episode that would fit here chronologically if this were an autobiography (which existed to tell my life story), but since this is a topic-focused memoir (which exists because my life happens to contain this Whole Dumb Story which bears on matters of broader interest, even if my life would not otherwise be interesting), I don't want to spend more wordcount than is needed to briefly describe the essentials.

I was charged by members of the extended Michael Vassar–adjacent social circle with the duty of taking care of a mentally-ill person at my house on 18 December 2020. (We did not trust the ordinary psychiatric system to act in patients' interests.) I apparently did a poor job, and ended up saying something callous on the care team group chat after a stressful night, which led to a chaotic day on the nineteenth, and an ugly falling-out between me and the group. The details aren't particularly of public interest.

My poor performance during this incident weighs on my conscience particularly because I had previously been in the position of being crazy and benefiting from the help of my friends (including many of the same people involved in this incident) rather than getting sent back to psychiatric prison ("hospital", they call it a "hospital"). Of all people, I had a special debt to "pay it forward", and one might have hoped that I would also have special skills, that having been on the receiving end of a non-institutional psychiatric tripsitting operation would help me know what to do on the giving end. Neither of those panned out.

Some might appeal to the proverb "All's well that ends well", noting that the person in trouble ended up recovering, and that, while the stress of the incident contributed to a somewhat serious relapse of my own psychological problems on the night of the nineteenth and in the following weeks, I ended up recovering, too. But recovering normal functionality after a traumatic episode doesn't imply a lack of other lasting consequences (to the psyche, to trusting relationships, &c.). I am therefore inclined to dwell on another proverb, "A lesson is learned but the damage is irreversible."

A False Dénouement (January 2021)

I published "Unnatural Categories Are Optimized for Deception" in January 2021.

I wrote back to Abram Demski regarding his comments from fourteen months before: on further thought, he was right. Even granting my point that evolution didn't figure out how to track probability and utility separately, the fact that it didn't (as Abram had pointed out) meant that not tracking them separately could still be an effective AI design. Just because evolution takes shortcuts that human engineers wouldn't take didn't mean shortcuts are "wrong". (Rather, there are laws governing which kinds of shortcuts work.)

Abram was also right that it would be weird if reflective coherence was somehow impossible: the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'its own' code." In that light, it made sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about—as sacrilegious as that felt to type.

And yet, somehow, "have accurate beliefs" seemed more fundamental than other convergent instrumental subgoals like "seek power and resources". Could this be made precise? As a stab in the dark, was it possible that the theorems on the ubiquity of power-seeking might generalize to a similar conclusion about "accuracy-seeking"? If it didn't, the reason why it didn't might explain why accuracy seemed more fundamental.


And really, that should have been the end of the story. At the cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word woman any way you like. This suggested poor cognitive returns on investment from interacting with the "rationalist" community—if it took that much effort to correct a problem I had noticed myself, I couldn't expect them to help me with problems I couldn't detect—but I didn't think I was entitled to more. If I hadn't been further provoked, I wouldn't have had occasion to continue waging the robot-cult religious civil war.

It turned out that I would have occasion to continue waging the robot-cult religious civil war. (To be continued.)


  1. The original quote says "one hundred thousand straights" ... "gay community" ... "gay and lesbian" ... "franchise rights on homosexuality" ... "unauthorized queer." ↩︎

  2. Although Sarah Constantin and "Riley" had also been involved in reaching out to Yudkowsky and were included in many subsequent discussions, they seemed like more marginal members of the group that was forming. ↩︎

  3. At least, not blameworthy in the same way as someone who committed the same violence as an individual. ↩︎

  4. The Sequences post referenced here, "Your Price for Joining", argues that rationalists are too prone to "take their ball and go home" rather than tolerating imperfections in a collective endeavor. To combat this, Yudkowsky proposes a norm:

    If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile.

    I claim that I was meeting this standard: I was willing to personally fix the philosophy-of-categorization issue no matter how much effort it took, and the issue did arise from outright bad faith. ↩︎

  5. It was common practice in our subculture to name group houses. My apartment was "We'll Name It Later." ↩︎

  6. I'm not giving Mike a pseudonym because his name is needed for this adorable anecdote to make sense, and I'm not otherwise saying sensitive things about him. ↩︎

  7. Anna was a very busy person who I assumed didn't always have time for me, and I wasn't earning-to-give anymore after my 2017 psych ward experience made me more skeptical about institutions (including EA charities) doing what they claimed. Now that I'm not currently dayjobbing, I wish I had been somewhat less casual about spending money during this period. ↩︎

  8. I was still deep enough in my hero worship that I wrote "plausibly" in an email at the time. Today, I would not consider the adverb necessary. ↩︎

  9. I particularly appreciated Said Achmiz's defense of disregarding community members' feelings, and Ben's commentary on speech acts that lower the message length of proposals to attack some group. ↩︎

  10. No one ever seems to be able to explain to me what this phrase means. ↩︎

  11. For one important disanalogy, perps don't gain from committing manslaughter. ↩︎

  12. The draft was hidden, but the API apparently didn't filter out comments on hidden posts, and the thread was visible on the third-party GreaterWrong site; I filed a bug. ↩︎

  13. Arnold qualifies this in the next paragraph:

    [in public. In private things are much easier. It's also the case that private channels enable collusion—that was an update [I]'ve made over the course of the conversation. ]

    Even with the qualifier, I still think this deserves a "(!!)". ↩︎

  14. An advantage of mostly living on the internet is that I have logs of the important things. I'm only able to tell this Whole Dumb Story with this much fidelity because for most of it, I can go back and read the emails and chatlogs from the time. Now that audio transcription has fallen to AI, maybe I should be recording more real-life conversations? In the case of this meeting, supposedly one of the Less Wrong guys was recording, but no one had it when I asked in October 2022. ↩︎

  15. Rationality and Effective Altruism Community Hub ↩︎

  16. Oddly, Kelsey seemed to think the issue was that my allies and I were pressuring Yudkowsky to make a public statement, which he supposedly never does. From our perspective, the issue was that he had made a statement and it was wrong. ↩︎

  17. As I had explained to him earlier, Alexander's famous post on the noncentral fallacy condemned the same shenanigans he praised in the context of gender identity: Alexander's examples of the noncentral fallacy had been about edge-cases of a negative-valence category being inappropriately framed as typical (abortion is murder, taxation is theft), but "trans women are women" was the same maneuver with a positive-valence category.

    In "Does the Glasgow Coma Scale exist? Do Comas?" (published just three months before "... Not Man for the Categories"), Alexander defends the usefulness of "comas" and "intelligence" in terms of their predictive usefulness. (The post uses the terms "predict", "prediction", "predictive power", &c. 16 times.) He doesn't say that the Glasgow Coma Scale is justified because it makes people happy for comas to be defined that way, because that would be absurd. ↩︎

  18. The last of the original Sequences had included a post, "Rationality: Common Interest of Many Causes", which argued that different projects should not regard themselves "as competing for a limited supply of rationalists with a limited capacity for support; but, rather, creating more rationalists and increasing their capacity for support." It was striking that the "Kolmogorov Option"-era Caliphate took the opposite policy: throwing politically unpopular projects (like autogynephilia- or human-biodiversity-realism) under the bus to protect its own status. ↩︎

  19. The original TechCrunch comment would seem to have succumbed to linkrot, but it was quoted by Moldbug and others. ↩︎

  20. The pleonasm here ("to me" being redundant with "I thought") is especially galling coming from someone who's usually a good writer! ↩︎

  21. At best, "I" statements make sense in a context where everyone's speech is considered part of the "official record". Wrapping controversial claims in "I think" removes the need for opponents to immediately object for fear that the claim will be accepted onto the shared map. ↩︎

  22. Specifically, altruism towards the author. Altruistic benefits to other readers are a reason for criticism to be public. ↩︎

  23. That is, there's an analogy between economically valuable labor, and intellectually productive criticism: if you accept the necessity of paying workers money in order to get good labor out of them, you should understand the necessity of awarding commenters status in order to get good criticism out of them. ↩︎

  24. On the other hand, there's a case to be made that the connection between white-collar crime and the problems we saw with the community is stronger than it first appears. Trying to describe the Blight to me in April 2019, Ben wrote, "People are systematically conflating corruption, accumulation of dominance, and theft, with getting things done." I imagine a rank-and-file EA looking at this text and shaking their head at how hyperbolically uncharitable Ben was being. Dominance, corruption, theft? Where was his evidence for these sweeping attacks on these smart, hard-working people trying to make the world a better place?

    In what may be a relevant case study, three and a half years later, the FTX cryptocurrency exchange founded by effective altruists as an earning-to-give scheme turned out to be an enormous fraud à la Enron and Madoff. In Going Infinite, Michael Lewis's book on FTX mastermind Sam Bankman-Fried, Lewis describes Bankman-Fried's "access to a pool of willing effective altruists" as the "secret weapon" of FTX predecessor Alameda Research: Wall Street firms powered by ordinary greed would have trouble trusting employees with easily-stolen cryptocurrency, but ideologically-driven EAs could be counted on to be working for the cause. Lewis describes Alameda employees seeking to prevent Bankman-Fried from deploying a trading bot with access to $170 million for fear of losing all that money "that might otherwise go to effective altruism". Zvi Mowshowitz's review of Going Infinite recounts Bankman-Fried in 2017 urging Mowshowitz to disassociate with Ben because Ben's criticisms of EA hurt the cause. (It's a small world.)

    Rank-and-file EAs can contend that Bankman-Fried's crimes have no bearing on the rest of the movement, but insofar as FTX looked like a huge EA success before it turned out to all be a lie, Ben's 2019 complaints are looking prescient to me in retrospect. (And insofar as charitable projects are harder to evaluate than whether customers can withdraw their cryptocurrency, there's reason to fear that other apparent EA successes may also be illusory.) ↩︎

  25. The ungainly title was softened from an earlier draft following feedback from the posse; I had originally written "... Surprisingly Useless". ↩︎

  26. On this point, it may be instructive to note that a 2023 survey found that only 60% of the UK public knew that "trans women" were born male. ↩︎

  27. Enough to not even scare-quote the term here. ↩︎

  28. I had identified three classes of reasons not to carve reality at the joints: coordination (wanting everyone to use the same definitions), wireheading (making the map look good, at the expense of it failing to reflect the territory), and war (sabotaging someone else's map to make them do what you want). Michael's proposal would fall under "coordination" insofar as it was motivated by the need to use the same categories as everyone else. (Although you could also make a case for "war" insofar as the civil-rights model winning entailed that adherents of the TERF or medical models must lose.) ↩︎

  29. Reasonable trans people aren't the ones driving the central tendency of the trans rights movement. When analyzing a wave of medical malpractice on children, I think I'm being literal in attributing causal significance to a political motivation to affirm the narcissistic delusions of (some) guys like me, even though not all guys like me are delusional, and many guys like me are doing fine maintaining a non-guy social identity without spuriously dragging children into it. ↩︎

  30. Oskar Pfungst, Clever Hans (The Horse Of Mr. Von Osten): A Contribution To Experimental Animal and Human Psychology, translated from the German by Carl L. Rahn ↩︎

  31. I anticipate that some readers might object to the "trying to trick me into cutting my dick off" characterization. But as Ben had pointed out earlier, we have strong reason to believe that an information environment of ubiquitous propaganda was creating medical transitions on the margin. I think it made sense for me to use emphatic language to highlight what was actually at stake here! ↩︎

  32. The way that the post takes pains to cast doubt on whether someone who is alleged to have committed the categories-are-arbitrary fallacy is likely to have actually committed it ("the mistake seems like it wouldn't actually fool anybody or be committed in real life, I am unlikely to be sympathetic to the argument", "But be wary of accusing somebody of planning to do this, if you haven't documented them actually doing it") is in stark contrast to the way that "A Human's Guide to Words" had taken pains to emphasize that categories shape cognition regardless of whether someone is consciously trying to trick you ("drawing a boundary in thingspace is not a neutral act [...] Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind"). I'm suspicious that the change in emphasis reflects the need to not be seen as criticizing the "pro-trans" coalition, rather than any new insight into the subject matter.

    The first comment on the post linked to "... Not Man for the Categories". Yudkowsky replied, "I assumed everybody reading this had already read https://wiki.lesswrong.com/wiki/A_Human's_Guide_to_Words", a non sequitur that could be taken to suggest (but did not explicitly say) that the moral of "... Not Man for the Categories" was implied by "A Human's Guide to Words" (in contrast to my contention that "... Not Man for the Categories" was getting it wrong). ↩︎

Comments

I don't have a lot to say, but I feel like mentioning that I read the whole thing, enjoyed it, and agreed with you, including on the point that if rationalists can't agree with your philosophy of language because of instrumental motivations then it's a problem for us as a group of people who try to reason clearly without such influences.

Viliam

This is a fascinating story about obsession written in the first-person perspective. It is also too long to get an object-level reply, unless one decides to spend an entire day composing one. A meaningful meta-level reply, such as "dude, relax, and get some psychological help," will probably get me classified as an enemy, and will be interpreted as further evidence about how sick and corrupt the mainstream-rationalist society is.

Honestly, I don't care about your feud, because it became too complicated for me to understand. Is there a way to summarize this shortly? Eliezer disagreed with you about something, or maybe you just interpreted something he wrote as a disagreement with you... and now your soul can't find peace until he admits that he was wrong and you were right about things that are too meta for me to understand wtf you are talking about...

You had an erotic fantasy that became a centerpiece of your mental landscape, and you insist that it contains the actual answer to the mysteries of trans-sexuality, and you are frustrated that other people (especially rationalists) do not see it the same way. Well, maybe it does, maybe it does not. Maybe your fantasy is typical, maybe it...

Vaniver

Is there a way to summarize this shortly? Eliezer disagreed with you about something, or maybe you just interpreted something he wrote as a disagreement with you... and now your soul can't find peace until he admits that he was wrong and you were right about things that are too meta for me to understand wtf you are talking about...

Here's an attempt.

Sometimes people have expectations of each other, like "you won't steal objects from my house".  Those expectations get formed by both explicit and implicit promises. Violating those expectations is often a big deal, not just to the injured party but also to third parties--someone who stole from Alice might well steal from you, too.

To the extent this community encouraged expectations of each other, they were about core epistemic virtues and discussion practices. People will try to ensure their beliefs are consistent with their other beliefs; they won't say things without believing them; they'll share evidence when they can; when they are bound to be uncooperative, they at least explain how and why they'll be uncooperative, and so on. 

[For example, I keep secrets because I think information can be owned, even tho this is coopera...

I think this is a pretty good summary.

I do want to… disagree? quibble? (I am not actually sure how to characterize this)… on one bit, though:

I do think it makes sense to have a heresy budget

I agree that it makes sense to have a heresy budget, but I think that it’s important to distinguish between heresies that directly affect you and/or other people in your own community, and heresies that you can “safely”[1] ignore.

For example, suppose that I disagree with the mainstream consensus on climate change. But I, personally, cannot do anything to affect government policy related to climate change, or otherwise alter how society treats the issue. Maybe our community as a whole can have some effect on such things… but probably not. And there’s nothing to be done about it on an individual basis. So if I, and the rest of the rationalist community, mostly avoids talking about the subject (and, if forced to discuss it, we mouth the necessary platitudes and quickly change the subject), then relatively little is lost.

Now suppose that the subject is something like… distortions in reporting, by municipal governments, of violent crime statistics. Getting the wrong answer on a question like that...

Algon

I mean, this is probably correct. But my problem is that despite finding a lot of Zack's claims on this topic in the past quite reasonable, I find the discussion over the last year or two exhausting to engage with. This post is 20k+ words alone! I'm not reading that. And there's no article I know of which is a reasonably good summary of what the heck is going on. So I'm not observing what Zack's saying, let alone deciding and acting. Right now, I'm struggling to orient.

By the way, thank you for writing this comment. Same goes for @tailcalled and @Vaniver's comments to this post. If only @Zack_M_Davis would write posts as concise!

EDIT: Changed Zack_D to Zack_M_Davis bc of Rafael Harth's correct response that Zack_D has not written any posts longer than 2k words.

To some degree, litigating deceptive behavior from Eliezer and Scott is just inherently going to be exhausting because it's most in their interest to make the deception confusing.

Rafael Harth

To be fair, @Zack_D hasn't written any posts longer than 2000 words!

I agree that Zack's point can sort of be unclear. To me his vibe doesn't come off as mostly focusing on trans etiology, but instead as a three-step argument about what the rationalist community should acknowledge, with most of the focus being on the first step:

  • You can't just use redefinitions to make trans women similar to cis women.
  • Trans women start out much more similar to cis men than to cis women, and transitioning doesn't do very much.
  • Therefore transness causes a lot of political problems.

However, this doesn't match Zack's official position. While his official position starts the same way with arguing about definition, the followup seems to be that the conflict exists because the rationalist community is trying to make him transition for bad reasons, e.g.:

Who would have thought getting @ESYudkowsky's robot cult to stop trying to trick me into cutting my dick off (independently of the empirical facts determining whether or not I should cut my dick off) would involve so much math?? OK, I guess the math part isn't surprising, but—

Or

I didn't think it was fair to ordinary people to expect them to go as deep into the philosophy-of-language weeds as I could before being a

[...]
Viliam

Definitions are on a map. Similarity means "having some property in common", which in general is in the territory, but the perception of similarity depends on which properties we are noticing, so it is influenced by the map. (For a mathematician, an ellipse is similar to a hyperbola, because both are conic sections. For a non-mathematician, the ellipse is a lame circle, and the hyperbola is two crooked lines; not similar.) You can't use a redefinition to conjure a property that didn't exist before, but you can use it to draw attention to an already existing property. (We have already successfully "redefined" dolphins to mammals. Previously they were considered fish. The fact that they live in water did not change.) So the question is, which properties do trans women and cis women have in common (this cannot be redefined) and which properties we are paying attention to (this can be redefined).

Maybe yes, maybe no; where is the evidence? (I am focusing on the first part of the sentence. I assume that by "transitioning" you refer to the act of coming out as trans, not to hormonal therapy.)

Speaking for myself, I don't care whether Zack transitions or what his reasons would be. Perhaps we should make a poll, and then Zack might find out that the people who are "trying to make him transition for bad reasons" ("trying to trick me into cutting my dick off") are actually quite rare, maybe completely nonexistent.

By this logic, any politeness is wireheading. If you want to know whether you are passing, perhaps you could ask directly. In that case, I agree that lying would be a sin against rationality. But in the usual social situation... if I meet a cis woman who looks not very feminine, I am not giving her unsolicited feedback either. Too bad we can't predict whether Zack would pass before he actually goes ahead and transitions.

Yeah, this is exactly my problem with Zack's statements. I am okay with him making plausible-sounding statements about himself, but when h
Martin Randall

As a historical analogy, imagine a feminist saying that society is trying to make her into a housewife for bad reasons. ChatGPT suggests Simone de Beauvoir (1908-1986). Some man replies that "Speaking for myself, I don't care whether Simone becomes a housewife or what her reasons would be. Perhaps we should make a poll, and then Simone might find out that the people who are 'trying to make her a housewife for bad reasons' are actually quite rare, maybe completely nonexistent". Well, probably very few people were still trying to make Simone into a housewife after she started writing thousands of words on feminism! But also, society can collectively pressure Simone to conform even if very few people know who Simone is, let alone have an opinion on her career choices. Many other analogies possible, I picked this one for aesthetic reasons, please don't read too much into it.
TekhneMakre

What does this mean? It seems like if the original issue is something about whether to call an XY-er "she" if the XY-er asks for that, then, that's sort of like a redefinition and sort of not like a redefinition... Is the claim something like: This one is a set of empirical, objective claims.... but elsewhere you said: So I guess that was representing your viewpoint, not Zack's?
tailcalled

My understanding of Zack's position is that he fixated on this because it's something with a clear right answer that has been documented in the Sequences, and that he was really just using this as the first step to getting the rationalist community to not make him transition. Arguably what "it is" depends on why people are doing it. Zack has written extensive responses to different justifications for doing it. I can link you a relevant response and summarize it, but in order to do that I need to know what your justification is. The latter was representing my viewpoint whereas the former was an attempt at representing Zack's viewpoint, but also I don't think the two views are contradictory with each other?
M. Y. Zuo

This still doesn't seem to address the root issue that Viliam raised, of why should a random passing reader care enough about someone's gender self-perceptions/self-declarations/etc... to actually read such long rambling essays? Caring about someone's sex maybe, since there's a biological basis that is falsifiable. But gender is just too wishy-washy in comparison for some random passing reader to plausibly care so much and spend hours of their time on this.

See, this is an example of the bad faith engagement that lies close to the core of this controversy.

People who do not care about a post click away from it. They do not make picket signs about how much they don't care and socially shame the poster for making posts that aren't aimed at random passing readers. Whether a post is aimed at random passing readers is an abysmally poor criterion for evaluating the merits of posts in a forum that is already highly technical and full of posts for specialist audiences, and in point of fact several readers did care enough to spend hours of their time on it.

M. Y. Zuo

This seems incoherent considering I already addressed Zack's point, in a direct reply, 3d ago, just one comment chain down, along with several other folks weighing in. So I'll assume you haven't read them. Here's my other comment reposted here: The 'random passing reader' refers to all readers within a few standard deviations of the average, but not to literally every single reader.  i.e. Those who have no strong views regarding Zack either way. Hence it's unsurprising, and implied, that there are outliers.  Are you confused about this terminology?
Cornelius Dybdahl

That incoherence you speak of is precisely what my previous comment pointed out, and it pertains to your argument rather than mine. As my previous comment explained, engaging with a post even just to call it uninteresting undermines any proclamation that you do not care about the post. If your engagement is more substantive than this, then that only further calls into question the need to shame the author for making posts that random passing readers might not care about.

Edited to add: If the outliers are sufficiently many to generate this much discussion, and they include such notable community members as Said Achmiz, then the critique that random passing readers might not spend hours on it is clearly asinine, regardless of the exact number of standard deviations you include. I am not "confused about this terminology", I am just calling out your bad faith engagement.

why should a random passing reader care enough [...] to actually read such long rambling essays?

I mean, they probably shouldn't? When I write a blog post, it's because I selfishly had something I wanted to say. Obviously, I understand that people who think it's boring aren't going to read it! Not everyone needs to read every blog post! That's why we have a karma system, to help people make prioritization decisions about what to read.

tailcalled

I thought people were supposed to care because you were highlighting systematic political distortions in the rationalist community? I didn't mention that part in my other comment because Viliam seemed confused about the inner part of the conflict whereas this seemed like the outer part of the conflict.
Zack_M_Davis

I mean, yes, people who care about this alleged "rationalist community" thing might be interested in information about it being biased (and I wrote this post with such readers in mind), but if someone is completely uninterested in the "rationalist community" and is only on this website because they followed a link to an article about information theory, I'd say that's a pretty good life decision!
M. Y. Zuo

They might be interested in information presented in a concise, high-signal way. The way you've presented it practically guarantees that nearly every passing reader will not. i.e., the average reader 'might be interested' only to an average degree.

I certainly haven't read even a third of your writing about this. But... I continue to not really get the basic object-level thing. Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains? Or is that not a crux for anything?

Separately, isn't the obvious correct position simply: there's a bunch of objective stuff about the differences between men and women; there's uncertainty about exactly how these clusters overlap / are violated in real life, e.g. as described in the previous paragraph; and separately there's a bunch of conduct between people that people modulate depending on whether they are interacting with a man or a woman; and now that there are more people openly not falling neatly into the two clusters, there's some new questions about conduct; and some of the conduct questions involve factual questions, for which calling a particular XY-er a woman would be false, and some of the conduct questions involve factual questions (e.g. the brain thing) for which calling a particular XY-er a woman would be true, and some of the conduct questions are instead mainly about free choices, like whether or not to ...

Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains? Or is that not a crux for anything?

Focusing on brains seems like the wrong question to me. Brains matter due to their effect on psychology, and psychology is easier to observe than neurology.

Even if psychology is similar in some ways, it may not be similar in the ways that matter though, and in fact the ways that matter need not be restricted to psychology. Even if trans women are psychologically the same as cis women, trans women in women's sports is still a contentious issue.

There are some fairly big ways in which trans women are not similar to cis women though, for instance trans women tend to be mostly sexually attracted to women, whereas cis women tend to be mostly sexually attracted to men. Whether this is policy-relevant is I guess up to you, but it certainly has a lot of high-impact implications.

2TekhneMakre4mo
Ok. (I continue to not know what the basic original object-level disagreement is!)
2tailcalled4mo
Possibly this explanation helps? As in, basically he's been focusing on the first step of a multi-step argument, though it's sort of unclear what the last step(s) are supposed to add up to.

I continue to not really get the basic object-level thing. Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains? 

That's a bit like saying that it's "factually unknown" whether there's an invisible dragon in the garage. 

Neuroscientists measure a lot of things about brains; it's only "factually unknown" if you define "develop like female brains" in a way that doesn't show up in any metric that neuroscientists can measure.

Or is that not a crux for anything?

Rationalists generally aren't very favorable to god of the gaps arguments, so it's unclear why gender of the gaps should be a crux given our existing neuroscience. 

If you truly believe that there's a gap here, then why is there a gap? One straightforward reason why there might be a gap is that any neuroscientist who would research this would be canceled. If there's a gap, that's a sign of an unhealthy epistemic environment.

Part of what Zack is writing about is that this unhealthy epistemic environment was harmful to him when trying to figure out whether or not Zack is a woman or a man.

3greylag4mo
Hm. Now, I thought I'd heard of gender dysphoria/transgender/etc. showing up in brain imaging (e.g. https://pubmed.ncbi.nlm.nih.gov/26766406/), and while "develop like female brains" would be bounding happily ahead of the evidence, that seems at least like sporadic snorting noises from the garage in the night time.
7tailcalled4mo
I can't confidently make claims about all brain imaging studies as I haven't read enough of them, but as a general rule studies that claim to find links between neurology and psychological traits are fake (same problem as candidate gene studies, plus maybe also the problem of "it's not clear we're looking at the right variables") unless the trait in question is g (IQ). This applies not just to the trans brain studies, but also to the studies claiming to find the sex differences in brain structure (while large sex differences in brain structure do exist, the ones that have been found so far appear to be completely uncorrelated with psychological traits that have sex differences once you control for sex, so they do not mediate the relationship between sex and those psychological traits).
8tailcalled4mo
Oh, and I guess I should add: if we do insist on talking about brain neurology in the context of transness, there is one set of studies I expect to replicate, because it is conceptually very simple. The idea is to take a bunch of cis men and cis women, train a predictor to classify people's sex from their brain structure, and then apply that predictor to trans women. This is essentially a multivariate approach, which I'd expect Zack to like because he talks a lot about multivariate approaches.

I think I've seen three or four studies that do this, but the two I have at hand right now are "Sex Matters: A Multivariate Pattern Analysis of Sex- and Gender-Related Neuroanatomical Differences in Cis- and Transgender Individuals Using Structural Magnetic Resonance Imaging" and "Regional volumes and spatial volumetric distribution of gray matter in the gender dysphoric brain". The general pattern from the studies I've read is that prior to transitioning, trans women have male brains, and after having been on HRT for a while, trans women's brain structure shifts to be in the middle between cis women and cis men (on the sex-separating axis). I don't know if trans women's brains change even more given even longer time; it seems conceivable that they do.

But anyway, most noteworthy about these studies is that this applies to both HSTSs and AGPTSs. I.e., HSTS MtFs (whom Zack sees as "true transsexuals") have male brains prior to transitioning. (See the second of my links for more info on this.) This illustrates why I am not enthusiastic about arguments based on multivariate group-separating axes: HSTSs are clearly feminine in some sense, but this isn't the sense which gets emphasized when taking the neurological sex-separating axis. I'm not sure why Zack still regularly makes appeals to multivariate group differences, though. My best guess is that he doesn't pay attention to this, but he should be encouraged to answer for himself.
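(For concreteness, here is a schematic sketch of that multivariate approach, using synthetic placeholder data rather than anything from the actual studies; the group names and the "intermediate" third group are made up purely for illustration.)

```python
# Schematic sketch of the multivariate approach: learn a sex-separating axis
# from two reference groups, then see where a third group falls along it.
# All data here is synthetic placeholder data, not from the cited studies.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 200, 50  # subjects per group, number of (made-up) brain-structure features

# Two reference groups separated by a small mean shift, plus a hypothetical
# third group placed halfway along the shift for illustration.
shift = rng.normal(0, 0.3, size=d)
group_a = rng.normal(0, 1, size=(n, d)) + shift
group_b = rng.normal(0, 1, size=(n, d)) - shift
third_group = rng.normal(0, 1, size=(n, d)) + 0.5 * shift

X = np.vstack([group_a, group_b])
y = np.array([1] * n + [0] * n)  # 1 = group A, 0 = group B

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Project each group onto the learned separating axis (the decision function).
for name, data in [("group A", group_a), ("group B", group_b), ("third group", third_group)]:
    print(name, round(clf.decision_function(data).mean(), 2))
```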
3ChristianKl4mo
The fact that someone finds a brain pattern that describes gender dysphoria but thinks that brain pattern does not warrant the description of looking like female brain patterns, to me does not look like evidence pointing in the direction that gender dysphoria is associated with female brain patterns. Vul et al's voodoo neuroscience paper is also worth reading, to have some perspective on these kinds of findings. 
2TekhneMakre4mo
Are you claiming that Zack is claiming that there's no such thing as gender? Or that there's no objective thing? Or that there's nothing that would show up in brain scans? I continue to not know what the basic original object-level disagreement is!
3ChristianKl4mo
No, Zack does believe that there's something like gender. He believes that you are either male or female and that those categories are straightforwardly derived. You are the person who claims that there's something that is "factually unknown". For it to be factually unknown, it would have to not show up in the brain scans that people have already done.
1lalaithion4mo
What factual question is/was Zack trying to figure out? “Is a woman” or “is a man” are pure semantics, and if that’s all there is then… okay… but presumably there’s something else?
5Said Achmiz4mo
Given some referent—some definition, either intensional or extensional—of the word “man” (in other words, some discernible category with the label “man”), the question “is X a man” (i.e., “is X a member of this category labeled ‘man’”) is an empirical question. And “man”, like any commonly used word, can’t be defined arbitrarily. All of the above being the case, what do you mean by “pure semantics” such that your statement is true…?
2lalaithion4mo
Yeah, what factual question about empirical categories is/was Zack interested in resolving? Tabooing the words “man” and “woman”, since what I mean by semantics is “which categories get which label”. I’m not super interested in discussing which empirical category should be associated with the phonemes /mæn/, and I’m not super interested in the linguistic investigation of the way different groups of English speakers assign meaning to that sequence of phonemes, both of which I lump under the umbrella of semantics.
3Said Achmiz4mo
Zack has written very many words about this, including this very post, and the ones prior to it in the sequence; and also his other posts, on Less Wrong and on his blog. But other people are interested in these things (and related ones), as it turns out; and the question of why they have such interest, as well as many related questions, are also factual in nature. What’s more, “A Human’s Guide to Words” (which I linked to in the grandparent) explains why reassigning different words to existing categories is not arbitrary, but has consequences for our (individual and collective) epistemics. So even such choices cannot be dismissed by labeling them “semantics”.
1lalaithion4mo
I haven’t read everything Zack has written, so feel free to link me something, but almost everything I’ve read, including this post, includes far more intra-rationalist politicking than discussion of object level matters. I know other people are interested in those things. I specifically phrased my previous post in an attempt to avoid arguing about what other people care about. I can neither defend nor explain their positions.

Neither do I intend to dismiss or malign those preferences by labeling them semantics. That previous sentence is not to be read as a denial of ever labeling them semantics, but rather as a denial of thinking that semantics is anything to dismiss or malign. Semantics is a long and storied discipline in philosophy and linguistics. I took an entire college course on semantics. Nevertheless, I don’t find it particularly interesting.

I’ve read “A Human’s Guide to Words”. I understand you cannot redefine reality by redefining words. I am trying to step past any disagreement you and I might have regarding the definitions of words and figure out if we have disagreements about reality. I think you are doing the same thing I have seen Zack do repeatedly, which is to avoid engaging in actual disagreement and discussion, but instead repeatedly accuse your interlocutor of violating norms of rational debate.

So far nothing you have said is something I disagree with, except the implication that I disagree with it. If you think I’m lying to you, feel free to say so and we can stop talking. If our disagreement is merely “you think semantics is incredibly important and I find it mostly boring and stale”, let me know and you can go argue with someone who cares more than me. But the way that Zack phrases things makes it sound, to me, like he and I have some actual disagreement about reality which he thinks is deeply important for people considering transition to know. And as someone considering transition, if you or he or someone else can say that or link to that i

I haven’t read everything Zack has written, so feel free to link me something, but almost everything I’ve read, including this post, includes far more intra-rationalist politicking than discussion of object level matters.

Certainly:

https://www.lesswrong.com/posts/LwG9bRXXQ8br5qtTx/sexual-dimorphism-in-yudkowsky-s-sequences-in-relation-to-my

https://www.greaterwrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal

https://www.lesswrong.com/posts/RxxqPH3WffQv6ESxj/blanchard-s-dangerous-idea-and-the-plight-of-the-lucid

http://unremediatedgender.space/2018/Feb/the-categories-were-made-for-man-to-make-predictions/

http://unremediatedgender.space/2020/Nov/survey-data-on-cis-and-trans-women-among-haskell-programmers/

http://unremediatedgender.space/2020/Apr/book-review-human-diversity/

http://unremediatedgender.space/2019/Sep/does-general-intelligence-deflate-standardized-effect-sizes-of-cognitive-sex-differences/

Zack also has several posts which, although themselves written at a meta-level, nevertheless explain in great (and highly technical) detail why “is X a woman/man” (i.e., “to which of these two categories, no matter their labels, does X properly belong”) i... (read more)

6lalaithion4mo
I owe you an apology; you’re right that you did not accuse me of violating norms, and I’m sorry for saying that you did. I only intended to draw parallels between your focus on the meta level and Zack’s focus on the meta level, and in my hurry I erred in painting you and him with the same brush. I additionally want to clarify that I didn’t think you were accusing me of lying, but merely wanted to preemptively close off some of the possible directions this conversation could go.

Thank you for providing those links! I did see some of them on his blog and skipped over them because I thought, based on the first paragraph or title, they were more intracommunity discourse. I have now read them all. I found them mostly uninteresting. They focus a lot on semantics and on whether something is a lie or not, and neither of those are particularly motivating to me. Of the rest, they are focused on issues which I don’t find particularly relevant to my own personal journey, and while I wish that Zack felt like he was able to discuss these issues openly, I don’t really think people in the community disagreeing with him is some bizarre anti-truth political maneuvering.
3Said Achmiz4mo
Apology accepted! You’re quite welcome. Hmm. I continue to think that you are using the term “semantics” in a very odd way, but I suppose it probably won’t be very fruitful to go down that avenue of discussion… I imagine the answer to this one will depend on the details—which people, disagreeing on what specific matter, in what way, etc. Certainly it seems implausible that none of it is “political maneuvering” of some sort (which I don’t think is “bizarre”, by the way; really it’s quite the opposite—perfectly banal political maneuvering, of the sort you see all the time, especially these days… more sad to see, perhaps, for those of us who had high hopes for “rationality”, but not any weirder, for all that…).
1lalaithion4mo
I also consider myself as someone who had—and still has—high hopes for rationality, and so I think it’s sad that we disagree, not on the object level, but on whether we can trust the community to faithfully report their beliefs. Sure, some of it may be political maneuvering, but I mostly think it’s political maneuvering of the form of—tailoring the words, metaphors, and style to a particular audience, and choosing to engage on particular issues, rather than outright lying about beliefs. I don’t think I’m using “semantics” in a non-standard sense, but I may be using it in a more technical sense? I’m aware of certain terms which have different meanings inside of and outside of linguistics (such as “denotation”) and this may be one.
2tailcalled4mo
You would probably not include actual hyperlinks if you were literally saying this in the real world, so that makes this example disanalogous to the usual cases. (I do think the question would be meaningful in the usual cases, but adding hyperlinks seems like cheating as it binds the statement to a lot more information than there would otherwise be. It adds the same sort of information as you would be adding by tabooing the words.)
6Said Achmiz4mo
I added the hyperlinks for the benefit of any readers who have no idea what those terms mean. In a face-to-face conversation, if my interlocutor responded by asking “huh? ‘wolf spider’, ‘fishing spider’, what is that? I’ve never heard of these things”, then I could explain to them what the terms refer to; or we could use a smartphone or computer to access the very same Wikipedia pages which I linked to in my comment. In any case you may feel free to mentally strip out the hyperlinks—that will not change my point, which is that any good-faith interlocutor will understand from the quoted comment (possibly after asking for an explanation, to rectify a total lack of domain knowledge) that the terms “wolf spider” and “fishing spider” refer to a pair of disjoint categories, and that my inquiry is into the question of which (if either!) of the two categories any given actual spider ought properly to be placed in.

"that person, who wants to be treated in the way that people usually treat men"

Incidentally, one of the things I dislike about this framing is that gender stereotypes / scripts "go both ways". That is, it should be not just "treated like a man" but also "treat people like men do."

It was surprisingly impactful to tell myself and my parents I identified as male for purposes of elder care. Obviously I had the option to say "I will manage finances and logistics but not emotional or physical care labor" the whole time, but it was freeing to frame it as "well this is all my uncle was doing and no one thought he was defecting". 

[-]gwern4mo277

When I saw the latest zacpost was only ~25k words & 19 chapters, I was concerned, but then I skipped to the end and saw:

It turned out that I would have occasion to continue waging the robot-cult religious civil war. (To be continued.)

Phew! I guess he's OK.

[-]hwold4mo186

category boundaries should be drawn for epistemic and not instrumental reasons

 

Sounds very wrong to me. In my view, computationally unbounded agents don’t need categories at all; categories are a way for computationally bounded agents to approximate perfect Bayesian reasoning, and how to judge the quality of the approximation will depend on the agent’s goals — different agents with different goals will care differently about a similar error.

(It's actually somewhat interesting; the logarithmic score doesn't work as a measure of category-system goodness because it can only reward you for the probability you assign to the exact answer, but we want "partial credit" for almost-right answers, so the expected squared error is actually better here, contrary to what you said in the "Technical Explanation" about what Bayesian statisticians do)

Yes, exactly. When you’re at the point where you’re deciding between log-loss and MSE, you’re no longer doing pure epistemics; you’re entering the realm of decision theory. You’re crafting a measure of how good your approximation is, a measure that can and should be tailored to your specific goals as a rational agent. Log-loss and MSE are only two possibilities in a vast universe of possible such measures, ones that are quite generic and therefore not optimal for a given agent’s goals.
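(A toy numeric illustration of the "partial credit" point, with made-up hue bins rather than anything from the thread: a predictor that puts all its mass on an adjacent-but-wrong answer and one that puts it on a distant answer look identical under the log score, but not under squared error of the predictive mean.)

```python
# Two predictors guess a hue bin in {0, ..., 9}; the true bin is 5.
import math

true_bin = 5
pred_a = {4: 1.0}   # almost right: all probability mass on the adjacent bin
pred_b = {0: 1.0}   # way off: all probability mass on a distant bin

def log_score(pred, truth):
    """Logarithmic score: only the probability assigned to the exact answer counts."""
    p = pred.get(truth, 0.0)
    return math.log(p) if p > 0 else float("-inf")

def squared_error(pred, truth):
    """Squared error of the predictive mean: closeness to the truth earns partial credit."""
    mean = sum(bin_ * p for bin_, p in pred.items())
    return (mean - truth) ** 2

print(log_score(pred_a, true_bin), log_score(pred_b, true_bin))        # -inf -inf (indistinguishable)
print(squared_error(pred_a, true_bin), squared_error(pred_b, true_bin))  # 1.0 25.0 (A gets partial credit)
```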

2tailcalled4mo
MSE can also be seen as a special-case of log-loss for a Gaussian distribution with constant variance.
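(Spelling this out, in notation not from the thread: the negative log-likelihood of a Gaussian with fixed variance is an affine function of the squared error,

$$-\log \mathcal{N}(y \mid \hat{y}, \sigma^2) = \frac{(y - \hat{y})^2}{2\sigma^2} + \frac{1}{2}\log(2\pi\sigma^2),$$

so with $\sigma$ held constant, ranking predictions by expected log-loss and by mean squared error comes to the same thing.)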
2Said Achmiz4mo
This can only be true if they do not ever have to interact with computationally bounded agents.

Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" facts.) I agreed that conflating political positions with facts would be bad.

I don't get what 'intrinsically' is doing in the middle sentence. (Well, to the extent that I have guessed what you meant, I disagree.)

Like, yes, there's one underlying reality, descriptions of it get called facts.

But isn't the broader context the propagation of propositions, not the propositions themselves? That is, saying X is also saying "pay attention to X", and if X is something whose increased salience is good for the right wing, then it makes sense to categorize it as a 'right-wing fact', as left-wing partisans will be loath to share it and right-wing partisans will be eager to.

Like, currently there's an armed conflict going on in Israel and Palestine which is harming many people. Of the people most interested in talking about it that I see on the Internet, I sure see a lot of selectivity in which harms they want to communicate, because their motive for communicating about it is not attempting to reach an unbiased estimate, but to participate in a cultural conflict whi... (read more)

I'm gonna repost my comment on unremediatedgender.space here:

A two-dimensional political map tells you which areas of the Earth's surface are under the jurisdiction of which government. In contrast, category "boundaries" tell you which regions of very high-dimensional configuration space correspond to a word/concept, which is useful because that structure can be used to make probabilistic inferences. You can use your observations of some aspects of an entity (some of the coordinates of a point in configuration space) to infer category-membership, and then use category membership to make predictions about aspects that you haven't yet observed.

But the trick only works to the extent that the category is a regular, non-squiggly region of configuration space: if you know that egg-shaped objects tend to be blue, and you see a black-and-white photo of an egg-shaped object, you can get close to picking out its color on a color wheel. But if egg-shaped objects tend to be blue or green or red or gray, you wouldn't know where to point to on the color wheel.

The analogous algorithm applied to national borders on a political map would be to observe the longitude of a place, use that to guess what c

... (read more)
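(A minimal sketch of the inference pattern described in the reposted comment above, with made-up categories and numbers: observe one coordinate of a point in configuration space, infer category membership, then use the category to predict an unobserved coordinate. The "brick-shaped" category and all the numeric values are my own illustrative assumptions.)

```python
# Observe roundness, infer the category, predict blueness.
import math

# Hypothetical categories with per-dimension means and a shared standard deviation.
categories = {
    "egg-shaped": {"roundness": 0.9, "blueness": 0.8},
    "brick-shaped": {"roundness": 0.1, "blueness": 0.2},
}
SIGMA = 0.15

def gaussian(x, mu, sigma=SIGMA):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def predict_blueness(observed_roundness):
    # Posterior over categories given the observed coordinate (uniform prior).
    likelihoods = {name: gaussian(observed_roundness, dims["roundness"])
                   for name, dims in categories.items()}
    total = sum(likelihoods.values())
    posterior = {name: lik / total for name, lik in likelihoods.items()}
    # Predict the unobserved coordinate as the posterior-weighted category mean.
    return sum(posterior[name] * categories[name]["blueness"] for name in categories)

print(predict_blueness(0.85))  # ≈ 0.8: "it's probably an egg, so it's probably blue(ish)"
```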
[-]TAG4mo106

“Credibly helpful unsolicited criticism should be delivered in private,” he writes!

Does he apply that to himself? He appears to have criticised many people publicly, over the years.

-1Shankar Sivarajan4mo
Yes, but never helpfully.
[-]Unreal4mo9-14

I was bouncing around LessWrong and ran into this. I started reading it as though it were a normal post, but then I slowly realized ... 

I think according to typical LessWrong norms, it would be appropriate to try to engage you on the object level claims or talk about the meta-presentation as though you and I were trying to collaborate on figuring things out and how to communicate things.

But according to my personal norms and integrity, if I detect that something is actually quite off (like alarm bells going) then it would be kind of sick to ignore tha... (read more)

Is this your first time running into Zack's stuff? You sound like you're talking to someone showing up out of nowhere with a no-context crackpot manuscript and zero engagement with the community. Zack's post is about his actual engagement with the community over a decade, we've seen a bunch of the previous engagement (in pretty much the register we see here, so this doesn't look like an ongoing psychotic break), he's responsive to comments, and his thesis generally makes sense. This isn't drive-by crackpottery, and it's on LessWrong because it's about LessWrong.

[-]Viliam4mo3121

I agree that Zack has a long history of engagement with the rationalist community, and that this post is a continuation of that history (in a predictable direction).

But that doesn't necessarily make this engagement sane.

From my perspective, Zack has a long-term obsession, and also he is smart enough to be popular on LessWrong despite the fact that practically everything he says is somehow connected to this obsession (and if for a moment it seems like it is not, that's just because he is preparing some convoluted meta argument that will later be used to support the obsession). I enjoy his writings, too, until something reminds me of "oh, this is going to be yet another meta argument in support of the belief that his erotic fantasy is the ultimate truth about the nature of trans-sexuality".

This isn't drive-by crackpottery, but it is long-term crackpottery; and it is on LessWrong because the previous parts of it were on LessWrong. It is "about LessWrong" only in the sense that it is about Zack's previous writing on LessWrong and about his interactions with various people here. This very article, and this debate we are having now, will probably be used as a reason to write yet anothe... (read more)

Even if it's true that he's obsessed with it and everything he writes is somehow connected to it - what's the problem with that? Couldn't you have said the same thing about Eliezer and AI? I bet there were lots of important contributions that were made by people following an obsession, even to their own detriment.

To me the question is whether it's true and valuable (I think so), not whether he's obsessed. 

7Viliam4mo
I agree, and I would like to see the evidence. What I get instead are indirect arguments like "people who disagree with me only do so for political reasons" (and "the entire rationalist community is corrupt, they are enemies and we are at war" and more such nonsense). That proves nothing. For example, people may also disagree with false statements for political reasons.
3Said Achmiz4mo
This is really a very strange criticism. Zack has been writing direct arguments, and evidence, for literal years now. You’re acting as if this is the first post he’s ever written on this subject!
[-]Viliam4mo1210

Looking at the history of Zack's writing on LW...

"Dreaming of Political Bayescraft" - nice and short.

"An Intuition on the Bayes-Structural Justification for Free Speech Norms" - already goes meta about how human speech contains "a zero-sum social-control/memetic-warfare component".

"Change" - a story explaining how a word can have two different meanings.

"Blegg Mode" - a metaphor for something; the top comment says "I don't understand what point are you trying to make" and I agree.

"Where to Draw the Boundaries?" - long but good.

"But It Doesn't Matter" - short meta.

...I will stop here, but I think the pattern is visible. Zack keeps talking meta, sometimes he makes some great points and gets upvoted, sometimes the readers are confused. It takes him a very long time to get to his final point.

Unlike the Sequences, which push the reader from point A to point Z ("there is no supernatural", "therefore human intelligence is made of atoms", "therefore it is possible to make an intelligence out of silicon atoms", etc.), Zack's articles are dancing around the topic: going more meta to gain readers, going closer to the object level to lose them again, etc.

If there is a direct argument that fits ... (read more)

9Said Achmiz4mo
If there isn’t a direct argument that fits into one screen of text, then…? Zack is thereby proven wrong? The topic is thereby proven to be irrelevant? What?
[-]Viliam4mo133

Even if Zack happens to be right, the fact that people do not update about something they don't care about and which cannot be sufficiently simply explained, is not evidence of them being "fake", "corrupt", "epistemically rotten", "enemy combatants", or any other hysterical hyperbole.

Heck, I am not even saying that Blanchard is wrong (assuming that this was all about him, which I am not sure); from my perspective he might be right, or he might be wrong, or he might be right about some things or some people and wrong about other things or other people... I don't know, I do not have enough data to make an opinion on this, and I see no reason why I should spend my time figuring this out, and I see no reason why I should trust Zack's opinion on this.

The part that I do have an opinion on is that redefining the word "woman" to mean "legally woman" rather than "biologically woman" is not a choice that I would make, but that doesn't make it wrong per se. I would have voted against it, but I am not going to fight against it. (Also, this is unrelated to whether Blanchard is right or wrong.) Pluto is not a planet anymore.

This is not because I am too scared to express a politically incorrect o... (read more)

3Cornelius Dybdahl4mo
The complexity you complain about is not Zack's fault. His detractors engage in endless evasiveness, including God-of-the-gaps style arguments as ChristianKl pointed out, and walking back an entire LW sequence that was previously non-controversial, simply because it has become politically inconvenient. The reception is so hostile that Zack is required to go practically all the way back to first principles, even needing to briefly revisit modus ponens. Phrases like "epistemically rotten" and "enemy combatants" are not hysterical hyperbole to describe that. Zack chooses these terms because he is too agreeable to call a spade a spade and point out that the rationalist community has become outright evil.

I think it's also worth emphasizing that the use of the phrase "enemy combatants" was in an account of something Michael Vassar said in informal correspondence, rather than being a description I necessarily expect readers of the account to agree with (because I didn't agree with it at the time). Michael meant something very specific by the metaphor, which I explain in the next paragraph. In case my paraphrased explanation wasn't sufficient, his exact words were:

The latter frame ["enemy combatants"] is more accurate both because criminals have rights and because enemy combatants aren't particularly blameworthy. They exist under a blameworthy moral order and for you to act in their interests implies acting against their current efforts, at least temporary [sic], but you probably would like to execute on a Marshall Plan later.

I think the thing Michael actually meant (right or wrong) is more interesting than a "Hysterical hyperbole!" "Is not!" "Is too!" grudge match.

1Cornelius Dybdahl4mo
I guess it's just not very clear to me why Michael Vassar doesn't consider them to be highly blameworthy.
2Said Achmiz4mo
That’s as may be… but surely the threshold for “sufficiently simply” isn’t as low as one screen of text…?

I don’t particularly have an opinion about this either, but what has this to do with anything, really…? The OP mentions Blanchard twice in 19,000 words… very little in this discussion hinges on whether Blanchard is right or wrong.

Neither “legally woman” nor “biologically woman” can possibly serve as definitions of “woman”, for obvious reasons of circularity.

In any case you’re… attempting to have this debate at almost the maximally naive level, as if nobody, much less Zack, has written anything about the topic. This is silly. You’ve been on Less Wrong long enough to know better than this sort of nonsense.

What opinion do you think Zack is pushing you to adopt, exactly?
8Viliam4mo
Most scientific papers have an abstract that is shorter than one screen.

I don't know, and that's my point, kind of. My current best guess is that Zack essentially makes two separate claims:

First, he seems to make some object-level claim. (Or maybe multiple object-level claims.) And no matter how many of his long texts I read, I still have a problem pinpointing what exactly the object-level claim is. Some people seem to say that the object-level claims are obvious, but even they can't tell me what exactly they are. It all seems to be related to trans-sexuality, because that is a topic Zack keeps returning to. It seems to somehow contradict the mainstream narrative, otherwise Zack wouldn't keep making such a big deal out of it. This is about all I can say about it.

Second -- this part I am a little more certain of -- Zack also makes a meta-level claim that the rationalist community is "corrupt" and "epistemically rotten" for disagreeing with his object-level claim, whatever it is. This gets upvoted; I am not sure whether it's because people literally agree with that claim, or they just enjoy watching the drama, or it's some game of vague political connotations (I suspect that it's the last one, and that the vote for Zack is somehow a vote for contrarianism and against political correctness or something like that).

I resent being called corrupt for not agreeing with something that was never clearly communicated to me in the first place. I am trying to cooperate on figuring out what Zack's object-level claim actually is, but apparently this does not work -- maybe I am doing a bad job here, but I start suspecting that this is actually a feature, not a bug (if a claim is never made clearly, no one can disprove it).

Does this help? (159 words and one hyperlink to a 16-page paper)

Empirical Claim: late-onset gender dysphoria in males is not an intersex condition.

Summary of Evidence for the Empirical Claim: see "Autogynephilia and the Typology of Male-to-Female Transsexualism: Concepts and Controversies" by Anne Lawrence, published in European Psychologist. (Not by me!)

Philosophical Claim: categories are useful insofar as they compress information by "carving reality at the joints"; in particular, whether a categorization makes someone happy or sad is not relevant.

Sociological Claim: the extent to which a prominence-weighted sample of the rationalist community has refused to credit the Empirical or Philosophical Claims even when presented with strong arguments and evidence is a reason to distrust the community's collective sanity.

Caveat to the Sociological Claim: the Sociological Claim about a prominence-weighted sample of an amorphous collective doesn't reflect poorly on individual readers of lesswrong.com who weren't involved in the discussions in question and don't even live in America, let alone Berkeley.

categories are useful insofar as they compress information by "carving reality at the joints";

I think from context you're saying "...are only useful insofar...". Is that what you're saying? If so, I disagree with the claim. Compressing information is a key way in which categories are useful. Another key way in which categories are useful is compressing actions, so that you can in a convenient way decide and communicate about e.g. "I'm gonna climb that hill now". More to the point, calling someone "he" is mixing these two things together: you're both kinda-sorta claiming the person has XY chromosomes, is taller-on-average, has a penis, etc.; and also kinda-sorta saying "Let's treat this person in ways that people tend to treat men". "He" compresses the cluster, and also is a button you can push to treat people in that way. These two things are obviously connected, but they aren't perfectly identical. Whether or not the actions you take make someone happy or sad is relevant.

Sorry, the 159-word version leaves out some detail. I agree that categories are often used to communicate action intentions.

The academic literature on signaling in nature mentions that certain prey animals have different alarm calls for terrestrial or aerial predators, which elicit different evasive maneuvers: for example, vervet monkeys will climb trees when there's a leopard or hide under bushes when there's an eagle. This raises the philosophical question of what the different alarm calls "mean": is a barking vervet making the denotative statement, "There is a leopard", or is it a command, "Climb!"?

The thing is, whether you take the "statement" or the "command" interpretation (or decline the false dichotomy), there are the same functionalist criteria for when each alarm call makes sense, which have to do with the state of reality: the leopard being there "in the territory" is what makes the climbing action called for.

The same is true when we're trying to make decisions to make people happy. Suppose I'm sad about being ugly, and want to be pretty instead. It wouldn't be helping me to say, "Okay, let's redefine the word 'pretty' such that it includes you", because the original... (read more)

If someone wants to be classified as "... has XY chromosomes, is taller-on-average, has a penis..." and they aren't that, then it's a pathological preference, yeah. But categories aren't just for describing territory, they're also for coding actions. If a human says "Climb!" to another human, is that a claim about the territory? You can try to infer a claim about reality, like "There's something in reality that makes it really valuable for you to climb right now, assuming you have the goals that I assume you have".

If someone says "call me 'he' ", it could be a pathological preference. Or it could be a preference to be treated by others with the male-role bundle of actions. That preference could be in conflict with others' preferences, because others might only want to treat a person with the male-role bundle if that person "... has XY chromosomes, is taller-on-average, has a penis..." . Probably it's both, and they haven't properly separated out their preferences / society hasn't made it convenient for them to separate out their preferences / there's a conflict about treatment that is preventing anyone from sorting out their preferences.

"Okay, let's redefine the word 'pretty' such that it includes you" actually makes some sense. Specifically, it's an appeal to anti-lookism. It's of course confused, because ugliness is also an objective thing. And it's a conflict, because most people want to treat ugly people differently than they treat pretty people, so the request to be treated like a pretty person is being refused.

2tailcalled4mo
Can you add more context? Are you talking about an experienced fighter who has been cornered by enemies with a less-experienced friend? A personal trainer whose trainee has been taking a 5 minute break from rock climbing? Something else?
3TekhneMakre4mo
Any of them. My point is that "climb!" is kind of like a message about the territory, in that you can infer things from someone saying it, and in that it can be intended to communicate something about the territory, and can be part of a convention where "Climb!" means "There's a bear!" or whatever; but still, "Climb!" is, besides being an imperative, a word that's being used to bundle actions together. Actions are kinda part of the territory, but as actions they're also sort of internal to the speaker (in the same way that a map is also part of the territory, but it's also internal to the speaker) and so have some special status. Part of that special status is that your actions, and how you bundle your actions, are up to your choice, in a way that it's not up to your choice whether there's a biological male/female approximate-cluster-approximate-dichotomy, or whether 2+4=6 etc.
4Viliam4mo
Yes, but also if people bully you for being ugly, maybe a ban on bullying is an effective action. (Unpacking the metaphor: sometimes there are multiple reasons why a person wants to do X, and some of them cannot be helped by a certain kind of action, but some could be. Then it depends on how the person will feel about the partial success.)
8tailcalled4mo
Disagree with the sociological claim because the Blanchardian arguments for the empirical claim are baaaaaaaad and it's pretty reasonable to not credit an empirical claim when the arguments presented for it are so bad. One could still defend the sociological claim due to the philosophical claim, but at the same time I have the impression that there's some hesitance partly because they are so confused about the arguments around the empirical claim.
5Viliam4mo
Commenting on the linked article, as I read it:

Sounds likely. (Betting on "it's complicated" is usually a safe bet.)

Taking this sentence literally, it only says p(E|X) > 0.5, but it seems to imply that p(E|~X) < 0.5. As an analogy, if I said "most nonandrophilic MtF transsexuals drink Coke", the fact that I consider this relevant to the topic would imply that drinking Coke is an unusual activity among people who are not nonandrophilic MtF transsexuals. So, is it really? Because if it is not, why are we even discussing this?

Okay, they got this part covered. But for completeness, I would also like to know the prevalence of autogynephilia among cis men, and among cis women. Because different answers would give different pictures of reality. Is it "nonandrophilic MtF have this special trait" or rather "androphilic MtF have this special trait", compared to cis men? And is it "nonandrophilic MtF have this special trait that makes them different from everyone else" or "nonandrophilic MtF have this trait that is special among men, but normal among women"? (Actually, since we divide MtF into androphilic and nonandrophilic, it would also make sense to make separate statistics for cis men and women by their sexual orientation.)

Also, this is probably answered somewhere, but I suppose that autogynephilia exists on a spectrum: some people may be aroused by a thought in some situation but not in another, the arousal may be weaker or stronger, it may be a once-in-a-lifetime event or a permanent obsession... The reason I am saying this is because it is easy to change the conclusion by just rounding up the values for different groups differently. (Also, recently I had to answer in a psychological test "did you ever think about suicide?", and I was like: WTF does this even mean? If I just thought about suicide once, and rejected the idea after a fraction of a second as obviously wrong, that too would technically qualify as "thinking about suicide", wouldn't it? But the test
5tailcalled4mo
It's somewhat unclear, but it probably looks something like this [figure omitted], where "CGS" is an abbreviation of "cross-gender sexuality", and covers stuff like this (from a different survey).

I mean, it is certainly uncontroversial that some trans women are exclusively attracted to men and some trans women are not exclusively attracted to men. But that presumably has something to do with the fact that you see the same for other demographics, e.g. cis men or cis women, where some are attracted to men and some are not, as well as from the fact that most trans women are open about their orientation and there's plenty of trans women from each orientation available.

However, Blanchardians tend to go motte/bailey a lot with this. Like, they add a lot of additional claims about this, and then put forth the position that these additional claims are also part of the uncontroversial knowledge, and obviously the more claims you add, the less uncontroversial it will be. They also have the advantage that it used to be only a handful of academics and clinicians discussing it, so "uncontroversial" within this handful of people isn't as significant as "uncontroversial" today.

You're not overthinking it; Blanchardians constantly do this sort of thing, where they try to establish their ideas as true by definition. (Another example of this is, sometimes I've been studying autogynephilia in gay men, and Blanchardians have tended to say that this is definitionally impossible.)
4Viliam4mo
Thank you for the summary! (I apologize, the timing is unfortunate, I am leaving for a one-week vacation without internet access right now, so I can't give you a response this would deserve. Perhaps later.)
3the gears to ascension4mo
this does not seem like an impossible requirement for almost any scoped argument I can remember seeing (that is, a claim which is not inherently a conjunction of dozens of subclaims), including some very advanced math ones. granted, by making it fit on one screen you often get something shockingly dense. but you don't need more than about 500 words to make most coherent arguments. the question is whether it would increase clarity to compress it like that. and I claim without evidence that the answer is generally that the best explanation of a claim is in fact this short, though it's not guaranteed that one has the time and effort available to figure out how to precisely specify the claim in words that few; often, trying to precisely specify something in few words runs into "those words are not precisely defined in the mind of the readers" issues, a favorite topic of Davis. (I believe this to apply to even things that people spend hundreds of thousands of words on on this site, such as "is ai dangerous". that it took yudkowsky many blog posts to make the point does not mean that a coherent one-shot argument needs to be that long, as long as it's using existing words well. It might be the case that the concise argument is drastically worse at bridging inferential gaps, but I don't think it need be impossible to specify!)
6dirk4mo
AIUI the actual arguments are over on Zack's blog due to being (in Zack's judgement) Too Spicy For LessWrong (that is, about trans people). (Short version, Blanchardianism coupled with the opinion that most people who disagree are ignoring obvious truths about sex differences for political reasons; I expect the long version is more carefully-reasoned than is apparent in this perhaps-uncharitable summary.)
1Yoav Ravid4mo
Can you say exactly which claims Zack is making without showing enough evidence? Is it one or more of these, or something else?
4Viliam4mo
I agree with all of this. But there is a space between "any way you want" and "only one possible way".

Is Mona Lisa (the painting) a woman? Paintings do not have chromosomes, and many of them do not even have sexual organs. Yet if I say "Mona Lisa is a woman", it is true in some meaningful sense... and false in some other meaningful sense.

Sometimes you use one bucket for things, and then you find out that you need two. Which one of the new buckets should inherit the original name... is a social/political choice. I may disagree with the choice, but that doesn't make it wrong. If you want to be unambiguous, use an adjective, for example "trans women are not biological women" or "trans women are legally considered women". (Just like tomato is biologically a fruit but legally a vegetable; carrot is legally a fruit in the EU; and ketchup is legally a vegetable in the USA.)

There is no global clarity, not even in math. There are islands of framing that make reasoning locally work. They benefit from being small and robust, cheap to master and not requiring correct nuance to follow. Mountains of wisdom can be built out of such building blocks, relying on each other but making sense on their own. Occasionally contradicting each other or not making sense in each other's language.

This doesn't help with many complicated questions afflicted by necessity of nuance, where clarity is currently infeasible. A productive activity is finding small and robust observations inspired by such questions, working towards a future wisdom that would be able to digest them entirely.

I am not the best at writing thorough comments because I am more of a Redditor than a LessWronger, but I just want you to know that I read the entire post over the course of ~2.5 hours and I support you wholeheartedly and think you're doing something very important. I've never been part of the rationalist "community" and don't want to be (I am not a rationalist, I am a person who strives weakly for rationality, among many other strivings), particularly after reading all this, but I definitely expected better out of it than I've seen lately. But perhaps I s... (read more)

[-]Zane4mo50

Previously, I had already thought it was nuts that trans ideology was exerting influence on the rearing of gender-non-conforming children—that is, children who are far outside the typical norm of behavior for their sex: very tomboyish girls and very effeminate boys.

Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: sex-atypical childhood behavior between gay and straight adults has been meta-a

... (read more)
7Zack_M_Davis3mo
"Essentially are" is too strong. (Sex is still real, even if some people have sex-atypical psychology.) In accordance with not doing policy, I don't claim to know under what conditions kids in the early-onset taxon should be affirmed early: maybe it's a good decision. But whether or not it turns out to be a good decision, I think it's increasingly not being made for the right reasons; the change in our culture between 2013 and 2023 does not seem sane.
2Zane3mo
If a person has a personality that's pretty much female, but a male body, then thinking of them as a woman will be a much more accurate model of them for predicting anything that doesn't hinge on external characteristics. I think the argument that society should consider such a person to be a woman for most practical purposes is locally valid, even if you reject that the premise is true in many cases.
7Rafael Harth3mo
I have to point out that if this logic applies symmetrically, it implies that Aella should be viewed as a man. (She scored .95% male on the gender-continuum test, which is much more than the average man (don't have a link unfortunately, small chance that I'm switching up two tests here).) But she clearly views herself as a woman, and I'm not sure you think that society should consider her a man for most practical purposes (although probably for some?) You could amend the claim by the condition that the person wants to be seen as the other gender, but conditioning on preference sort of goes against the point you're trying to make.
8Zane3mo
Fair. I do indeed endorse the claim that Aella, or other people who are similar in this regard, can be more accurately modelled as a man than as a woman - that is to say, if you're trying to predict some yet-unmeasured variable about Aella that doesn't seem to be affected by physical characteristics, you'll have better results by predicting her as you would a typical man, than as you would a typical woman. Aella probably really is more of a man than a woman, as far as minds go. But your mentioning this does make me realize that I never really had a clear meaning in mind when I said "society should consider such a person to be a woman for most practical purposes." When I try to think of ways that men and women should be treated differently, I mostly come up blank. And the ways that do come to mind are mostly about physical sex rather than gender - i.e. sports. I guess my actual position is "yeah, Aella is probably male with regard to personality, but this should not be relevant to how society treats ?her."
8Zack_M_Davis3mo
Consider a biased coin that comes up Heads with probability 0.8. Suppose that in a series of 20 flips of such a coin, the 7th through 11th flips came up Tails. I think it's possible to simultaneously notice this unusual fact about that particular sequence, without concluding, "We should consider this sequence as having come from a Tails-biased coin." (The distributions include the outliers, even though there are fewer of them.)

I agree that Aella is an atypical woman along several related dimensions. It would be bad and sexist if Society were to deny or erase that. But Aella also ... has worked as an escort? If you're writing a biography of Aella, there are going to be a lot of detailed Aella Facts that only make sense in light of the fact that she's female. The sense in which she's atypically masculine is going to be different from the sense in which butch lesbians are atypically masculine.

I'm definitely not arguing that everyone should be forced into restrictive gender stereotypes. (I'm not a typical male either.) I'm saying a subtler thing about the properties of high-dimensional probability distributions. If you want to ditch the restricting labels and try to just talk about the probability distributions (at the expense of using more words), I'm happy to do that. My philosophical grudge is specifically against people saying, "We can rearrange the labels to make people happy."
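(A quick numeric check of the coin example, under the simplifying assumption, not stated in the comment, that the only two candidate hypotheses are a 0.8-Heads coin and a 0.2-Heads coin: the run of Tails is locally surprising, but the full sequence still overwhelmingly favors the Heads-biased coin.)

```python
heads_bias, tails_bias = 0.8, 0.2
n_heads, n_tails = 15, 5  # 20 flips, with flips 7 through 11 coming up Tails

def sequence_likelihood(p_heads):
    return (p_heads ** n_heads) * ((1 - p_heads) ** n_tails)

# The run itself is unusual under the Heads-biased hypothesis...
print((1 - heads_bias) ** 5)  # ≈ 0.00032

# ...but with a 50/50 prior over the two biases, the whole sequence still
# overwhelmingly favors the Heads-biased coin.
lr = sequence_likelihood(heads_bias) / sequence_likelihood(tails_bias)
print(lr)  # = 4**10 ≈ 1.05e6 in favor of the Heads-biased coin
```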
4Zane3mo
The question, then, is whether a given person is just an outlier by coincidence, or whether the underlying causal mechanisms that created their personality actually are coming from some internal gender-variable being flipped. (The theory being, perhaps, that early-onset gender dysphoria is an intersex condition, to quote the immortal words of a certain tribute band.)

If it was just that biological females sometimes happened to have a couple traits that were masculine - and these traits seemed to be at random, and uncorrelated - then that wouldn't imply anything beyond "well, every distribution has a couple outliers." But when you see that lesbians - women who have the typically masculine trait of attraction to women - are also unusually likely to have other typically masculine traits - then that implies that there's something else going on. Such as, some of them really do have "male brains" in some sense. And there are so many different personality traits that are correlated with gender (at least 18, according to the test mentioned above, and probably many more that can't be tested as easily) that it's very unlikely someone would have an opposite-sex personality just by chance alone.

That's why I'd guess that a lot of the feminine "men" and masculine "women" really do have some sort of intersex condition where their gender-variable is flipped. (Although there are some cultural confounders too, like people unconsciously conforming to stereotypes about how gay people act.)

I completely agree that dividing everyone between "male" and "female" isn't enough to capture all the nuance associated with gender, and would much prefer that we used more words than that. But if, as seems to often be expected by the world, we have to approximate all of someone's character traits with only a single binary label... then there are a lot of people for whom it's more accurate to use the one that doesn't match their sex.
2Rafael Harth3mo
I think that's fair -- in fact, the test itself is evidence that the claim is literally true in some ways. I didn't mean the comment as a reductio ad absurdum, more as "something here isn't quite right (though I'm not sure what)". Though I think you've identified what it is with the second paragraph.

Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: sex-atypical childhood behavior between gay and straight adults has been meta-analyzed at Cohen's d ≈ 1.31 standard deviations for men and d ≈ 0.96 for women.) A solid majority of children diagnosed with gender dysphoria ended up growing out of it by puberty. In the culture of the current year, it seemed likely that a lot of those kids would i

... (read more)

I was skeptical of the claim that no one was "really" being kept ignorant. If you're sufficiently clever and careful and you remember how language worked when Airstrip One was still Britain, then you can still think, internally, and express yourself as best you can in Newspeak. But a culture in which Newspeak is mandatory, and all of Oceania's best philosophers have clever arguments for why Newspeak doesn't distort people's beliefs, doesn't seem like a culture that could solve AI alignment.

Hm. Is it a crux for you if language retains the categories of "tran... (read more)

And as it happened, on 7 May 2019, Kelsey wrote a Facebook comment displaying evidence of understanding my thesis.

This link is dead?

But ... "I thought X seemed Y to me"[20] and "X is Y" do not mean the same thing!

And it seems to me that in the type of comment Eliezer's referring to, "X seemed stupid to me" is more often correct than "X was stupid".

Argument for this: it's unlikely that someone would say "X seemed stupid to me" if X actually didn't seem stupid to them, so it's almost always true when said; whereas I think it's quite common to misjudge whether X was actually stupid.

("X was stupid, they should have just used the grabthar device." / "Did you miss the part three chapters bac... (read more)

5Zack_M_Davis3mo
I agree that "seems to me" statements are more likely to be true than the corresponding unqualified claims, but they're also about a less interesting subject matter (which is not quite the same thing as "less information content"). You probably don't care about how it seems to me; you care about how it is.
2philh3mo
Indeed, and as I argued above, a person who reliably tracks the distinction between what-is and what-seems-to-them tells me more about what-is than a person who doesn't.

I mean, I suppose that if someone happened to know that the dress was blue, and told me "the dress looks white to me" without saying "...but it's actually blue", that would be misleading on the subject of the color of the dress. But I think less misleading, and a less common failure mode, than a person who doesn't know that the dress is blue, who tells me "the dress is white" because that's how it looks to them.

I mean, in the specific case of the colors of objects in photographs, I think correspondence between what-is and what-seems is sufficiently high not to worry about it most of the time. The dress was famous in part because it's unusual. If you know that different people see the dress as different colors, and you don't know what's going on, then (according to me and, I claim, according to sensible rationalist discourse norms) you should say "it looks white to me" rather than "it's white". But if you have no reason to think there's anything unusual about this particular photograph of a dress that looks white to you, then whatever.

But I think this correspondence is significantly lower between "X was stupid" and "X seemed stupid". And so in this case, it seems to me that being careful to make the distinction:

* Makes you better at saying true things;
* Increases the information content of your words, on both the subjects what-is and what-seems-to-you;
* Is kinder to authors.
2philh3mo
Hm, I think I'm maybe somewhat equivocating between "the dress looks blue to me" as a statement about my state of mind and as a statement about the dress. Like I think this distinction could be unpacked and it would be fine, I'd still endorse what I'm getting at above. But I haven't unpacked it as much as would be good.
Martin Randall:
Edited to add: this is my opinion regarding media criticism, not in general; apologies for any confusion.

To me, the difference between "x is y" and "x seems y" and "x seems y to me" and "I think x seems y to me" and "mileage varies, I think x seems y to me" and the many variations of that is:

* Expressing probabilities or confidence intervals
* Acknowledging (or changing) social reality
* Acknowledging (or changing) power dynamics / status

In the specific case of responses to fiction there is no base reality, so we can't write "x is y" and mean it literally. All these things are about how the fictional character seems. Still, I would write "Luke is a Jedi", not "Luke seems to be a Jedi".

I read the quoted portion of Yudkowsky's comment as requiring/encouraging negative literary criticism to express low confidence, to disclaim attempts to change social reality, and to express low status.
philh:
Two differences I think you're missing:

* "seems to me" suggests inside view, "is" suggests outside view.
* "seems to me" gestures vaguely at my model, "is" doesn't.

This is clearer with the dress; if I think it's blue, "it looks blue to me" tells you why I think that, while "it's blue" doesn't distinguish between "I looked at the photo" and "I read about it on wikipedia and apparently someone tracked down the original dress and it was blue". With "X seemed stupid to me", it's a vaguer gesture, but I think something like "this was my gut reaction, maybe I thought about it for a few minutes". (If someone has spoken with the author and the author agrees "oops yeah that was stupid of X, they should instead have...", then "X was stupid" seems a lot more justifiable to me.)

Eh... so I don't claim to fully understand what's going on when we talk about fictional universes. But still, I'm comfortable with "Luke is a Jedi", and I think it's importantly different from, say, "Yoda is wise" or "the Death Star is indestructible" or "the Emperor has been defeated once and for all". And I think the ways it's different are similar to the differences between claims about base-level reality like "Tim Cook is a CEO" versus "the Dalai Lama is wise" or "the Titanic is unsinkable" or "Napoleon has been defeated once and for all".
Martin Randall:
Thanks for replying. I'm going to leave aside non-fictional examples ("The Dress") because I intended to discuss literary criticism.

I'm not sure exactly what you mean; see Taboo "Outside View". My best guess is that you mean that "X seems Y to me" implies my independent impression, not deferring to the views of others, whereas "X is Y" doesn't. If so, I don't think I am missing this.

I think that "seems to me" allows for a different social reality (others say that X is NOT Y, but my independent impression is that X is Y), whereas "is" implies a shared social reality (others say that X is Y, I agree), and can be an attempt to change or create social reality (I say "X is Y", others agree, and it becomes the new social reality). Again, I don't think I am missing this.

I agree that "X seems Y to me" implies something like a gut reaction or a hot take. I think this is because "X seems Y to me" expresses lower confidence than "X is Y", and someone reporting a gut reaction or a hot take would have lower confidence than someone who has studied the text at length and sought input from other authorities. Similarly, gesturing vaguely at the map/territory distinction implies that the distinction is relevant because the map may be in error.

Well, that isn't his stated goal. I concede that Yudkowsky makes this argument under "criticism easily goes wrong", but like Zack I notice that he only applies this argument in one direction. Yudkowsky doesn't advise critics to say "mileage varied, I thought character X seemed clever to me"; he doesn't say "please don't tell me what good things the author was thinking unless the author plainly came out and said so". Given the one-sided application of the advice, I don't take it very seriously.

Also, I've read some Yudkowsky. Here is a Yudkowsky book review, excerpted from You're Calling Who A Cult Leader? from 2009. I claim that this text would not be more true and informative with "mileage varies, I think x seems y to me". What do you
philh:
So uh. Fair enough but I don't think anything else in your comment hinged on examples being drawn from literary criticism rather than reality? And I like the dress as an example a lot, so I think I'm gonna keep using it.

From a quick skim, I'd say many of the things in both the inside-view and outside-view lists there could fit. Like if I say "the dress looks white to me but I think it's actually blue", some ways this could fit inside/outside view:

* Inside is one model available to me (visual appearance), outside is all-things-considered (wikipedia).
* Inside is my personal guess, outside is taking a poll (most people think it's blue, they're probably right).
* Inside is my initial guess, outside is reference class forecasting (I have a weird visual processing bug and most things that look white to me turn out to be blue).

I don't really know how to reply to this, because it seems to me that you listed "acknowledging or changing social reality", I said "I think you're missing inside versus outside view", and you're saying "I don't think I am missing that" and elaborating on the social reality thing. I claim the two are different, and if they seem the same to you, I don't really know where to proceed from there.

I think you have causality backwards here. I'd buy "it seems low confidence because it suggests a gut reaction" (though I'm not gonna rule out that there's more going on). I don't buy "it suggests a gut reaction because it seems low confidence". So I claim the gut-reaction thing is more specific than the low-confidence thing.

Right. Very loosely speaking, Eliezer said to do it because it was kind to authors; Zack objected because it was opposed to truth; I replied that in fact it's pro-truth. (And as you point out, Eliezer had already explained that it's pro-truth, differently but compatibly with my own explanation.)

Well, I can't speak for Eliezer, and what Eliezer thinks is less important than what's true. For myself, I think both of those would
Martin Randall:
Absolutely not, his motive (how to be kind to authors) is clear. I think he is using the argument as a soldier. Unlike Zack, I'm fine with that in this case.

I endorse that. I'll edit my grandparent post to explicitly focus on literary/media criticism. I think my failure to do so got the discussion off-track and I'm sorry.

You mention that "awesome" and "terrible" are very subjective words, unlike "blue", and this is relevant. I agree. Similarly, media criticism is very subjective, unlike dress colors.
philh:
I see. That's not a sense I pick up on myself, but I suppose it's not worth litigating.

To be clear, skimming my previous posts, I don't see anything that I don't endorse when it comes to literary criticism. Like, if I've said something that you agree with most of the time, but disagree with for literary criticism, then we likely disagree. (Though of course there may be subtleties e.g. in the way that I think something applies when the topic is literary criticism.)

Media criticism can be very subjective, but it doesn't have to be. "I love Star Wars" is more subjective than "Star Wars is great" is more subjective than "Star Wars is a technical masterpiece of the art of filmmaking" is more subjective than "Star Wars is a book about a young boy who goes to wizard school". And as I said above:

He asked for a specific example. ("Trans women are women, therefore trans women have uteruses" being a bad example, because no one was claiming that.) I quoted an article from The Nation: "There is another argument against allowing trans athletes to compete with cis-gender athletes that suggests that their presence hurts cis-women and cis-girls. But this line of thought doesn't acknowledge that trans women are in fact women." Scott agreed that this was stupid and wrong and a natural consequence of letting people use language the way he was suggesting (

...
Eli Tyre:
Or to say it differently: we can unload some-to-most of the content of the word woman (however much of it doesn't apply to transwomen) onto the word "cis-woman", and call it a day. The "woman" category becomes proportionally less useful, but it's mostly fine because we still have the expressiveness to say everything we might want to say. 
Richard_Kennaway:
Then they will come for the words "cis-woman" and "trans-woman" and say that it's oppressive to make a distinction. You can't win a conflict by surrendering.
Eli Tyre:
Fair enough, but is that a crux for you, or for Zack? If you knew there wasn't a slippery slope here, would this matter?
Richard_Kennaway:
I believe there is a blatant slippery slope there, and redefining "woman" is not so much a step onto it as jumping into a toboggan, so I see no point in considering a hypothetical world in which somehow, magically, there wasn't.
frontier64:
I don’t think that solution accomplishes anything, because the trans goal is to pretend to be women and the anti-trans goal is to not allow trans women to be called women. The proposed solution doesn’t get anybody closer to their goals.

It might seem like a little thing of no significance—requiring "I" statements is commonplace in therapy groups and corporate sensitivity training—but this little thing coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space made a pattern click. If everyone is forced to only make claims about their map ("I think", "I feel") and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because disagreement is disrespect), that's great for reducing social

...

My takeaway is that you've discovered there are bad actors who claim to support rationality and truth, but also blatantly lie and become political soldiers when it comes to trans issues. If this is true, why continue to engage with them? Why try to convince them with rationality on that same topic where you acknowledge that they are operating as soldiers instead of scouts?

If 2019-era "rationalists" were going to commit an epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, and they couldn't c

...

If this is true, why continue to engage with them? Why try to convince them with rationality on that same topic where you acknowledge that they are operating as soldiers instead of scouts?

I think the point is that Zack isn’t continuing to engage with them. Indeed, isn’t this post (and the whole series of which it is a part) basically an announcement that the engagement is at an end, and an explanation of why that is?

frontier64:
I'm too dumb to understand whether or not Zack's post disclaims continued engagement. He continues to respond to proponents of the sort of trans ideology he writes about, so he's engaging at least that amount. Also, just writing all this is a form of engagement.

In the skeptic's view, if you're not going to change the kid's diet on the basis of the second part, you shouldn't social transition the kid on the basis of the first part.

I think I probably would change the kid's diet?? Or at least talk with them further about it, and if their preference was robust, help them change their diet.

But if the grown-ups have been trained to believe that "trans kids know who they are"—if they're emotionally eager at the prospect of having a transgender child, or fearful of the damage they might do by not affirming—they might selectively attend to confirming evidence that the child "is trans", selectively ignore contrary evidence that the child "is cis", and end up reinforcing a cross-sex identity that would not have existed if not for their belief in it—a belief that the same people raising the same child ten years ago wouldn't have held. (A September

...

messy evolved animal brains don't track probability and utility separately the way a cleanly-designed AI could.

Side-note: a cleanly designed AI could do this, but it isn't obvious to me that this is actually the optimal design choice. Insofar as the agent is ultimately optimizing for utility, you might want epistemology to be shaped according to considerations of valence (relevance to goals) up and down the stack. You pay attention to, and form concepts about, things in proportion to their utility-relevance.
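
A minimal sketch of the distinction being discussed, using a toy decision problem (the action names and numbers are invented purely for illustration and come from neither the post nor the comment): an agent that stores probabilities and utilities separately can revise a belief without touching its preferences, whereas an agent that only caches a fused "valence" per option has no clean place to apply a belief update.

```python
# Toy illustration (invented example): "separate" vs. "fused" tracking
# of probability and utility.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Separate tracking: beliefs (probabilities) and preferences (utilities)
# are stored independently, so new evidence only changes the probabilities.
action_outcomes = {
    "eat_cake": [(0.9, 5.0), (0.1, -20.0)],  # mostly tasty, small chance of allergy
    "eat_salad": [(1.0, 2.0)],
}
best_separate = max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

# Fused tracking: only a cached per-action "valence" survives; if evidence
# later changes the allergy probability, there is nothing left to update.
cached_valence = {a: expected_utility(o) for a, o in action_outcomes.items()}
best_fused = max(cached_valence, key=cached_valence.get)

# Both representations pick the same action here, but only the separate one
# still contains the information needed to re-evaluate after a belief change.
print(best_separate, best_fused)
```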

I have an inalienable right to talk about my own research interests, and talking about my own research interests obviously doesn't violate any norm against leaking private information about someone else's family, or criticizing someone else's parenting decisions.

I think you're violating a norm against criticizing someone's parenting decisions, to the extent that readers know whose decisions they are. I happen to know the answer, and I guess a significant number but far from a majority of readers also know. Which also means the parent or parents in question...

Zack_M_Davis:
If that section were based on a real case, I would have cleared it with the parents before publishing. (Cleared in the sense of, I can publish this without it affecting the terms of our friendship, not agreement.)
philh:
Nod, in that hypothetical I think you would have done nothing wrong.

I think the "obviously" is still false. Or, I guess there are four ways we might read this:

1. "It is obvious to me, and should be obvious to you, that in general, talking about my own research interests does not violate these norms": I disagree, in general it can violate them.
2. "It is obvious to me, but not necessarily to you, that in general...": I disagree for the same reason.
3. "It is obvious to me, and should be obvious to you, that in this specific case, talking about my own research interests does not violate these norms": it's not obvious to the reader based on the information presented in the post.
4. "It is obvious to me, but not necessarily to you, that in this specific case...": okay sure.

To me (1) is the most natural and (4) is the least natural reading, but I suppose you might have meant (4).

...not that this particularly matters. But it does seem to me like an example of you failing to track the distinction between what-is and what-seems-to-you, relevant to our other thread here.
Zack_M_Davis:
Alternatively: "My claim to 'obviously' not be violating any norms is deliberate irony which I expect most readers to be able to pick up on given the discussion at the start of the section about how people who want to reveal information are in an adversarial relationship to norms for concealing information; I'm aware that readers who don't pick up on the irony will be deceived, but I'm willing to risk that"?
philh:
Fair enough! I did indeed miss that.

"—but if one hundred thousand [normies] can turn up, to show their support for the [rationalist] community, why can't you?"

I said wearily, "Because every time I hear the word community, I know I'm being manipulated. If there is such a thing as the [rationalist] community, I'm certainly not a part of it. As it happens, I don't want to spend my life watching [rationalist and effective altruist] television channels, using [rationalist and effective altruist] news systems ... or going to [rationalist and effective altruist] street parades. It's all so ... prop

...

I do not think this post serves some greater goal (if it does, like many others in this comment section, I am confused)

(I'll try to explain as best I understand, but some of it may not be exactly right)

The goal of this post is to tell the story of Zack's project (which also serves the project). The goal of Zack's project is best described by the title of his previous post - he's creating a Hill of Validity in Defense of Meaning.

Rationalists strive to be consistent, take ideas seriously, and propagate our beliefs, which means a fundamental belief about the meaning of words will affect everything we think about, and if it's wrong, then it will eventually make us wrong about many things.

Zack saw Scott and Eliezer, the two highest-status people in this group/community, plus many others, make such a mistake. With Eliezer it was "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning." With Scott it was "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life."

This was relevant to questions about trans, whi...

PhilosophicalSoul:
Thank you so much for this explanation. Through this lens, this post makes a lot more sense; a meaningful aesthetic death then.
Yoav Ravid:
I don't know what you mean by aesthetic death, but I'm glad to help :)

I don’t know man, it really seems to me that Eliezer was quite clear in "Politics is the Mind-Killer" that we couldn’t expect our rationality skills to be as helpful in determining truth in politics.

He didn't say anything like that in "Politics is the Mind-Killer"; quite the contrary:

"Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational."

"I’m not saying that I think we should be apolitical"

The main point of the post was to not shove politics where it's unnecessary, because it can have all these bad effects. I expect Eliezer agrees far more with the idea that politics is hard mode than with the idea that "we couldn’t expect our rationality skills to be as helpful in determining truth in politics".

Chris_Leong:
Thanks for sharing. Maybe I should have spoken more precisely. He wasn't telling individuals to be apolitical. It's more that he didn't think it was a good idea to center the rationalist community around politics, as it would interfere with the rationalist project; i.e., that even with our community striving to improve our rationality, it'd still be beyond us to bring in discussions of politics without corrupting our epistemology. So when I said "we couldn’t expect our rationality skills to be as helpful in determining truth in politics", I was actually primarily talking about the process of a community attempting to converge on the truth rather than an individual.