Try the non-paywalled link here.

Damning allegations; but I expect this forum to respond with minimization and denial.

A few quotes:

At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.

 

Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.

Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.

 

Even leading EAs have doubts about the shift toward AI. Larissa Hesketh-Rowe, chief operating officer at Leverage Research and the former CEO of the Centre for Effective Altruism, says she was never clear how someone could tell their work was making AI safer. When high-status people in the community said AI risk was a vital research area, others deferred, she says. “No one thinks it explicitly, but you’ll be drawn to agree with the people who, if you agree with them, you’ll be in the cool kids group,” she says. “If you didn’t get it, you weren’t smart enough, or you weren’t good enough.” Hesketh-Rowe, who left her job in 2019, has since become disillusioned with EA and believes the community is engaged in a kind of herd mentality.

 

In extreme pockets of the rationality community, AI researchers believed their apocalypse-related stress was contributing to psychotic breaks. MIRI employee Jessica Taylor had a job that sometimes involved “imagining extreme AI torture scenarios,” as she described it in a post on LessWrong—the worst possible suffering AI might be able to inflict on people. At work, she says, she and a small team of researchers believed “we might make God, but we might mess up and destroy everything.” In 2017 she was hospitalized for three weeks with delusions that she was “intrinsically evil” and “had destroyed significant parts of the world with my demonic powers,” she wrote in her post. Although she acknowledged taking psychedelics for therapeutic reasons, she also attributed the delusions to her job’s blurring of nightmare scenarios and real life. “In an ordinary patient, having fantasies about being the devil is considered megalomania,” she wrote. “Here the idea naturally followed from my day-to-day social environment and was central to my psychotic breakdown.”

 

Taylor’s experience wasn’t an isolated incident. It encapsulates the cultural motifs of some rationalists, who often gathered around MIRI or CFAR employees, lived together, and obsessively pushed the edges of social norms, truth and even conscious thought. They referred to outsiders as normies and NPCs, or non-player characters, as in the tertiary townsfolk in a video game who have only a couple things to say and don’t feature in the plot. At house parties, they spent time “debugging” each other, engaging in a confrontational style of interrogation that would supposedly yield more rational thoughts. Sometimes, to probe further, they experimented with psychedelics and tried “jailbreaking” their minds, to crack open their consciousness and make them more influential, or “agentic.” Several people in Taylor’s sphere had similar psychotic episodes. One died by suicide in 2018 and another in 2021.

 

Within the group, there was an unspoken sense of being the chosen people smart enough to see the truth and save the world, of being “cosmically significant,” says Qiaochu Yuan, a former rationalist.

 

Yuan started hanging out with the rationalists in 2013 as a math Ph.D. candidate at the University of California at Berkeley. Once he started sincerely entertaining the idea that AI could wipe out humanity in 20 years, he dropped out of school, abandoned the idea of retirement planning, and drifted away from old friends who weren’t dedicating their every waking moment to averting global annihilation. “You can really manipulate people into doing all sorts of crazy stuff if you can convince them that this is how you can help prevent the end of the world,” he says. “Once you get into that frame, it really distorts your ability to care about anything else.”

 

That inability to care was most apparent when it came to the alleged mistreatment of women in the community, as opportunists used the prospect of impending doom to excuse vile acts of abuse. Within the subculture of rationalists, EAs and AI safety researchers, sexual harassment and abuse are distressingly common, according to interviews with eight women at all levels of the community. Many young, ambitious women described a similar trajectory: They were initially drawn in by the ideas, then became immersed in the social scene. Often that meant attending parties at EA or rationalist group houses or getting added to jargon-filled Facebook Messenger chat groups with hundreds of like-minded people.

 

The eight women say casual misogyny threaded through the scene. On the low end, Bryk, the rationalist-adjacent writer, says a prominent rationalist once told her condescendingly that she was a “5-year-old in a hot 20-year-old’s body.” Relationships with much older men were common, as was polyamory. Neither is inherently harmful, but several women say those norms became tools to help influential older men get more partners. Keerthana Gopalakrishnan, an AI researcher at Google Brain in her late 20s, attended EA meetups where she was hit on by partnered men who lectured her on how monogamy was outdated and nonmonogamy more evolved. “If you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men” who are sometimes in positions of influence or are directly funding the movement, she wrote on an EA forum about her experiences. Her post was strongly downvoted, and she eventually removed it.

 

The community’s guiding precepts could be used to justify this kind of behavior. Many within it argued that rationality led to superior conclusions about the world and rendered the moral codes of NPCs obsolete. Sonia Joseph, the woman who moved to the Bay Area to pursue a career in AI, was encouraged when she was 22 to have dinner with a 40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. Joseph says he also argued that it was normal for a 12-year-old girl to have sexual relationships with adult men and that such relationships were a noble way of transferring knowledge to a younger generation. Then, she says, he followed her home and insisted on staying over. She says he slept on the floor of her living room and that she felt unsafe until he left in the morning.

 

On the extreme end, five women, some of whom spoke on condition of anonymity because they fear retribution, say men in the community committed sexual assault or misconduct against them. In the aftermath, they say, they often had to deal with professional repercussions along with the emotional and social ones. The social scene overlapped heavily with the AI industry in the Bay Area, including founders, executives, investors and researchers. Women who reported sexual abuse, either to the police or community mediators, say they were branded as trouble and ostracized while the men were protected.

 

In 2018 two people accused Brent Dill, a rationalist who volunteered and worked for CFAR, of abusing them while they were in relationships with him. They were both 19, and he was about twice their age. Both partners said he used drugs and emotional manipulation to pressure them into extreme BDSM scenarios that went far beyond their comfort level. In response to the allegations, a CFAR committee circulated a summary of an investigation it conducted into earlier claims against Dill, which largely exculpated him. “He is aligned with CFAR’s goals and strategy and should be seen as an ally,” the committee wrote, calling him “an important community hub and driver” who “embodies a rare kind of agency and a sense of heroic responsibility.” (After an outcry, CFAR apologized for its “terribly inadequate” response, disbanded the committee and banned Dill from its events. Dill didn’t respond to requests for comment.)

 

Rochelle Shen, a startup founder who used to run a rationalist-adjacent group house, heard the same justification from a woman in the community who mediated a sexual misconduct allegation. The mediator repeatedly told Shen to keep the possible repercussions for the man in mind. “You don’t want to ruin his career,” Shen recalls her saying. “You want to think about the consequences for the community.”

 

One woman in the community, who asked not to be identified for fear of reprisals, says she was sexually abused by a prominent AI researcher. After she confronted him, she says, she had job offers rescinded and conference speaking gigs canceled and was disinvited from AI events. She says others in the community told her allegations of misconduct harmed the advancement of AI safety, and one person suggested an agentic option would be to kill herself.

 

For some of the women who allege abuse within the community, the most devastating part is the disillusionment. Angela Pang, a 28-year-old who got to know rationalists through posts on Quora, remembers the joy she felt when she discovered a community that thought about the world the same way she did. She’d been experimenting with a vegan diet to reduce animal suffering, and she quickly connected with effective altruism’s ideas about optimization. She says she was assaulted by someone in the community who at first acknowledged having done wrong but later denied it. That backpedaling left her feeling doubly violated. “Everyone believed me, but them believing it wasn’t enough,” she says. “You need people who care a lot about abuse.” Pang grew up in a violent household; she says she once witnessed an incident of domestic violence involving her family in the grocery store. Onlookers stared but continued their shopping. This, she says, felt much the same.

 

The paper clip maximizer, as it’s called, is a potent meme about the pitfalls of maniacal fixation.

Every AI safety researcher knows about the paper clip maximizer. Few seem to grasp the ways this subculture is mimicking that tunnel vision. As AI becomes more powerful, the stakes will only feel higher to those obsessed with their self-assigned quest to keep it under rein. The collateral damage that’s already occurred won’t matter. They’ll be thinking only of their own kind of paper clip: saving the world.

72 comments

“5-year-old in a hot 20-year-old’s body.”

40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. 

To me, two of the stories look like they are about the same person and that person has been banned from multiple rationalist spaces without the journalist considering it important to mention that.

Yeah, this seems very likely to be about Michael Vassar. Also, HPMOR spoiler:

I also think him "bragging" about this is quite awkward, since modeling literal Voldemort after you is generally not a compliment. I also wouldn't believe that "bragging" has straightforwardly occurred.

FWIW, I'm a female AI alignment researcher and I never experienced anything even remotely adjacent to sexual misconduct in this community. (To be fair, it might be because I'm not young and attractive; more likely the Bloomberg article is just extremely biased.)

Portia · 7mo · 8
That unfortunately implies nothing. Abusers will rarely abuse everyone they encounter, but pick vulnerable and isolated victims purposefully, and often also purposefully cultivate a public persona that covers their abuse. It is entirely possible and common to work with abusers daily and experience them as charming and lovely while they are absolutely awful to others. I believe you had a great time, but that does not make me believe the victims less in any way, and I would hope this is true for other readers, too.
trevor · 1y · 2
I read it; that was a pretty good one, and also short. It reminds me of Gell-Mann amnesia.

Damning allegations; but I expect this forum to respond with minimization and denial.

One quoted section is about Jessica Taylor's post on LW, which was controversial but taken seriously. (I read a draft of the post immediately preceding it and encouraged her to post it on LW.) Is that minimization or denial?

Out of the other quoted sections (I'm not going to click thru), allegations are only against one named person: Brent Dill. We took that seriously at the time and I later banned him from LessWrong. Is that minimization or denial?

To be clear, I didn't ban him directly for the allegations, but for related patterns of argumentation and misbehavior. I think the risks of online spaces are different from the risks of in-person spaces; like the original Oxford English Dictionary, I think Less Wrong the website should accept letters from murderers in asylums, even if those people shouldn't be allowed to walk the streets. I think it's good for in-person events and organizations to do their part to keep their local communities welcoming and safe, while it isn't the place of the whole internet to try to adjudicate those issues; we don't have enough context to litigate them in a fair and wise w...

A lot of the defenses here seem to be relying on the fact that one of the accused individuals was banned from several rationalist communities a long time ago. While this definitely should have been included in the article, I think the overall impression they are giving is misleading. 

In 2020, the individual was invited to give a talk for an unofficial SSC online meetup (Scott Alexander was not involved, and does ban the guy from his events). The post was announced on LessWrong with zero pushback, and went ahead.

Here is a comment from Anna Salamon 2 years ago, discussing him, and stating that his ban from meetups should be lifted:

I hereby apologize for the role I played in X's ostracism from the community, which AFAICT was both unjust and harmful to both the community and X. There's more to say here, and I don't yet know how to say it well. But the shortest version is that in the years leading up to my original comment X was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various o

...

I personally think the current relationship the community has to Michael feels about right in terms of distance.

I also want to be very clear that I have not investigated the accusations against Michael and don't currently trust them hugely for a bunch of reasons, though they seem credible enough that I would totally investigate them if I thought that Michael would pose a risk to more people in the community if the accusations were true.

As it is, at the current level of distance, I don't see it as hugely my, or the rationality community's, responsibility to investigate them, though if I had more time and was less crunched, I might.

Kenny · 1y · 3
What kind of more severe punishment should "the rationalist community" mete out to X and how exactly would/should that work?

Several things can be true simultaneously:

  • This article is similar to much other mainstream coverage of EA/rationality and paints the community in an unfairly negative light.
  • The specific claims in the article have been previously addressed.
  • There is no good evidence that the LW / rationalist community has higher than average levels of abuse.
  • It is worthwhile putting effort into finding out if the community has higher than average levels of abuse, which does not seem to have been done by people in the community. Given the gender imbalance, our prior should be that higher than average levels of abuse are somewhat likely.
  • We can and should have much lower than average levels of abuse.
  • This community strives to exceed the rest of society in many domains. It is anomalous that people are quite uninterested in optimizing this as it seems clearly important.

To be clear, I'm not at all confident that all of the empirical claims above are true. But it seems that people are using the earlier points as an excuse to ignore the later ones. 

lc · 1y · 5
Agreed. I think while we're at it, we should also investigate the DNC for child sex trafficking. After all:

  • There is no good evidence that DNC staffers abuse children at any higher-than-average rate, but:
  • We can and should lower average levels of child sexual abuse.
  • It is worthwhile putting effort into finding out if a community has higher than average levels of child sexual abuse, which does not seem to have been done by DNC staffers.
  • The DNC strives to exceed the rest of society in many domains. It's anomalous that such people seem quite disinterested in optimizing this problem, as preventing child sexual abuse is clearly important.
Joseph Miller · 1y · 6
I don't think there are grounds for a high-profile external investigation into the rationalist community. But yes, we should try to be better than the rest of society in every way. I think the risk of sexual abuse is high enough that this would be a profitable use of resources, whereas my prior is that the risk of child abuse (at least child sex trafficking) does not merit spending effort to investigate. Idk anything about the DNC so I don't know what it's worth their effort to do. I think you are suggesting that I am committing the fallacy of privileging the hypothesis, but I think the stories in the article and associated comment sections are sufficient to raise this to our attention.
lc · 1y · 18 / 16

I think you are suggesting that I am committing the fallacy of privileging the hypothesis...

No, I am accusing you of falling for a naked political trap. Internet accusations of pedophilia by DNC staffers are not made in good faith, and in fact the content of the accusation (dems tend to be pedophiles) is selected to be maximally f(hard to disprove, disturbing). If the DNC took those reports seriously and started to "allocate resources toward the problem", it would first be a waste of resources, but second (and more importantly) it would lend credibility to the initial accusations no matter what their internal investigation found or what safeguards they put in place. There's no basic reason to believe the DNC contains a higher number of sexual predators than e.g. a chess club, so the review itself is unwarranted and is an example of selective requirements.

In the DNC's case, no one actually expects them to be stupid enough to litigate the claim in public by going over every time someone connected to the DNC touched a child and debating whether or not it's a fair example. I think that's a plausible outcome for rationalists, though, who are not as famously sensible as DNC staffers.

Portia · 7mo · 2
You don't think that picture ought to change in the hypothetical parallel scenario of multiple children independently saying that they were sex trafficked by DNC staffers, and also notably saying that they were given reasons for why this was normal and unfixable and in fact probably an average and hence acceptable rate of sex trafficking, reasons and arguments that were directly derived from Democratic positions?

This is not a random outside accusation to frame the rationalist community. It comes from people drawn to the community for the promise of rationality and ethics, and then horribly disillusioned. They are referencing not just abuse, but abuse specifically related to rationalist content. The story of the girl who committed suicide was horrifying: she had literally been led to believe that this community was the only place to be rational, and that being repeatedly sexually assaulted in it, in ways that she found unbearable, was utterly inevitable. She wasn't just assaulted; she was convinced that it was irrational to expect humane treatment as a woman, to a degree where she might as well commit suicide if she was committed to rationality. That speaks to a tremendous systematic problem.

How can the first response to that be "I bet it is this bad in other communities, too, so we needn't do anything, not even investigate if it actually is equally bad elsewhere or if that is just a poor justification for doing nothing"?
Joseph Miller · 1y · 2
Oh okay, I misunderstood. I forgot about that whole DNC scandal. I agree that a public investigation would probably hurt the rationalists' reputation. However, reputation is only one consideration, and the key disanalogy is still the level of evidence. Also, a discreet investigation may be possible.
Kenny · 1y · 1
I have the opposite sense. Many people seem very interested in this. "This community" is a nebulous thing and this site is very different than any of the 'in-person communities'. But I don't think there's strong evidence that the 'communities' don't already "have much lower than average levels of abuse". I have an impression that, among the very-interested-in-this people, any abuse is too much.
lc · 1y · 15 / 10

Damning allegations; but I expect this forum to respond with minimization and denial.

Minimization and denial are appropriate when you're being slandered.

Noosphere89 · 1y · 5
I don't agree with this take, though I do think there's a common error that almost everyone makes, including LWers: ignoring base rates and focusing on special, tailored explanations over general ones. This article seems to commit that error. I do think that the verifiable facts are correct, but I don't believe the framing is right.
Kenny · 1y · 1
I think the quoted text is inflammatory and "this forum" (this site) isn't the same as wherever the alleged bad behavior took place. Is contradicting something you believe to be, essentially, false equivalent to "denial"?

I think abuse issues in rationalist communities are worth discussing, but I don't think people who have been excluded from the community for years are a very productive starting point for such a discussion.

The minimization and denial among these comments is horrifying.

I am a female AI researcher. I come onto this forum for Neel Nanda's interpretability research which has recently been fire. I've experienced abuse in these communities which makes the reaction here all the more painful.

I don't want to come onto this forum anymore.

This is how women get driven out of AI. 

It is appropriate to minimize things which are in fact minimal. The majority of these issues have been litigated (metaphorically) before. The fact that they are being brought up over and over again in media articles does not ipso facto mean that the incidents have not been adequately dealt with. You can make the argument that these incidents are part of a larger culture problem, but you have to actually make the argument. We're all Bayesians here, so look at the base rates.

 

The one piece of new information which seems potentially important is the part where Sonia Joseph says, "he followed her home and insisted on staying over." I would like to see that incident looked into a bit more.

pmk · 1y · 5
Given the gender ratio in EA and rationality, it would be surprising if women in EA/rationality didn't experience more harassment than women in other social settings with more even gender ratios.

Consider a simplified case: suppose 1% of guys harass women and EA/rationality events are 10% women. Then in a group of 1,000 EAs/rationalists there would be 9 harassers targeting 100 women. But if the gender ratio were even, there would be 5 harassers targeting 500 women. So the probability of each woman being targeted by a harasser is lower in a group with a more even gender ratio. For women in EA/rationality to experience the same amount of harassment as women in other social settings, the men in EA/rationality would need to be less likely to harass women than the average man in other social settings.

It is also possible that the average man in EA/rationality is more likely to harass women than the average man in other social settings. I can think of some reasons for this (being socially clumsy, open to breaking social norms, etc.) and some against (being too shy to make advances, aspiring to high moral standards in EA, etc.).
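[A minimal sketch of the arithmetic in the comment above; the 1,000-person group, the 1% harasser rate among men, and the gender ratios are the comment's illustrative assumptions, not data:]

```python
# Expected harassers per woman as a function of the gender ratio,
# holding group size and the assumed harasser rate among men fixed.

def harassers_per_woman(group_size: float, frac_women: float,
                        harasser_rate: float = 0.01) -> float:
    women = group_size * frac_women
    men = group_size * (1 - frac_women)
    return (men * harasser_rate) / women

print(harassers_per_woman(1000, 0.10))  # 10% women: 9 harassers / 100 women = 0.09
print(harassers_per_woman(1000, 0.50))  # 50% women: 5 harassers / 500 women = 0.01
```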
whistleblower67 · 1y · 2
While many of these claims are "old news" to those communities, many of these claims are fresh. The baseline-rate reasoning is flawed because a) sexual assault remains the most underreported crime, so there is likely instead an iceberg effect, b) women who were harassed/assaulted have left the movement, which changes your distribution, and c) women who would enter your movement otherwise now stay away due to whisper networks and bad vibes.
Daniel · 1y · 6
Can you clarify which specific claims are new? A claim which hasn't been previously reported in a mainstream news article might still be known to people who have been following community meta-drama.

I'm not sure how this refutes the base rate argument. The iceberg effect exists for both the rationalist community and for every other community you might compare it to (including the ones used to compute the base rates). These should cancel out unless you have reason to believe the iceberg effect is larger for the rationalist community than for others. (For all you know, the iceberg effect might be lower than baseline due to norms about speaking clearly and stating one's mind.)

Maybe? This seems more plausible to confound the data than a) or c), but again there are reasons to suppose the effect might lean the other way. (Women might be more willing to tolerate bad behavior if they think it's important to work on alignment than they would tolerate at, say, their local Magic the Gathering group.)

Even if true, I don't see how that would be relevant here? Women who enter the movement, get harassed, and then leave would make the harassment rate seem lower because their incidents don't get counted. Women who never entered the movement in the first place wouldn't affect the rate at all.

Strong upvote. As another female AI researcher: yeah, it's bad here, as it is everywhere to some degree.

To other commenters, especially ones hesitant to agree that there have been problems due to structural issues: claiming otherwise doesn't make this situation look better. The local network of human connections can only look good to the outer world of humans by being precise about what problems have occurred and what actual knowledge and mechanisms can prevent them. You're not gonna retain your looking-good points with the public by groveling about it, nor by claiming there's no issue; you'll retain looking-good points by actually considering the problem each time it comes up, discussing the previous discussions, etc. (though, of course, like, efficiently, according to taste. Not everyone has to write huge braindumps like I find myself often wanting to.) Nobody can tell you how to be a good person; just be one. Put your zines in the slot in the door and we'll copy em and print em out. But don't worry about making a fuss apologizing; make a fuss explaining a verifiable understanding.

Some local fragments of social network are somewhat well protected by local emotional habits; but many...

Liron · 1y · 20 / 32

It seems from your link like CFAR has taken responsibility, taken corrective action, and stated how they'll do everything in their power to avoid a similar abuse incident in the future.

I think in general the way to deal with abuse situations within an organization is to identify which authority should be taking appropriate disciplinary action regarding the abuser’s role and privileges. A failure to act there, like CFAR’s admitted process failure that they later corrected, would be concerning if we thought it was still happening.

If every instance of abuse is being properly disciplined by the relevant organization, and the rate of abuse isn't high compared to the base rate in the non-rationalist population, then the current situation isn't a crisis, even if some instances of abuse unfortunately involve the perpetrator referencing rationality or EA concepts.

Ebenezer Dukakis · 1y · 9
Sorry you experienced abuse. I hope you will contact the CEA Community Health Team and make a report: https://forum.effectivealtruism.org/posts/hYh6jKBsKXH8mWwtc/contact-people-for-the-ea-community

I think the healthy and compassionate response to this article would be to focus on addressing the harms victims have experienced. So I find myself disappointed by much of the voting and comment responses here.

I agree that the Bloomberg article doesn't acknowledge that most of the harms that they list have been perpetrated by people who have already mostly been kicked out of the community, and uses some unfair framings. But I think the bigger issue is that of harms experienced by women that may not have been addressed: that of unreported cases, and of insu...

Kenny · 1y · 4
I think some empathy and sympathy are warranted for the users of the site that had nothing to do with any of the alleged harms! It is pretty tiresome to be accused-by-association. I'm not aware of any significant problems with abuse "in LessWrong". And, from what I can tell, almost all of the alleged abuse happened in one particular 'rationalist community', not all, most, or even many of them. I'm extremely skeptical that the article or this post were inspired by compassion towards anyone.

I read it, wish I hadn't. It's the usual thing with very large amounts of smart-sounding words and paragraphs, and a very small amount of thought being used to generate them.

Ben Pace · 1y · 5
Thanks for saying. Sounds like another piece I will skip! While I am generally interested in justice around these parts, I generally buy the maxim that if the news is important, I will hear the key info in it directly from friends (this was true both for covid and for Russia-nukes stuff), and that otherwise the news media spend enough effort on narrative-control that I'd much rather not even read the media's account of things.

This seems like a bad rule of thumb. If your social circle is largely composed of people who have chosen to remain within the community, ignoring information from "outsiders" seems like a bad strategy for understanding issues with the community.

Ben Pace · 1y · 6
Yeah, but that doesn't sound like my strategy. I've many times talked to people who are leaving or have left, and interviewed them about what they didn't like and their reasons for leaving.
ojorgensen · 1y · 5
Didn't get that impression from your previous comment, but this seems like a good strategy!

Damning allegations; but I expect this forum to respond with minimization and denial.

This is so spectacularly bad faith that it makes me think the reason you posted this is pretty purely malicious.

Out of all of the LessWrong and 'rationalist' "communities" that have existed, how many are ones in which any of the alleged bad acts occurred? One? Two?

Out of all of the LessWrong users and 'rationalists', how many have been accused of these alleged bad acts? Mostly one or two?

My having observed extremely similar dynamics about, e.g. sexual harassment, in se...

nim · 1y · 3 / -2

I read the first half and kind of zoned out -- I wish that the author had shown any examples of communities lacking such problems, to contrast EA against.

Celer · 1y · 16 / 10

How do you expect journalism to work? The author is trying to contribute one specific story, in detail. Readers have other experiences to compare and draw from. If this was an academic piece, I might be more sympathetic.

I feel confused by this argument. 

The core thesis of the post seems to rely on the level of abuse in this community being substantially higher than in other communities (the last sentence seems to make that pretty explicit). I think if you want to compellingly argue for your thesis you should provide the best evidence you have for that thesis. Journalism commonly being full of fallacious reasoning doesn't mean that it's good or forgivable for journalism to reason fallaciously. 

I do think journalists from time to time summarizing and distilling concrete data is good, but in that case people still clearly benefit if the data is presented in a relatively unbiased way that doesn't distort the underlying truth a lot or omit crucial pieces of information that the journalist very likely knew but that didn't fit their narrative. I think journalists not doing that is condemnable, and the resulting articles are rarely worth reading.

Ben · 1y · 18 / 17

I don't think the core thesis is "the level of abuse in this community is substantially higher than in others".  Even if we (very generously) just assumed that the level of abuse in this community was lower than that in most places, these incidents would still be very important to bring up and address.

When an abuse of power arises, the organisation/community in which it arises has roughly two possible approaches: clamping down on it or covering it up. The purpose of the first is to solve the problem; the purpose of the second is to maintain the reputation of the organisation. (How many of those Catholic Church child abuse stories were covered up because of worry about the reputational damage to the church?) By focusing on the relative abuse level, it seems like you are seeing these stories (primarily) as an attack on the reputation of your tribe ("A blue abused someone? No he didn't, it's Green propaganda!"). Does it matter whether the number of children abused in the Catholic Church was higher than the number abused outside it?

If that is the case, then there is nothing wrong with that emotional response. If you feel a sense of community with a group and you yourself have...

Noosphere89 · 1y · 5
Yes, it does matter here, since base rates matter in general. Honestly, one of the criticisms I want to share as a post later on is that LW ignores base rates and focuses too much on the inside view over the outside view. But in this case it matters because the analogous claim would be that the church is uniquely bad at sexual assault, and if it turned out that it wasn't uniquely bad, then we don't have to panic. That's the importance of base rates: they give you a solid number that is useful to compare against. Nothing is usually nearly as unprecedented or new as it seems the first time you encounter it.
Ben · 1y · 1
The base-rates post sounds like an interesting one; I look forward to it. But, unless I am very confused, the base rates are only ever going to help answer questions like "is this group of people better than society in general by metric X?" (You can bring a choice Hollywood producer and Prince out as part of the control group.) My point was that I think a more useful question might be something like "Why was the response to this specific incident inadequate?"
Noosphere89 · 1y · 2
That might be the problem here, since there seem to be two different conversations, going by the article:

1. Why was this incident not responded to adequately?
2. Is our group meaningfully worse or better, compared to normal society? And why is it worse or better?
Quadratic Reciprocity · 1y · 8
I can see how the article might be frustrating for people who know the additional context that the article leaves out (where some of the additional context is simply having been in this community for a long time and having more insight into how it deals with abuse). From the outside, though, it does feel like some factors would make abuse more likely in this community: how salient "status" feels, the mixing of social and professional lives, gender ratios, conflicts of interest everywhere due to the community being small, and sex positivity and acceptance of weirdness and edginess (which I think are great overall!). There are also factors pushing in the other direction, of course.

I say this because it seems very reasonable for someone who is new to the community to read the article and the tone in the responses here and feel uncomfortable interacting with the community in the future. A couple of women in the past have mentioned to me that they haven't engaged much with the in-person rationalist community because they expect the culture to be overly tolerant of bad behaviour, which seems sad because I expect them to enjoy hanging out in the community.

I can see the reasons behind not wanting to give the article more attention if it seems like a very inaccurate portrayal of things. But it does feel like that makes the community more unwelcoming to some newer people (especially women) who would otherwise like to be here and who don't have the information about how the things mentioned in the article were responded to in the past.

Yeah, I might want to write a post that tries to actually outline the history of abuse that I am aware of, without doing weird rhetorical tricks or omitting information. I've recently been on a bit of a "let's just put everything out there in public" spree, and I would definitely much prefer for new people to be able to get an accurate sense of the risk of abuse and harm, which, to be clear, is definitely not zero and feels substantial enough that people should care about it.

I do think the primary reason why people haven't written up stuff in the past is exactly because they are worried their statements will get ripped out of context and used as ammunition in hit pieces like this, so I actually think articles like this make the problem worse, not better, though I am not confident of this, and the chain of indirect effects is reasonably long here.

Quadratic Reciprocity · 1y · 6
I would be appreciative if you do end up writing such a post. Sad that sometimes the things that seem good for creating a better, more honest, more accountable community for the people in it also give outsiders ammunition. My intuitions point strongly in the direction of doing things in this category anyway. 
LVSN · 1y · 2
I don't disagree with the main thrust of your comment, but I just wanna point out that 'fallacious' is often a midwit objection, and either 'fallacious' is not the true problem, or it is the true problem but the stereotypes about what is fallacious do not align with reality: A Unifying Theory in Defense of Logical Fallacies
habryka · 1y · 2
Yeah, that's fair. I was mostly using it as a synonym for "badly reasoned and inaccurate" here. Agree that there are traps around policing speech by trying to apply rhetorical-fallacy labels, which I wasn't trying to do here.
TAG · 1y · 1
Mainstream academia?

A bit of searching brings me to https://elephantinthelab.org/sexual-harassment-in-academia/:

Is Sexual Harassment in the Academy a Problem?

Yes. Research on sexual harassment in the academy suggests that it remains a prevalent problem. In a 2003 study examining incidences of sexual harassment in the workplace across private, public, academic, and military industries, Ilies et al (2003) found academia to have the second highest rates of harassment, second only to the military. More recently, a report by the The National Academies of Sciences, Engineering, and Medicine (NASEM) summarized the persistent problem of sexual harassment in academia with regard to faculty-student harassment, as well as faculty-faculty harassment. To find more evidence of this issue, one can also turn to Twitter – as Times Higher Education highlighted in their 2019 blog. 

Another paper suggests:

In 2019, the Association of American Universities surveyed 33 prominent research universities and found 13% of all students experienced a form of sexual assault and 41.8% experienced sexual harassment (Cantor et al., 2020).

Mainstream academia is not free from sexual abuse. 

MSRayne · 1y · -1
Whataboutism is a fallacy.
gjm · 1y · 35 / 21

It is. But if someone is saying "this group of people is notably bad" then it's worth asking whether they're actually worse than other broadly similar groups of people or not.

I think the article, at least to judge from the parts of it posted here, is arguing that rationalists and/or EAs are unusually bad. See e.g. the final paragraph about paperclip-maximizers.

MSRayne · 1y · 1
I fail to see why it matters what other broadly similar groups of people do. Rationalists ought to predict and steer the future better than other kinds of people, and so should be held to a higher standard. Deflecting with "but all the other kids are equally abusive!" is just really stupid. As for the article, I'm not concerned with the opinion of a journalist either; they can be confused or bombastic about the exact extent of the problem if they want; that's rather standard for journalists. But I don't doubt that the problem is real and wasn't preemptively fixed before it could happen, which bothers me because the founders of this community are more than smart enough to have at least made an attempt to do so.
gjm · 1y · 25 / 22

Whether it matters what other broadly similar groups do depends on what you're concerned with and why.

If you're, say, a staff member at an EA organization, then presumably you are trying to do the best you could plausibly do, and in that case the only significance of those other groups would be that if you have some idea how hard they are trying to do the best they can, it may give you some idea of what you can realistically hope to achieve. ("Group X has such-and-such a rate of sexual misconduct incidents, but I know they aren't really trying hard; we've got to do much better than that." "Group Y has such-and-such a rate of sexual misconduct incidents, and I know that the people in charge are making heroic efforts; we probably can't do better.")

So for people in that situation, I think your point of view is just right. But:

If you're someone wondering whether you should avoid associating with rationalists or EAs for fear of being sexually harassed or assaulted, then you probably have some idea of how reluctant you are to associate with other groups (academics, Silicon Valley software engineers, ...) for similar reasons. If it turns out that rationalists or EAs are pretty much like t...

nim · 1y · 1
I am a pessimist who works from the assumption that humans are globally a bit terrible. Thus, I don't consider the isolated data point of "humans in group x have been caught being terrible" to be particularly novel or useful. Reporting that I would find useful would ultimately take the form "humans in group x trend toward differently terrible from humans in other groups", whether that's claiming that they're worse, differently bad, or better.

Whenever someone claims that a given group is better than most of society, the obvious next question is "better at being excellent to each other, or better at covering it up when they aren't?".

The isolated data point of "people in power are accused of using that power to harm others" is like... yes, and? That's kind of baseline for our species. And as a potential victim, reporting on misconduct is only useful to me if it updates the way I take precautions against it, by pointing out that the misconduct in a given community is notably different from that in the world at large.
Kenny · 1y · 3
No, it's not, especially given that 'whataboutism' is a label used to dismiss comparisons that don't advance particular arguments. Writing the words "what about" does not invalidate any and all comparisons.
Portia · 7mo · -1 / -2

That article had me horrified. But I was hoping the reactions would point to empathy and a commitment to concrete improvement. 

The opposite happened: the defensive and at times dismissive or demanding comments made it worse. It was the responses here and on the Effective Altruism forum that had me reassess EA-related groups as likely unsafe to work for.

This sounds like a systematic problem related to the way this community is structured, and the community response seems aimed not at fixing the problem, but at justifying why it isn't getting fixed, abusing rationality to frame abuse as normal and inevitable.

At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.

Like... an actual evil, amoral BBEG wizard? Is this something true rationalists now brag about?
Just because someone uses the rationality toolset doesn't make them a role model :(

habryka · 1y · 9
I wouldn't trust the news articles to report this accurately. Separately, HPMOR was also not over back then, so it is plausible that Michael owned himself by bragging about this before the professor's real identity was revealed.

It didn't have to be revealed. That Quirrell was Voldemort was obvious almost within the first chapter introducing him (e.g. already taken for granted in the earliest top-level discussion page in 2010), to the extent that fan debate over 'Quirrelmort' was mostly "he's so obviously Voldemort, just like in canon, but surely Yudkowsky would never make it that easy, so could he possibly be someone or something else? An impostor? Some sort of diary-like upload? A merger of Quirrell/Voldemort? Harry from the future?" Bragging about being the inspiration for a character so nakedly amoral, at best, that the defense in fan debates is "he's too evil to be Voldemort!" is not a great look, no matter when in the series he was doing that bragging.

MSRayne · 1y · -20 / -2 (collapsed comment)