ialdabaoth is banned

by Vaniver · 4 min read · 13th Dec 2019 · 68 comments


LW Moderation · Site Meta · Personal Blog

ialdabaoth is banned from LessWrong, because I think he is manipulative in ways that will predictably make the epistemic environment worse. This ban is unusual in several respects: it relies somewhat heavily on evidence from in-person interactions and material posted to other sites as well as posts on LessWrong, and the user in question has been an active user of the site for a long time. While this decision was made in the context of other accusations, I think it can be justified solely on epistemic concerns. I also explain some of the reasons for the delay below.

However, in the interests of fairness, and because we believe ideas from questionable sources can be valid, we’ll make edits he suggests to his post Affordance Widths so that it can fully participate in the 2018 Review. My hope is that announcing this now will cause the discussion on that post to be focused solely on the post rather than social coordination about whether he should or should not be banned. (Commentary on this decision should happen here.)

Some background context:

Back in September of 2018, I posted this comment about a discussion involving allegations of serious misconduct, and said LW was not the place to conduct investigations, but that it would be appropriate to link to findings once the investigation concluded.

As far as I'm aware, he has made no claims of either guilt or innocence; he went into exile, ceasing to post or comment on LessWrong. To the best of my knowledge, none of the panels that conducted investigations posted findings, primarily for reasons of legal liability, so there was never an obvious time to publicly and transparently settle his status (until now).

One of the primary benefits of courts is that they allow for a cognitive specialization of labor, where a small number of people can carefully collect information, come to a considered judgment, and then broadcast that judgment. Though a number of groups have run their own investigations and made calls about whether ialdabaoth is welcome in their spaces, generally choosing no, there has been no transparent and accountable process which has made public pronouncement on the allegations brought against him.

About six months ago, ialdabaoth messaged Raemon, asking if he was banned. Raemon replied that the team was considering banning him but had multiple conflicting lines of thought that hadn’t been worked through yet, and that if he commented Raemon or someone else would respond with another comment making that state of affairs transparent.

I think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations. This ban is not intended to provide a ruling either way on other allegations, as we have not conducted any investigation of our own into those allegations, nor do we plan to, nor do we think we have the necessary resources for such work.

It does seem important to point out that some of the standards used to assess that risk stem from processing what happened in the wake of the allegations. A brief characterization is that I think the community started to take more seriously not just the question "will I be better off adopting this idea?" but also the question "will this idea mislead someone else, or does it seem designed to?". If I had held my current standards in 2017, I think they would have sufficed to ban ialdabaoth then, or at least to identify the need to argue against the misleading parts of his ideas.

This processing was gradual, and ialdabaoth going into exile meant there wasn't any time pressure. I think it's somewhat awkward that we became comfortable with the status quo and didn't notice when a month and then a year had passed without us making this state transparent, or doing the discussion necessary to prepare this post. However, with one of his posts nominated for 2018 in Review, this post became urgent as well as important.

Some frameworks and reasoning:

In moderating LessWrong, I don’t want to attempt to police the whole world or even the whole internet. If someone comes to LessWrong with an accusation that a LW user mistreated them someplace else, the response is generally “handle it there instead of here.” This stems partly from a desire to keep LW free of politics and factionalism, and instead focused on the development of shared tools and culture, and partly from wanting issues to be settled in contexts that have the necessary information. That said, it also seems like sensible Bayesianism to keep evidence from the rest of the world in mind when judging behavior on the site, and to pay more attention to users who we expect to be problematic in one way or another.

But what does it mean that ideas from questionable sources can be valid? Argument screens off authority, but authority (positive or negative) has some effect. Consider these cases:

Suppose you are running a physics journal, and a convicted murderer sends you a paper draft; you might feel some disgust at handling the paper, but it seems to me that the correct thing to do is handle the paper like any other, and accept it if the science checks out and reject it if it doesn’t. If your primary goal is getting the best physics, blinded review seems useful; you don’t care very much whether or not the author is violent, and you care a lot about whether the thing they said was true. If, instead, the person was convicted of manufacturing data or the other sorts of scientific misconduct that are difficult to detect with peer review, it seems justified to simply reject the submission. You also might not want them to give a talk at your conference.

Suppose instead you are running a trading fund, and someone previously convicted of fraud sends you an idea for a new financial instrument. Here, it seems like you should be much more suspicious, not just of the idea but also of your ability to successfully notice the trap if there is one. It seems relevant now to check both whether the idea is true and whether or not it is manipulative. Rather than just performing a process that catches simple mistakes or omissions, one needs to perform a process that's robust to active attempts to mislead the judging process.

Suppose instead you’re running an entertainment business like a sports team, and someone affiliated with the team does something unpopular. Since the primary goal you’re maximizing is not anything epistemic, but instead how popular you are, it seems efficient to primarily act based on how the affiliation affects your reputation.

I think the middle case is closest to the situation we’re in now, for reasons like those discussed in comments by jimrandomh and by Zack_M_Davis. Much of ialdabaoth's output is claims about social dynamics and reasoning systems that seem, at least in part, designed to manipulate the reader, either by making them more vulnerable to predation or by making them more likely to ignore him or otherwise give him room to operate.

While we can’t totally ignore reputation costs, I currently think LessWrong can and should treat reputation costs as much less important than epistemic costs. I don’t think we should ban people simply for having bad reputations or committing non-epistemic crimes, but I think we should act vigorously to maintain a healthy epistemic environment, which means both being open and having an active immune system. This, of course, is not meant to be a commentary on how in-person gatherings should manage who is and isn't welcome, as the dynamics of physical meetups and social communities are quite different from those of websites. When the two intersect, we do take seriously our duty of care towards our users and people in general.

Plan for including content of banned users

In general, LessWrong does not remove the posts or comments of banned users, with the exception of spam. It seems worth sharing our rough plan for what to do if a post by a banned user passes through an annual review, though I expect the standard mechanisms we have in place for the review to handle this possibility gracefully.

As with all posts, the post will only be included with the consent of the author. If a post is controversial for any reason, we may decide that inclusion requires some sort of editor's commentary or inclusion of user comments or reviews, which would be shared with the author before they make their decision to consent or not to inclusion.
