Thank you Habryka (and the rest of the mod team) for the effort and thoughtfulness you put into making LessWrong good.
I personally have had few problems with Said, but this seems like an extremely reasonable decision. I'm leaving this comment in part to help make you feel empowered to make similar decisions in the future when you think it necessary (and ideally, at a much lower cost of your time).
I hereby voice strong approval of the meta-level approaches on display (being willing to do unpopular and awkward things to curate our walled garden, noticing that this particular decision is worth justifying in detail, spending several thousand words explaining everything out in the open, taking individual responsibility for making the call, and actively encouraging (!) anyone who leaves LW in protest or frustration to do so loudly), coupled with weak disapproval of the object-level action (all the complicating and extenuating factors still don't make me comfortable with "we banned this person from the rationality forum for being annoyingly critical").
If I were a moderator, I would have banned Jesus Christ Himself if He required me to spend one hundred hours moderating His posts on multiple occasions. Given your description here I am surprised you did not do this a long time ago. I admire your restraint, if not necessarily your wisdom.
I know what you mean, of course, but it is funny that you use Jesus as an example of someone unlikely to be banned when, historically, Jesus was in fact "banned". :)
Fwiw I've found Said's comments to be clear, crisp and valuable. I don't recall ever being annoyed by his comments, and I found him a most useful bloodhound for bad epistemic practices and rhetoric. In many cases, Said's comment is the only good, clear, crisp critique of vagueposting and applause-lighting.
The examples in this post don't seem compelling at all. One of the primary examples seems to be Duncan who comes off [from a distance] as thin-skinned and obscurantist, emotionally blowing up at very fair criticism.
Despite my disagreement, I endorse Habryka unilaterally making these kinds of decisions and approve of his transparency and conduct in this matter.
Farewell, lesswrong gadfly. You will be missed.
One of the primary examples seems to be Duncan who comes off [from a distance] as thin-skinned and obscurantist, emotionally blowing up at very fair criticism.
This is my view too. I remember once trying (I think on Facebook) to gently talk him out of being really angry at someone for making what I thought was a reasonable criticism, and he ended up getting mad at me too.
One of the primary examples seems to be Duncan who comes off [from a distance] as thin-skinned and obscurantist, emotionally blowing up at very fair criticism.
I don't think I link to a single Duncan/Said interaction in any of the core narratives of the post. I do link the moderation judgement of the previous Said/Duncan thread, but it's not the bulk of this post.
Like none of these comments:
link to any threads between Said and Duncan.
And the moderation judgement in the Said/Duncan thread also didn't really have much to do with Said's conduct in that thread, but with his conduct on the site in general.
You might still not find the examples compelling, but there is basically no engagement with Duncan that played any kind of substantial role in any of this.
As another outside observer I also got the impression that the Duncan conflict was the most significant of the ones leading up to the ban, since he wrote a giant post advocating for banning Said, left the site in a huff shortly thereafter, and seems to be the main example of a top contributor by your lights who said they didn't post due to Said.
Nah, you can see in the moderation history that we threatened Said with bans and moderation actions for many years before then. My honest best guess is that we would have banned Said somewhat earlier if not for the Duncan thread, though also that we wouldn't have given him a rate-limit around that time, but it's of course hard to tell.
My experience was that Said's behavior in the Duncan thread was among the most understandable cases of him behaving badly (because I too have found myself drawn into conflicts with Duncan that end up quite aggressive and at least tempt me to behave badly). That's part of why I don't link to any comments of his in the thread above (I might somewhere in there, but if so it's not intended as a particularly load-bearing part of the case).
I should comment publicly on this; I've talked with various people about it extensively in private. In case you just want my conclusion before my reasoning, I am sad but weakly supportive. An outline of six points, which I will maybe expand on if people ask questions:
I’m still not sure what Zack or Said think of the Royal Society example; Zack talks about it a bit in another comment on that page but not in a way that feels connected to the question of how to balance virtues against each other, and what virtues cultures should strive towards. (Said, in an email, strongly rejects my claim that there’s a difference between his culture of commenting and the Royal Society culture of commenting that I describe.)
This seems to be by far the most important crux; nothing else could've substantially changed attitudes on either side. Do environments widely recognized for excellence and intellectual progress generally have cultures of harsh and blunt criticism, and to what degree is their presence/absence load-bearing? This question also looks pretty important on its own, and the apparent lack of interest/attention is confusing.
Criticism is a pretty thankless job. People mostly do it for the status reward, but consider if you detect some potentially fatal flaw in an average post (not written by someone very high status), but you're not sure because maybe the author has a good explanation or defense, or you misunderstood something. What's your motivation to spend a lot of effort to write up your arguments? If you're right, both the post and your efforts to debunk it are quickly forgotten, but if you're wrong, then the post remains standing/popular/upvoted and your embarrassing comment is left for everyone to see. Writing up a quick "clarifying" question makes more sense from a status/strategic perspective, but I rarely do even that nowadays because I have so little to gain from it, and a lot to lose including my time (including expected time to handle any back and forth) and personal relations with the author (if I didn't word my comment carefully enough). (And this was before today's decision, which of course disincentivizes such low-effort criticism even more.)
A few more quick thoughts as I'm not very motivated to get into a long discussion given the likely irreversible nature of the decision:
I think in some sense both making top-level posts and criticism are thankless jobs. What is your motivation to spend a lot of effort to write up your arguments in top-level post form in the first place? I feel like all the things you list as making things unrewarding apply to top-level posts just as much as writing critical comments (especially in as much as you are writing on a topic, or on a forum, where people treat any reasoning error or mistake with grave disdain and threats of social punishment).
If you get rid of people like Said or otherwise discourage low-effort criticism, you'll just get less criticism not better criticism.
I don't buy this. I am much more likely to want to comment on LessWrong (and other forums) if I don't end up needing to deal with comment-sections that follow the patterns outlined in the OP, and I am generally someone who does lots of criticism and writes lots of critical comments. Many other commenters who I think write plenty of critique have reported similar.
Much of LessWrong has a pretty great reward-landscape for critique. I know that if I comment on a post by Steven Byrnes, or Buck, or Ryan Greenblatt or you, or Scott Alexander or many others, wit...
Top-level posts are not self-limiting (from a status perspective) in the way I described for a critical comment. If you come up with a great new idea, it can become a popular post read and reread by many over the years and you can become known for being its author. But if you come up with a great critical comment that debunks a post, the post will be downvoted and forgotten, and very few people will remember your role in debunking it.
I agree this is largely true for comments (largely by necessity of how comment visibility works)[1]. Indeed one thing I frequently encourage good commenters to do is to try to generalize their comments more and post them as top-level posts.
And as far as I can tell this is an enormously successful mechanism for getting highly-upvoted posts on LessWrong. Indeed, I would classify the current second most-upvoted post of all time on LessWrong as a post of this kind: https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer
Dialogues were also another attempt at making critique less self-limiting, by making it so that a conversation can happen at the same level as a post. I don't think that plan succeeded amazingly well (largely because dialogues ended up hard to read, and hard to coordinate between authors), but it is a thing I care a lot about and expect to do more work on.
The popular comments section on the frontpage has also changed this situation a non-trivial amount. It is now the case that if you write a very good critique that causes a post to be downvoted, that this will still result in your comment getting a lo...
I disagree. Posts seem to have an outsized effect and will often be read a bunch before any solid criticisms appear. They then spread even given high-quality rebuttals... if those ever materialize.
I also think you're referring to a group of people who typically write high-quality posts and handle criticism well, while others don't handle criticism well. Duncan is an example of this, despite my liking many of his posts.
As for Said specifically, I've been annoyed at reading his argumentation a few times, but then also find him saying something obvious and insightful that no one else pointed out anywhere in the comments. Losing that is unfortunate. I don't think there's enough "this seems wrong or questionable, why do you believe this?"
Said is definitely more rough than I'd like, but I also do think there's a hole there that people are hesitant to fill.
So I do agree with Wei that you'll just get less criticism, especially since I do feel like LessWrong has been growing implicitly less favorable towards quality critiques and more favorable towards vibey critiques. That is, another dangerous attractor is the Twitter/X attractor, wherein arguments do exist but they matter to the overall di...
FWIW I feel like I get sufficient status reward for criticism, and this moderation decision basically won't affect my behavior.
Now it's true that most of these comments are super long and high effort. But it's possible to get status reward for lower effort comments too, e.g. this, though it feels more like springing a "gotcha". Many of the examples of Said's critiques in the post at least seemed either deliberately inflammatory or unhelpful or targeted at some procedural point that isn't maximally relevant.
As for risking being wrong...
I think these status motivations/dynamics are active whether or not you consciously think of them, because your subconscious is already constantly making status calculations. It's possible consciously framing things this way makes it even worse, "hurts your motivation to comment" even more, but it seems unavoidable if we want to explicitly discuss these dynamics. (Sometimes I do deliberately avoid bringing up status in a discussion due to such effects, but here the OP already talked about status a bunch, and it seems like an unavoidable issue anyway.)
I have not read this post yet (I assume it's about more than just Said), but just to be clear: I personally trust you guys to ban people that are worth banning without writing thousands of words about it.
(Have read the post.) I disagree. I think overall habryka has gone to much greater pains than I think he should have had to, but I don't think this post is a part he should have skimped on. I would feel pretty negative about it if habryka had banned Said without an extensive explanation for why (modulo past discussions already kinda providing an explanation). I'd expect less transparency/effort for banning less important users.
My experience of Said has been mostly as described, a strong sense of sneer on mine and others posts that I find unpleasant.
I think there's a large swathe of experience/understanding that Said doesn't have, and which no amount of his Socratic questioning will ever actually create; and that questioning isn't designed for Said to try to understand, but to punish others for not making sense within Said's worldview.
Thank you for this decision.
But I think a lot of Said's confusions would actually make more sense to Said if he came to the realization that he's odd, actually, and that the way he uses words is quite nonstandard, and that many of the things which baffle and confuse him are not, in fact, fundamentally baffling or confusing but rather make sense to many non-Said people.
(My own writing, from here.)
As I have said before, on the object-level topic of Said Achmiz, I have written all I care about here, and I shall not pollute this thread further by digressing into that again. My thoughts on this topic are well-documented at those links, if anyone is interested.
It's an understatement to say I think this is the wrong decision by the moderators. I disagree with it completely and I think it represents a critical step backwards for this site, not just in isolation but also more broadly because of what it illustrates about how moderators on this site view their powers and responsibilities and what proper norms of user behavior are. This isn't the first time I have disagreed with moderators (in particular, Habryka) about matters I view as essential to this site's continued epistemic success,[1] but it will be the last.
I have written words about why I view Said and Said-like contributions as critical. But words are wind, in this case. Perhaps actions speak louder. I will be deactivating my account[2] and permanently quitting this site, in protest of this decision.
It doesn't make me happy to do so, as I've had some great interactions on here that have helped me learn and ...
This seems like not a useful move. Your contributions, in my view, consistently avoid the thing that makes Said's a problem. Your criticisms will be missed.
Seconded; I consistently find your comments much more valuable, with ~zero sneer. I would be dismayed by moderation actions toward you, while supporting those against Said. You might not have a sense of how his comments are different, but you automatically avoid the costly things he brings.
I think you shouldn't leave, and Habryka shouldn't have so prominently talked about leaving LW as something one should consider doing in response to this post. LW is the best place by far to discuss certain topics, and nowhere else provides comparable utility if one was interested in these topics. It's technically true but misleading to say "There are many other places on the internet to read interesting ideas, to discuss with others, to participate in a community." This underplays not only the immense value that LW provides to its members but also the value that a member could provide to LW and potentially to the world by influencing its discourse.
For your part, I think "quitting in protest" is unlikely to accomplish anything positive, and I'd much rather have your voice around than the (seemingly tiny) chance that your leaving causes Habryka to change his mind.
My first reaction is that this is bad decision theory.
It makes sense to actualize on strikes when the party it's against would not otherwise be aware of or willing to act on the preferences of people whose product they're utilizing. It can also make sense if you believe the other party is vulnerable to coercion and you want to extort them. If you do want fair trade and credibly believe the other party is knowing and willing, the meta strategy is to simply threaten your quorum, and never actually have to strike.
We don't seem to be in the case where an early strike makes sense. The major reaction to this post is not of an unheard or silenced opposition, but various flavours of support. In order for the moderators to accede to your demand, they would have to explicitly overrule a greater weight of other people's preferences on the basis that those people will be less mean about it. But we're on LessWrong; people here are not broadly open to coercion.
Additionally, we also don't seem to be in a world where your preferences have been marginalized beyond the degree that they're the minority preference. The moderators clearly incurred a huge personal cost and took a huge time delay precisely because pr...
Let me join the chorus: please do not leave in protest; your comments here do some of the same positive things that Said's comments do, and your leaving would have a bunch of the negative consequences of Said's banning without the positive ones (because, at least so it seems to me, you are much less annoying than Said).
(For the avoidance of doubt, I find you a net-positive commenter here for reasons other than that you do some of the useful things Said has done, but that particular aspect seems the most relevant on this occasion.)
I read the whole post and appreciated the detail and the decision. I have had discussions with Said that were valuable, and I am sad to see that he didn't change what I consider to be a bad pattern in order to continue the version of it that's good. I've mostly just been impressed with sunwillrise's version of it lately, for example. I also try to do a version of this occasionally, and it's not clear to me my contributions are uniformly good. Input welcome. But I sometimes go through and try to find posts with no comments, see if I have anything to say about them, and try both to describe something I found positive and to ask about something that confused me. Hopefully that's been helpful.
Many years ago I lurked on LessWrong, making a very occasional comment but finding the ideas and discussion fascinating and appealing. I believe I am not as smart as the average commenter here, and I am certainly less formally educated. I eventually drifted away to follow other interests and did not put in the work to learn enough to feel like I could contribute meaningfully. I specifically recall Said Achmiz as being a commenter I was afraid of and did not want to engage with. I didn't leave entirely because of Said; it was more about the effort of learning all the concepts, but maybe 1/8 of my decision was based on him. I imagine his attitude towards this will be: if I'm too much of a coward to risk an unknown internet commenter saying possibly bad things about my own comments, then I really don't belong here anyway. Which, maybe it's true. I don't know if I will try again in the upcoming 3 years, but I'm more likely to than before Said was banned.
Context: I much more recently gravitated to the Duncansphere, as it were, and am kinda on the fringes of that these days (I missed the Duncan/Said thing, and only know about it from comments on this post). I was encouraged there to come here and post this anecdote.
(It was me, and in the place where I encouraged DrShiny to come here and repeat what they'd already said unprompted, I also offered $5 to anybody who disagreed with the Said ban to please come and leave that comment as well.)
I am disappointed and dismayed.
This post contains what feels to me like an awful lot of psychoanalysis of the LW readership, assertions like "it is clear to most authors and readers", and a second-person narrative about what it is like to post here:
After all of this you are left questioning your own sanity, try a bit to respond more on the object-level, and ultimately give up feeling dejected and like a lot of people on LessWrong hate you. You probably don't post again.
And like, man, is that true? Did you conduct a poll? I didn't get a survey. You pay some attention to Zack's perspective on Said, maybe because it'd be kind of laughable to pretend you hadn't heard about it; but I'm one of the less-strident people Zack commiserates with about Said's travails, and you had access to my opinion on the matter if you were willing to listen to a wheel that only squeaked a little bit. My comment is toplevel and has lots of votes and netted positive on both karma and agreement and most of the nested remarks are about whether it was polite of me to compare a non-Said person to a weird bug.
This post spends so much time talking about the complaints you've gotten, the exp...
I am sorry you didn't like the post! I do think if you were still more active here, I would have probably reached out in some form (I am aware of that one comment you left a while ago, and disagreed with it).
I generally respect you and wish you participated more on the site and also do think of you as someone whose opinion I would be interested in on this and other topics.
After all of this you are left questioning your own sanity, try a bit to respond more on the object-level, and ultimately give up feeling dejected and like a lot of people on LessWrong hate you. You probably don't post again.
And like, man, is that true? Did you conduct a poll?
I think the narrative above pretty accurately describes the experiences of a bunch of authors. I only ran it by like 2-3 non-LW team members since this post already took an enormous amount of time to write. I am of course not intending to capture some kind of universal experience on LessWrong, and of course definitely wouldn't be aiming for that section to represent your experience on LessWrong, since I don't think you ever had any of the relevant interactions with Said, at least since I've been running LW.
...This post contains what feels
Okay, but... why. Why do you think that.
I mean, I really tried to explain a lot of my models for what I think the underlying generators of this are. That's why the post is 15,000 words long.
Is there a reason you think that, which other people could inspect your reasoning on, which is more viewable than unenumerated "complaints"?
To be clear, LessWrong is not a democracy, and while I think the complaints are important, I don't consider them to be the central part of this post. I tried to explain in more mechanistic terms what I think is going wrong in conversations with Said, and those mechanistic terms are where my cruxes for this decision are located. If I changed my mind on those, I would make different decisions. If all the complaints disappeared, but I still had the same opinions on the underlying mechanics, then I would still make the same decision.
Again, I believe the complaints exist. How many, order of magnitude? Were they all from unique complainants?
I link to something like 5-15 comment threads in the post above. Many of the complaints are on those comments threads and so are public. See for example the Benquo ones that I have quot...
Cool, I think this clarified a bunch. Summarizing roughly where I think you are at:
In moderation space, there is one way to run things that feels pretty straightforward to you, which you here for convenience called "modularity", where you treat moderation as a pragmatic thing for which "I don't have the resources to deal with this kind of person" without much explanation or elaboration is par for the course. You are both confused, and at least somewhat concerned about what I am trying to do in the OP, which is clearly not that thing.
There are at least two dimensions on which you feel concerned/confused about what is going on:
I think your model of me as represented in this comment is pretty good and not worth further refining in detail.
I read something into those comments - I might even possibly call it "disdain", but - "disdain (neutral)", not "disdain (derogatory)". It just... doesn't bother me, that he writes in a way that communicates that feeling. It certainly bothers me less than when (for example) Eliezer Yudkowsky communicates disdain, purely as a stylistic matter. If I thought Said would want to be on my Discord server I would invite him and expect this to be fine. (Eliezer is on my Discord server, which is also usually fine.)
It bothers you. I'm not trying to argue you out of being bothered. I'm not trying to argue the complainants out of being bothered. It bothering you would, under the Modularity regime, be sufficient.
But you're not doing that. You're trying to make the case that you are objectively right to feel that way, that you have succeeded at a Sense Motive check to detect a pattern of emotions and intentions that are really there. I don't agree with you, about that.
But I don't have to. I don't have your job. (I wouldn't want it.)
But you’re not doing that. You’re trying to make the case that you are objectively right to feel that way, that you have succeeded at a Sense Motive check to detect a pattern of emotions and intentions that are really there. I don’t agree with you, about that.
I think the claim I'd make is not necessarily that Oli's Sense Motive check has succeeded, but that Oli's Sense Motive check correlates much better with other people's Sense Motive checks than yours does, and that ultimately that's what ends up mattering for the effects on discourse.
Like, in the sense that someone's motives approximately only affect LessWrong by affecting the words that they write. So when we know the words they write, knowing their motives doesn't give us any more information about how they're going to affect LessWrong. For some people, there's something like... "okay, if this person actually felt disdain then the words they write in future are likely to be _, and if not they're likely to be _ instead; and we can probably even shift the distribution if we ask them hey we detect disdain from your comment, is that intended?". But we don't really have that uncertainty with Said. We know how he's going to write, whether he feels disdain or not.
I am somewhat interested in his True Motives, but I don't think they should be relevant to LW moderation.
(This is not intended to say "Said's comments are just fine except that people detect disdain".)
I mean, to be clear, I did have like 20+ hours of conversation with many authors and contributors who had very strong feelings on this topic just as part of writing this post[1], with many different disagreeing viewpoints, so I think we did a lot more than "run a focus group".
Not to mention the many more conversations I've had over the last decade about this
Funnily enough I think I kind of feel about Duncan the same way Oli feels about Said. I detect a sinister and disquieting pattern in his writing that I cannot prove in a court of law or anything that is slightly larping as one. But I'm not trying to moderate any space he's in.
Crimes that are harder to catch should be more harshly punished
Please, don't do this.
Your reasoning amounts to "we need to increase the punishment to compensate for all the false negatives".
If the only kind of error that existed was false negatives, you might have a point. But it isn't. False positives exist too. And crimes that are harder to catch are probably going to have more false positives. Harsher punishments also create bigger incentives for either false positives, or for standards that make everyone guilty of serious crimes all the time, thus letting anyone be punished at the whim of the moderators while pretending that they are not.
Agree that you need to account for false positives (and the above math didn't do that)!
Sometimes crimes are harder to catch, but you can still prove they happened without much risk of false positives. I do sure agree that the kind of misbehavior discussed in this post is at risk of false positives, so taking that into account is quite important for finding the right punishment threshold. Generally appreciate the reminder of that.
Heads-up: I am nearing the limit of the roughly 10 hours I set aside for engaging on this, so I'll probably stop responding to things soon (and also if someone otherwise wants to open up this topic again in e.g. a top-level post, I'll probably just link back to the discussion that has been had here, and not engage further).
Ok, I think that's a wrap for me. Thanks all for the discussion so far. I am now hoping to get back to all the other work I am terribly behind on.
My two cents. There's a certain kind of posts on LW that to me feel almost painfully anti-rational. I don't want to name names, but such posts often get highly upvoted. Said was one of very few people willing to vocally disagree with such posts. As such, he was a voice for a larger and less vocal set of people, including me. Essentially, from now on it will be harder to disagree with bullshit on LW - because the example is gone, and you know that if you disagree too hard, you might become another example. So I'm not happy to see him kicked out, at all.
My thoughts are similar to yours although I'm more willing to tolerate posts that you call "almost painfully anti-rational" (while still wishing Said was around to push back hard on them). I think in the early stages of genuine intellectual progress, it may be hard to distinguish real progress from "bullshit". I would say that people (e.g. authors of such posts) are overly confident about their own favorite ideas, rather than that the posts are clearly bullshit and should not have appeared. My sense is that it would be a bad idea to get rid of such overconfidence completely because intellectual progress is a public good and it would be harder to motivate people to work on some approach if they weren't irrationally optimistic about it, but equally bad or worse if there was little harsh or sustained criticism to make clear that at least some people think there are serious problems with their ideas.
FWIW my personal intention -- only time will tell whether I actually stick to it -- is to be a little more vigorous in disagreeing with things that I think likely to be anti-rational, precisely because Said will no longer be doing it.
Bad call. You don't exactly have an unlimited supply of people who have a solid handle on the formative LW mindset and principles from 15 years ago and who are still actively participating on the forums, and latter-day LessWrong doesn't have as much of a coherent and valuable identity to stand firmly on its own.
A key idea in the mindset that started LessWrong is that people can be wrong. Being wrong can exist as an abstract thing to begin with; it's not just a euphemism for poor political positioning. And people in positions of authority can be wrong. Kind, well-meaning, likable people can be wrong. People who have considerate friendly conversations that are a joy to moderate can be wrong. It's not always easy to figure out right and wrong, but it is possible, and it's not always socially harmonious to point it out loud, but it used to be considered virtuous still.
A forum that has principles in its culture is going to have cases where moderation is annoying around something or someone who doggedly sticks to those principles. It's then a decision for the moderators whether they want to work to keep the forum's principles alive or to have a slightly easier time moderating in the future.
I'm pretty sure people drifted away because of a more complex set of dynamics and incentives than "Said might comment on their posts" and I don't expect to see much of a reversal.
Fwiw, my interaction with LW and more broadly the rationalist scene in the Bay Area was most of what formed my current stance that communities I want to participate in operate on whitelists, not blacklists. This is such a fundamental shift that it affects everything about how I socialize, and it has made my life much better. That banning someone requires a post of this effort level predicts that lots and lots of other good things aren't happening, and that cost is mostly invisible.
Good work.
The hardest part of moderation is the need to take action in cases where someone is consistently doing something that imposes a disproportionate burden on the community and the moderators, but which is difficult to explain to a third party unambiguously.
Moderators have to be empowered to make such decisions, even if they can’t perfectly justify them. The alternative is a moderation structure captured by proceduralism, which is predictably exploitable by bad actors.
That said — this is Less Wrong, so there will always be a nitpick — I do think people need to grow a thicker skin. I have so many friends who have valuable things to say, but never post on LW due to a feeling of intimidation. The cure for this is, IMO, not moderating the level of meanness of the commentariat, but encouraging people to learn to regulate their emotions in response to criticism. However, at the margins, clipping off the most uncharitable commenters is doubtless valuable.
Like seemingly many others, I found Said a mix of "frequently incredibly annoying, seemingly blind to things that are clear to others, poorly calibrated in the confidence with which he expresses things, occasionally saying obviously false things[1]" and "occasionally pointing out the-Emperor-has-no-clothes in ways that are valuable and few other people seem to do".
(I had banned him from my personal posts, but not from my frontpaged posts.)
And I wish we could get the good without the bad. It sure seems like that should be possible. But in practice it doesn't seem to exist much?
I have occasionally noticed in myself that I want to give some criticism; I could choose to put little effort in but then it would be adversarial in a way I dislike, or I could choose to put a bunch of effort in to make a better-by-my-lights comment, or I could just say nothing; and I say nothing.
I think this is less of a loss than I think Said thinks it is. (At least as a pattern. I don't know if Said has much opinion about my comments in specific.) But I do think it's a bit of a loss. I think it's plausible that a version of me who was more willing to be disagreeable and adversarial would have left some valuabl...
I don't spend enough time in the LW comments to have any idea who Said is or to be very invested in the decision here. I think I agree with the broad picture here, and certainly with the idea that an author is under no obligation to respond to comments, whether because the author finds the comments unhelpful or overly time-consuming or for whatever other reason. That said, I am mostly commenting here to register my disagreement with the idea of giving post authors any kind of moderating privileges on their posts. That just seems like an obviously terrible idea from an epistemic perspective. Just because a post author doesn't find a comment productive doesn't mean someone else won't get something out of it, and allowing an author to censor comments therefore destroys value. LW is the last site I would have expected to allow such a thing.
I think ultimately someone needs to do the job of moderation, and in as much as we want to allow for something like an archipelago of cultures, the LW moderation team really can't do all the moderation necessary to make such things possible.
Note that there are a bunch of restrictions on author moderation:
In general, I am not a huge fan of calling all deletion censorship. You are always welcome to make a new top-level post or shortform with your critique or comments. The general thing to avoid is to not always force everyone into the same room, so to speak.
I do think an alternative is for the LW team to do a lot more moderation, and more opinionated moderation, but I think this is overall worse (both because it's a huge amount of work, and because it centralizes the risk so that if we end up messing up or being really dumb about ...
This is the mentioned comment thread under which Said can comment for the next two weeks. Anyone can ask questions here if you want Said to have the ability to respond.
Said, feel free to ask questions of commenters or of me here (and if you want to send me some statement of less than 3,000 words, I can add it to the body of the post, and link to it from the top).
(I will personally try to limit my engagement with the comments of this post to less than 10 hours, so please forgive if I stop engaging at some point, I just really have a lot of stuff to get to)
Edit: And the two weeks are over.[1]
I decided to not actually check the "ban" flag on Said's account, on account of trusting him not to post and vote under his account; this allows him to keep accessing any drafts he has on his account, and other things that might benefit from being able to stay logged in.
I am, of course, ambivalent about harshly criticizing a post which is so laudatory toward me.[1] Nevertheless, I must say that, judging by the standards according to which LessWrong posts are (or, at any rate, ought to be) judged, this post is not a very good one.
The post is very long. The length may be justified by the subject matter; unfortunately, it also helps to hide the post’s shortcomings, as there is a tendency among readers to skim, and while skimming to assume that the skimmed-over parts say basically what they seem to, argue coherently for what they promise to argue for, do not commit any egregious offenses against good epistemics, etc. Regrettably, those assumptions fail to hold for many parts of the post, which contains a great deal of sloppy argumentation, tendentious characterizations, attempts to sneak in connotations via word choice and phrasing, and many other improprieties.
The problems begin in the very first paragraph:
For roughly [7 years] have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.
This phrasing assumes that there’s something to “understand” (...
(After all, I too can say: “For roughly 7 years, I have spent many hours trying to get Oliver Habryka to understand and learn how to run a discussion forum properly by my lights.” Would this not sound absurd? Would he not object to this formulation? And rightly so…)
FWIW, this seems to me like a totally fine sentence. The "by my lights" at the end is indeed communicating the exact thing you are asking for here, trying to distinguish between a claim of obvious correctness, and a personal judgement.
Feel free to summarize things like this in the future, I would not object.
Of course, the truth of this claim hinges on how many is “few”. Less than 10? Less than 100?
It of course depends on how active someone is on LessWrong (you are not as widely known as Eliezer or Scott, of course). My modal guess would be that you would come in around 20th place in how often people would bring up your name. I think this would be an underestimate of your effect on the culture. If someone else thinks this is implausible, I would be happy to operationalize, find someone to arbitrate, and then bet on it.
...To which Said responded by trying to rally up a group of people attacking anyone who dared to use moderati
Just noting that
one should object to tendentious and question-begging formulations, to sneaking in connotations, and to presuming, in an unjustified way, that your view is correct and that any disagreement comes merely from your interlocutor having failed to understand your obviously correct view
is a strong argument for objecting to the median and modal Said comment.
I think this reply is rotated from the thing that I'm interested in--describing vice instead of virtue, and describing the rule that is being broken instead of the value from rule-following. As an analogy, consider Alice complaining about 'lateness' and Bob asking why Alice cares; Alice could describe the benefits of punctuality in enabling better coordination. If Alice instead just says "well it's disrespectful to be late", this is more like justifying the rule by the fact that it is a rule than it is explaining why the rule exists.
But my guess at what you would say, in the format I'm interested in, is something like "when we speak narrowly about true things, conversations can flow more smoothly because they have fewer interruptions." Instead of tussling about whether the framing unfairly favors one side, we can focus on the object level. (I was tempted to write "irrelevant controversies", but part of the issue here is that the controversies are about relevant features. If we accept the framing that habryka knows something that you don't, that's relevant to which side the audience should take in a disagreement about principles.)
That said, let us replace the symbol with the s...
A commenter writes:
If I were a moderator, I would have banned Jesus Christ Himself if He required me to spend one hundred hours moderating His posts on multiple occasions. Given your description here I am surprised you did not do this a long time ago. I admire your restraint, if not necessarily your wisdom.
This strikes me as either deeply confused, or else deliberately… let’s say “manipulative”[1].
Suppose that I am a moderator. I want to ban someone (never mind why I want this). I also want to seem to be fair. So I simply claim that this person requires me to spend a great deal of effort on them. The rest of the members will mostly take this at face value, and will be sympathetic to my decision to ban this tiresome person. This obviously creates an incentive for me to claim, of anyone whom I wish to ban, that they require me to spend much effort on them.
Alright, but still, can’t such a claim be true? To some degree, yes; for example, suppose that someone constantly lodges complaints, makes accusations against others, etc., requiring an investigation each time. (On the other hand, if the complaints are valid and the accusations true, then it seems odd to say that it’s the compla...
The most charitable interpretation I can think of is that Elizabeth meant you should have added “I think that...” or ”...for me” specifically to the line "Also, it comes from CFAR, which is an anti-endorsement."
But regardless, it seems crazy that your comment was downvoted to -17 (-16 now, someone just upvoted it by 1) and got a negative mod judgment for this.
Calling an author a “coward” for banning you from their post
FYI, that link goes to a very weird URL, which I doubt is what you intended.
The link you had in mind, I am sure, is to this thread. And your description of that thread, in this comment and in the OP, is quite dishonest. You wrote:
Said took to his own shortform where (amongst other things) he and others called that author a coward for banning him
Calling an author a “coward” for banning you from their post
In ordinary conversation between normal people, I wouldn’t hesitate to call this a lie. Here on LessWrong, of course, we like to have long, nuanced discussions about how something can be not technically a lie, what even is “lying”, etc., so—maybe this is a “lie” and maybe not. But here’s the truth: the first use of the word “coward” in that thread was on Gordon’s part. He wrote:[1]
I didn’t have to say anything. I could have just banned you. But I’m not a coward and I’ll own my action. I think it’s the right one, even if I pay some reputational cost for it.
And I replied:
...I’m not a coward
Well, I wasn’t going to say it, but now that you’ve denied it explicitly—sorry, no, I have to disagree. Banning critics fr
That’s the picture that someone would come away with, after reading your characterization. And, of course, it would be completely inaccurate.
I'm not sure the more accurate picture is flawless behavior or anything, but I do think I definitely had an inaccurate picture in the way Said describes.
I am surprised there are so few - perhaps in that calculation I was mistakenly tracking some comments you made in other posts that I didn't directly participate in.
Nevertheless, every single example you bring up above was in fact unpleasant for me, some substantially so - while reasonable conclusions were reached (and in many cases I found the discussion fruitful in the end), the tone in your comments was one that put me on edge and sucked up a lot of my mental energy. I had the feeling that to interact with you at all was an invitation to be drawn into a vortex of fact-checking and quibbling (as this current conversation is a small example of).
It is not surprising to me that you find all of these conversations unobjectionable. To me, your entrance to my comment threads was a minor emergency. To you, it was Tuesday.
I stand by the claim that a plurality of my unpleasant interactions on this site involved you - this is not a high bar. I do not recall another user with whom I had more than one.
I remain confused as to whether banning you is the correct move for the health of the site in general. The point I was trying to make was along the lines of [for a class of writers like alkjash, removing Said Achmiz from LessWrong makes us feel more relaxed about posting].
I spent ~2 hours reading the comments, and I just want to say I regret it. The comments are painful to evaluate in an unbiased way (very combative) and overall it doesn't really matter.
What you're doing here is conflating contempt based on group membership with contempt based on specific behaviors. Sneer-clubbers will sneer at anyone they identify as a Rationalist simply for being a Rationalist. Said Achmiz, in contrast, expresses some amount of contempt for people who do fairly specific and circumscribed things like write posts that are vague or self-contradictory or that promote religion or woo. Furthermore, if authors had been willing to put a disclaimer at the top of their posts along the lines of "This is just a hypothesis I'm considering. Please help me develop it further rather than criticizing it, because it's not ready for serious scrutiny yet." my impression is that Said would have been completely willing to cooperate. But possible norms like that were never seriously considered because, in my opinion, LW's issue is not the "LinkedIn attractor" but the "luminary attractor". I think certain authors here see how Eliezer Yudkowsky is treated by his fans and want some of that sweet acclamation for themselves, but without legitimately earning it. They want to make a show of encouraging criticism, but only in a kayfabe, neutered form that allows them to smoothly answer in a way that only reinforces their status. And Oliver Habryka and the other mods apparently approve of this behavior, or at least are unwilling to take any effective steps to curb it, which I find very disappointing.
You say:
Furthermore, if authors had been willing to put a disclaimer at the top of their posts along the lines of "This is just a hypothesis I'm considering. Please help me develop it further rather than criticizing it, because it's not ready for serious scrutiny yet." my impression is that Said would have been completely willing to cooperate.
Out of curiosity, I clicked on the first post that Said received a moderation warning for, which is Ray's post 'Musings on Double Crux (and "Productive Disagreement")'. You might notice the very first line of that post:
Epistemic Status: Thinking out loud, not necessarily endorsed, more of a brainstorm and hopefully discussion-prompt.
It's not the exact kind of disclaimer you proposed here (it importantly doesn't say that readers shouldn't criticize it), but it also clearly isn't claiming any kind of authority or fully worked-out theory, and is very explicit about its draft status. This didn't change anything about Said's behavior as far as I can tell, resulting in a heavily downvoted comment and then a moderator warning.
There are also multiple other threads (which I don't have the time to dig up) in which Said ma...
That's not true! Did you read the very first moderation conversation that we had with Said that is quoted in the OP?
After the comment above, we reached out to Said privately and Elizabeth had something like an hour long chat conversation with him asking him what we need to do to get him to change his behavior, to which his response was:
...Buuuut what's going on here is that - and this is imo unfortunate - the website you guys have built is such that posting or commenting on it provides me with a fairly low amount of value
This is something I really do find disappointing, but it is what it is (for now? things change, of course)
So again it's not that I disagree with you about anything you've said
But the sort of care / attention / effort w.r.t. tone and wording and tact and so on, that you're asking, raises the cost of participation for me above the benefit
(Another aspect of this is that if I have to NOT say what I actually think, even on e.g. the CFAR thing w.r.t. Double Crux, well, again, what then is the point)
(I can say things I don't really believe anywhere)
[...]
If the takeaway here is that I have to learn things or change my behavior, well - I'm not averse in principle to doing that
Random thought: maybe there were disproportionate gains to be had by getting Said to involve more humor in his messaging and branding him the official Fool of Lesswrong.com?
It seems the community does indeed get a service out of Said shooting down low-quality communication, and socially limiting that form of communication to his specific role might have insulated the wider social implications, so that most of the value would have been preserved either way?
which would seem to indicate that a relatively small nudge would have tipped his contributions to the positive side.
Just to be clear, this overall does not strike me as a close call. The situation seems to me more related to the section on "Crimes that are harder to catch should be more socially punished" plus some other dynamics. My epistemic state changed a lot over the years, but not in a way that produced thin margins: some important consideration, or some part of my model, would shift, and this would switch things from "in expectation this is extremely costly" to "in expectation what Said is doing is quite important".
Something being a difficult call to make does not generally mean that it also needed to be a close call.
Also, if the door to Said changing his behavior was so completely closed, I'm really confused about what all those hundreds of hours were spent on.
I mean, we tried anyways, but I do think it was overall a mistake, and a reasonable thing to do at the time would have been to respond with "well, sorry, if you as a commenter are already pre-empting that you are not willing to change basically at all based on moderator feedback, then yeah, goodbye, farewell, good luck, we really need more cooperation than that". Elizabeth advocated for this IIRC, and I instead tried to make things work out. I think Elizabeth was ultimately right here.
I think the people who talk as though the contested issue here is Said's disagreeableness combined with him having high standards are missing the point.
Said Achmiz, in contrast, expresses some amount of contempt for people who do fairly specific and circumscribed things like write posts that are vague or self-contradictory or that promote religion or woo.
If it was just that (and if by "posts that are vague" you mean "posts that are so vague that they are bad, or posts that are vague in ways that defeat the point of the post"), I'd be sympathetic to your take. However, my impression is that a lot more posts would trigger Said's "questioning mode." (Personally I'm hesitant to use the word "contempt," but it's fair to say it made engaging more difficult for authors, and his comments did sometimes involve what I think of as "sneer tone.")
The way I see it, there are posts that might be a bit vague in some ways but they're still good and valuable. This could even be because the post was gesturing at a phenomenon with nuances where it would require a lot of writing (and disentanglement work) to make it completely concise and comprehensive, or it could be because an author wanted to share an i...
I feel like Said not only has a personal distaste for that sort of "post that contains bits that aren't pinned down," but it also seemed like he wouldn't get any closer to seeing the point of those posts or comments when it was explained in additional detail.
If a post starts off vague and exploratory, on a topic that isn't very easy to think/write about, it would make sense that it usually couldn't be clarified enough to meet Said's standards within a few back-and-forth comments.
That’s pretty frustrating to deal with for authors and other commenters.
Yes, but I think that's in part because of the nature of intellectual progress, and in part because there are so few people like Said, who is incentivized (by his own personality) to push back hard and persistently on this kind of post (so people are not used to it). I think it's also in part due to the tone that he typically employs, which he theoretically could change, but that seems connected with his personality in such a way that we seemingly couldn't get one without the other.
As a tenured (albeit perhaps now 'emeritus') member of the "generally critical commentator crew", I think this is the wrong decision (cf.). As the OP largely anticipates the reasons I would offer against it, I think the disagreement is a matter of degrees among the various reasons pro and con. For a low resolution sketch of why I prefer my prices of 'pro tanto' to the moderators:
I think it would be a good norm to never strong-downvote someone you're debating, no matter how carefully you've read them, because it's just too easy to be biased in such situations, and it makes people suspicious/resentful/angry (due to thinking that the vote is biased/unfair, and having no recourse or ability to hold anyone accountable), which is not conducive to having calm and productive discussions. Rather surprised that you don't support or follow this.
I somewhat agree, and I apply a substantially higher bar to downvoting people I am debating, especially in non-moderation discussions (in the threads on this post, I abstained from voting on a lot of his replies to me, though less so on his replies to others, e.g. the Vaniver thread).
As a site moderator my job is often messier and I think allows less of this principle than it does for others. In many cases where I would encourage other people to just "downvote and move on", I often do not have that choice, as the role of actually explaining the norms of the space, justifying a moderation decision, or explaining how the site works falls on me. In many cases, if I didn't vote on those comments, the author would not get the appropriate feedback at all.
Another thing that I think is important is to have gradual escalation. It is indeed better for someone to be downvoted before they are banned. As a moderator, voting is the first step of moderation. Moderators should vote a lot, and pay attention to voting patterns, and how voting goes wrong, because it's a noisy measure and the moderators are generally in the best position to remove the most distortions. Most moderation s...
It's not obvious to me that this is dumb. If two people are super angry at each other, that conversation seems likely to create more heat than light.
I'm not a regular user of LW, but I wanted to weigh in anyway. The style of endless asymmetric-effort criticism can be very wearing on people with perfectionist or OCD-like tendencies. I am, sadly, one of those people. In my head is a multi-faced voice of rage and criticism that constantly second guesses my decisions and thoughts and says many of the same things about anyone else's work or life or decisions. This kind of thing is one of the faces, able to find fault in anything and treat it all with importance both high and invariant over any sort of context. I think the voice is something like an IFS firefighter. In fact, here he is now:
wow. You come to LessWrong (stop abbreviating) and you can't even be bothered to put five seconds into reading Kaj's Unlocking the Emotional Brain summary to see if it really is a firefighter and not a protector?
It's exhausting and demoralizing. This is far from the only component, to be fair, and I actually don't doubt that Said is honestly trying to make the world a better place... but this particular flavor of criticism is not making things better. It can be done well, but this isn't it. This makes people, over time and without really notici...
I'm very good friends with someone who is persistently critical and it has imo largely improved my mental health, fwiw, by forcing me to construct a functioning and well-maintained ego which I didn't really have before.
I feel vaguely good about this decision. I've only had one relatively brief round of Said commenting, but it's not free.
If Said returns, I'd like him to have something like a "you can only post things which Claude with this specific prompt says it expects to not cause <issues>" rule, and maybe a LLM would have the patience needed to show him some of the implications and consequences of how he presents himself.
I also feel vaguely good about it, but I feel decisively bad about this suggestion!
I've been investigating LLM-induced psychosis cases, and in the process have spent dozens of hours reading through hundreds if not thousands of possible cases on reddit. And nothing has made me appreciate Said's mode of communication (which I have a natural distaste towards) more than wading through all that sycophantic nonsense slop!
In particular, it has made it more clear to me what the epistemic function of disagreeableness is, and why getting rid of it completely would be very bad. (I'm distinguishing 'disagreeableness' here from 'criticism', which I believe can almost always be done in an agreeable way.) Not something I really would have disagreed with before (ha), but it helps me to see a visceral failure mode of my natural inclination to really drive the point home.
FWIW, no need to anonymize if this was an attempt to lightly protect me, this was me:
Last month, a user banned Said from commenting on his posts. Said took to his own shortform where (amongst other things) he and others called that author a coward for banning him.
Also FWIW, I've had some genuinely positive interactions with Said in the last couple weeks. I was as surprised as anyone. I don't know if it's because he was trying to be on his best behavior or what, but if that was how Said commented on everything, I'd be very happy to see him unbanned (I had even had the idea that if we continued to have positive interactions I would unban him after whatever felt like enough time for me to believe in the new pattern).
Now, one might think that it seems weird for one person to be able to derail a comment thread.
This does not seem weird to me at all. LW is a scary place for many newcomers, and many posts get 0–1 comments, and one comment that makes someone feel dumb seems likely to result in their never posting again.
I strongly agree that it's important to avoid the LinkedIn attractor; I simultaneously think that we should value newcomers and err at least a little bit on the side of being gentle with them.
From my very much outside view, extending the rate limiting to 3 comments a week indefinitely would have solved most of the stated issues.
I have two feature requests in response to this class of concerns.
Problem statement: authors feel pressure to respond to comments even if they think responding is low value. Meanwhile, readers hesitate to comment because they do not wish to impose costs (response costs or social costs) on the author.
Solution: authors should be able to tag a comment with an emoji to indicate why they are choosing not to respond. LessWrong already has this via emoji responses, and I have used them for this purpose (as a comment author). A beneficial side effect is that emojis can't be karma-voted, further reducing social pressure. My feature requests aim to improve this avenue.
Tiny: remove emoji question marks. For example, the emoji that says "Seems offtopic?" can just be "Offtopic", like "Soldier Mindset". This would make the emoji better express something like "I am not responding because this is (in my opinion) offtopic" rather than "This might be offtopic but I am not sure; I am not responding because I can't be bothered to find out". This suggestion also applies to:
I started posting to Less Wrong in 2011, under the name Fezziwig. I lost the password, so I made this account for LW2.0. I quit reading after the dustup in 2022, because I didn't like how the mods treated Said. I started up again this summer; I guess I came back at the wrong time.
Object-level I think Said was right most of the time, and doing an important job that almost no one else around here is willing to do. A few times I thought of trying to do the same thing more kindly; I'm a more graceful writer than he is, so I thought I had a good shot. But I never did it, because I don't believe Said's tone was ever really the issue: what upset people, what tended to produce those long ugly subthreads, was when he made a good point that couldn't be persuasively answered, and didn't get distracted by evasions. There isn't, actually, a kind way to ask for examples from someone who doesn't have any.
That's not to say all his comments were like that; some really were just bad. But the bad ones didn't tend to spawn demon threads. People didn't have to reply, because they knew that he was wrong, instead of just wishing it.
Also, I think that if ".....
Thank you for your hard work! Neither the decision itself nor the work of justifying it and discussing it is particularly easy, as I can say from experience. I appreciate you putting so much effort into trying to keep the site healthy.
This post has comments from some people who agree and from some people who disagree with the decision. It seems worth making explicit that this discussion may underrepresent the number of people who agree, because some of the people with the strongest agreement would be the ones who've already left the site because of Said.
I don't think this sort of abstract analysis is valid. For instance, you could argue that it may underrepresent the people who disagree, because it's become increasingly clear that Said-style criticism is unwelcome on LW in the past few months, as the conflict has escalated.
Think it's just really hard to know without doing a lot of work.
I think it'd be more accurate to say that "there's this other factor too" rather than "this analysis is not valid"?
There are a number of comments expressing disagreement that have gotten a fair number of upvotes, so it doesn't look to me like expressing disagreement would be unwelcome.
Edited to add: I should also mention that I don't think this comment came out of "abstract analysis". It came from the fact that back when I banned Eugine Nier, I then reached out to a user who had left the site because of him to let them know their harasser was banned. The user's response was basically, "glad to hear, but I still don't feel like coming back". So at least in one previous case, users who had left because of a now-banned user were actually permanently out of the resulting discussion.
My best guess is that the usual ratio of "time it takes to write a critical comment" to "time it takes to respond to it to a level that will broadly be accepted well" is about 5x. This isn't in itself a problem in an environment with lots of mutual trust and trade, but in an adversarial context it means that it's easily possible to run a DDOS attack on basically any author whose contributions you do not like by just asking lots of questions, insinuating holes or potential missing considerations, and demanding a response, approximately independently of the quality of their writing.
For related musings see the Scott Alexander classic Beware Isolated Demands For Rigor.
Not strictly related to this post, but I'm glad you know this and it makes me more confident in the future health of Lesswrong as a discussion place.
It's been many years since I've been active on LW, but while I was, Said was the source of a plurality of my unpleasant interactions on this site. Many other commenters leveled serious criticisms of my writing, but only Said consistently ruined my day while doing so.
I cannot say whether this decision was right in the end, but will attest that seeing this post made me happy.
Tangential feature request: allow people to embed other comments in posts natively. This article uses screenshots of LessWrong to display conversations, but this does not responsively size them for mobile users and makes it harder to copy-paste stuff from this post, which a native implementation could fix.
I'd also like to say that a lot of Duncan's conflict-oriented nature in the Duncan/Said moderation post and comments, as well as other posts where they interact was precisely because of the issues described in the section But why ban someone, can't people just ignore Said?, in that it's much less easy to ignore comments than a lot of people realize.
While it doesn't explain all of the conflict, I do think it explains a non-trivial amount of the reason why Duncan has the tendency to get into conflict with Said, because there's a social norm that criticism ha...
And if the stakes are even higher, you can ultimately try to get me fired from this job. The exact social process for who can fire me is not as clear to me as I would like, but you can convince Eliezer to give head-moderatorship to someone else, or convince the board of Lightcone Infrastructure to replace me as CEO, if you really desperately want LessWrong to be different than it is.
I don't plan on doing this, but who is on the board of Lightcone Infrastructure? This doesn't seem to be on your website.
This outcome makes me a little sad. I have a sense that more is possible.
How would this situation play out in a world like dath ilan? A world where The Art has progressed to something much more formidable.
Is there some fundamental incompatibility here that can't be bridged? Possibly. I have a hunch that this isn't the case though. My hunch is that there is a lot of soldier mindsetting going on and that once The Art figures out the right Jedi mind tricks to jolt people out of that mindset and into something more scout-like, these sorts of conflicts will oft...
Is there some fundamental incompatibility here that can't be bridged? Possibly. I have a hunch that this isn't the case though.
I don't believe Said is having very contingent bad interactions with tons of commenters and the mod team, but rather that this is a result of a principled commitment to a certain kind of forum commenting behavior that involves things like any commenter being able to demand answers to questions at the risk of the post-author's status, holding extreme disdain and disrespect for interlocutors while being committed to never saying anything explicitly or even denying that it is the case, and other things discussed in the OP, that in combination are extremely good at sucking energy out of people with little intellectual productivity as a result. My guess is that if we played the history of LW 2.0 over like 10 more times making lots of changes to lots of variables that seem promising or relevant to you, the outcome would've eventually been the same basically each time.
To take your proposal, I think it's likely that Said has literally written a disdainful comment about NVC — yep, I looked a little, Said writes "It has been my experience that NVC is used exclusively...
I have some suggestions for mechanistic improvements to the LW website that may help alleviate some of the issues presented here.
RE: Comment threads with wild swings in upvotes/downvotes due to participation from few users with large vote-weights; a capping/scaling factor on either total comment karma or individual vote-weights could solve this issue. An example total-karma-capping mechanism would be limiting the absolute value of the displayed karma for a comment to twice its parent's karma. An example vote-weight-capping mechanism would be limiting vote ...
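The capping mechanism proposed in this comment could be sketched as follows. This is a hypothetical illustration of the commenter's idea, not LessWrong's actual voting code; the function name and the factor of 2 are assumptions taken from the comment.

```python
def cap_displayed_karma(comment_karma: int, parent_karma: int) -> int:
    """Limit the absolute value of a comment's displayed karma to
    twice the (absolute) karma of its parent, per the proposal above."""
    cap = 2 * abs(parent_karma)
    # Clamp the raw karma into the interval [-cap, cap].
    return max(-cap, min(cap, comment_karma))
```

For example, a comment at raw karma 45 under a parent at 10 would display as 20, and one at -45 would display as -20, damping the wild swings a few large-weight voters can produce.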
I have a question about the 3-year ban: why did you choose a temporary ban over an indefinite one?
In particular, given Said Achmiz's history here, including the case where he repeated the same behavior he had been rate-limited for, I am a bit confused about what you are hoping to accomplish with a temporary ban in lieu of an indefinite one:
...
- To which we responded by telling Said to please let authors moderate as they desire and to not do that again, and gave him a 3 month rate-limit
- After the rate-limit he seemed to behave be
3 years is long enough that LessWrong might be a very different place by then, or Said might have changed quite a bit, or maybe things will have actually sunk in in 3 years. I think it's likely for the threshold for rebanning to be pretty low in 3 years, but it seemed to me potentially worth it to leave some door open in the more distant future.
Now, I do recommend that if you stop using the site, you do so by loudly giving up, not quietly fading. Leave a comment or make a top-level post saying you are leaving. I care about knowing about it, and it might help other people understand the state of social legitimacy LessWrong has in the broader world and within the extended rationality/AI-Safety community.
Sure. I think this is a good decision because it:
There's some cluster of ideas surrounding how authors are informed/encouraged to use the banning options. It sounds like the entire topic of "authors can ban users" is worth revisiting so my first impulse is to avoid investing in it further until we've had some more top-level discussion about the feature.
Free Hearing, Not Speech seems like a better approach to me. Give users the affordances to automatically see the kinds of comments they want to interact with, or the conversations they want to have. Users don't have to see what they believe is bad-faith, l...
You completely omitted my post about Said, and my response to your responses on that post.
https://www.lesswrong.com/posts/SQ8BrC5MJ9jo9n83i/said-achmiz-helps-me-learn , cross posted at Data secrets lox.
I'll have to follow his comments elsewhere.
My philosophy is no more “totalizing” than that which is described in, say… the Sequences. (Or, indeed, basically any other normative view on almost any intellectual topic.) Do you consider Eliezer to have constantly been “making dominance threats” in all of his posts?
EDIT: Uh… not sure what happened here. The parent comment was deleted, and now this comment is in the middle of nowhere…?
Man, posting on LessWrong seems really unrewarding. You show up, you put a ton of effort into a post, and at the end the comment section will tear apart some random thing that isn't load bearing for your argument, isn't something you consider particularly important, and whose discussion doesn't illuminate what you are trying to communicate, all the while implying that they are superior in their dismissal of your irrational and dumb ideas.
You could run an LLM every time someone tries to post a comment. If a top level reply tries to nitpick something t...
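The commenter's idea could be sketched roughly as below. This is purely a hypothetical design sketch: `llm_classify` is a stub standing in for a call to some actual model API, and the category names are invented for illustration.

```python
def llm_classify(post_body: str, comment_body: str) -> str:
    """Hypothetical stub: return 'nitpick' if the comment attacks a
    non-load-bearing detail of the post, else 'substantive'. A real
    version would prompt an LLM with both texts and parse its verdict."""
    raise NotImplementedError

def screen_comment(post_body: str, comment_body: str) -> bool:
    """Return True if the comment may be posted immediately."""
    try:
        verdict = llm_classify(post_body, comment_body)
    except NotImplementedError:
        # Fail open: if the classifier is unavailable, never block comments.
        return True
    return verdict != "nitpick"
```

One design question such a system would have to answer is what happens on a "nitpick" verdict: silently delaying the comment, showing the author a warning, or merely flagging it for moderators have very different chilling effects.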
I think the crux is what feeds the dangerous norms, and what makes norms dangerous. I expect that when considered in detail, Said or most others with similar behaviors aren't intending or causing the kinds of damage you describe to an important extent. But at the same time, norms (especially insane ones) feed on first impressions, not on detailed analyses.
Such norms might gain real power and do major damage if they do take hold. I don't believe they have, and so the damage you are describing is overstated, but the risk the norms represent is real. Said mig...
The affective conflationary alliance discussion is interesting (it likely would've been better standalone). This has implications for the architecture of internal judgement, dangers of forming conflationary alliances among your own understandings when making holistic judgements. This is a distinction between non-specific contemplation of some decision for an extended period of time, and doing detailed analyses from dubious technical premises followed by dismissal of the poorly founded but legible conclusions and settling the matter with an intuitive overal...
Incentivizing strong judge performance seems difficult. The default outcome I've seen from committees and panels is that everyone on them half-asses their job, because they rarely have stake in the outcome being good. Even if someone cares about LessWrong, that is not the same as being generally held personally responsible for it, and having your salary and broader reputation depend on how well LessWrong is going.
Couldn't prediction markets solve this? Make one for decisions by judges asking whether you'd agree with them. After some time, randomly choose t...
Disclaimer: Note that my analysis is based on reading only very few comments of Said (<15).
To me it seems the "sneering model" isn't quite right. I think often what Said is doing seems to be:
One of the main problems seems to be that in 1. any flaw is a valid target. It does not need to be important or load bearing to the points made in the text.
It's like somebody building a rocket shooting it to the moon and Said complaining tha...
So, despite it being close to site-consensus that authors do not face obligations to respond to each and every one of Said's questions, on any given post, there is basically nothing to be done to build common knowledge of this.
Please could you write a policy regarding what obligations/duties/commitments/responsibilities people DO have by contributing to LessWrong, regarding responding to comments? This could be a top-level post similar to Policy for LLM Writing on LessWrong.
After reading Banning Said Achmiz..., and associated comments, I thought that I und...
Moving this top-level question by @Sting to this comment thread:
Habryka recently decided to ban Said Achmiz. He wrote an extensive post explaining the decision. There were some very good things about this decision at the meta level, such as having one person make the decision and take full responsibility for it, explaining the reasoning in detail, and giving Said a comment thread under which he can respond.
However, I did not find the specific examples given for the ban persuasive. E.g., the example given u...
A classical example of microeconomics-informed reasoning about criminal justice is the following snippet of logic.
If someone can gain in-expectation $G$ dollars by committing some crime (which has negative externalities of $E$ dollars), with a probability $p$ of getting caught, then in order to successfully prevent people from committing the crime you need to make the cost of receiving the punishment ($C$) be greater than $G/p$, i.e. $p \cdot C > G$.
Note that this is more centrally an example of micro-informed reasoning ...
So with all that Said
Kudos for this heading. A passing pun on someone’s name is a great way of poking fun & mildly insulting them (warranted in this case). I am reminded of a paper critiquing one by QM physicist Henry Stapp, entitled “A Stapp in the wrong direction”.
I frequently expect people on the Lightcone team to disagree with decisions I make, and when that happens, I will encourage them to write up their perspective and serve as a record that will make it easier to spot broader blind spots in my decision-making (and also reduce gaslighting dynamics where people feel forced to support decisions I make out of fear of being retaliated against).
I don't know if you know this, but if you encourage this "correctly" (something that I suspect literally no one knows how to do but which we can aim for) it also helps you in that no one can accuse the team of being secretly fractious (since it would be public).
Is this decision generally considered final and not subject to appeal, or do you expect comments on here/arguments by Said/etc to affect the final outcome you decide on?
Some days it’s hard to not start rooting for the paperclip maximizers.
Some days I actually do start rooting for the paperclip maximizers, but so far I’ve returned to not rooting for them in an hour or a day or two.
I’ve been chewing on the contents of this post for a week+ now.
I think the decision behind this post lurched my set point permanently towards, but not all the way to, “root for the paperclip maximizers”, assuming habryka isn’t overridden or removed for this.
When a site that’s supposed to be humanity at its most rational removes one of its backstops a...
Most of what I want to say is about The Sneer Attractor and The Niceness Attractor, and is unrelated to Said. Is there some canonical post on that? I think this part of the post should have been a separate post, which would allow discussion of that topic on its own.
***
This is Bad:
Elizabeth said roughly "if you don't change your behavior in some way you'll be banned"
He did not change his behavior, we did not end up banning him at this time, and he also did not stop participating on LW.
as in, it makes the things the LW team says untrustworthy, and the LW team should not do that. Empty threats ...
It's been roughly 7 years since the LessWrong user-base voted on whether it's time to close down shop and become an archive, or to move towards the LessWrong 2.0 platform, with me as head-admin. For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.[1] Today I am declaring defeat on that goal and am giving him a 3 year ban.
What follows is an explanation of the models of moderation that convinced me this is a good idea, the history of past moderation actions we've taken for Said, and some amount of case law that I derive from these two. If you just want to know the moderation precedent, you can jump straight there.
I think few people have done as much to shape the culture of LessWrong as Said. More than 50% of the time when I would ask posters, commenters and lurkers about their models of LessWrong culture, they'd say some version of either:
Of all the places on the internet, LessWrong is a place that really forces you to get your arguments together. It's very much a no-bullshit culture, and I think this is one of the things that makes it one of the most valuable forums on the internet.
Or
Man, posting on LessWrong seems really unrewarding. You show up, you put a ton of effort into a post, and at the end the comment section will tear apart some random thing that isn't load bearing for your argument, isn't something you consider particularly important, and whose discussion doesn't illuminate what you are trying to communicate, all the while implying that they are superior in their dismissal of your irrational and dumb ideas.
And frequently when I dig into how they formed these impressions, a comment by Said would be at least heavily involved in that.
I think both of these perspectives are right. LessWrong is a unique place on the internet where bad ideas do get torn apart in ways that are rare and valuable, and also a place where there is a non-trivial chance that your comment section gets derailed by someone making some extremely confident assumption about what you intended to say, followed by a pile of sneering dismissal.[2]
I am overall making this decision to ban Said with substantial sadness. As is evident by me spending hundreds of hours over the years trying to resolve this via argument and soft-touch moderation, this was very far from an obvious choice. This post itself was also many dozens of hours of work, and I hope it illuminates some of the ways this decision was made, what it means for the future of LessWrong, and how it will affect future moderation. I apologize for the length.
One of the recurring attractors of the modern internet, dominating many platforms, subreddits, and subcultures, is the sneer attractor. Exemplified in my mind by places like RationalWiki, the eponymous "SneerClub", but also many corners of Reddit and of course 4chan. At its worst, that culture looks like this:
Sociologically, my sense is this culture comes from a mixture of the following two dynamics:
Since the sneer attractor is one of the biggest and most destructive attractors of the modern internet, I worry about LessWrong also being affected by its dynamics. I think we are unlikely to ever become remotely as bad as SneerClub, or most of Reddit or Twitter, but I do see similar cultural dynamics rear their head on LessWrong. If these dynamics were to get worse, the thing I would mostly expect to see is the site quietly dying; fewer people venturing new/generative content, fewer people checking the site in hopes of such content, and an eventual ghost town of boring complaints about the surrounding scene, with links.
But before I go into that, let's discuss the sneer attractor's mirror image:
In the other corners of the internet and the rest of the world, especially in the land of professional communities, we have what I will call the "LinkedIn attractor". In those communities saying anything bad about another community member is frowned upon. Disputes are supposed to be kept private. When someone intends to run an RCT on which doctors in your hospital are most effective, you band together and refuse to participate, because establishing performance metrics would hurt the unity of your community.
Since anything but abstract approval is risky in those communities, a typical post in those communities consists of largely vacuous engagement like this[4]:
And at the norms level like this (cribbed from an interesting case study of the "Obama Campaign Alumni" Facebook group descending into this attractor):
This cultural attractor is not mutually exclusive with the sneer attractor. Indeed, the LinkedIn attractor appears to be the memetically most successful way groups relate to their ingroup members, while the sneer attractor governs how they relate to their outgroups.
The dynamics behind the LinkedIn attractor seem mechanistically straightforward. I think of them as "mutual reputation protection alliances".
In almost every professional context I've been in, these alliances manifest as a constant stream of agreements—"I say good things about you, if you say good things about me."
This makes sense. It's unlikely I would benefit from people spreading negative information about you, and we would both clearly benefit from protecting each other's reputation. So a natural equilibrium emerges where people gather many of these mutual reputation protection alliances, ultimately creating groups with strong commitments to protect each other's reputation and the group's reputation in the eyes of the rest of the world.
Of course, the people trying to use reputation to navigate the world — to identify who is trustworthy — are much more diffuse and aren't party to these negotiations. But their interests weigh heavily enough that some ecosystem pressure exists for antibodies to mutual reputation protection alliances to develop (such as rating systems for Uber drivers, as opposed to taxi cartels where feedback to individuals is nearly impossible to aggregate).
Ok, now why am I going on this digression about The Sneer Attractor and the LinkedIn Attractor? The reason is that I think much of the heatedness of moderation discussions related to Said, and people's fears, has been routing through people being worried that LessWrong will end up in either the Sneer Attractor or the LinkedIn Attractor.
As I've talked with many people with opinions on Said's comments on the site, a recurring theme has been that Said is what prevents LessWrong from falling into the LinkedIn attractor. Said, in many people's minds, is the bearer of a flag like this:
"Just because you are hurt by, and anxious about others criticizing you or your ideas, doesn't mean we are going to accommodate you. It is the responsibility of your audience to determine what they think of you and your contributions.
You do not own your reputation. Every individual owns their own judgment of you.
You can shape it by doing good or bad things, but you do not get to shape it by preventing me and others from openly discussing you and your contributions."
And I really care about this flag too. Indeed, much of the decisions I have made around LessWrong have been to foster a culture that understands and rallies behind this flag. LessWrong is not LinkedIn, and LessWrong is not the EA Forum, and that is good and important.
And I do think Said provides a shield. Having Said comment on LessWrong posts, and having those comments be upvoted, helps against sliding down the attractor towards LinkedIn.
But on the other hand, I notice in myself that a lot of what I am most worried about is the Sneer Attractor. For LessWrong to become a place that can't do much but tear things down. Where criticism is vague and high-level and relies on conflationary alliances to get traction, but does not ultimately strengthen what it criticizes or those who read the criticism. Filled with comments that aim to make the readers and the voters feel superior to all those fools who keep saying wrong things, despite not equipping readers to say any less wrong things themselves.
And I do think Said moves LessWrong substantially towards that path. When Said is at his worst, he writes comments like this:
This, to be clear, is still better than the SneerClub comment visible above. For example when asked to clarify, Said obliges:
But the overall effect on the culture is still there, and the thread still results in Benquo eventually disengaging in frustration, intending to switch his moderation guidelines to "Reign of Terror" and delete any future similar comment threads, as Said (as far as I can tell) refuses to do much cognitive labor in the rest of the thread until Benquo runs out of energy.
So, to get this conversation started[5] and to maybe give people a bit more trust that I am tracking some things they care about: "Yes, a lot of the world is broken and stuck in an equilibrium of people trying to punish others for saying anything that might reflect badly on anyone else, in endless cycles of mutual reputation protection that make it hard to know what is fake and what is real, and yes, LessWrong, as most things on the internet, is at risk of falling into that attractor. I am tracking this, I care a lot about it, and even knowing that, I think it's the right call to ban Said."
Now that this is out of the way, I think we can talk in more mechanistic terms about what is going wrong in comment threads involving Said, and maybe even learn some things about online moderation.
An excerpt from a recent Benquo post on the Said moderation decision:
Said is annoying, both because his demands for rigor don't seem prioritized reasonably, and because he's simultaneously insulting and rude, dismissive of others' feelings around being "insulted," and sensitive to insults himself. He's also disagreeable. I asked Zack for a list of Said's best comments (see email), and they're pretty much all procedural criticisms or calls for procedural rigor seemingly with no sense of proportion. In the spirit of his "show me the cake" principle, I don't see the cake there. On the other hand, he's a leading contributor to GreaterWrong, which makes this site more usable.
I concur with much of this. To get more concrete, in my experience a breakdown of the core dynamics that make comment threads with Said rarely worth it looks something like this:[6]
Said will write a top-level comment that will read like an implicit claim that you have violated some social norm in what you have written, for which you deserve to be punished (though this will not be said explicitly), or, failing that, will make you look negligent for not answering an innocuous-seeming open question
You will try to address this claim by writing some kind of long response or explanation, answering his question or providing justification on some point
Said will dismiss your response as being totally insufficient, confused, or proving the very point he was trying to make
You will try to clarify more, while Said will continue to make insinuations that your failure to respond properly validates whatever judgment he is invoking
Motivated commenters/authors will go up a level and ask "by what standard are you trying to invoke a negative judgment here?"[7]
Said will deny any and all such invocations of standards or judgment, saying he is (paraphrased) "purely talking on the object level and not trying to make any implicit claims of judgment or low status or anything of the kind"
After all of this you are left questioning your own sanity, try a bit to respond more on the object-level, and ultimately give up feeling dejected and like a lot of people on LessWrong hate you. You probably don't post again.
In the more extreme cases, someone will try to prosecute this behavior and reach out to the moderators, or make a top-level post or quick take about it. Whoever does this quickly finds out that the moderators feel approximately as powerless to stop this cycle as they are. This leaves you even more dejected.
With non-trivial probability your post or comment ends up hosting a 100+ comment thread with detailed discussion of Said's behavior and moderation norms and whether it's ever OK to ban anyone, in which voting and commenting is largely dominated by the few people who care much more than average about banning and censorship. You feel an additional pang of guilt and concern about how many people you might have upset with your actions, and how much time you might have wasted.
Now, I think it is worth asking what the actual issue with the comments above is. Why do they produce this kind of escalation?
Asymmetric effort ratios and isolated demands for rigor
A key dynamic in many threads with Said is that the critic has a pretty easy job at each step. First of all, they have little to lose. They need to make no positive statements and explain no confusing phenomena. All they need to do is to ask questions, or complain about the imprecision of some definition. If the author can answer compellingly, well, then they can take credit for how they helped elicit an explanation to a confusion that clearly many people must have had. And if the author cannot answer compellingly, then even better, then the critic has properly identified and prosecuted bad behavior and excised the bad ideas that otherwise would have polluted the commons. At the end of the day, are you really going to fault someone for just asking questions? What kind of totalitarian state are you trying to create here?
The critic can disengage at any point. No one faults a commenter for suddenly disappearing or not giving clear feedback on whether a response satisfied them. The author, on the other hand, does usually feel responsible for reacting and responding to any critique made of his ideas, which he dared to put so boldly and loudly in front of the public eye.
My best guess is that the usual ratio of "time it takes to write a critical comment" to "time it takes to respond to it to a level that will broadly be accepted well" is about 5x. This isn't in itself a problem in an environment with lots of mutual trust and trade, but in an adversarial context it means that it's easily possible to run a DDOS attack on basically any author whose contributions you do not like by just asking lots of questions, insinuating holes or potential missing considerations, and demanding a response, approximately independently of the quality of their writing.
For related musings see the Scott Alexander classic Beware Isolated Demands For Rigor.
Maintaining strategic ambiguity about any such dynamics
Of course, commenters on LessWrong are not dumb, and have read Scott Alexander, and have vastly more patience than most commenters on the internet, and so many of them will choose to dissect and understand what is going on.
The key mechanism that shields Said from the accountability that would accompany such analysis is his care to avoid making explicit claims about the need for the author to respond, or the implicit judgment associated with his comments. In any given comment thread, each question is phrased so as to be ambiguous between a question driven by curiosity and a question intended to expose the author's hypocrisy.
This ambiguity is not without healthy precedent. I have seen healthy math departments and science departments in which a prodding question might be phrased quite politely, and phrased ambiguously between a matter of personal confusion and the intention of pointing out a flaw in a proof.
"Can you explain to me how you got from Lemma C to Proof D? It seems like you are invoking an assumption here I can't quite understand"
is a common kind of question. And I think overall fine, appropriate, healthy.
That said, most of the time, when I was in those environments, I could tell what was going on, and I mostly knew that other people could tell as well. If someone repeatedly asked questions in a way that did clearly indicate an understanding of a flaw in the provided proofs or arguments, but kept insisting on only getting there via Socratic questioning, they would lose points over time. And if they kept asking probing questions in each seminar that were easily answered, with each question taking up space and bandwidth, then they would quickly lose lots of points and be asked to please interrupt less. And furthermore, the tone of voice would often make it clear whether the question asked was more on the genuine-curiosity side or the suggested-criticism side.
But here on the internet, in the reaches of an online forum, with...
...the mechanisms that productively channeled this behavior no longer work.
And so here, where you can endlessly deflect any accusations with little risk that common-knowledge can be built that you are wasting time, or making repeated bids for social censure that fail to get accepted, by just falling back on saying "look, all I am doing is asking questions and asking for clarification, clearly that is not a crime", there is no natural limit to how much heckling you can do. You alone could be the death of a whole subculture, if you are just persistent enough.
And so, at the heart of all of this, is either a deep obliviousness, or more likely, the strategic disarmament of opposition by denying load-bearing subtext (or anything else that might obviously allow prosecution) in these interactions.
And unfortunately, this does succeed. My guess is here on LessWrong better than most places, because we have a shared belief in the power of explicit reasoning, and we have learned an appropriate fear of focusing on subtext with a culture where debate is supposed to focus on the object level claims, not whatever status dynamics are going on, and I think this is good and healthy most of the time. But the purpose of those norms is not to completely eschew analysis and evaluation of the underlying status dynamics, but simply to separate them from the object level claims (I also think it's good to have some norms against focusing too much on status dynamics and claims in total, so I think a lot of the generic hesitation is justified, which I elaborate on a bit later).
But I think in doing so, part by selection, part by training, we have created an environment where trying to police the subtext and status dynamics surrounding conversations gets met with fear and counter-reactions, which makes moderation and steering very difficult.[8]
Crimes that are harder to catch should be more harshly punished
A small sidenote on a dynamic relevant to how I am thinking about policing in these cases:
A classical example of microeconomics-informed reasoning about criminal justice is the following snippet of logic.
If someone can gain in-expectation G dollars by committing some crime (which has negative externalities of E dollars), with a probability p of getting caught, then in order to successfully prevent people from committing the crime you need to make the cost of receiving the punishment (P) be greater than the gain divided by the probability of getting caught, i.e. p × P > G, or equivalently P > G/p.
Or in less mathy terms, the more likely it is that someone can get away with committing a crime, the harsher the punishment needs to be for that crime.
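The deterrence condition above can be sketched as a tiny calculation (the dollar amounts and probabilities here are made-up illustrations, not figures from the post):

```python
def min_deterrent_punishment(gain, p_caught):
    """Smallest punishment P (in dollars) satisfying p_caught * P > gain,
    i.e. making the expected cost of punishment exceed the expected gain."""
    if not 0 < p_caught <= 1:
        raise ValueError("p_caught must be in (0, 1]")
    return gain / p_caught

# An easy-to-catch offense: $100 gain, caught 80% of the time.
print(min_deterrent_punishment(100, 0.8))   # punishment must exceed $125

# A hard-to-catch offense with the same gain: caught only 5% of the time.
print(min_deterrent_punishment(100, 0.05))  # punishment must exceed $2000
```

The same $100 gain requires a 16x larger punishment when the catch probability drops from 80% to 5%, which is the quantitative version of the claim in the heading.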
In this case, a core component of the pattern of plausibly deniable aggression that I think is present in much of Said's writing is that it is very hard to catch someone doing it, and even harder to prosecute it successfully in the eyes of a skeptical audience. As such, in order to maintain a functional incentive landscape the punishment for being caught in passive or ambiguous aggression needs to be substantially larger than for e.g. direct aggression, as even though being straightforwardly aggressive has in some sense worse effects on culture and norms (though also less bad effects in some other ways)[9], the probability of catching someone in ambiguous aggression is much lower.
An under-discussed aspect of LessWrong is how voting affects culture, author expectations, and conversational dynamics. Voting is anonymous, even to admins (the only cases in which we look at votes are when we are investigating mass-downvoting, sockpuppeting, or other kinds of extreme voting abuse). Now, does that mean that everyone is free to vote however they want?
The answer is a straightforward "no". Ultimately, voting is a form of participation on the site that can be done well and badly, and while I think it's good for that participation to be anonymous and generally shielded from retaliation, at a broader level, it is a job of the moderators to pay attention to unhealthy vote dynamics. We cannot police what you do with your votes, but if you do abuse your votes, you will make the site worse, and we might end up having to change the voting system towards something less expressive but more robust as a result.
These are general issues, but how do they relate to this whole Said banning thing?
Well, another important dimension of how the dynamics in these threads go is roughly the following:
This is bad. The point of voting is to give an easy way of aggregating information about the quality and reception of content. When voting ends up dominated by a small interest[10] group without broader site buy-in, and with no one being able to tell that is what's going on, it fails at that goal. And in this case, it's distorting people's perception about the site consensus in particularly high-stakes contexts where authors are trying to assess what people on the site think about their content, and about the norms of posting on LessWrong.
I don't really know what to do about this. It's one of the things that makes me more interested in bans than other things, since site-wide banning also comes with removal of vote-privileges, though of course we could also rate-limit votes or find some other workaround that achieves the same aim. I also am not confident this is really what's going on as I do not look at vote data, and nothing here alone would make me confident I would want to ban anyone, but I think the voting has been particularly whack in a lot of these threads, and that seemed important to call out.
I hope the dynamics I outlined help explain why ignoring Said is not usually a socially viable option. Indeed, Said himself does not think it is a valid option:
What this de-facto means is that there is always an obligation by the author to respond to your comment, or otherwise be interpreted to be ignorant.
There is always an obligation by any author to respond to anyone’s comment along these lines. If no response is provided to (what ought rightly to be) simple requests for clarification (such as requests to, at least roughly, define or explain an ambiguous or questionable term, or requests for examples of some purported phenomenon), the author should be interpreted as ignorant. These are not artifacts of my particular commenting style, nor are they unfortunate-but-erroneous implications—they are normatively correct general principles.

Many people don’t have the time, or find engaging with commenters exhausting
Then they shouldn’t post on a discussion forum, should they? What is the point of posting here, if you’re not going to engage with commenters?
this creates a default expectation that if they do not engage extensively with your comments in particular (with higher priority than anything else in the comment thread) there will be a public attack on them left unanswered.
This is only because most people don’t bother to ask (what I take to be) such obvious, and necessary, clarifying questions. (Incidentally, I take this fact to be a quite damning indictment of the epistemic norms of most of Less Wrong’s participants.) When I ask such questions, it is because no one else is doing it. I would be happy to see others do it in my stead.
distinguish between a question that is intended as a critique when left unanswered, and one that is an optional request for clarification
Viewing such clarifications as “optional” also speaks to an unacceptably low standard of intellectual honesty.
Once again: there is no confusion; there is no dichotomy. A request for clarification is neither an attack nor even a critique. The normal, expected form of the interaction, in the case where the original post is correct, sensible, and otherwise good (and where the only problem is an insufficiency in communicating the idea), is simply “[request for clarification] -> [satisfactory clarification] -> [end]”. Only a failure of this process to take place is in need of “defending”.[11]
Now, in the comment thread in which the comment above was made, both mods and authors have clarified that no, authors do not have an obligation to respond with remotely the generality outlined here, and the philosophy of discourse Said outlines is absolutely not site consensus. However, this does little for most authors. Most people have never read the thread where those clarifications were made, and never will. And even if we made a top-level post, or added a clarification to our new user guidelines, this would do little to change what is going on.
Because every time Said leaves a top-level comment, it is clear to most authors and readers that he is implying the presence of a social obligation to respond. And because of the dynamics I elaborated on in the previous section, it is not feasible for moderators or other users to point out the underlying dynamic each time, which itself requires careful compiling of evidence and pointing out (and then probably disputing) subtext.
So, despite it being close to site consensus that authors do not face obligations to respond to each and every one of Said's questions, on any given post there is basically nothing to be done to build common knowledge of this. Said can simply make another comment thread implying that if someone doesn't respond they deserve to be judged negatively, and there will always be enough people voting who have not seen this pattern play out, or who even support Said's view of author obligation against the broad site consensus, to get the questions and critiques upvoted. And so, despite their snark and judgment, they will appear to be made in good standing, and so to deserve a response, and the cycle will begin anew.
Now in order to fix this dynamic, the moderation team has made multiple direct moderation requests of Said, summarized here as a high-level narrative (though in reality it played out over more like a decade):
So ultimately, what other option do we have but a ban? We could attach a permanent mark to Said's profile with a link to a warning that this user has a long history of asking heckling questions and implying social punishment without much buy-in for that, but that seems to me to create more of an environment of ongoing hostility, and many would interpret such a mark as cruel to have imposed.
So no, I think banning is the best option available.
I expect it to be uncontroversial to suggest that most moderation on LessWrong should be soft-touch. The default good outcome of a moderator showing up on a post is to leave a comment warning of some bad conversational pattern, or telling one user to change their behavior in some relatively specific way. The involved users take the advice, the thread gets better or ends, and everyone moves on with their day.
Ideally this process starts and completes within 10 minutes, from the moderator noticing something going off the rails to sending off the comment providing either advice or a warning.
However, sometimes these interactions escalate, or moderators notice a more systematic pattern with specific users causing repeated problems, and especially if someone disagrees with the advice or recommendation of a moderator, it's less clear how things are supposed to proceed. Historically the moderation standard for LessWrong has been unilateral dictatorship awarded to a head admin. But even with that dictatorship being granted to me, it is still up to me to decide how I want myself and the LessWrong team to handle these kinds of cases.
At a high-level I think there are two reasonable clusters of approaches here:
(In either case it would make sense to try to generalize any judgments or decisions made into meaningful case-law to serve as the basis for future similar decisions, and to be added to some easily searchable set of past judgments that users and authors can use to gain more transparency into the principles behind moderation decisions, and predict how future decisions are likely to be made.)
Both of these approaches are fairly common in general society. Companies generally have a CEO who can unilaterally make firing and rule-setting decisions. Older institutions and governmental bodies often tend to have courts or committees. I considered for quite a while whether as part of this moderation decision I should find and recruit a set of more impartial judges to make high-stakes decisions like this.
But after thinking about it for a while, I decided against it. There are a few reasons:
Considerations like these are what convinced me that even high-stakes decisions like this should be made on my own personal conscience. I have a stake in LessWrong going well, and I can take the time to give these kinds of decisions the resources they deserve to get made well, and I can be available for people to complain and to push on if people disagree with them.
But in doing so, I do want to do a bunch of stuff that gets us the good parts of the more judicially oriented process. Here are some things that I think make sense, even in a process oriented around personal responsibility:
Well, the first option you always have, and which is the foundation of why I feel comfortable governing LessWrong with relatively few checks and balances, is to just not use LessWrong. Not using LessWrong probably isn't that big of a deal for you. There are many other places on the internet to read interesting ideas, to discuss with others, to participate in a community. I think LessWrong is worth a lot to a lot of people, but I think ultimately, things will be fine if you don't come here.
Now, I do recommend that if you stop using the site, you do so by loudly giving up, not quietly fading. Leave a comment or make a top-level post saying you are leaving. I care about knowing about it, and it might help other people understand the state of social legitimacy LessWrong has in the broader world and within the extended rationality/AI-Safety community.
Of course, not all things are so bad as to make it the right choice to stop using LessWrong altogether. You can complain to the mods on Intercom, or make a shortform, or make a post about how you disagree with some decision we made. I will read them, and there is a decent chance we will respond or try to clarify more or argue with you more, though we can't guarantee this. I also highly doubt you will end up coming away thinking that we are right on all fronts, and I don't think you should use that as a requirement for thinking LessWrong is good for the world.
And if the stakes are even higher, you can ultimately try to get me fired from this job. The exact social process for who can fire me is not as clear to me as I would like, but you can convince Eliezer to give head-moderatorship to someone else, or convince the board of Lightcone Infrastructure to replace me as CEO, if you really desperately want LessWrong to be different than it is.
But beyond that, there is no higher appeals process. At some point I will declare that the decision is made, and stands, and I don't have time to argue it further, and this is where I stand on the decision this post is about.
I have tried to make this post relatively self-contained and straightforward to read, trying to avoid making you the reader feel like you have to wade through 100,000+ words of previous comment threads to have any idea what is going on[15], at least from my perspective. However, for the sake of completeness, and because I do think it provides useful context for the people who want to really dive into this kind of decision, here is a quick overview over past moderation discussion and decisions related to Said:
The most substantial of these is Ray's moderation judgment from two years ago. I would recommend the average reader not read it all, but it is the result of another 100+ hour effort, and so does contain a bunch of explanation and context. You can read through the comments Ray made in the appendix to this post.
My current best guess is that not that much has to change. My sense is Said has been a commenter with uniquely bad effects on the site, and while there are people who are making mistakes along similar lines, there are very few who are as prolific or have invested as much into the site. I think the most likely way I can imagine the considerations in this post resulting in more than just banning Said is if someone decides to intentionally pick up the mantle of Said Achmiz in order to fill the role that they perceive he filled on the site, and imitate his behavior in ways that recreate the dynamics I've pointed out.[17]
There are a few users who I have similar concerns about as I had about Said, and I do want this post to save me effort in future moderation disputes. I do also expect to refer back to the ideas in this post for many years in various moderation discussions and moderation judgments, but don't have any immediate instances of that in mind.
I do think it makes sense to try to squeeze out some guidance for future moderation decisions out of this. So in case-law fashion, here are some concrete guidelines derived from this case:
You are at least somewhat responsible for the subtext other people read into your comments, you can't disclaim all responsibility for that
Sometimes things we write get read by other people to say things we didn't mean. Sometimes we write things that we hope other people will pick up, but we don't want to say straight out. Sometimes we have picked up patterns of speech or metaphors that we have observed "working", but that actually don't work the way we think they do (like being defensive when we get negative feedback, which results in receiving less negative feedback, and which one might naively interpret as being assessed less negatively).
On LessWrong, it is okay if an occasional stray reader misreads your comments. It is even okay if you write a comment that most of the broad internet would predictably misunderstand, or view as some kind of gaffe or affront. LessWrong has its own communication culture.
But if a substantial fraction of other commenters consistently interpret your comments to mean something different than what you claim they say when asked for clarification, especially if they do so in contexts where that misinterpretation happens to benefit you in conflicts you are involved in, then that is a thing you are at least partially responsible for.[18]
This all also intersects with "decoupling vs. contextualizing" norms. A key feature of LessWrong is that people here tend to be happy to engage with any specific object-level claim, largely independently of what the truth of that claim might imply at a status or reputation or blame level about the rest of the world. This, if you treat it as a single dimension, puts LessWrong pretty far into having "decoupling" norms. I think this is good and important and a crucial component of how LessWrong has maintained its ability to develop important ideas, and help people orient to the world.
This intersection produces a tension. If you are responsible for people on LessWrong reading context and implications and associations into your contributions you didn't intend, then that sure sounds like the opposite of the kind of decoupling norms that I think is so important for LessWrong.
I don't have a perfect resolution to this. Zack had a post on this with some of his thoughts that I found helpful:
I argue that, at best, this is a false dichotomy that fails to clarify the underlying issues—and at worst (through no fault of Leong or Nerst), the concept of "contextualizing norms" has the potential to legitimize derailing discussions for arbitrary political reasons by eliding the key question of which contextual concerns are genuinely relevant, thereby conflating legitimate and illegitimate bids for contextualization.
Real discussions adhere to what we might call "relevance norms": it is almost universally "eminently reasonable to expect certain contextual factors or implications to be addressed." Disputes arise over which certain contextual factors those are, not whether context matters at all.
The standard academic account explaining how what a speaker means differs from what the sentence the speaker said means, is H. P. Grice's theory of conversational implicature. Participants in a conversation are expected to add neither more nor less information than is needed to make a relevant contribution to the discussion.
I disagree with Zack that the dichotomy between decoupling and contextualizing norms fails to clarify any of the underlying issues. I do think you can probably graph communities and spaces pretty well on a vector from "high decoupling" to "high contextualizing", and this will allow you to make a lot of valid predictions.
But as Zack helpfully points out here, the key thing to understand is that of course many forms of context and implications are obviously relevant and important, and worth taking into account during a conversation. This is true on LessWrong as well as anywhere else. If your comments have a consistent subtext of denigrating authors who invoke reasoning by analogy, because you think most people who reason by analogy are confused (a potentially reasonable if contentious position on epistemics), then you better be ready to justify that denigration when asked about it.
Responses of the form "I am just asking these people for the empirical support they have for their ideas, I am not intending to make a broader epistemological point" are OK if they reflect a genuine underlying policy of not trying to shift the norms of the site towards your preferred epistemological style, and associated (bounded) efforts to limit such effects when asked. If you do intend to shift the norms of the site, you better be ready to argue for that, and it is not OK to follow an algorithm that is intending to have a denigrating effect, but that shields itself from the need for justification or inspection by invoking decoupling norms. What work is respected and rewarded on LessWrong is of real and substantial relevance to the participants of LessWrong. Sure, obsession with that dimension is unhealthy for the site, and I think it's actively good to most of the time ignore it. But, especially if the subtext is repeated across many comments from the same author, it is the kind of thing that we need to be able to talk about, and sometimes moderate.
And as such, within these bounds, "tone" is very much a thing the LessWrong moderators will pay attention to, as are the implied connotations of the words you use, as are the metaphors you choose, and the associations that come with them. And while occasionally a moderator will take the effort to disentangle all your word choices, and pin down in excruciating detail why something you said implied something else and how you must have been aware of that on some level given what you were writing, they do not generally have the capacity to do so in most circumstances. Moderators need the authority to, at some level, police the vibe of your comments, even without a fully mechanical explanation of how that vibe arises from the specific words you chose.
Do not try to win arguments by fights of attrition
A common pattern on the internet is that whoever has the most patience for re-litigating and repeating their points ultimately wins almost any argument. As long as you avoid getting visibly frustrated, or insulting your opponents, and display an air of politeness, you can win most internet arguments by attrition. If you are someone who might have multiple hours per day available to write internet comments, you can probably eke out some kind of concession or establish some kind of norm in almost any social space, or win some edit war that you particularly care about.
This is a hard thing to combat, but the key thing that makes this tactic possible is being in a social space in which it is assumed that comments or questions are made in good standing as long as they aren't obviously egregious.
On LessWrong, if you make a lot of comments, or ask a lot of questions, with a low average hit-rate on providing value by the lights of the moderators, my best guess is you are causing more harm than good, especially if many of those comments are part of conversations that try to prove some kind of wrongdoing or misleadingness on behalf of your interlocutor (so that they feel an obligation to respond). And this is a pattern we will try to notice and correct (while also recognizing that sometimes it is worth pressing people on crucial and important questions, as people can be evasive and try to avoid reasonable falsification of their ideas in defense of their reputation/ego).
Building things that help LessWrong's mission will make it less likely you will get banned
While the overall story of Said is one of him ultimately getting banned from LessWrong, his having built readthesequences.com and greaterwrong.com, and his contributions to gwern.net, all raised the threshold we had for banning him very substantially.
And I overall stand behind this choice. Being banned from LessWrong does affect people's ability to contribute and participate in the broader Rationality community ecosystem, and I think it makes sense to tolerate people's weaknesses in one domain, if that allows them to be a valuable contributor in another domain, even if those two domains are not necessarily governed by the same people.
So yeah, I do think you get to be a bit more of a dick, for longer, if you do a lot of other stuff that helps LessWrong's broader mission. This has limits, and we will invest a bunch into limiting the damage or helping you improve, but it does also just help.
And so we reach the end of this giant moderation post. I hope I have clarified at least my perspective on many things. I will aim to limit my engagement with the comments of this post to at most 10 hours. Said is also welcome to send additional commentary to me in the next 10 days, and if so, I will append it to this post and link to it somewhere high up so that people can see it if they get linked here.[19] I will also make one top-level comment below this post under which Said will be allowed to continue commenting for the next 2 weeks, and where people can ask questions.
Farewell. It's certainly been a ride.
In 2022 Ray wrote 10,000+ words the last time we took moderation action on Said, which I extracted here for convenience. I don't recommend the average reader read them all, but I do think they were another high-effort attempt at explaining what was going on.
Overview/outline of initial comment
Okay, overall outline of thoughts on my mind here:
It seems worthwhile to touch on each of these at least somewhat, so I'll follow up on each topic below.
Recap of mod team history with Said Achmiz
First, some background context. When LW2.0 was first launched, the mod team had several back-and-forths with Said over complaints about his commenting style. He was (and I think still is) the most-complained-about LW user. We considered banning him.
Ultimately we told him this:
As Eliezer is wont to say, things are often bad because the way in which they are bad is a Nash equilibrium. If I attempt to apply it here, it suggests we need both a great generative and a great evaluative process before the standards problem is solved, at the same time as the actually-having-a-community-who-likes-to-contribute-thoughtful-and-effortful-essays-about-important-topics problem is solved, and only having one solved does not solve the problem.
I, Oli and Ray will build a better evaluative process for this online community, that incentivises powerful criticism. But right now this site is trying to build a place where we can be generative (and evaluative) together in a way that's fun and not aggressive. While we have an incentive toward better ideas (weighted karma and curation), it is far from a finished system. We have to build this part as well as the evaluative before the whole system works, and while we've not reached there you're correct to be worried and want to enforce the standards yourself with low-effort comments (and I don't mean to imply the comments don't often contain implicit within them very good ideas).
But unfortunately, given your low-effort criticism feels so aggressive (according to me, the mods, and most writers I talk to in the rationality community), this is just going to destroy the first stage before we get the second. If you write further comments in this pattern which I have pointed to above, I will not continue to spend hours trying to pass your ITT and responding; I will just give you warnings and suspensions.
I may write another comment in this thread if there is something simple to clarify or something, but otherwise this is my last comment in this thread.
Followed by:
This was now a week ago. The mod team discussed this a bit more, and I think it's the correct call to give Said an official warning (link) for causing a significant number of negative experiences for other authors and commenters.
Said, this moderation call is different than most others, because I think there is a place for the kind of communication culture that you've advocated for, but LessWrong specifically is not that place, and it's important to be clear about what kind of culture we are aiming for. I don't think ill of you or that you are a bad person. Quite the opposite; as I've said above, I deeply appreciate a lot of the things you've build and advice you've given, and this is why I've tried to put in a lot of effort and care with my moderation comments and decisions here. I'm afraid I also think LessWrong will overall achieve its aims better if you stop commenting in (some of) the ways you have so far.
Said, if you receive a second official warning, it will come with a 1-month suspension. This will happen if another writer has an extensive interaction with you primarily based around you asking them to do a lot of interpretive labour and not providing the same in return, as I described in my main comment in this thread.
I do have a strong sense of Said being quite law-abiding/honorable about the situation despite disagreeing with us on several object- and meta-level moderation policies, which I appreciate a lot.
I do think it's worth noting that LessWrong 2.0 feels like it's at a more stable point than it was in 2018. There's enough critical mass of people posting here that I'm less worried about annoying commenters killing it completely (which was a very live fear during the initial LW2.0 revival).
But I am still worried about the concerns from 5 years ago, and do basically stand by Ben's comment. And meanwhile I still think Said's default commenting style is much worse than nearby styles that would accomplish the upside with less downside.
My summary of previous discussions as I recall them is something like:
Mods: "Said, lots of users have complained about your conversation style, you should change it."
Said: "I think a) your preferred conversation norms here don't make sense to me and/or seem actively bad in many cases, and b) I think the thing my conversation style is doing is really important for being a truthtracking forum."
[...lots of back-and-forth...]
Mods: "...can you change your commenting style at all?"
Said: "No, but I can just stop commenting in particular ways if you give me particular rules."
Then we did that, and it sorta worked for a while. But it hasn't been wholly satisfying to me. (I do have some sense that Said has recently ended up commenting more in threads that are explicitly about setting norms, and while we didn't spell this out in our initial mod warning, I do think it is extra costly to ban someone from discussions of moderation norms than from other discussion. I'm not 100% sure how to think about this.)
Death by a thousand cuts and "proportionate"(?) response
A way this all feels relevant to current disputes with Duncan is that the thing that is frustrating about Said is not any individual comment, but an overall pattern that doesn't emerge as extremely costly until you see the whole thing. (I.e., if there's a spectrum of how bad behavior is, from 0-10, and things that are a "3" are considered bad enough to punish, someone who's doing things that are bad at a "2.5" or "2.9" level doesn't quite feel worth reacting to. But if someone does them a lot, it actually adds up to being pretty bad.)
If you point this out, people mostly shrug and move on with their day. So, to point it out in a way that people actually listen to, you have to do something that looks disproportionate if you're just paying attention to the current situation. And, also, the people who care strongly enough to see that through tend to be in an extra-triggered/frustrated state, which means they're not at their best when they're doing it.
I think Duncan's response is out of proportion to some degree (see the Vaniver thread for some reasons why; I have some more reasons I plan to write about).
But I do think there is a correct thing that Duncan was noting/reacting to, which is that, actually, yeah, the current situation with Said does feel bad enough that something should change, and indeed the mods hadn't been intervening on it because it didn't quite feel like a priority.
I liked Vaniver's description of Duncan's comments/posts as making a bet that Said was in fact obviously banworthy or worthy of significant mod action, and that there was a smoking gun to that effect, and if this was true then Duncan would be largely vindicated-in-retrospect.
I'll lay out some more thinking as to why, but, my current gut feeling + somewhat considered opinion is that "Duncan is somewhat vindicated, but not maximally, and there are some things about his approach I probably judge him for."
Maybe explicit rules against blocking users from "norm-setting" posts.
On blocking users from commenting
I still endorse authors being able to block other users (whether for principled reasons, or just "this user is annoying"). I think a) it's actually really important that the site be fun for authors to use, b) there are a lot of users who are dealbreakingly annoying to some people but not others, and banning them from the whole site would be overkill, and c) authors aren't obligated to lend their own karma/reputation to give space to other people's content. If an author doesn't want your comments on his post, whether for defensible reasons or not, I think it's an okay answer that those commenters make their own post or shortform arguing the point elsewhere.
Yes, there are some trivial inconveniences to posting that criticism. I do track that in the cost. But I think that is outweighed by the effect on authors being motivated to post.
That all said...
Blocking users on "norm-setting posts"
I think it's more worrisome to block users on posts that are making major momentum towards changing site norms/culture. I don't think the censorship effects are that strong or distorting in most cases, but I'm most worried about censorship effects being distorting in cases that affect ongoing norms about what people can say.
There's a blurry line here between posts that put forth new social concepts, posts advocating for applying those concepts as norms (either in the OP or in the comments), and, further, posts arguing about applying those norms to specific people. I.e. I'd have an ascending wariness of:
I think it was already a little sketchy that Basics of Rationalist Discourse went out of its way to call itself "The Basics" rather than "Duncan's preferred norms" (a somewhat frame-control-y move IMO, although not necessarily unreasonably so), while also blocking Zack at the time. It feels even more sketchy to me to write Killing Socrates, which is AFAICT a thinly veiled "build social momentum against Said in particular" post, where Said can't respond (and it's disproportionately likely that Said's allies also can't respond).
Right now we don't have tech to unblock users from a specific post when they've been banned from all of a user's posts. But this recent set of events has me leaning towards "build tech to do that", and then making it a rule that posts at or above the threshold of "Basics" (in terms of site-norm-momentum-building) need to allow everyone to comment.
I do expect that to make it less rewarding to make that sort of post. And, well, to (almost) quote Duncan:
Put another way: a frequent refrain is "well, if I have to put forth that much effort, I'll never say anything at all," to which the response is often ["sorry I acknowledge the cost here but I think that's an okay tradeoff"]
Okay, but what do I do about Said when he shows up doing his whole pattern of subtly-missing-and/or-reframing-the-point-while-sprawling massive threads, in an impo
My answer is "strong downvote him, announce you're not going to engage, maybe link to a place where you went into more detail about why if this comes up a lot, and move on with your day." (I do generally wish Duncan did more of this and less trying to set the record straight in ways that escalate in IMO very costly ways.)
(I also kinda wish gjm had also done this towards the beginning of the thread on LW Team is adjusting moderation policy)
Verdict for 2023 Said moderation decisions
Preliminary Verdict (but not "operationalization" of verdict)
tl;dr – @Duncan_Sabien and @Said Achmiz each can write up to two more comments on this post discussing what they think of this verdict, but are otherwise on a temporary ban from the site until they have negotiated with the mod team and settled on either:
(After the two comments they can continue to PM the LW team, although we'll have some limit on how much time we're going to spend negotiating)
Some background:
Said and Duncan are the two most-complained-about users since LW2.0 started (probably both in the top 5, possibly literally the top 2). They also both have many good qualities I'd be sad to see go.
The LessWrong team has spent hundreds of person hours thinking about how to moderate them over the years, and while I think a lot of that was worthwhile (from a perspective of "we learned new useful things about site governance") there's a limit to how much it's worth moderating or mediating conflict re: two particular users.
So, something pretty significant needs to change.
A thing that sticks out in both the case of Said and Duncan is that they a) are both fairly law-abiding (i.e. when the mods have asked them for concrete things, they adhere to our rules, and clearly support rule-of-law and the general principle of Well Kept Gardens), but b) both have a very strong principled sense of what a "good" LessWrong would look like and are optimizing pretty hard for that within whatever constraints we give them.
I think our default rules are chosen to be something that someone might trip over accidentally, if they're mostly trying to be a good stereotypical citizen but occasionally have a bad day. Said and Duncan are both trying pretty hard to be good citizens of another country, one that the LessWrong team is consciously not trying to be. It's hard to build good rules/guidelines that robustly deal with that kind of optimization.
I still don’t really know what to do, but I want to flag that the goal I'll be aiming for here is "make it such that Said and Duncan either have actively (credibly) agreed to stop optimizing in a fairly deep way, or are somehow limited by site tech such that they can't do the cluster of things they want to do that feels damaging to me."
If neither of those strategies turns out to be tractable, banning is on the table (even though I think both of them contribute a lot in various ways and I'd be pretty sad to resort to that option). I have some hope that tech-based solutions can work.
(This is not a claim about which of them is more valuable overall, or better/worse/right-or-wrong-in-this-particular-conflict. There's enough history with both of them being above-a-threshold-of-worrisome that it seems like the LW team should just actually resolve the deep underlying issues, regardless of who's more legitimately aggrieved this particular week)
Re: Said:
One of the most common complaints I've gotten about LessWrong, from both new users as well as established, generally highly regarded users, is "too many nitpicky comments that feel like they're missing the point". I think LessWrong is less fragile than it was in 2018 when I last argued extensively with Said about this, but I think it's still an important/valid complaint.
Said seems to actively prefer a world where the people who are annoyed by him go away, and thinks it’d be fine if this meant LessWrong had radically fewer posts. I think he’s misunderstanding something about how intellectual progress actually works, and about how valuable his comments actually are. (As I said previously, I tend to think Said’s first couple comments are worthwhile. The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics)
We've had extensive conversations with Said about changing his approach here. He seems pretty committed to not changing his approach. So, if he's sticking around, I think we'd need some kind of tech solution. The outcome I want here is that in practice Said doesn't bother people who don't want to be bothered. This could involve solutions somewhat specific-to-Said, or (maybe) be a sitewide rule that works out to stop a broader class of annoying behavior. (I'm skeptical the latter will turn out to work without being net-negative, capturing too many false positives, but seems worth thinking about)
Here are a couple ideas:
There's some cluster of ideas surrounding how authors are informed/encouraged to use the banning options. It sounds like the entire topic of "authors can ban users" is worth revisiting, so my first impulse is to avoid investing in it further until we've had some more top-level discussion about the feature.
Why is it worth this effort?
You might ask "Ray, if you think Said is such a problem user, why bother investing this effort instead of just banning him?". Here are some areas I think Said contributes in a way that seem important:
Re: Duncan
I've spent years trying to hash out "what exactly is the subtle but deep/huge difference between Duncan's moderation preferences and the LW team's?". I have found each round of that exchange valuable, but typically whatever-we-thought-was-the-crux didn't turn out to be a particularly Big Crux.
I think I care about each of the things Duncan is worried about (i.e. the things listed in Basics of Rationalist Discourse). But I tend to think the way Duncan goes about trying to enforce such things is extremely costly.
Here's this month/year's stab at it: Duncan cares particularly about strawmans/mischaracterizations/outright lies getting corrected quickly (i.e. within ~24 hours). (See Concentration of Force for his writeup on at least one set of reasons this matters.) I think there is value in correcting them or telling people to "knock it off" quickly. But,
a) moderation time is limited
b) even in the world where we massively invest in moderation... the thing Duncan cares most about moderating quickly just doesn't seem like it should necessarily be at the top of the priority queue to me?
I was surprised and updated on You Don't Exist, Duncan getting as heavily upvoted as it did, so I think it's plausible that this is all a bigger deal than I currently think it is. (that post goes into one set of reasons that getting mischaracterized hurts). And there are some other reasons this might be important (that have to do with mischaracterizations taking off and becoming the de-facto accepted narrative).
I do expect most of our best authors to agree with Duncan that these things matter, and to generally want the site to be moderated more heavily somehow. But I haven't actually seen anyone but Duncan argue they should be prioritized nearly as heavily as he wants (i.e. rather than being something you mostly take in stride, downvote, and then try to ignore, while focusing on other things).
I think most high-contributing users agree the site should be moderated more (see the significant upvotes on LW Team is adjusting moderation policy), but don't necessarily agree on how. It'd be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.
I don't know that that really captured the main thing here. I feel less resolved about what should change on LessWrong re: Duncan. But I (and the other LW site moderators) want to be clear that while strawmanning is bad and you shouldn’t do it, we don’t expect to intervene on most individual cases. I recommend strong-downvoting and leaving one comment stating that the thing seems false.
I continue to think it's fine for Duncan to moderate his own posts however he wants (although as noted previously I think an exception should be made for posts that are actively pushing sitewide moderation norms)
Some goals I'd have are:
FWIW I do think it's moderately likely that the LW team writes a post taking many concepts from Basics of Rationalist Discourse and integrating them into our overall moderation policy. (It's maybe doable for Duncan to rewrite the parts that some people object to, and to enable commenting on those posts by everyone, but I think it's kinda reasonable for people to feel uncomfortable with Duncan setting the framing, and it's worth the LW team having a dedicated "our frame on what the site norms are" post anyway.)
In general I think Duncan has written a lot of great posts – many of his posts have been highly ranked in the LessWrong review. I expect him to continue to provide a lot of value to the LessWrong ecosystem one way or another.
I'll note that while I have talked to Duncan for dozens(?) of hours trying to hash out various deep issues, and not met with much success, I haven't really tried negotiating with him specifically about how he relates to LessWrong. I am fairly hopeful we can work something out here.
Why spend so much time engaging with a single commenter? Well, the answer is that I do think the specific way Said has been commenting on the site had a non-trivial chance of basically just killing the site, in the sense of good conversation and intellectual progress basically ceasing, if not pushed back on and the collateral damage limited by moderator action.
Said has been by far the most complained-about user on the site, with many top authors citing him as a top reason they do not want to post or comment here. I personally (and the LessWrong team more broadly) would have had little interest in further investing in LessWrong if the kind of culture that Said brings had taken hold here.
So the stakes have been high. The alternative would have been banning, which I think itself requires many dozens of hours of effort, and which, given that Said is a really valuable contributor via projects like greaterwrong and readthesequences.com, is a choice I felt appropriately hesitant about.
Now, one might think it seems weird that a single person could derail a comment thread. However, I claim one person can indeed do this. As long as you can make comments that do not get reliably downvoted, you can probably cause a whole comment thread to focus almost exclusively on the concern you care about. This is the result of a few different dynamics:
Ultimately I think it's the job of the moderators to remove, or at least mark, commenters who have lost good standing of this kind, given the current background of social dynamics (or alternatively to rejigger the culture and incentives to remove this exploit of authors thinking non-downvoted comments are made in good standing).
Occupy Wall Street strikes me as another instance of the same kind of popular sneer culture. Occupy Wall Street had no coherent asks, no worldview that was driving their actions. Everyone participating in the movement seemed to have a different agenda of what they wanted to get out of it. The thing that united them was a shared dislike of something in the vague vicinity of capitalism, or government, or the man, not anything that could be used as the basis for any actual shared creations or efforts.
To be clear, I think it's fine and good for people to congratulate each other on getting new jobs. It's a big life change. But of course if your discourse platform approximately doesn't allow anything else, as I expand in the rest of this section, then you run into problems.
And... unfortunately... we are just getting started.
I am here trying to give a high-level gloss that tries to elucidate the central problems. Of course many individual conversations diverge, and there are variations on this, often in positive ways, but I would argue the overall tendency is strong and clear.
I switched to a different comment thread here, as a different thread made it easier to see the dynamics at hand. The Benquo thread also went meta, and you can read it here, but it seemed a bit harder to follow without reading a huge amount of additional context, and was a less clear example of the pattern I am trying to highlight.
See e.g. this comment thread with IMO a usually pretty good commenter who kept extremely strongly insisting that any analysis or evaluation of IMO clearly present subtext is fabricated or imagined.
I am not making a strong claim here that direct aggression is much worse or much better than passive aggression, I feel kind of confused about it, but I am saying that independently of that, there is one argument that passive/hidden aggression requires harsher punishment when prosecution does succeed.
Who, to be clear, have contributed to the site in the past and have a bunch of karma.
For some further details on what Said means by "responding" see this comment.
A bit unclear what exactly happened; you can read the thread yourself. Mostly we argued for a long time about what kind of place LessWrong should be and how authors should relate to criticism, until we gave an official warning. Nothing about Said's behavior changed afterwards, but we didn't have the energy to prosecute the subtext another time.
The functionality for this had been present earlier, but we hadn't really encouraged people to use it.
Unless we are talking about weird issues that involve doxxing or infohazards or things like that.
Requiring you to read only 15,000 words of summary. :P
The full quote being:
Buuuut what's going on here is that - and this is imo unfortunate - the website you guys have built is such that posting or commenting on it provides me with a fairly low amount of value
This is something I really do find disappointing, but it is what it is (for now? things change, of course)
So again it's not that I disagree with you about anything you've said
But the sort of care / attention / effort w.r.t. tone and wording and tact and so on, that you're asking, raises the cost of participation for me above the benefit
(Another aspect of this is that if I have to NOT say what I actually think, even on e.g. the CFAR thing w.r.t. Double Crux, well, again, what then is the point)
(I can say things I don't really believe anywhere)
[...]
If the takeaway here is that I have to learn things or change my behavior, well - I'm not averse in principle to doing that ever under any circumstances, but it has to be worth my while, if you see what I mean
Currently it is not
I hope to see that change, of course!
The specific moderation message we sent at the end of that exchange was:
The mod team has talked about it, and we're going to insist you comment with the same level of tact you showed while talking with me. If that makes it not worth your while to comment on the new LW that's regrettable and we hope someday the quality makes it worth your while to come back on these terms, but we understand and there are no hard feelings.
To be clear, as I point out in the earlier sections of this post, I think there are ways of doing this that would be good for the site, and functions that Said performed that are good, but I would be quite concerned about people doing a cargo-culting thing here.
And to be clear, this is all a pretty tricky topic. It is not rare for whole social groups, including the rationality community, to pretend to misunderstand something. As the moderators it's part of our job to take into account whether the thing that is going on here is some social immune reaction that is exaggerating their misunderstandings, or maybe even genuinely preventing any real understanding from forming at all, and to adjust accordingly. This is hard.
Like, with some reasonable limit around 3000 words or so.