Update: Ruby and I have posted moderator notices for Duncan and Said in this thread. This was a set of fairly difficult moderation calls on established users, and it seems good for the LessWrong userbase to have the opportunity to evaluate them and respond. I'm stickying this post for a day or so.
Recently there's been a series of posts and comment back-and-forth between Said Achmiz and Duncan Sabien, which escalated enough that it seemed like site moderators should weigh in.
For context, here's a quick recap of recent relevant events as I'm aware of them. (I'm glossing over many relevant details; getting everything exactly right is tricky.)
- Duncan posts Basics of Rationalist Discourse. Said writes some comments in response.
- Zack posts "Rationalist Discourse" Is Like "Physicist Motors", in whose comments Duncan and Said argue some more; Duncan eventually says "goodbye", which I assume coincides with his banning Said from commenting further on Duncan's posts.
- I publish LW Team is adjusting moderation policy. Lionhearted suggests "Basics of Rationalist Discourse" as a standard the site should uphold. Paraphrasing here, Said objects to a post being set as a site standard if not all non-banned users can discuss it. More discussion ensues.
- Duncan publishes Killing Socrates, a post about a general pattern of LW commenting that alludes to Said but doesn't reference him by name. Commenters other than Duncan do bring up Said by name, and the discussion gets into "is Said net positive/negative for LessWrong?" in a discussion section where Said can't comment.
- @gjm publishes On "aiming for convergence on truth", which further discusses/argues a principle from Basics of Rationalist Discourse that Said objected to. Duncan and Said argue further in the comments. I think it's a fair gloss to say "Said makes some comments about what Duncan did, which Duncan says are false enough that he'd describe Said as intentionally lying about them. Said objects to this characterization" (although exactly how to characterize this exchange is maybe a crux of discussion)
LessWrong moderators got together for ~2 hours to discuss this overall situation, and how to think about it both as an object-level dispute and in terms of some high level "how do the culture/rules/moderation of LessWrong work?".
I think we ended up with fairly similar takes, but getting to the point where we all agree 100% on what happened and what to do next seemed like a longer project, and we each had subtly different frames on the situation. So some of us (at least Vaniver and I, maybe others) are going to start by posting some top-level comments here. People can weigh in on the discussion. I'm not 100% sure what happens after that, but we'll reflect on the discussion and decide whether to take any high-level mod actions.
If you want to weigh in, I encourage you to take your time even if there's a lot of discussion going on. If you notice yourself in a rapid back and forth that feels like it's escalating, take at least a 10 minute break and ask yourself what you're actually trying to accomplish.
I do note: the moderation team will be making an ultimate call on whether to take any mod actions based on our judgment. (I'll be the primary owner of the decision, although I expect if there's significant disagreement among the mod team we'll talk through it a lot). We'll take into account arguments various people post, but we aren't trying to reflect the wisdom of crowds.
So you may want to focus on engaging with our cruxes rather than with what other random people in the comments think.
Preliminary Verdict (but not "operationalization" of verdict)
tl;dr – @Duncan_Sabien and @Said Achmiz each can write up to two more comments on this post discussing what they think of this verdict, but are otherwise on a temporary ban from the site until they have negotiated with the mod team and settled on either:
(After the two comments they can continue to PM the LW team, although we'll have some limit on how much time we're going to spend negotiating)
Some background:
Said and Duncan are both among the most complained-about users since LW 2.0 started (probably both in the top 5, possibly literally the top 2). They also both have many good qualities I'd be sad to see go.
The LessWrong team has spent hundreds of person hours thinking about how to moderate them over the years, and while I think a lot of that was worthwhile (from a perspective of "we learned new...
I generally agree with the above and expect to be fine with most of the specific versions of any of the three bulleted solutions that I can actually imagine being implemented.
I note re:
... that (in line with the thesis of my most recent post) I strongly predict that a decent chunk of the high-contributing users who LW has already lost would've been less likely to leave and would be more likely to return with marginal movement in that direction.
I don't know how best to operationalize this, but if anyone on the mod team feels like reaching out to e.g. ~ten past heavy-hitters that LW actively misses, to ask them something like "how would you have felt if we had moved 25% in this direction," I suspect that the trend would be clear. But the LW of today seems to me to be one in which the evaporative cooling has already gone through a couple of rounds, and thus I expect the LW of today to be more "what? No, we're well-adapted to the current environment; we're the ones who've been filtered for."
(If someone on the team does this, and e.g. 5 out of 8 people the LW team misses respond in the other direction, I will in fact take that seriously, and update.)
It's a bad thing to institute policies when good proxies are missing. It doesn't matter if the intended objective is good: a policy that isn't feasible to execute sanely makes things worse.
Whether statements about someone's inner state are "unfounded" or whether something is a "strawman" is hopelessly muddled in practice; only open-ended discussion has a hope of resolving that, not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.
But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in adjudicating factual questions? Should we litigate p(doom) next? This is just obviously out of scope; I don't see a principled difference. People should be allowed to be wrong; that's the only way to notice being right based on observation of arguments (as opposed to by thinking on your own).
(So I think it's not just the good proxies needed to execute a policy that are missing in this case; the objective is also bad. It's bad on both levels, hence "hair-raisingly alarming".)
You implied and then confirmed that you consider a policy for a certain objective an aspiration; I argued that the policies I can imagine targeting that objective would be impossible to execute, making things worse through collateral damage, and that, separately, the objective seems bad (moderating factual claims).
(In the above two comments, I'm not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn't seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I'm not averse to re-injecting the context into their discussion. But I won't necessarily find that interesting or have things to say about it.)
So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack and Said are gesturing at some of the moderators' arguments getting precariously close to appeals to moderate factual claims. Or at escalation in moderation being called for in response to unwillingness to agree with moderators on mostly factual questions (a matter of integrity) or to implicitly take into account some piece of alleged knowledge. This seems related ...
I think Said and Duncan are clearly channeling this conflict, but the conflict is not about them, and doesn't originate with them. So by having them go away or stop channeling the conflict, you leave it unresolved and without its most accomplished voices, shattering the possibility of resolving it in the foreseeable future. It's the hush-hush strategy of dealing with troubling observations: fixing symptoms instead of researching the underlying issues, however onerous that is proving to be.
(This announcement is also rather hush-hush; it's not a post, and so I've only just discovered it, 5 days later. This leaves it with less scrutiny than I think transparency of such an important step requires.)
Just want to note that I'm less happy with a LessWrong without Duncan. I very much value Duncan's pushback against what I see as a slow decline in quality, and so I would prefer him to stay and continue doing what he's doing. The fact that he's being complained about makes sense, but is mostly a function of him doing something valuable. I have had a few times where I have been slapped down by Duncan, albeit in comments on his Facebook page, where it's much clearer that his norms are operative, and I've been annoyed; but each of those times, despite being frustrated, I have found that I'm being pushed in the right direction and corrected for something I'm doing wrong.
I agree that it's bad that his comments are often overly confrontational, but there's no way to deliver constructive feedback that doesn't involve a degree of confrontation, and I don't see many others pushing to raise the sanity waterline. In a world where a dozen people were fighting the good fight, I'd be happy to ask him to take a break. But this isn't that world, and it seems much better to actively promote a norm of people saying they don't have the energy or time to engage than to tell Duncan (and maybe/hopefully others) not to push back when they see thinking and comments which are bad.
I think I want to reiterate my position that I would be sad about Said not being able to discuss Circling (which I think is one of the topics in that fuzzy cluster). I would still like to have a written explanation of Circling (for LW) that is intelligible to Said, and him being able to point out which bits are unintelligible and not feel required to pretend that they are intelligible seems like a necessary component of that.
With regards to Said's 'general pattern', I think there's a dynamic around socially recognized gnosis where sometimes people will say "sorry, my inability/unwillingness to explain this to you is your problem" and have the commons on their side or not, and I would be surprised to see LW take the position that authors decide that for themselves. Alternatively, tech that somehow makes this more discoverable and obvious--like polls or reacts or w/e--does seem good.
I think productive conversations stem from there being some (but not too much) diversity in what gnosis people are willing to recognize, and in the ability for subspaces to have smaller conversations that require participants to recognize some gnosis.
Is there any evidence that either Duncan or Said is actually detrimental to the site in general, or is it mostly in their interactions directly with each other? As far as I can see, 99% of the drama here is in their conflicts directly with each other and the heavy moderation-team involvement in them.
From my point of view (as an interested reader and commenter), this latest drama appears to have started partly due to site moderation essentially forcing them into direct conflict with each other via a proposal to adopt norms based on Duncan's post while Said and others were and continue to be banned from commenting on it.
From this point of view, I don't see what either Said or Duncan has done to justify any sort of ban, temporary or not.
This decision is based mostly on past patterns with both of them, over the course of ~6 years.
The recent conflict, in isolation, is something where I'd kinda look sternly at them and kinda judge them (and maybe a couple of others) for getting themselves into a demon thread*, where each decision might look locally reasonable but nonetheless it escalates into a weird proliferating discussion that is (at best) a huge attention sink and (at worst) gets people into an increasingly antagonistic fight that brings out people's worst instincts. If I spent a long time analyzing it I might come to more clarity about who was more at fault, but I think the most I might do for this one instance is ban one or both of them for like a week or so and tell them to knock it off.
The motivation here is from a larger history. (I've summarized one chunk of that history from Said here, and expect to go into both a bit more detail about Said and a bit more about Duncan in some other comments soon, although I think I describe the broad strokes in the top-level-comment here)
And notably, my preference is for this not to result in a ban. I'm hoping we can work something out. The thing I'm laying down in this comment is "we do have to actually work something out."
I condemn the restrictions on Said Achmiz's speech in the strongest possible terms. I will likely have more to say soon, but I think the outcome will be better if I take some time to choose my words carefully.
Did we read the same verdict? The verdict says that the end of the ban is conditional on the users in question "credibly commit[ting] to changing their behavior in a fairly significant way", "accept[ing] some kind of tech solution that limits their engagement in some reliable way that doesn't depend on their continued behavior", or "be[ing] banned from commenting on other people's posts".
The first is a restriction on variety of speech. (I don't see what other kind of behavioral change the mods would insist on—or even could insist on, given the textual nature of an online forum where everything we do here is speech.) The third is a restriction of venue, which I claim predictably results in a restriction of variety. (Being forced to relegate your points into a shortform or your own post, won't result in the same kind of conversation as being able to participate in ordinary comment threads.) I suppose the "tech solution" of the second could be mere rate-limiting, but the "doesn't depend on their continued behavior" clause makes me think something more onerous is intended.
(The grandparent only mentions Achmiz because I particularly value his contributions, and because I think many people would prefer that I don't comment on the other case, but I'm deeply suspicious of censorship in general, for reasons that I will likely explain in a future post.)
The tech solution I'm currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I'm leaning towards either "3 comments per post" or "3 comments per post per day". (My ideal world, for Said, is something like "3 comments per post to start, but, if nothing controversial happens and he's not ruining the vibe, he gets to comment more without limit." But that's fairly difficult to operationalize, and a lot of dev time for a custom feature limiting one or two particular users.)
I do have a high-level goal of "users who want to have the sorts of conversations that actually depend on a different culture/vibe than the one Said-and-some-others-explicitly-want are able to do so". The question here is "do you want the 'real work' of developing new rationality techniques to happen on LessWrong, or someplace else where Said et al. can't bother you?" (which is what's mostly currently happening).
So, yeah, the concrete outcome here is Said not getting to comment everywhere he wants, but he's already not getting to do that, because the relevant content + associated usage-building happens off LessWrong, and then he finds himself in a world where everyone is "sudden...
We already have a user-level personal ban feature! (Said doesn't like it, but he can't do anything about it.) Why isn't the solution here just, "Users who don't want to receive comments from Said ban him from their own posts"? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you should be able to answer.
This is trivially false as stated. (Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I'm unlikely to guess it; you'll have to clarify.) It's true that relevant content and associated usage-building happens off Less Wrong. It is not true that this prevents Said from commenting everywhere he wants (except where already banned from posts by indi...
Because it's a blank text box, it's not convenient for commenters to read it in detail every time, so I expect almost nobody reads it; these guidelines are not practical to follow.
With two standard options, color-coded or something, it becomes actually practical, so the distinction between blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.
Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.
I think “duty to be clear” skips over the hard part, which is that “being clear” is a transitive verb. It doesn’t make sense to say whether a post is clear or not clear, only who it is clear and unclear to.
To use a trivial example: well-taught Physics 201 is clear if you’ve had the prerequisite physics classes or are a physics savant, but not to laymen. Poorly taught Physics 201 is clear to a subset of the people who would understand it if well taught. And you can pile on complications from there. Not all prerequisites are as obvious as Physics 101 -> Physics 201, but that doesn’t make them not prerequisites. People have different writing and reading styles. Authors can decide the trade-offs are such that they want to write a post but use fairly large step sizes, and leave behind people who can’t fill in the gaps themselves.
So the question is never “is this post clear?”, it’s “who is this post intended for?” and “what percentage of its audience actually finds it clear?” The answers are never “everyone” and “10...
YES. I think this is hugely important, and I think it's a pretty good definition of the difference between a confused person and a crank.
Confused people ask questions of people they think can help them resolve their confusion. They signal respect, because they perceive themselves as asking for a service to be performed on their behalf by somebody who understands more than they do. They put effort into clarifying their own confusion and figuring out what the author probably meant. They assume they're lucky if they get one reply from the author, and so they try not to waste their one question on uninteresting trivialities that they could have figured out for themselves.
Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or hi...
This made something click for me. I wonder if some of the split is people who think comments are primarily communication with the author of a post, vs with other readers.
You're describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.
The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you're right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can't, and no one can, then he might have a point, and the gym gets to learn something new.
If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they're an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren't there in the first place. It's definitely more challenging to jam with dissonant characters like that (especially if they're dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it's important to realize that the problem isn't so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.
Ray writes:
For the record, I think the value here is "Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world", and I don't think that comes across in this bullet.
I feel like this incentivizes comments to be short, which doesn't make them less aggravating to people. For example, IIRC people have complained about him commenting "Examples?". This is not going to be hit hard by a rate limit.
'Examples?' is one of the rationalist skills most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit "Oh, I don't have any yet, this is speculative, so YMMV".
Spending my last remaining comment here.
I join Ray and Gwern in noting that asking for examples is generically good (and that I've never felt or argued to the contrary). Since my stance on this was called into question, I elaborated:
...

Noting that my very first LessWrong post, back in the LW1 days, was an example of #2. I was wrong on some of the key parts of the intuition I was trying to convey, and ChristianKl corrected me. As an introduction to posting on LW, that was pretty good - I'd hate to think that's no longer acceptable.
At the same time, there is less room for it now that the community has gotten much bigger, and I'd probably weak-downvote a similar post today, rather than trying to engage with a similar mistake, given how much content there is. Not sure if there is anything that can be done about this, but it's an issue.
fwiw that seems like a pretty great interaction. ChristianKl seems to be usefully engaging with your frame while noting things about it that don't seem to work, seems (to me) to have optimized somewhat for being helpful, and also the conversation just wraps up pretty efficiently. (and I think this is all a higher bar than what I mean to be pushing for, i.e. having only one of those properties would have been fine)
Some evidence for that, though it also seems likely to get upvoted on the basis of "well written and evocative of a difficult personal experience", or because people relate to being outliers and unusual even if they didn't feel alienated and hurt in quite the same way. I'm unsure.
I upvoted it because it made me finally understand what in the world might be going on in Duncan's head to make him react the way he does.
I think we have very different models of things, so I will try to clarify mine. My best bubble-site example is not in English, so I will give another one - the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. Just look at the sheer LENGTH of this page!
https://www.metafilter.com/151267/Wheres-My-Cut-On-Unpaid-Emotional-Labor
There are many more than 3 comments per person there.
From my point of view, this rule creates a hard ceiling that forbids the best discussions from happening, because the best discussions are creative back-and-forth. My best discussions with friends go: one shares a model, one asks questions, or shares a different model, or shares an experience, the other reacts, etc., for way more than three comments. More like 30 comments. It's a dialog. And there are a lot of unproductive examples of that on LW, and it's quite possible (as in, I assign it a probability of 0.9) that in first-order effects, the rule will cut out unproductive discussions and will be positive.
But I find rules that prevent the best things from happening bad in some way that I can't explain clearly. Something like: I'm here to try to go higher. If that's impossible, then why bother?
I also think it's V...
Ray's pointing out the level of complaints is informative even without a (far more effortful) judgement on the merits of each complaint. There being a lot of complaints is evidence (to both the moderation team and the site users) that it's worth putting in effort here to figure out if things could be better.
It is evidence that there is some sort of problem. It's not clear evidence about what should be done about it, about what "better" means specifically. Instituting ways of not talking about the problem anymore doesn't help with addressing it.
Here's a bit of metadata on this: I can recall offhand 7 complaints from users with 2000+ karma who aren't on the mod team (most of whom had significantly more than 2000 karma, and all of whom had some highly upvoted comments and/or posts that are upvoted in the annual review). One of them cites you as being the reason they left LessWrong a few years ago, and ~3-4 others cite you as being a central instance of a pattern that means they participate less on LessWrong, or can't have particularly important types of conversations here.
I also think most of the mod team (at least 4 of them? maybe more) have had such complaints (as users, rather than as moderators).
I think there are probably at least 5 more people who complained about you by name, who I don't think have particularly legible credibility beyond "being some LessWrong users."
I'm thinking about my reply to "are the complaints valid tho?". I have a different ontology here.
There are some problems with treating this as pointing in a particular direction. There is little opportunity for people to be prompted to express opposite-sounding opinions, and so only the above opinions are available to you.
I have a concern that Said and Zack are an endangered species that I want there to be more of on LW and I'm sad they are not more prevalent. I have some issues with how they participate, mostly about tendencies towards cultivating infinite threads instead of quickly de-escalating and reframing, but this in my mind is a less important concern than the fact that there are not enough of them. Discouraging or even outlawing Said cuts that significantly, and will discourage others.
Warning to Duncan
(See also: Raemon's moderator action on Said)
Since we were pretty much on the same page, Raemon delegated writing this warning to Duncan to me, and signed off on it.
Generally, I am quite sad if, when someone points/objects to bad behavior, they end up facing moderator action themselves. It doesn’t set a great incentive. At the same time, some of Duncan’s recent behavior also feels quite bad to me, and to not respond to it would also create a bad incentive – particularly if the undesirable behavior results in something a person likes.
Here’s my story of what happened, building off of some of Duncan’s own words and his endorsement of something I said in a previous exchange with him:
Duncan felt that Said engaged in various behaviors that hurt him (confident based on Duncan’s words) and were in general bad (inferred from Duncan writing posts describing why those behaviors are bad). Such bad/hurtful behaviors include strawmanning, psychologizing at length, and failing to put in symmetric effort. For example, Said argued that Duncan banned him from his posts because Said disagreed. I am pretty sympathetic to these accusations against Said (and endorse moderation action agains...
Just noting as a "for what it's worth"
(b/c I don't think my personal opinion on this is super important or should be particularly cruxy for very many other people)
that I accept, largely endorse, and overall feel fairly treated by the above (including the week suspension that preceded it).
Moderation action on Said
(See also: Ruby's moderator warning for Duncan)
I’ve been thinking for a week, and trying to sanity-check whether there are actual good examples of Said doing-the-thing-I’ve-complained-about, rather than “I formed a stereotype of Said and pattern-matched to it too quickly”, and such.
I think Said is a pretty confusing case though. I’m going to lay out my current thinking here, in a number of comments, and I expect at least a few more days of discussion as the LessWrong community digests this. I’ve pinned this post to the top of the frontpage for the day so users who weren’t following the discussion can decide whether to weigh in.
Here’s a quick overview of how I think about Said moderation:
- Re: Recent Duncan Conflict.
- I think he did some moderation-worthy things in the recent conflict with Duncan, but a) so did Duncan, and I think there’s an “it takes two to tango” aspect to demon threads, b) at most, those’d result in me giving one or both of them a 1-week ban and then calling it a day. I basically endorse Vaniver’s take on some object-level stuff. I have a bit more to say, but not much.
- Overall pattern.
- I think Said’s overall pattern of commen
...

This sounds drastic enough that it makes me wonder: since the claimed reason was that Said's commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see if there is any sort of measurable increase in comment quality, site mood, or good contributors becoming more active moving forward?
Also, is this thing an experiment with a set duration, or a permanent measure? If it's permanent, it has a very rubber-room vibe to it, where you don't outright ban someone but continually humiliate them if they keep coming by, hoping they'll eventually get the hint.
Adding a UI element, visible to every user, on every new comment they write, on every post they will ever interface with, because one specific user tends to have a confusing communication style, seems unlikely to be the right choice. You are a UI designer and you are well aware of the limits of UI complexity, so I am pretty surprised you are suggesting this as a real solution.
But even assuming we did add such a message, there are many other problems:
- Posting such a message would communicate a level of importance for this specific norm (which does not actually come up very frequently in conversations that don't involve you and a small number of other users) that is not commensurate with its actual importance. We have the standard frontpage commenting guidelines, and they cover what I consider the actually most important things to communicate, and they are approximately the maximum length I expect new users to r
...

First, concerning the first half of your comment (re: importance of this information, best way of communicating it):
I mean, look, either this is an important thing for users to know or it isn’t. If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way, where you make no real attempt to communicate it, but then when a single user very occasionally says something that sometimes gets interpreted by some people as implying the opposite of the thing, you ban that user. You’re saying “Said, stop telling people X!” And quite aside from “But I haven’t actually done that”, my response, simply from a UX design perspective, is “Sure, but have you actually tried just telling people ¬X?”
Have you checked that users understand that they don’t have an obligation to respond to comments?
If they don’t, then it sure seems like some effort should be spent on conveying this. Right? (If not, then what’s the point of all of this?)
Second, concerning the second half of your comment:
Frankly, this whole perspective you describe just seems bizarre.
Of course I can’t possibly create a formal obligation to respond to comments. Of course ...
(I am not planning to engage further at this point.
My guess is you can figure out what I mean by various things I have said by asking other LessWrong users, since I don't think I am saying particularly complicated things, and I think I've communicated enough of my generators so that most people reading this can understand what the rules are that we are setting without having to be worried that they will somehow accidentally violate them.
My guess is we also both agree that it is not necessary for moderators and users to come to consensus in cases like this. The moderation call is made, it might or might not improve things, and you are either capable of understanding what we are aiming for, or we'll continue to take some moderator actions until things look better by our models. I think we've both gone far beyond our duty of effort to explain where we are coming from and what our models are.)
(I wrote the following before habryka wrote his message)
While I still have some disagreement here about how much of this conversation gets rendered moot, I do agree this is a fairly obvious good thing to do which would help in general, and help at least somewhat with the things I've been expressing concerns about in this particular discussion.
The challenge is communicating the right things to users at the moments they actually would be useful to know (there are lots and lots of potentially important/useful things for users to know about the site, and trying to say all of them would turn into noise).
But, I think it'd be fairly tractable to have a message like "btw, if this conversation doesn't seem productive to you, consider downvoting it and moving on with your day [link to some background]" appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)
This definitely seems like a good direction for the design of such a feature, yeah. (Some finessing is needed, no doubt, but I do think that something like this approach looks likely to be workable and effective.)
Well, no doubt most or all of what you wrote was important, but by “important” do you specifically mean “forms part of the description of what you take to be ‘the problem’, which this moderation action is attempting to solve”?
For example, as far as the “normatively correct general principles” thing goes—alright, so you think I’m factually incorrect about this particular thing I said once.[1] Let’s take for granted that I disagree. Well, and is that… a moderation-worthy offense? To disagree (with the mods? with the consensus—established how?—of Less Wrong? with anyone?) about what is essentially a philosophical claim? Are you suggesting that your correctness on this is so obvious that disagreeing can only constitute either some sort of bad faith, or blameworthy ignorance? That hardly seems true!
Or, take the links. One of them is cl...
Is that really the claim? I must object to it, if that’s so. I don’t think I’ve ever made any false claims about what social norms obtain on Less Wrong (and to the extent that some of my comments were interpreted that way, I was quick to clearly correct that misinterpretation).
Certainly the “normatively correct general principles” comment didn’t contain any such false claims. (And Raemon does not seem to be claiming otherwise.) So, the question remains: what exactly is the relevance of the philosophical disagreement? How is it connected to any purported violations of site rules or norms or anything?
I am not sure what this means. I am not a moderator, so it’s not clear to me how I can enforce any norm. (I can exemplify conformance to a norm, of course, but that, in this case, would be me replying to comments on my posts, which is not what we’re talking about here. And I can encourage or even demand conformance to some fa...
I wonder if you find this comment by Benquo (i.e., the author of the post in question; note that this comment was written just months after that post) relevant, in any way, to your views on the matter?
I could be misunderstanding all sorts of things about this feature that you've just implemented, but…
Why would you want to limit newer users from being able to declare that rate-limited users should be able to post as much as they like on newer users' posts? Shouldn't I, as a post author, be able to let Said, Duncan, and Zack post as much as they like on my posts?
Sure, but... I think I don't know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more.
Some quick thoughts:
- LessWrong totally has prerequisites. I don't think you necessarily need to be an atheist to participate in LessWrong, but if you straightforwardly believe in the Christian god, and haven't really engaged with the relevant arguments on the site, and you comment on posts that assume that there is no god, I will likely just ban you or ask you to stop. There are many other dimensions for which this is also true. Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. It sure does seem like a somewhat important concept.
- Well-Kept Gardens Die by Pacifism is IMO one of the central moderation principles of LessWrong. I have huge warning flags around your language here and feel like it's doing something pretty similar to the outraged calls for "censorship" that Eliezer refers to in that post, but I might just be misunderstanding you. In general, LessWr
...

I affirm the importance of the distinction between defending a forum from an invasion of barbarians (while guiding new non-barbarians safely past the defensive measures) and the treatment of its citizens. The quote is clearly noncentral for this case.
Thanks, to clarify: I don't intend to make a "how dare the moderators moderate Less Wrong" objection. Rather, the objection is, "How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma." (That's why the grandparent specifies "long-time, well-regarded", "many highly-upvoted contributions", "We were here first", &c.) I'm saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though you don't want to accept literally any speech (which is why the grandparent mentions "removing low-quality [...] comments" as a legitimate moderator duty).
Note that "permanently restrict the account of" is different from "moderate". For example, on 6 April, Arnold asked Achmiz to stop commenting on a particular topic, and Achmiz complied. I have no objections to that kind of moderation. I also have no objections to rate limits on particular threads, or based on recent karma scores, or for new users. The thing that I'm accusing of being arbitrary persecution is specifically the 3-comments-per-post-per-week restriction on Said Achmiz...
Hmm, I am still not fully sure about the question (your original comment said "I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here", which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.
There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said's net contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here.
One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like "purchase him out of his right to use LessWrong", by offering him something like $10k-$100k to change his commenting style or to comment less in ce...
I don't know if it's good that there's a positive bias towards karma, but I'm pretty sure the generator for it is a good impulse. I worry that calls to handle things with downvoting lead people to weaken that generator in ways that make the site worse overall even if it is the best way to handle Said-type cases in particular.
Seems sad! Seems like there is an opportunity for trade here.
Salaries in Silicon Valley are high and probably just the time for this specific moderation decision has cost around 2.5 total staff weeks for engineers that can make probably around $270k on average in industry, so that already suggests something in the $10k range of costs.
And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.
We can also donate instead, but I don't really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money more than most charities having more money, so I don't really get what this would improve. Also, not everyone cares about donating to charity, and that's fine.
I endorse much of Oliver's replies, and I'm mostly burnt out from this convo at the moment, so I can't do the follow-through here I'd ideally like. But it seemed important to publicly state some thoughts here before the moment passed:
Yes, the bar for banning or permanently limiting the speech of a longterm member in Said's reference class is very high, and I'd treat it very differently from moderating a troll, crank, or confused newcomer. But to say you can never do such moderation proves too much – it would mean that longterm users can never have enough negative effects to warrant taking permanent action on. My model of Eliezer-2009 believed and intended something similar in Well Kept Gardens.
I don't think the Spirit of LessWrong 2009 actually supports you on the specific claims you're making here.
As for “by what right do we moderate?” Well, LessWrong had died, no one was owning it, people spontaneously elected Vaniver as leader, Vaniver delegated to habryka, who founded the LessWrong team and got Eliezer's buy-in, and now we have 6 years of track reco...
In my experience (e.g., with Data Secrets Lox), moderators tend to be too hesitant to ban trolls (i.e., those who maliciously and deliberately subvert the good functioning of the forum) and cranks (i.e., those who come to the forum just to repeatedly push their own agenda, and drown out everything else with their inability to shut up or change the subject), while at the same time being too quick to ban forum regulars—both the (as these figures are usually cited) 1% of authors and the 9% of commenters—for perceived offenses against “politeness” or “swipes against the outgroup” or “not commenting in a prosocial way” or other superficial violations. These two failure modes, which go in opposite directions, somewhat paradoxically coexist quite often.
It is therefore not at all strange or incoherent to (a) agree with Eliezer that moderators should not let “free speech” concerns stop them from banning trolls and cranks, while also (b) thinking that the moderators are being much too willing (even, perhaps, to the point of ultimately self-destructive abusiveness) to ban good-faith participants whose preferences about, and quirks of, communicative styles, are just slightly to the side of the...
(Tangentially) If users are allowed to ban other users from commenting on their posts, how can I tell whether the lack of criticism in the comments of some post means that nobody wanted to criticize it (which is a very useful signal that I would want to update on), or that the author has banned some or all of their most prominent/frequent critics? In addition, I think many users may be misled by a lack of criticism if they're simply not aware of the second possibility or have forgotten it. (I think I knew it, but it hasn't entered my conscious awareness for a while, until I read this post today.)
(Assuming there's not a good answer to the above concerns) I think I would prefer to change this feature/rule to something like allowing the author of a post to "hide" commenters or individual comments, which means that those comments are collapsed by default (and marked as "hidden by the post author") but can be individually expanded, and each user can set an option to always expand those comments for themselves.
This may be true in some cases, but not all. My experience here comes from cryptography where it often takes hundreds of person-hours to find a flaw in a new idea (which can sometimes be completely fatal), and UDT, where I found a couple of issues in my own initial idea only after several months/years of thinking (hence going to UDT1.1 and UDT2). I think if you ban a few users who might have the highest motivation to scrutinize your idea/post closely, you could easily reduce the probability (at any given time) of anyone finding an important flaw by a lot.
Another reason for my concern is that the bans directly disincentivize other critics, and people who are willing to ban their critics are often unpleasant for critics to interact with in other ways, further disincentivizing critiques. I have this impression for Duncan myself which may explain why I've rarely commented on any of his posts. I seem to remember once trying to talk him out of (what seemed to me like) overreacting to a critique and banning the critic on Faceb...
Some UI thoughts as I think about this:
Right now, you see total karma for posts and comments, and total vote count, but not the number of upvotes/downvotes. So you can't actually tell when something is controversial.
One reason for this is that we (once) briefly tried turning this on, and immediately found it made the site much more stressful and anxiety-inducing. Getting a single downvote felt like "something is WRONG!", which didn't feel productive or useful. Another reason is that it can de-anonymize strong votes, because their voting power is a less common number.
But an idea I just had was that maybe we should expose that sort of information once a post becomes popular enough. Like maybe over 75 karma. [Better idea: once a post has a certain number of votes. Maybe at least 25.] At that point you have more of a sense of the overall karma distribution, so individual votes feel less weighty, and also hopefully it's harder to infer individual voters.

Tagging @jp who might be interested.
I support exposing the number of upvotes/downvotes. (I wrote a userscript for GW to always show the total number of votes, which allows me to infer this somewhat.) However that doesn't address the bulk of my concerns, which I've laid out in more detail in this comment. In connection with karma, I've observed that sometimes a post is initially upvoted a lot, until someone posts a good critique, which then causes the karma of the post to plummet. This makes me think that the karma could be very misleading (even with upvotes/downvotes exposed) if the critique had been banned or disincentivized.
I don't keep track of people's posting styles and correlate them with their names very well. For most people who post on LW, even if they do it a lot, I have negligible associations beyond "that person sounds vaguely familiar" or "are they [other person] or am I mixing them up?".
I have persistent impressions of both Said and Duncan, though.
I am limited in my ability to look up any specific Said comment or things I've said elsewhere about him because his name tragically shares a spelling with a common English word, but my model of him is strongly positive. I don't think I've ever read a Said comment and thought it was a waste of time, or personally bothersome to me, or sneaky or pushy or anything.
Meanwhile I find Duncan vaguely fascinating like he is a very weird bug which has not, yet, sprayed me personally with defensive bug juice or bitten me with its weird bug pincers. Normally I watch him from a safe distance and marvel at how high a ratio of "incredibly suspicious and hackle-raising" to "not often literally facially wrong in any identifiable ways" he maintains when he writes things. It's not against any rules to be incredibly suspicious and hackle-raising in a pu...
I don't know[1] for sure what purpose this analogy is serving in this comment, and without it the comment would have felt much less like it was trying to hijack me into associating Duncan with something viscerally unpleasant.
My guess is that it's meant to convey something like your internal emotional experience, with regards to Duncan, to readers.
I think weird bugs are neat.
I've tried for a bit to produce a useful response to the top-level comment and mostly failed, but I did want to note that
"Oh, it sort of didn't occur to me that this analogy might've carried a negative connotation, because when I was negatively gossiping about Duncan behind his back with a bunch of other people who also have an overall negative opinion of him, the analogy was popular!"
is a hell of a take. =/
It is only safe for you to have opinions if the other people don't dislike them?
I think you're trying to set up a really mean dynamic where you get to say mean things about me in public, but if I point out anything frowny about that fact you're like "ah, see, I knew that guy was Bad; he's making it Unsafe for me to say rude stuff about him in the public square."
(Where "Unsafe" means, apparently, "he'll respond with any kind of objection at all." Apparently the only dynamic you found acceptable was "I say mean stuff and Duncan just takes it.")
*shrug
I won't respond further, since you clearly don't want a big back-and-forth, but calling people a weird bug and then pretending that doesn't in practice connote disgust is a motte and bailey.
I kind of doubt you care at all, but here for interested bystanders is more information on my stance.
- I suspect you of brigading-type behavior wrt conflicts you get into. Even if you make out like it's a "get out the vote" campaign where the fact that rides to the polls don't require avowing that you're a Demoblican is important to your reception, when you're the sort who'll tell all your friends someone is being mean to you and then the karma swings around wildly I make some updates. This social power with your clique of admirers in combination with your contagious lens on the world that they pick up from you is what unnerves me.
- I experience a lot of your word choices (e.g. "gossiping behind [your] back") as squirrelly[1] , manipulative, and more rhetoric than content. I would not have had this experience in this particular case if, for example, you'd said "criticizing [me] to an unsympathetic audience". Gossip behind one's back is a social move for a social relationship. One doesn't clutch one's pearls about random people gossiping about Kim Kardashian behind her back. We have never met. I'd stand a better chance of recognizing Ms. Ka
...

For what it's worth, I had a very similar reaction to yours. Insects and arthropods are a common source of disgust and revulsion, and so comparing anyone to an insect or an arthropod, to me, shows that you're trying to indicate that this person is either disgusting or repulsive.
Poisonous frogs often have bright colors to say "hey don't eat me", but there are also ones that use a "if you don't notice me you won't eat me" strategy. Ex: cane toad, pickerel frog, black-legged poison dart frog.
Welp, guess I shouldn't pick up frogs. Not what I expected to be the main takeaway from this thread but still good to know.
First, my read of both Said and Duncan is that they appreciate attention to the object level in conflicts like this. If what's at stake for them is a fact of the matter, shouldn't that fact get settled before considering other issues? So I will begin with that. What follows is my interpretation (mentioned here so I can avoid saying "according to me" each sentence).
In this comment, Said describes as bad "various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on", without specifically identifying Duncan as proposing that norm (tho I think it's heavily implied).
Then gjm objects to that characterization as a straw man.
In this comment Said defends it, pointing out that Duncan's standard of "critics should do some of the work of crossing the gap" is implicitly a rule against "asking people for examples of their claims [without anything else]", given that Duncan thinks asking for examples doesn't count as doing the work of crossing the gap. (Earlier in the conversation Duncan calls it 0% of the work.) I think the point as I have written it here is correct and uncontroversial; I think there is an important difference between the point as I wrot...
Vaniver privately suggested to me that I may want to offer some commentary on what I could’ve done in this situation in order for it to have gone better, which I thought was a good and reasonable suggestion. I’ll do that in this comment, using Vaniver’s summary of the situation as a springboard of sorts.
So, first of all, yes, I was clearly referring to Duncan. (I didn’t expect that to be obscure to anyone who’d bother to read that subthread in the first place, and indeed—so far as I can tell—it was not. If anyone had been confused, they would presumably have asked “what do you mean?”, and then I’d have linked what I mean—which is pretty close to what happened anyway. This part, in any case, is not the problem.)
The obvious problem here is that “don’t ask people for examples of their claims”—taken literally—is, indeed, a strawman.
The question is, whose problem (to solve) is it?
There a...
I agree that the hypothetical comment you describe as better is in fact better. I think something like ... twenty-or-so exchanges with Said ago, I would have written that comment? I don't quite know how to weigh up [the comment I actually wrote is worse on these axes of prosocial cooperation and revealing cruxes and productively clarifying disagreement and so forth] with [having a justified true belief that putting forth that effort with Said in particular is just rewarded with more branches being created].
(e.g. there was that one time recently where Said said I'd blocked people due to disagreeing with me/criticizing me, and I s...
At the risk of guessing wrong, and perhaps typical-mind-fallacying, I'm imagining that you're [rightly?] feeling a lot of frustration, exasperation, and even despair about moderation on LessWrong. You've spent dozens of hours (more?) and tens of thousands of words trying to make LessWrong the garden you think it ought to be (and to protect yourself here against attackers), and just to try to uphold, indeed, basic standards for truthseeking discourse. You've written that some small validation goes a long way, so this is me trying to say that I think your feelings have a helluva lot of validity.
I don't think that you and I share exactly the same ideals for LessWrong. PerfectLessWrong!Ruby and PerfectLessWrong!Duncan would be different (or heck, even just VeryGoodLessWrongs), though I also am pretty sure that you'd be much happier with my ideal; you'd think it was pretty good if not perfect. Respectable, maybe adequate. A garden.
And I'm really sad that the current LessWrong falls really, really far short of my own ideals (and Ray of his ideals, and Oli of his ideals, etc.). And not just short of a super-amazing lofty ideal, but also short of a "this place is really under control" kind of ideal. I tak...
This is fair, and I apologize; in that line I was speaking from despair and not particularly tracking Truth.
A [less straightforwardly wrong and unfair] phrasing would have been something like "this is not a Japanese tea garden; it is a British cottage garden."
I probably rushed this comment out the door in a "defend my honor, set the record straight" instinct that I don't think reliably leads to good discourse and is not what I should be modeling on LessWrong.
I did, thanks.
I think gjm's comment was missing the observation that "comments that just ask for examples" are themselves an example of "unproductive modes of discussion where he is constantly demanding more and more rigour and detail from his interlocutors while not providing it himself", and so it wasn't cleanly about "balance: required or not?". I think a reasonable reader could come away from that comment of gjm's uncertain whether or not Said simply saying "examples?" would count as an example.
My interpretation of this section is basically the double crux dots arguing over the labels they should have, with Said disagreeing strenuously with calling his mode "unproductive" (and elsewhere over whether labor is good or bad, or how best to minimize it) and moving from the concrete examples to an abstract pattern (I suspect because he thinks the latter is easier to defend than the former).
I should also note here that I don't think you have explici...
But why should this be a problem?
Why should people say “hey, could you not, or even just a little less”? If you do something that isn’t bad, that isn’t a problem, why should people ask you to stop? If it’s a good thing to do, why wouldn’t they instead ask you to do it more?
And why, indeed, are you still speaking in this transactional way?
If you write a post about some abstract concept, without any examples of it, and I write a post that says “What are some examples?”, I am not asking you to do labor on my behalf, I am not asking for a favor (which must be justified by some “favor credit”, some positive account of favors in the bank of Duncan). Quite frankly, I find that claim ridiculous to the point of offensiveness. What I am doing, in that scenario, is making a positive contribution to the discussion, both for your benefit and (even more importantly) for the benefit of other readers and com...
Maybe "resent" is doing most work here, but an excellent reason to not respond is that it takes work. To the extent that there are norms in place that urge response, they create motivation to suppress criticism that would urge response. An expectation that it's normal for criticism to be a request for response that should normally be granted is pressure to do the work of responding, which is costly, which motivates defensive action in the form of suppressing criticism.
A culture could make it costless (all else equal) to ignore the event of a criticism having been made. This is an inessential reason for suppressing criticism, one that can be removed, and therefore should be, to make criticism cheaper and more abundant.
The content of criticism may of course motivate the author of a criticized text to make further statements, but the fact of criticism's posting by itself should not. The fact of not responding to criticism is some sort of noisy evidence of not having a good response that is feasible or hedonic to make, but that's Law, not something that can change for the sake of mechanism design.
I just want to highlight this link (to one of Duncan’s essays on his Medium blog), which I think most people are likely to miss otherwise.
That is an excellent post! If it was posted on Less Wrong (I understand why it wasn’t, of course. EDIT: I was mistaken about understanding this; see replies), I’d strong-upvote it without reservation. (I disagree with some parts of it, of course, such as one of the examples—but then, that is (a) an excellent reason to provide specific examples, and part of what makes this an excellent post, and (b) the reason why top-level posts quite rightly don’t have agree/disagree voting. On the whole, the post’s thesis is simply correct, and I appreciate and respect Duncan for having written it.)
There are some things which cannot be expressed in a non-insulting manner (unless we suppose that the target is such a saint that no criticism can affect their ego; but who among us can pretend to that?).
I did not intend insult, in the sense that insult wasn’t my goal. (I never intend insult, as a rule. What few exceptions exist, concern no one involved in this discussion.)
But, of course, I recognize that my comment is insulting. That is not its purpose, and if I could write it non-insultingly, I would do so. But I cannot.
So, you ask:
The choice was between writing something that was necessary for the purpose of fulfilling appropriate and reasonable conversational goals, but could be written only in such a way that anyone but a saint would be insulted by it—or writing nothing.
I chose the former because I judged it to be the correct choice: writing nothing, simply in order to avoid insult, would have been worse than writing the comment which I wrote.
(This explanation is also quite likely to apply to any past or future comments I write which seem to be insulting in similar fashion.)
I want to register that I don't believe you that you cannot, if we're using the ordinary meaning of "cannot". I believe that it would be more costly for you, but it seems to me that people are very often able to express content like that in your comment, without being insulting.
I'm tempted to try to rephrase your comment in a non-insulting way, but I would only be able to convey its meaning-to-me, and I predict that this is different enough from its meaning-to-you that you would object on those grounds. However, insofar as you communicated a thing to me, you could have said that thing in a non-insulting way.
Indeed, they are not—or so it would seem. So why would my comment be insulting?
After all, I didn’t write “your stated reason is bizarre”, but “I find your stated reason bizarre”. I didn’t write “it seems like your thinking here is incoherent”, but “I can’t form any coherent model of your thinking here”. I didn’t… etc.
So what makes my comment insulting?
Please note, I am not saying “my comment isn’t insulting, and anyone who finds it so is silly”. It is insulting! And it’s going to stay insulting no matter how you rewrite it, unless you either change what it actually says or so obfuscate the meaning that it’s not possible to tell what it actually says.
The thing I am actually saying—the meaning of the words, the communicated claims—imply unflattering facts about Duncan.[1] There’s no getting around that.
The only defensible recourse, for someone who objects to my comment, is to say that one should simply not say insulting things; and if there are relevant things to say which cannot be said non-insultingly, then they oughtn’t be said… and if anything is lost thereby, well, too bad.
And that would be a consistent point of view, certainly. Bu...
I think it's pretty rough for me to engage with you here, because you seem to be consistently failing to read the things I've written. I did not say it was low-effort. I said that it was possible. Separately, you seem to think that I owe you something that I just definitely do not owe you. For the moment, I don't care whether you think I'm arguing in bad faith; at least I'm reading what you've written.
Here's a potential alternative wording of your previous statement.
Original: (I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here.)
New version: I am very confused by your stated reason, and I'm genuinely having trouble seeing things from your point of view. But I would genuinely like to. Here's a version that makes a little more sense to me [give it your best shot]... but here's where that breaks down [explain]. What am I missing?
I claim with very high confidence that this new version is much less insulting (or is not insulting at all). It took me all of 15 seconds to come up with, and I claim that it either conveys the same thing as your original comment (plus added extras), or that the difference is negligible and could be overcome with an ongoing and collegial dialog of a kind that the original, insulting version makes impossible. If you have an explanation for what of value is lost in translation here, I'm listening.
If you care more about not making social attacks than telling the truth, you will get an environment which does not tell the truth when it might be socially inconvenient. And the truth is almost always socially inconvenient to someone.
So if you are a rationalist, i.e. someone who strongly cares about truth-seeking, this is highly undesirable.
Most people are not capable of executing on this obvious truth even when they try hard; the instinct to socially-smooth is too strong. The people who are capable of executing on it are, generally, big-D Disagreeable, and therefore also usually little-d disagreeable and often unpleasant. (I count myself as all three, TBC. I'd guess Said would as well, but won't put words in his mouth.)
I'm sure there is an amount of rudeness which generates more optimization-away-from-truth than it prevents. I'm less sure that this is a level of rudeness achievable in actual human societies. And for whether LW could attain that level of rudeness within five years even if it started pushing for rudeness as normative immediately and never touched the brakes - well, I'm pretty sure it couldn't. You'd need to replace most of the mod team (stereotypically, with New Yorkers, which TBF seems both feasible and plausibly effective) to get that to actually stick, probably, and it'd still be a large ship turning slowly.
A monoculture is generally bad, so having a diversity of permitted conduct is probably a good idea regardless. That's extremely hard to measure, so as a proxy, ensuring there are people representing both extremes who are prolific and part of most important conversations will do well enough.
The concern is with requiring the kind of politeness that induces substantive self-censorship. This reduces efficiency of communicating dissenting observations, sometimes drastically. This favors beliefs/arguments that fit the reigning vibe.
The problems with (tolerating) rudeness don't seem as asymmetric; it's a problem across the board, as you say. It's a price to consider for getting rid of the asymmetry of over-the-top substantive-self-censorship-inducing politeness.
I have (it would seem) a reputation for making certain sorts of comments, which are of course not intended as “attacks” of any sort (social, personal, etc.), but which are sometimes perceived as such—and which perception, in my view, reflects quite poorly on those who thus perceive said comments.
Certainly I would prefer that things were otherwise. (Isn’t this often the case, for all of us?) But this cannot be a reason to avoid making such comments; to do so would be even more blameworthy, morally speaking, than is the habit on the part of certain interlocutors to take those comments as attacks in the first place. (See also this old comment thread, which deals with the general questions of whether, and how, to alter one’s behavior in response to purported offense experienced by some person.)
No, absolutely not.
Yeah.
My view is that first it’s important to get clear on what was meant by some claim or statement or what have you. Then we can discuss whatever. (If that “whatever” includes some hypothetical interpretation of the original (ambiguous) claim, which someone in the conversation found interesting—sure, why not.) Or, at the very least, it’s important to get that clarity regardless—the tangent can proceed in parallel, if it’s something the participants wish.
EDIT: More than anything, what I don’t endorse is a norm that says that someone asking “what did you mean by that word/phrase/sentence/etc.?” must provide some interpretation of their own, whether that be a guess at the OP’s meaning, or some hypothetical, or what have you. Just plain asking “what did you mean by that?” should be ok!
Totally agreed.
Again, just chiming in, leaving the actual decision up to Ray:
My current take here is indeed that Said's hypothesis, taken fully literally and within your frame, was quite confused and bad.
But also, like, people's frames, especially in the domain of adversarial actions, hugely differ, and I've in the past been surprised by the degree to which some people's frames, despite seeming insane and gaslighty to me at first, turned out to be quite valuable. Most concretely, I have in my internal monologue indeed basically fully shifted towards using "lying" and "deception" the way Zack, Benquo and Jessica are using it, because their concept seems to carve reality at its joints much better than my previous concept of lying and deception. This despite me telling them many times that their usage of those terms is quite adversarial and gaslighty.
My current model is that when Said was talking about the preference he ascribes to you, there is a bunch of miscommunication going on, and I probably also have deep disagreements with his underlying model, but I have updated against trying to stamp down on that kind of stuff super hard, even if it sounds quite adversarial to me on first gl...
I think you are mistaken about the process that generated my previous comment; I would have preferred a response that engaged more with what I wrote.
In particular, it looks to me like you think the core questions are "is the hypothesis I quote correct? Is it backed up by the four examples?", and the parent comment looks to me like you wrote it thinking I thought the hypothesis you quote is correct and backed up by the examples. I think my grandparent comment makes clear that I think the hypothesis you quote is not correct and is not backed up by the four examples.
Why does the comment not just say "Duncan is straightforwardly right"? Well, I think we disagree about what the core questions are. If you are interested in engaging with that disagreement, so am I; I don't think it looks like your previous comment.
I have not read all the words in this comment section, let alone in all the linked posts, let alone in their comments sections, but/and - it seems to me like there's something wrong with a process that generates SO MANY WORDS from SO MANY PEOPLE and takes up SO MUCH PERSON-TIME for what is essentially two people not getting along. I get that an individual social conflict can be a microcosm of important broader dynamics, and I suspect that Duncan and/or Said might find my "not getting along" summary trivializing, which may even be true, as noted I haven't read all the words - just, still, is this really the best thing for everyone involved to be doing with their time?
This seems like a situation that is likely to end up ballooning into something that takes up a lot of time and energy. So then, it seems worth deciding on an "appetite" up front. Is this worth an additional two hours of time? Six? Sixty? Deciding on that now will help avoid a scenario where (significantly) more time is spent than is desirable.
Skimmed all the comments here and wanted to throw in my 2c (while also being unlikely to substantively engage further, take that into account if you're thinking about responding):
- It seems to me that people should spend less time litigating this particular fight and more time figuring out the net effects that Duncan and Said have on LW overall. It seems like mods may be dramatically underrating the value of their time and/or being way too procedurally careful here, and I would like to express that I'd support them saying stuff like "idk exactly what went wrong but you are causing many people on our site (including mods) to have an unproductive time, that's plenty of grounds for a ban".
- It seems to me that many (probably most) people who engage with Said will end up having an unproductive and unpleasant time. So then my brain started generating solutions like "what if you added a flair to his comments saying 'often unproductive to engage'" and then I was like "wait this is clearly a missing stair situation (in terms of the structural features not the severity of the misbehavior) and people are in general way too slow to act on those; at the point where this seems like a plausibly-net-...
But how do you find the rare good stuff amidst all the bad stuff? I tend to do it with a combination of looking at karma, checking the comments to see whether or not there’s good criticism, and finally reading it myself if it passes the previous two filters. But if a potentially good criticism was banned or disincentivized, then that 1) causes me to waste time (since it distorts both signals I rely on), and 2) potentially causes me to incorrectly judge the post as "good" because I fail to notice the flaw myself. So what do you do such that it doesn't matter whether or not there's criticism?
Thanks for weighing in! Fwiw I've been skimming but not particularly focused on the litigation of the current dispute, and instead focusing on broader patterns. (I think some amount of litigation of the object level was worth doing but we're past the point where I expect marginal efforts there to help)
One of the things that's most cruxy to me is what people who contribute a lot of top content* feel about the broader patterns, so, I appreciate you chiming in here.
*roughly operationalized as "write stuff that ends up in the top 20 or top 50 of the annual review"
Here is some information about my relationship with posting essays and comments to LessWrong. I originally wrote it for a different context (in response to a discussion about how many people avoid LW because the comments are too nitpicky/counterproductive) so it's not engaging directly with anything in the OP, but @Raemon mentioned it would be useful to have here.
*
I *do* post on LW, but in a very different way than I think I would ideally. For example, I can imagine a world where I post my thoughts piecemeal pretty much as I have them, where I have a research agenda or a sequence in mind and I post each piece *as* I write it, in the hope that engagement with my writing will inform what I think, do, and write next. Instead, I do a year's worth of work (or more), make a 10-essay sequence, send it through many rounds of editing, and only begin publishing any part of it when I'm completely done, having decided in advance to mostly ignore the comments.
It appears to me that what I write is strongly in line with the vision of LW (as I understand it; my understanding is more an extrapolation of Eliezer's founding essays and the name of the site than a reflection of discussion with current ...
Okay, overall outline of thoughts on my mind here:
- What actually happened in the recent set of exchanges? Did anyone break any site norms? Did anyone do things that maybe should be site norms but we hadn't actually made it an explicit rule and we should take the opportunity to develop some case law and warn people not to do it in the future?
- 5 years ago, the moderation team issued Said a mod warning about a common pattern of engagement he does that a lot of people have complained about (this was operationalized as "demanding more interpretive labor than he has given"). We said if he did it again we'd ban him for a month. My vague recollection is that he basically didn't do it for a couple years after the warning, but maybe started to somewhat over the past couple years; I'm not sure (I think he may not have done the particular thing we asked him not to, but I've had a growing sense that his commenting is making me more wary of how I use the site). What are my overall thoughts on that?
- Various LW team members have concerns about how Duncan handles conflict. I'm a bit confused about how to think about it in this case. I think a number of other users are worried about this too. We should prob...
Maybe explicit rules against blocking users from "norm-setting" posts.
On blocking users from commenting
I still endorse authors being able to block other users (whether for principled reasons, or just "this user is annoying"). I think a) it's actually really important for authors that the site be fun to use, b) there are a lot of users who are dealbreakingly annoying to some people but not others, and banning them from the whole site would be overkill, and c) authors aren't obligated to lend their own karma/reputation to give space to other people's content. If an author doesn't want your comments on his post, whether for defensible reasons or not, I think it's an okay answer that those commenters make their own post or shortform arguing the point elsewhere.
Yes, there are some trivial inconveniences to posting that criticism. I do track that in the cost. But I think that is outweighed by the effect on authors being motivated to post.
That all said...
Blocking users on "norm-setting posts"
I think it's more worrisome to block users on posts that are making major momentum towards changing site norms/culture. I don't think the censorship effects are that strong or distorting in most c...
Recap of mod team history with Said Achmiz
First, some background context. When LW2.0 was first launched, the mod team had several back-and-forths with Said over complaints about his commenting style. He was (and I think still is) the most-complained-about LW user. We considered banning him.
Ultimately we told him this:
...
I think some additional relevant context is this discussion from three years ago, which I think was 1) an example of Said asking for definitions without doing any interpretive labor, 2) appreciated by some commenters (including the post author, me), and 3) reacted to strongly by people who expected it to go poorly, including some mods. I can't quickly find any summaries we posted after the fact.
Death by a thousand cuts and "proportionate"(?) response
A way this all feels relevant to current disputes with Duncan is that the thing that is frustrating about Said is not any individual comment, but an overall pattern that doesn't emerge as extremely costly until you see the whole thing. (i.e. if there's a spectrum of how bad behavior is, from 0-10, and things that are a "3" are considered bad enough to punish, someone who's doing things that are bad at a "2.5" or "2.9" level doesn't quite feel worth reacting to. But if someone does them a lot it actually adds up to being pretty bad.)
If you point this out, people mostly shrug and move on with their day. So, to point it out in a way that people actually listen to, you have to do something that looks disproportionate if you're just paying attention to the current situation. And, also, the people who care strongly enough to see that through tend to be in an extra-triggered/frustrated state, which means they're not at their best when they're doing it.
I think Duncan's response looks very out-of-proportion. I think Duncan's response is out of proportion to some degree (see Vaniver's thread for some reasons why. I have some more reasons I ...
Personally, the thing I think should change with Said is that we need more of him, preferably a dozen more people doing the same thing. If there were a competing site run according to Said's norms, it would be much better for pursuing the art of rationality than modern LessWrong is; disagreeable challenges to question-framing and social moves are desperately necessary to keep discussion norms truth-tracking rather than convenience-tracking.
But this is not an argument I expect to be able to win without actually trying the experiment. And even then I would expect at least five years would be required to get unambiguous results.
I am not sure what you mean, didn't Ray respond on the same day that you tagged him?
I haven't read the details of all of the threads, but I interpreted your comment here as "the mod team ignored your call for clarification" as opposed to "the mod team did respond to your call for clarification basically immediately, but there was some <unspecified issue> with it".