Update: Ruby and I have posted moderator notices for Duncan and Said in this thread. This was a set of fairly difficult moderation calls on established users and it seems good for the LessWrong userbase to have the opportunity to evaluate it and respond. I'm stickying this post for a day-or-so.


Recently there's been a series of posts and comment back-and-forth between Said Achmiz and Duncan Sabien, which escalated enough that it seemed like site moderators should weigh in.

For context, here's a quick recap of recent relevant events as I'm aware of them. (I'm glossing over many details that are relevant, but getting everything exactly right is tricky.)

  1. Duncan posts Basics of Rationalist Discourse. Said writes some comments in response. 
  2. Zack posts "Rationalist Discourse" Is Like "Physicist Motors", in which Duncan and Said argue some more; Duncan eventually says "goodbye", which I assume coincides with banning Said from commenting further on Duncan's posts. 
  3. I publish LW Team is adjusting moderation policy. Lionhearted suggests "Basics of Rationalist Discourse" as a standard the site should uphold. Paraphrasing here, Said objects to a post being set as a site standard if not all non-banned users can discuss it. More discussion ensues.
  4. Duncan publishes Killing Socrates, a post about a general pattern of LW commenting that alludes to Said but doesn't reference him by name. Commenters other than Duncan do bring up Said by name, and the discussion gets into "is Said net positive/negative for LessWrong?" in a discussion section where Said can't comment.
  5. @gjm publishes On "aiming for convergence on truth", which further discusses/argues a principle from Basics of Rationalist Discourse that Said objected to. Duncan and Said argue further in the comments. I think it's a fair gloss to say "Said makes some comments about what Duncan did, which Duncan says are false enough that he'd describe Said as intentionally lying about them. Said objects to this characterization" (although exactly how to characterize this exchange is maybe a crux of discussion)

LessWrong moderators got together for ~2 hours to discuss this overall situation, and how to think about it, both as an object-level dispute and in terms of some high-level questions like "how do the culture/rules/moderation of LessWrong work?". 

I think we ended up with fairly similar takes, but getting to the point where we all agree 100% on what happened and what to do next seemed like a longer project, and we each had subtly different frames on the situation. So, some of us (at least Vaniver and I, maybe others) are going to start by posting some top-level comments here. People can weigh in on the discussion. I'm not 100% sure what happens after that, but we'll reflect on the discussion and decide whether to take any high-level mod actions.

If you want to weigh in, I encourage you to take your time even if there's a lot of discussion going on. If you notice yourself in a rapid back and forth that feels like it's escalating, take at least a 10 minute break and ask yourself what you're actually trying to accomplish. 

I do note: the moderation team will be making an ultimate call on whether to take any mod actions based on our judgment. (I'll be the primary owner of the decision, although I expect if there's significant disagreement among the mod team we'll talk through it a lot). We'll take into account arguments various people post, but we aren't trying to reflect the wisdom of crowds. 

So you may want to focus on engaging with our cruxes rather than with what other random people in the comments think.


Raemon (Moderator Comment, pinned):

Preliminary Verdict (but not "operationalization" of verdict)

tl;dr – @Duncan_Sabien and @Said Achmiz can each write up to two more comments on this post discussing what they think of this verdict, but are otherwise on a temporary ban from the site until they have negotiated with the mod team and settled on one of the following:

  • credibly commit to changing their behavior in a fairly significant way,
  • accept some kind of tech solution that limits their engagement in some reliable way that doesn't depend on their continued behavior, or
  • be banned from commenting on other people's posts (but still allowed to make new top-level posts and shortforms).

(After the two comments they can continue to PM the LW team, although we'll have some limit on how much time we're going to spend negotiating)

Some background:

Said and Duncan are the two most complained-about users since LW 2.0 started (probably both in the top 5, possibly literally the top 2). They also both have many good qualities I'd be sad to see go. 

The LessWrong team has spent hundreds of person hours thinking about how to moderate them over the years, and while I think a lot of that was worthwhile (from a perspective of "we learned new...

I generally agree with the above and expect to be fine with most of the specific versions of any of the three bulleted solutions that I can actually imagine being implemented.

I note re:

It'd be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.

... that (in line with the thesis of my most recent post) I strongly predict that a decent chunk of the high-contributing users who LW has already lost would've been less likely to leave and would be more likely to return with marginal movement in that direction.

I don't know how best to operationalize this, but if anyone on the mod team feels like reaching out to e.g. ~ten past heavy-hitters that LW actively misses, to ask them something like "how would you have felt if we had moved 25% in this direction," I suspect that the trend would be clear. But the LW of today seems to me to be one in which the evaporative cooling has already gone through a couple of rounds, and thus I expect the LW of today to be more "what? No, we're well-adapted to the current environment; we're the ones who've been filtered for."

(If someone on the team does this, and e.g. 5 out of 8 people the LW team misses respond in the other direction, I will in fact take that seriously, and update.)

Nod. I want to clarify, the diff I'm asking about and being skeptical about is "assuming, holding constant, that LessWrong generally tightens moderation standards along many dimensions, but doesn't especially prioritize the cluster of areas around 'strawmanning being considered especially bad' and 'making unfounded statements about a person's inner state'" i.e. the LessWrong team is gearing up to invest a lot more in moderation one way or another. I expect you to be glad that happened, but still frequently feel in pain on the site and feel a need to take some kind of action regarding it. So, the poll I'd want is something like "given overall more mod investment, are people still especially concerned about the issues I associate with Duncan-in-particular". I agree some manner of poll in this space would be good, if we could implement it.
FWIW, I don't avoid posting because of worries of criticism or nitpicking at all. I can't recall a moment that's ever happened. But I do avoid posting once in a while, and avoid commenting, because I don't always have enough confidence that, if things start to move in an unproductive way, there will be any *resolution* to that.

If I'd been on LessWrong a lot 10 years ago, this wouldn't stop me much. I used to be very... well, not happy exactly, but willing, to spend hours fighting the good fight and highlighting all the ways people are being bullies or engaging in bad argument norms or polluting the epistemic commons or using performative Dark Arts and so on. But moderators of various sites (not LW) have often failed to be able to adjudicate such situations to my satisfaction, and over time I just felt like it wasn't worth the effort in most cases.

From what I've observed, the LW mod team is far better than most sites at this. But when I imagine a nearer-to-perfect world, it does include a lot more "heavy-handed" moderation, in the form of someone outside of an argument being willing and able to judge and highlight whether someone is failing in some essential way to be a productive conversation partner.

I'm not sure what the best way to do this would be, mechanically, given realistic time and energy constraints. Maybe a special "Flag a moderator" button that has a limited number of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate? Maybe even that would be too onerous, but *shrugs* There's probably a scale at which it is valuable for most people while still being insufficient for someone like Duncan. Maybe the amount decreases each time you're ruled against.

Overall I don't want to overpromise something like "if LW has a stronger concentration-of-force expectation for good conversation norms I'd participate 100x more instead of just reading." But 10x more to begin with, certainly, and maybe more than that over time.
This is similar to the idea for the Sunshine Regiment from the early days of LW 2.0, where the hope was that if we had a wide team of people who were sometimes called on to do mod-ish actions (like explaining what's bad about a comment, or how it could have been worded, or linking to the relevant part of The Sequences, or so on), we could get much more of it. (It would be a counterspell to the bystander effect (when someone specific gets assigned a comment to respond to), a license to respond at all (because otherwise who are you to complain about this comment?), a counterfactual-matching incentive to do it (if you do the work you're assigned, you also fractionally encourage everyone else in your role to do the work they're assigned), and a scheme to lighten the load (as there might be more mods than things to moderate).)

It ended up running into the problem that there actually weren't all that many people suited to and interested in doing moderator work, and so there was the small team of people who would do it (which wasn't large enough to reliably feel on top of things instead of needing to prioritize to avoid scarcity).

I also don't think there's enough uniformity of opinion among moderators or high-karma users or w/e that having a single judge evaluate whole situations will actually resolve them. (My guess is that if I got assigned to this case Duncan would have wanted to appeal, and if RobertM got assigned to this case Said would have wanted to appeal, as you can see from the comments they wrote in response. This is even tho I think RobertM and I agree on the object-level points and only disagree on interpretations and overall judgments of relevance!)

I feel more optimistic about something like "a poll" of a jury drawn from some limited pool, where some situations go 10-0, others 7-3, some 5-5; this of course 10xs the costs compared to a single judge. (And open-access polls have both the benefit and drawback of volunteer labor.)
All good points, and yeah, I did consider the issue of "appeals", but considered "accept the judgement you get" part of the implicit (or even explicit if necessary) agreement made when raising that flag in the first place. Maybe it would require both people to mutually accept it. But I'm glad the "pool of people" variation was tried, even if it wasn't sustainable as volunteer work.
  I'm not sure that's true? I was asked at the time to be Sunshine mod, I said yes, and then no one ever followed up to assign me any work. At some point later I was given an explanation, but I don't remember it.
You mean it's considered a reasonable thing to aspire to, and just hasn't reached the top of the list of priorities? This would be hair-raisingly alarming if true.
I'm not sure I parse this. I'd say yes, it's a reasonable thing to aspire to and it hasn't reached the top of (the moderators'/admins') priorities. You say "that would be alarming", and infer... something? I think you might be missing some background context on how much I think Duncan cares about this, and what I mean by not prioritizing it to the degree he does.

(I'm about to make some guesses about Duncan. I expect to re-enable his commenting within a day or so and he can correct me if I'm wrong.)

I think Duncan thinks "Rationalist Discourse" Is Like "Physicist Motors" strawmans his position, and still gets mostly upvoted, and if he wasn't going out of his way to make this obvious, people wouldn't notice. And when he does argue that this is happening, his comment doesn't get upvoted much at all. You might just say "well, Duncan is wrong about whether this is strawmanning". I think it is [edit for clarity: somehow] strawmanning, but Zack's post still has some useful frames and it's reasonable for it to be fairly upvoted.

I think if I were to try to say "knock it off, here's a warning" the way I think Duncan wants me to, this would a) just be more time-consuming than mods have the bandwidth for (we don't do that sort of move in general, not just for this class of post), and b) disincentivize literal-Zack and new marginal Zack-like people from posting; and I think the amount of strawmanning here is just not bad enough to be worth that. (see this comment)

It's a bad thing to institute policies when good proxies are missing. It doesn't matter if the intended objective is good; a policy that isn't feasible to sanely execute makes things worse.

Whether statements about someone's inner state are "unfounded" or whether something is a "strawman" is hopelessly muddled in practice, only open-ended discussion has a hope of resolving that. Not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.

But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in consideration of factual questions? Should we litigate p(doom) next? This is just obviously out of scope, I don't see a principled difference. People should be allowed to be wrong, that's the only way to notice being right based on observation of arguments (as opposed to by thinking on your own).

(So I think it's not just good proxies needed to execute a policy that are missing in this case, but the objective is also bad. It's bad on both levels, hence "hair-raisingly alarming".)

I'm actually still kind of confused about what you're saying here (and in particular whether you think the current moderator policy of "don't get involved most of the time" is correct)

You implied and then confirmed that you consider a policy for a certain objective an aspiration. I argued that the policies I can imagine that target that objective would be impossible to execute, making things worse in collateral damage; and that, separately, the objective itself seems bad (moderating factual claims).

(In the above two comments, I'm not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn't seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I'm not averse to re-injecting the context into their discussion. But I won't necessarily find that interesting or have things to say on.)

So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack and Said are gesturing at some of the moderators' arguments getting precariously close to appeals to moderate factual claims. Or that escalation in moderation is being called for in response to unwillingness to agree with moderators on mostly factual questions (a matter of integrity) or to implicitly take into account some piece of alleged knowledge. This seems related...

Okay, gotcha, I had not understood that. (Vaniver's comment elsethread had also cleared this up for me; I just hadn't gotten around to replying to it yet.)

Part of what "not close to the top of our list of priorities" means is that I haven't actually thought that much about the issue in general. On the question of "do LessWrong moderators think they should respond to strawmanning?" (or various other fallacies), my guess (thinking about it for like 5 minutes recently) is something like: I don't think it makes sense for moderators to have a "policy against strawmanning", in the sense that we take some kind of moderator action against it. But a thing I think we might want to do is, when we notice someone strawmanning, make a comment saying "hey, this seems like strawmanning to me?" (which we aren't treating as a special mod comment with special authority; more like just proactively being a good conversation participant). And, if we had a lot more resources, we might try to do something like "proactively noticing and responding to various fallacious arguments at scale."
(FYI @Vladimir_Nesov I'm curious if this sort of thing still feels 'hair raisingly alarming' to you)
(Note that I see this issue as fairly different from the issue with Said, where the problem is not any one given comment or behavior, but an aggregate pattern)
Why do you think it's strawmanning, though? What, specifically, do you think I got wrong? This seems like a question you should be able to answer!

As I've explained, I think that strawmanning accusations should be accompanied by an explanation of how the text that the critic published materially misrepresents the text that the original author published. In a later comment, I gave two examples illustrating what I thought the relevant evidentiary standard looks like.

If I had a more Said-like commenting style, I would stop there, but as a faithful adherent of the church of arbitrarily large amounts of interpretive labor, I'm willing to do your work for you. When I imagine being a lawyer hired to argue that "'Rationalist Discourse' Is Like 'Physicist Motors'" engages in strawmanning, and trying to point to which specific parts of the post constitute a misrepresentation, the two best candidates I come up with are (a) the part where the author claims that "if someone did [speak of 'physicist motors'], you might quietly begin to doubt how much they really knew about physics", and (b) the part where the author characterizes Bensinger's "defeasible default" of "role-playing being on the same side as the people who disagree with you" as being what members of other intellectual communities would call "concern trolling."

However, I argue that both examples (a) and (b) fail to meet the relevant standard, of the text that the critic published materially misrepresenting the text that the original author published. In the case of (a), while the most obvious reading of the text might be characterized as rude or insulting insofar as it suggests that readers should quietly begin to doubt Bensinger's knowledge of rationality, insulting an author is not the same thing as materially misrepresenting the text that the author published. In the case of (b), "concern-trolling" is a pejorative term; it's certainly true that Bensinger would not self-identify as engaging in concern-trolling.
I meant the primary point of my previous comment to be: Duncan's accusation in that thread is below the threshold of "deserves moderator response" (i.e. Duncan wishes the LessWrong moderators would intervene on things like that on his behalf [edit: reliably and promptly], and I don't plan to do that, because I don't think it's that big a deal). (I edited the previous comment to say "kinda" strawmanning, to clarify the emphasis more.)

My point here was just explaining to Vladimir why I don't find it alarming that the LW team doesn't prioritize strawmanning the way Duncan wants. (I'm still somewhat confused about what Vlad meant with his question, though, and am honestly not sure what this conversation thread is about.)
I see Vlad as saying "that it's even on your priority list, given that it seems impossible to actually enforce, is worrying" not "it is worrying that it is low instead of high on your priority list."
I think it plausibly is a big deal and mechanisms that identify and point out when people are doing this (and really, I think a lot of the time it might just be misunderstanding) would be very valuable. I don't think moderators showing up and making and judgment and proclamation is the right answer. I'm more interested in making it so people reading the thread can provide the feedback, e.g. via Reacts. 
[DEACTIVATED] Duncan Sabien:
Just noting that "What specifically did it get wrong?" is a perfectly reasonable question to ask, and is one I would have (in most cases) been willing to answer, patiently and at length. That I was unwilling in that specific case is an artifact of the history of Zack being quick to aggressively misunderstand that specific essay, in ways that I considered excessively rude (and which Zack has also publicly retracted). Given that public retraction, I'm considering going back and in fact answering the "what specifically" question, as I normally would have at the time. If I end up not doing so, it will be more because of opportunity costs than anything else. (I do have an answer; it's just a question of whether it's worth taking the time to write it out months later.)
I'm very confused: how do you tell if someone is genuinely misunderstanding or deliberately misunderstanding a post? The author can say that a reader's post is an inaccurate representation of the author's ideas, but how can the author possibly read the reader's mind and conclude that the reader is doing it on purpose? Isn't that a claim that requires exceptional evidence?

Accusing someone of strawmanning is hurtful if false, and it shuts down conversations because it pre-emptively casts the reader in an adversarial role. Judging people based on their intent is also dangerous, because intent is near-unknowable, which means that judgments are more likely to be influenced by factors other than truth. It won't matter how well-meaning you are, because that is difficult to prove; what matters is how well-meaning other people believe you to be, which is more susceptible to biases (e.g. people who are richer, more powerful, or more attractive get more leeway).

I personally would very much rather people be judged by their concrete actions or the impact of those actions (e.g. saying someone consistently rephrases arguments in ways that do not match the author's intent or the majority of readers' understanding), rather than their intent (e.g. saying someone is strawmanning). To be against both strawmanning (with weak evidence) and "making unfounded statements about a person's inner state" seems to me like a self-contradictory and inconsistent stance.

I think Said and Duncan are clearly channeling this conflict, but the conflict is not about them, and doesn't originate with them. So by having them go away or stop channeling the conflict, you leave it unresolved and without its most accomplished voices, shattering the possibility of resolving it in the foreseeable future. It's the hush-hush strategy of dealing with troubling observations: fixing symptoms instead of researching the underlying issues, however onerous that is proving to be.

(This announcement is also rather hush-hush; it's not a post, and so I've only just discovered it, 5 days later. This leaves it with less scrutiny than I think the transparency of such an important step requires.)

It's an update to me that you hadn't seen it (I figured since you had replied to a bunch of other comments you were tracking the thread, and more generally figured that since there are 360 comments on this thing it wasn't suffering from lack of scrutiny). But, plausibly we should pin it for a day when we make our next set of announcement comments (which are probably coming sometime this weekend, fwiw).
I meant this thread specifically, with the action announcement, not the post. The thread was started 4 days after the post, so everyone who wasn't tracking the post had every opportunity to miss it. (It shouldn't matter for the point about scrutiny that I in particular might've been expected to not miss it.)

Just want to note that I'm less happy with a LessWrong without Duncan. I very much value Duncan's pushback against what I see as a slow decline in quality, and so I would prefer him to stay and continue doing what he's doing. The fact that he's being complained about makes sense, but is mostly a function of him doing something valuable. I have had a few times where I have been slapped down by Duncan, albeit in comments on his Facebook page, where it's much clearer that his norms are operative, and I've been annoyed. But each of those times, despite being frustrated, I have found that I'm being pushed in the right direction and corrected for something I'm doing wrong.

I agree that it's bad that his comments are often overly confrontational, but there's no way to deliver constructive feedback that doesn't involve a degree of confrontation, and I don't see many others pushing to raise the sanity waterline. In a world where a dozen people were fighting the good fight, I'd be happy to ask him to take a break. But this isn't that world, and it seems much better to actively promote a norm of people saying they don't have energy or time to engage than telling Duncan (and maybe / hopefully others) not to push back when they see thinking and comments which are bad. 

The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics

I think I want to reiterate my position that I would be sad about Said not being able to discuss Circling (which I think is one of the topics in that fuzzy cluster). I would still like to have a written explanation of Circling (for LW) that is intelligible to Said, and him being able to point out which bits are unintelligible and not feel required to pretend that they are intelligible seems like a necessary component of that.

With regards to Said's 'general pattern', I think there's a dynamic around socially recognized gnosis where sometimes people will say "sorry, my inability/unwillingness to explain this to you is your problem" and have the commons on their side or not, and I would be surprised to see LW take the position that authors decide for that themselves. Alternatively, tech that somehow makes this more discoverable and obvious--like polls or reacts or w/e--does seem good.

I think productive conversations stem from there being some (but not too much) diversity in what gnosis people are willing to recognize, and in the ability for subspaces to have smaller conversations that require participants to recognize some gnosis.

Is there any evidence that either Duncan or Said are actually detrimental to the site in general, or is it mostly in their interactions directly with each other? As far as I can see, 99% of the drama here is in their conflicts directly with each other and heavy moderation team involvement in it.

From my point of view (as an interested reader and commenter), this latest drama appears to have started partly due to site moderation essentially forcing them into direct conflict with each other via a proposal to adopt norms based on Duncan's post while Said and others were and continue to be banned from commenting on it.

From this point of view, I don't see what either of Said or Duncan have done to justify any sort of ban, temporary or not.

This decision is based mostly on past patterns with both of them, over the course of ~6 years.

The recent conflict, in isolation, is something where I'd kinda look sternly at them and kinda judge them (and maybe a couple others) for getting themselves into a demon thread*, where each decision might look locally reasonable but nonetheless it escalates into a weird proliferating discussion that is (at best) a huge attention sink and (at worst) gets people into an increasingly antagonistic fight that brings out people's worse instincts. If I spent a long time analyzing I might come to more clarity about who was more at fault, but I think the most I might do for this one instance is ban one or both of them for like a week or so and tell them to knock it off.

The motivation here is from a larger history. (I've summarized one chunk of that history from Said here, and expect to go into both a bit more detail about Said and a bit more about Duncan in some other comments soon, although I think I describe the broad strokes in the top-level-comment here)

And notably, my preference is for this not to result in a ban. I'm hoping we can work something out. The thing I'm laying down in this comment is "we do have to actually work something out."

I condemn the restrictions on Said Achmiz's speech in the strongest possible terms. I will likely have more to say soon, but I think the outcome will be better if I take some time to choose my words carefully.

the gears to ascension:
his speech is not being restricted in variety, it's being ratelimited. the difference there is enormous.

Did we read the same verdict? The verdict says that the end of the ban is conditional on the users in question "credibly commit[ting] to changing their behavior in a fairly significant way", "accept[ing] some kind of tech solution that limits their engagement in some reliable way that doesn't depend on their continued behavior", or "be[ing] banned from commenting on other people's posts".

The first is a restriction on variety of speech. (I don't see what other kind of behavioral change the mods would insist on—or even could insist on, given the textual nature of an online forum where everything we do here is speech.) The third is a restriction of venue, which I claim predictably results in a restriction of variety. (Being forced to relegate your points into a shortform or your own post, won't result in the same kind of conversation as being able to participate in ordinary comment threads.) I suppose the "tech solution" of the second could be mere rate-limiting, but the "doesn't depend on their continued behavior" clause makes me think something more onerous is intended.

(The grandparent only mentions Achmiz because I particularly value his contributions, and because I think many people would prefer that I don't comment on the other case, but I'm deeply suspicious of censorship in general, for reasons that I will likely explain in a future post.)

The tech solution I'm currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I'm leaning towards either "3 comments per post" or "3 comments per post per day". (My ideal world, for Said, is something like "3 comments per post to start, but, if nothing controversial happens and he's not ruining the vibe, he gets to comment more without limit." But that's fairly difficult to operationalize, and a lot of dev-time for a custom feature limiting one or two particular users.)
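(For concreteness, the "3 comments per post per day" variant could be sketched as something like the following. This is a hypothetical illustration only, not the actual LessWrong implementation; all names and the rolling-window design are my own assumptions.)

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sketch of a per-user, per-post comment rate limit.
# None of these names correspond to real LessWrong code.
DAILY_LIMIT = 3

class CommentRateLimiter:
    def __init__(self, limit=DAILY_LIMIT, window=timedelta(days=1)):
        self.limit = limit
        self.window = window
        # (user, post) -> timestamps of recent comments
        self.history = defaultdict(list)

    def allow_comment(self, user, post, now=None):
        now = now or datetime.utcnow()
        key = (user, post)
        # Keep only timestamps inside the rolling window.
        self.history[key] = [t for t in self.history[key] if now - t < self.window]
        if len(self.history[key]) >= self.limit:
            return False  # over the limit on this post today
        self.history[key].append(now)
        return True
```

Under this sketch, a rate-limited user could leave three comments on a given post within a day, after which further comments on that post are blocked until the window rolls over, while commenting on other posts remains unaffected.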

I do have a high-level goal of "users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so". The question here is "do you want the 'real work' of developing new rationality techniques to happen on LessWrong, or someplace else where Said/etc can't bother you?" (the latter being what's mostly currently happening). 

So, yeah, the concrete outcome here is Said not getting to comment everywhere he wants, but he's already not getting to do that, because the relevant content + associated usage-building happens off LessWrong, and then he finds himself in a world where everyone is "sudden...

a high level goal of "users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so".

We already have a user-level personal ban feature! (Said doesn't like it, but he can't do anything about it.) Why isn't the solution here just, "Users who don't want to receive comments from Said ban him from their own posts"? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you should be able to answer.

the concrete outcome here is Said not getting to comment everywhere he wants, but he's already not getting to do that, because the relevant content + associated usage-building happens off lesswrong

This is trivially false as stated. (Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I'm unlikely to guess it; you'll have to clarify.) It's true that relevant content and associated usage-building happens off Less Wrong. It is not true that this prevents Said from commenting everywhere he wants (except where already banned from posts by indi…)

Stipulating that votes on this comment are more than negligibly informative on this question... it seems bizarre to count karma rather than agreement votes (currently 51 agreement from 37 votes). But also anyone who downvoted (or disagreed) here is someone who you're counting as not being taken into account, which seems exactly backwards.
Some other random notes (probably not maximally cruxy for you, but):

1. If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curious/interpretative mode, without needing to wait for an author or mod to ban him from that post), then I'd be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation. But we've had tons of conversations with Said asking him to adjust his behavior, and he seems pretty committed to sticking to his current behavior. At best he seems grudgingly willing to avoid some threads if there are clear-cut rules we can spell out, but I don't trust him to actually tell the difference in many edge cases. We've spent a hundred+ person hours over the years thinking about how to limit Said's damage, and have a lot of other priorities on our plate. I consider it a priority to resolve this in a way that won't continue to eat up more of our time.

2. I did list "actually just encourage people to use the ban tool more" as an option. (DirectedEvolution didn't even know it was an option until it was pointed out to him recently.) If you actually want to advocate for that over a Said-specific rate-limit, I'm open to that (my model of you thinks that's worse). (Note: I, and I think several other people on the mod team, would have banned him from our comment sections if I didn't feel an obligation as a mod/site-admin to have a more open comment section.)

3. I will probably build something that lets people Opt Into More Said. I think it's fairly likely the mod team will do some heavier-handed moderation in the nearish future, and I think a reasonable countermeasure to build, to alleviate some downsides of this, is to also give authors a "let this user comment unfettered on my posts, even though the mod team has generally restricted them in some way." (I don't expect th…
I am a little worried that this is a generalization that doesn't line up with actual evidence on the ground, and instead is caused by some sort of vibe spiral. (I'm reluctant to suggest a lengthy evidence review, both because of the costs and because I'm somewhat uncertain of the benefits--if the problem is that lots of authors find Said annoying or his reactions unpredictable, and we review the record and say "actually Said isn't annoying", those authors are unlikely to find it convincing.) In particular, I keep thinking about this comment (noting that I might be updating too much on one example). I think we have evidence that "Said can engage with open/curious/interpretative topics/posts in a productive way", and should maybe try to figure out what was different that time.
I think in the sense of the general garden-style conflict (rather than the Said/Duncan conflict specifically) this is the only satisfactory solution that's currently apparent: users picking the norms they get to operate under, like Commenting Guidelines, but more meaningful in practice. There should be, for a start, just two options, Athenian Garden and Socratic Garden, so that commenters can cheaply make decisions about what kinds of comments are appropriate for a particular post, without having to read custom guidelines. Excellent. I predict that Said wouldn't be averse to voluntarily not commenting on "open/curious/cooperative" posts, or not commenting there in the kind of style that adherents of that culture dislike, so that "specifically banning Said" from that is an unnecessary caveat.
Well, I'm glad you're telling actual-me this rather than using your model of me. I count the fact your model of me is so egregiously poor (despite our having a number of interactions over the years) as a case study in favor of Said's interaction style (of just asking people things, instead of falsely imagining that you can model them).

Yes, I would, actually, want to advocate for informing users about a feature that already exists that anyone can use, rather than writing new code specifically for the purpose of persecuting a particular user that you don't like. Analogously, if the town council of the city I live in passes a new tax increase, I might grumble about it, but I don't regard it as a direct personal threat. If the town council passes a tax increase that applies specifically to my friend Said Achmiz, and no one else, that's a threat to me and mine. A government that does that is not legitimate.

So, usually when people make this kind of "hostile paraphrase" in an argument, I tend to take it in stride. I mostly regard it as "part of the game": I think most readers can tell the difference between an attempted fair paraphrase (which an author is expected to agree with) and an intentional hostile paraphrase (which is optimized to highlight a particular criticism, without the expectation that the author will agree with the paraphrase). I don't tell people to be more charitable to me; I don't ask them to pass my ideological Turing test; I just say, "That's not what I meant," and explain the idea again; I'm happy to do the extra work. In this particular situation, I'm inclined to try out a different commenting style that involves me doing less interpretive labor.

I think you know very well that "criticize without trying to figure out what the OP is about" is not what Said and I think is at issue. Do you think you can rephrase that sentence in a way that would pass Said's ideological Turing test? Right, so if someone complains about Said, point out that they're…

We already let authors write their own moderation guidelines! It's a blank text box!

Because it's a blank text box, it's not convenient for commenters to read it in detail every time, so I expect almost nobody reads it; these guidelines are not practical to follow.

With two standard options, color-coded or something, it becomes actually practical, so the distinction between blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.

Also, moderation guidelines aren't visible on GreaterWrong at all, afaict. So Said specifically is unlikely to adjust his commenting in response to those guidelines, unless that changes. (I assume Said mostly uses GW, since he designed it.)
I've been busy, so hadn't replied to this yet, but specifically wanted to apologize for the hostile paraphrase (I notice I've done that at least twice now in this thread; I'm trying to do better, but it seems important for me to notice and pay attention to). I think I worded the "corrigible about actually integrating the spirit-of-our-models into his commenting style" line pretty badly; Oliver and Vaniver also both thought it was pretty alarming. The thing I was trying to say I eventually reworded in my subsequent mod announcement: i.e. this isn't about Said changing his own thought process, but, like, there is a spirit-of-the-law relevant in the mod decision here, and whether I need to worry about specification-gaming. I expect you to still object to that for various reasons, and I think it's reasonable to be pretty suspicious of me for phrasing it the way I did the first time. (I think it does convey something sus about my thought process, but, fwiw, I agree it is sus and am reflecting on it.)
Said Achmiz:
FYI, my response to this is waiting for an answer to my question in the first paragraph of this comment.
I'm still uncertain how I feel about a lot of the details on this (and am enough of a lurker rather than poster that I suspect it's not worth my time to figure that out / write it publicly), but I just wanted to say that I think this is an extremely good thing to include: This strikes me basically as a way to move the mod team's role more into "setting good defaults" and less "setting the only way things work". How much y'all should move in that direction seems an open question, as it does limit how much cultivation you can do, but it seems like a very useful tool to make use of in some cases.
How technically troublesome would an allow list be? Maybe the default is everyone gets three comments on a post. People the author has banned get zero, people the author has opted in for get unlimited, the author automatically gets unlimited comments on their own post, mods automatically get unlimited comments. (Or if this feels more like a Said and/or Duncan specific issue, make the options "Unlimited", "Limited", and "None/Banned" then default to everyone at Unlimited except for Said and/or Duncan at Limited.)
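For what it's worth, the allow-list scheme described above seems mechanically simple as a lookup. Here's a minimal sketch of the idea; all names and data shapes (`comment_limit`, the `post` dict, etc.) are invented for illustration and are not LessWrong's actual code:

```python
# Sketch of the proposed per-post comment limits:
# author and mods unlimited, author-banned users zero,
# author-opted-in users unlimited, everyone else a small default.

UNLIMITED = None  # sentinel meaning "no cap"

def comment_limit(commenter, post, default_limit=3):
    """Return the max number of comments `commenter` may leave on
    `post`, or UNLIMITED (None) if there is no cap."""
    if commenter == post["author"] or commenter in post["mods"]:
        return UNLIMITED
    if commenter in post["banned"]:      # author has banned this user
        return 0
    if commenter in post["allow_list"]:  # author opted this user in
        return UNLIMITED
    return default_limit                 # default: e.g. 3 per post

# Example post record:
post = {
    "author": "alice",
    "mods": {"ray"},
    "banned": {"mallory"},
    "allow_list": {"bob"},
}
```

The "Unlimited / Limited / None" variant mentioned above would just replace the set-membership checks with a single per-user tier field; either way, the development cost seems dominated by UI and moderation workflow rather than the lookup itself.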
My prediction is that those users are primarily upvoting it for what it's saying about Duncan rather than about Said.
To spell out what evidence I'm looking at: There is definitely some term in my / the mod team's equation for "this user is providing a lot of valuable stuff that people want on the site". But the high level call the moderation team is making is something like "maximize useful truths we're figuring out". Hearing about how many people are getting concrete value out of Said or Duncan's comments is part of that equation, hearing about how many people are feeling scared or offput enough that they don't comment/post much is also part of that equation. And there are also subtler interplays that depend on our actual model of how progress gets made.
I wonder how much of the difference in intuitions about Duncan and Said comes from whether people interact with LW primarily as commenters or as authors. The concerns about Said seem to be entirely from and centered around the concerns of authors: he makes posting more costly, he drives content away. Meanwhile many concerns about Duncan could be phrased as being about how he interacts with commenters. If this trend exists, it is complicated. Said gets >0 praise from authors for his comments on their own posts (e.g. Raemon here), major Said defender Zack has written lots of well-regarded posts, and Said-banner DirectedEvolution writes good content but stands out to me as one of the best commenters on science posts. Duncan also generates a fair amount of concern for attempts to set norms outside his own posts. But I think there might be a thread here.
Thank you for the compliment! With writing science commentary, my participation is contingent on there being a specific job to do (often, "dig up quotes from links and citations and provide context") and a lively conversation. The units of work are bite-size. It's easy to be useful and appreciated. Writing posts is already relatively speaking not my strong suit. There's no preselection on people being interested enough to drive a discussion, what makes a post "interesting" is unclear, and the amount of work required to make it good is large enough that it feels like work more than play. When I do get a post out, it often fails to attract much attention. What attention it does receive is often negative, and Said is one of the more prolific providers of negative attention. Hence, I ban Said because he further inhibits me from developing in my areas of relative weakness.  My past conflict with Duncan arose when I would impute motives to him, or blur the precise distinctions in language he was attempting to draw - essentially failing to adopt the "referee" role that works so well in science posts, and putting the same negative energy I dislike receiving into my responses to Duncan's posts. When I realized this was going on, I apologized and changed my approach, and now I no longer feel a sense of "danger" in responding to Duncan's posts or comments. I feel that my commenting strong suit is quite compatible with friendly discourse with Duncan, and Duncan is good at generating lively discussions where my refereeing skillset may be of use. So if I had to explain it, some people (me, Duncan) are sensitive about posting, while others are sharp in their comments (Said, anonymousaisafety). Those who are sensitive about posting will get frustrated by Said, while those who write sharp comments will often get in conflict with Duncan.
I'm not sure what other user you're referring to besides Achmiz—it looks like there's supposed to be another word between "about" and "and" in your first sentence, and between "about" and "could" in the last sentence of your second paragraph, but it's not rendering correctly in my browser? Weird.

Anyway, I think the pattern you describe could be generated by a philosophical difference about where the burden of interpretive labor rests. A commenter who thinks that authors have a duty to be clear (and therefore asks clarifying questions, or makes attempted criticisms that miss the author's intended point) might annoy authors who think that commenters have a duty to read charitably. Then the commenter might be blamed for driving authors away, and the author might be blamed for getting too angrily defensive with commenters.

I interact with this website as an author more than a commenter these days, but in terms of the dichotomy I describe above, I am very firmly of the belief that authors have a duty to be clear. (To the extent that I expect that someone who disagrees with me, also disagrees with my proposed dichotomy; I'm not claiming to be passing anyone's ideological Turing test.)

The other month I published a post that I was feeling pretty good about, quietly hoping that it might break a hundred karma. In fact, the comment section was very critical (in ways that I didn't have satisfactory replies to), and the post only got 18 karma in 26 votes, an unusually poor showing for me. That made me feel a little bit sad that day, and less likely to write future posts that I could anticipate being disliked by commenters in the way that this post was disliked.

In my worldview, this is exactly how things are supposed to work. I didn't have satisfactory replies to the critical comments. Of course that's going to result in downvotes! Of course it made me a little bit sad that day! (By "conservation of expected feelings": I would have felt a little bit happy if the post did w…

Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.

I think “duty to be clear” skips over the hard part, which is that “being clear” is a transitive verb. It doesn’t make sense to say whether a post is clear or not clear, only who it is clear or unclear to. 

To use a trivial example:  Well taught physics 201 is clear if you’ve had the prerequisite physics classes or are a physics savant, but not to laymen. Poorly taught physics 201 is clear to a subset of the people who would understand it if well-taught.  And you can pile on complications from there. Not all prerequisites are as obvious as Physics 101 -> Physics 201, but that doesn’t make them not prerequisites. People have different writing and reading styles. Authors can decide the trade-offs are such that they want to write a post but use fairly large step sizes, and leave behind people who can’t fill in the gaps themselves.

So the question is never “is this post clear?”, it’s “who is this post intended for?” and “what percentage of its audience actually finds it clear?” The answers are never “everyone” and “10…”

You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct but not at the limit, there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.


YES. I think this is hugely important, and I think it's a pretty good definition of the difference between a confused person and a crank.

Confused people ask questions of people they think can help them resolve their confusion. They signal respect, because they perceive themselves as asking for a service to be performed on their behalf by somebody who understands more than they do. They put effort into clarifying their own confusion and figuring out what the author probably meant. They assume they're lucky if they get one reply from the author, and so they try not to waste their one question on uninteresting trivialities that they could have figured out for themselves.

Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or hi…

This made something click for me. I wonder if some of the split is people who think comments are primarily communication with the author of a post, vs with other readers. 

[DEACTIVATED] Duncan Sabien:
And this attitude is particularly corrosive to feelings of trust, collaboration, "jamming together," etc. ... it's like walking into a martial arts academy and finding a person present who scoffs at both the instructors and the other students alike, and who doesn't offer sufficient faith to even try a given exercise once before first a) hearing it comprehensively justified and b) checking the sparring records to see if people who did that exercise win more fights. Which, yeah, that's one way to zero in on the best martial arts practices, if the other people around you also signed up for that kind of culture and have patience for that level of suspicion and mistrust! (I choose martial arts specifically because it's a domain full of anti-epistemic garbage and claims that don't pan out.) But in practice, few people will participate in such a martial arts academy for long, and it's not true that a martial arts academy lacking that level of rigor makes no progress in discovering and teaching useful things to its students.

You're describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.

The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you're right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can't, and no one can, then he might have a point, and the gym gets to learn something new.

If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they're an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren't there in the first place. It's definitely more challenging to jam with dissonant characters like that (especially if they're dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it's important to realize that the problem isn't so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.

[DEACTIVATED] Duncan Sabien:
Strong disagree that I'm describing a deeply dysfunctional gym; I barely described the gym at all and it's way overconfident/projection-y to extrapolate "deeply dysfunctional" from what I said. There's a difference between "hey, I want to understand the underpinnings of this" and the thing I described, which is hostile to the point of "why are you even here, then?" Edit: I view the votes on this and the parent comment as indicative of a genuine problem; jimmy above is exhibiting actually bad reasoning (à la representativeness) and the LWers who happen to be hanging around this particular comment thread are, uh, apparently unaware of this fact. Alas.
Well, you mentioned the scenario as an illustration of a "particularly corrosive" attitude.  It therefore seems reasonable to fill in the unspecified details (like just how disruptive the guy's behavior is, how much of everyone's time he wastes, how many instructors are driven away in shame or irritation) with pretty negative ones—to assume the gym has in fact been corroded, being at least, say, moderately dysfunctional as a result. Maybe "deeply dysfunctional" was going too far, but I don't think it's reasonable to call that "way overconfident/projection-y".  Nor does the difference between "deeply dysfunctional" and "moderately dysfunctional" matter for jimmy's point. FYI, I'm inclined to upvote jimmy's comment because of the second paragraph: it seems to be the perfect solution to the described situation (and to all hypothetical dysfunction in the gym, minor or major), and has some generalizability (look for cheap tests of beliefs, challenge people to do them).  And your comment seems to be calling jimmy out inappropriately (as I've argued above), so I'm inclined to at least disagree-vote it.
[DEACTIVATED] Duncan Sabien:
"Let's imagine that these unspecified details, which could be anywhere within a VERY wide range, are specifically such that the original point is ridiculous, in support of concluding that the original point is ridiculous" does not seem like a reasonable move to me. Separately: https://www.lesswrong.com/posts/WsvpkCekuxYSkwsuG/overconfidence-is-deceit
I think my feeling here is: * Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great. * But it's not clear how important that mistake is to his comment. I expect people were mostly reacting to paragraphs 2 and 3, and you could cut paragraph 1 out and they'd stand by themselves. * Do the more-interesting parts of the comment implicitly rely on the projection/unjustified-claim? Also not clear to me. I do think the comment is overstated. ("The way to jam"?) But e.g. "the problem isn’t so much the difficulty as the inability to overcome the difficulty" seems... well, I'd say this is overstated too, but I do think it's pointing at something that seems valuable to keep in mind even if we accept that the gym is functional. * So I don't think it's unreasonable that the parent got significantly upvoted, though I didn't upvote it myself; and I don't think it's unreasonable that your correction didn't, since it looks correct to me but like it's not responding to the main point. * Maybe you think paragraphs 2 and 3 were relying more on the projection than it currently seems to me? In that case you actually are responding to what-I-see-as the main point. But if so I'd need it spelled out in more detail.
FWIW, that is a claim I'm fully willing and able to justify. It's hard to disclaim all the possible misinterpretations in a brief comment (e.g. "deeply" != "very"), but I do stand by a pretty strong interpretation of what I said as being true, justifiable, important, and relevant.  
Yes, and that's why I described the attitude as "dysfunctionally dissonant" (emphasis in original). It's not a good way of challenging the instructors, and not the way I recommend behaving. What I'm talking about is how a healthy gym environment is robust to this sort of dysfunctional dissonance, and how to productively relate to unskilled dissonance by practicing skillfully enough yourself that the system's combined dysfunction never becomes supercritical and instead decays towards productive cooperation.

That's certainly one possibility. But isn't it also conceivable that I simply see underlying dynamics (and lack thereof) which you don't see, and which justify the confidence level I display? It certainly makes sense to track the hypothesis that I am overconfident here, but ironically it strikes me as overconfident to be asserting that I am being overconfident without first checking things like "Can I pass his ITT"/"Can I point to a flaw in his argument that makes him stutter if not change his mind"/etc.

To be clear, my view here is based on years of thinking about this kind of problem and practicing my proposed solutions with success, including in a literal martial arts gym for the last eight years. Perhaps I should have written more about these things on LW so my confidence doesn't appear to come out of nowhere, but I do believe I am able to justify what I'm saying very well and won't hesitate to do so if anyone wants further explanation or sees something which doesn't seem to fit. And hey, if it turns out I'm wrong about how well supported my perspective is, I promise not to be a poor sport about it.

In absence of an object level counterargument, this is textbook ad hominem. I won't argue that there isn't a place for that (or that it's impossible that my reasoning is flawed), but I think it's hard to argue that it isn't premature here. As a general rule, anyone that disagrees with anyone can come up with a million accusations of this sort, and it is…
I thought it was a reference to, among other things, this exchange where Said says one of Duncan's Medium posts was good, and Duncan responds that his decision to not post it on LW was because of Said. If you're observing that Said could just comment on Medium instead, or post it as a linkpost on LW and comment there, I think you're correct. [There are, of course, other things that are not posted publicly, where I think it then becomes true.]
I do want to acknowledge that, based on various comments and vote patterns, I agree it seems like a pretty controversial call, and I model it as something like spending down and/or making a bet with a limited resource (maybe two specific resources: "trust in the mods" and "some groups of people's willingness to put up with the site being optimized in a way they think is wrong"). Despite that, I think it is the right call to limit Said significantly in some way, but I don't think we can make that many moderation calls on users this established that are this controversial without causing some pretty bad things to happen.
Indeed. I would encourage you to ask yourself whether the number referred to by "that many" is greater than zero.
I don't remember this. I feel like Aella's post introduced the term? A better example might be Circling, though I think Said might have had a point that it hadn't been carefully scrutinized; a lot of people had just been doing it.
Frame control was a pretty central topic in "what's going on with Brent?" two years prior, as well as in some other circumstances. We'd been talking about it internally at Lightcone/LessWrong during that time.
Hmm, yeah, I can see that. Perhaps just not under that name.
I think the term was getting used, but makes sense if you weren't as involved in those conversations. (I just checked and there's only one old internal lw-slack message about it from 2019, but it didn't feel like a new term to me at the time, and I'm pretty sure it came up a bunch on FB and in moderation convos periodically under that name.)

Ray writes:

Here are some areas I think Said contributes in a way that seem important:

  • Various ops/dev work maintaining sites like readthesequences.com, greaterwrong.com, and gwern.com. 

For the record, I think the value here is "Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world", and I don't think that comes across in this bullet.

Yeah I agree with this, and agree it's worth emphasizing more. I'm updating the most recent announcement to indicate this more, since not everyone's going to read everything in this thread.
Ben Pace:

I could imagine an admin feature that literally just lets Said comment a few times on a post, but if he gets significantly downvoted, gives him a wordcount-based rate-limit that forces him to wrap up his current points quickly and then call it a day.

I feel like this incentivizes comments to be short, which doesn't make them less aggravating to people. For example, IIRC people have complained about him commenting "Examples?". This is not going to be hit hard by a rate limit.

'Examples?' is one of the rationalist skills most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit "Oh, I don't have any yet, this is speculative, so YMMV".

Spending my last remaining comment here.

I join Ray and Gwern in noting that asking for examples is generically good (and that I've never felt or argued to the contrary). Since my stance on this was called into question, I elaborated:

If one starts out looking to collect and categorize evidence of their conversational partner not doing their fair share of the labor, then a bunch of comments that just say "Examples?" would go into the pile. But just encountering a handful of comments that just say "Examples?" would not be enough to send a reasonable person toward the hypothesis that their conversational partner reliably doesn't do their fair share of the labor.

"Do you have examples?" is one of the core, common, prosocial moves, and correctly so. It is a bid for the other person to put in extra work, but the scales of "are we both contributing?" don't need to be balanced every three seconds, or even every conversation. Sometimes I'm the asker/learner and you're the teacher/expounder, and other times the roles are reversed, and other times we go back and forth.

The problem is not in asking someone to do a little labor on your behalf. It's having 85+% of your engagement be asking other pe…

Noting that my very first lesswrong post, back in the LW1 days, was an example of #2. I was wrong on some of the key parts of the intuition I was trying to convey, and ChristianKl corrected me. As an introduction to posting on LW, that was pretty good - I'd hate to think that's no longer acceptable.

At the same time, there is less room for it as the community got much bigger, and I'd probably weak downvote a similar post today, rather than trying to engage with a similar mistake, given how much content there is. Not sure if there is anything that can be done about this, but it's an issue.

fwiw that seems like a pretty great interaction. ChristianKl seems to be usefully engaging with your frame while noting things about it that don't seem to work, seems (to me) to have optimized somewhat for being helpful, and also the conversation just wraps up pretty efficiently. (and I think this is all a higher bar than what I mean to be pushing for, i.e. having only one of those properties would have been fine)

I agree - but I think that now, when similar initial thoughts on a conceptual model are proposed, there is less ability or willingness to engage, especially with people who are fundamentally confused about some aspect of the issue. This is largely, I believe, due to the volume of new participants and the reduced engagement for those types of posts.
I want to reiterate that I actually think the part where Said says "examples?" is basically just good (and is only bad insofar as it creates a looming worry of particular kinds of frustrating, unproductive, and time-consuming conversations that are likely to follow in some subsets of discussions).

(edit: I actually am pretty frustrated that "examples?" became the go-to example people talked about and reified as a kinda rude thing Said did. I think I basically agree this process is good:

1. Alice writes confident posts without examples.
2. Bob says "examples?"
3. Alice either gives (at least one, and yeah ideally 3) examples, or says "Oh, I don't have any yet, this is speculative, so YMMV", or doesn't reply but feels a bit chagrined.)
Oops, sorry for saying something that probabilistically implied a strawman of you.
I'm not sure what you think this is strong evidence of?
I don't think it's "strong" evidence per se, but it was evidence that something I'd previously thought was more of a specific pet peeve of Duncan's was in fact objected to by more LessWrong folk. (Where the thing in question is something like "making sweeping ungrounded claims about other people... but in a sort of colloquial/hyperbolic way which most social norms don't especially punish".)

Some evidence for that; it also seems likely to get upvoted on the basis of being well written and evocative of a difficult personal experience, or because people relate to being outliers and unusual even if they didn't feel alienated and hurt in quite the same way. I'm unsure.

I upvoted it because it made me finally understand what in the world might be going on in Duncan's head to make him react the way he does.

If the lifeguard isn't on duty, then it's useful to have the ability to be your own lifeguard. I wanted to say that I appreciate the moderation style options and authors being able to delete and ban for their posts. While we're talking about what to change and what isn't working, I'd like to weigh in on the side of that being a good set of features that should be kept. Raemon, you've mentioned those features are there to be used. I've never used the capability and I'm still glad it exists. (I can barely use it, actually.) Since site-wide moderators aren't going to intervene everywhere quickly (which I don't think they should or even can; moderators are heavily outnumbered), I think letting people moderate their local piece is good.

If I ran into lots of negative feedback I didn't think was helpful, and it wasn't getting moderated by me or the site admins, I'd just move my writing to a blog on a different website where I could control things. Possibly I'd set up crossposting like Zvi or Jefftk and then ignore the LessWrong comment section. If lots of people do that, then we get the diaspora effect from late LessWrong 1.0. Having people at least crossposting to LessWrong seems good to me, since I like tools like the agreement karma and the tag upvotes. Basically, the BATNA for a writer who doesn't like LessWrong's comment section is Wordpress or Substack. Some writers you'd rather go elsewhere, obviously, but Said's and Duncan's top-level posts seem mostly a good fit here.

I do have a question about norm setting I'm curious about. If Duncan had titled his post "Duncan's Basics of Rationalist Discourse", would that have changed whether it merited the exception around pushing site-wide norms? What if lots of people started picking Norm Enforcing for the moderation guidelines and linking to it?
Yeah, I think this'd be much less cause for concern. (I haven't checked whether the rest of the post has anything else that felt LW-wide-police-y about it; I'd maybe have wanted a slightly different opening paragraph or something.)
I think Duncan also posts all his articles on his own website; is this correct? In that case, would it be okay to replace the articles on LW with links to Duncan's website? So that the articles stay there, the comments stay here, the page with comments links to the article, but the article does not link to the page with comments.

I am not suggesting doing this. I am asking whether, if Duncan (or anyone else) hypothetically at some moment decided, for whatever reason, that he is uncomfortable with his articles being on LW, doing this (moving the articles elsewhere and replacing them with links to the new place) would be acceptable to you. Like, whether this could be a policy: "if you decide to move away from LW, this is our preferred way to do it".
Drake Morrison:
Are we entertaining technical solutions at this point? If so, I have some ideas.

This feels to me like a problem of balancing the two kinds of content on the site: balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building, whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking. Both types of content are needed. Writing posts pattern-matches with Babbling/Building, whereas writing comments matches closer to Pruning/Breaking. In my mind anyway. (update: prediction market)

Inspired by this post, I propose enforcing some kind of ratio between posts and comments. Say you get 3 comments per post before you get rate-limited?[1] This way, if you have a disagreement or are misunderstanding a post, there is room to clarify, but not room for demon threads. If it takes more than a few comments to clarify, that is an indication of a deeper model disagreement, and you should just go ahead and write your own post explaining your views. (As an aside, I would hope this creates an incentive to write posts in general, to help with the inevitable writer turn-over.)

Obviously the exact ratio doesn't have to be 3 comments to 1 post. It could be 10:1 or whatever the mod team wants to start with before adjusting as needed.

1. ^ I'm not suggesting that you get rate-limited site-wide if you start exceeding 3 comments per post. Just that you are rate-limited on that specific post.
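(For concreteness, the per-post limit proposed above could be sketched roughly as follows. This is a minimal illustration only, not LessWrong's actual implementation; the names `PostRateLimiter` and `try_comment` are hypothetical.)

```python
from collections import defaultdict

class PostRateLimiter:
    """Per-post comment limit: each user may leave at most `limit`
    comments on any single post, but is never limited site-wide."""

    def __init__(self, limit=3):
        self.limit = limit
        # (user_id, post_id) -> number of comments made so far
        self.counts = defaultdict(int)

    def try_comment(self, user_id, post_id):
        key = (user_id, post_id)
        if self.counts[key] >= self.limit:
            return False  # blocked on this post only
        self.counts[key] += 1
        return True

limiter = PostRateLimiter(limit=3)
# Four attempts on the same post: the first three succeed, the fourth is blocked.
results = [limiter.try_comment("alice", "post-1") for _ in range(4)]
# A different post is unaffected, so discussion elsewhere continues normally.
other = limiter.try_comment("alice", "post-2")
```

The key design point is that the count is keyed on the (user, post) pair rather than the user alone, which is what confines any rate-limiting to a single thread.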
Jasnah Kholin:
I find the fact that you see comments as criticism, and not as expanding and continuing the building, indicative of what I see as problematic. Good comments should most of the time not be criticism; they should be part of the building. The dynamic that is good in my eyes is one where comments make the post better not by criticizing it, but by sharing examples, personal experiences, intuitions, and how those relate to the post. Counting all comments as prune instead of babble disincentivizes babble-comments. Is this what you want?
Drake Morrison:
I don't see all comments as criticism. Many comments are of the building-up variety! It's that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times.

Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions, and relations. The benefits of building-comments are easy to get in 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by only getting 3 comments per day per post.

I think we have very different models of things, so I will try to clarify mine. My best babble-site example is not in English, so I will give another one: the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. Just look at the sheer LENGTH of that page!


There are many more than 3 comments per person there.

From my point of view, this rule creates a hard ceiling that forbids the best discussions from happening, because the best discussions are creative back-and-forth. My best discussions with friends go: one shares a model, one asks questions, or shares a different model, or shares an experience, the other reacts, etc., for way more than three comments - more like 30 comments. It's dialog. And there are a lot of unproductive examples of that on LW. And it's quite possible (as in, I assign it a probability of 0.9) that in first-order effects, it will cut out unproductive discussions and will be positive.

But I find rules that prevent the best things from happening bad in some way that I can't explain clearly. Something like: I'm here to try to go higher. If that's impossible, then why bother?

I also think it's V... (read more)

Yeah, this is the sort of solution I'm thinking of (although it sounds like you're maybe making a more sweeping assumption than me?). My current rough sense is that a rate limit of 3 comments per post per day (maybe with an additional wordcount-based limit per post per day) would actually be pretty reasonable at curbing the things I'm worried about (for users that seem particularly prone to causing demon threads).
Said Achmiz:
Complaints by whom? And why are these complaints significant? Are you taking the stance that all or most of these complaints are valid, i.e. that the things being complained about are clearly bad (and not merely dispreferred by this or that individual LW member)? (See also this recent comment, where I argue that at least one particular characterization of my commenting activity is just demonstrably inconsistent with reality.)

Ray pointing out the level of complaints is informative even without the (far more effortful) judgement on the merits of each complaint. There being a lot of complaints is evidence (to both the moderation team and the site users) that it's worth putting in effort here to figure out if things could be better.

There being a lot of complaints is evidence [...] that it's worth putting in effort here to figure out if things could be better.

It is evidence that there is some sort of problem. It's not clear evidence about what should be done about it, about what "better" means specifically. Instituting ways of not talking about the problem anymore doesn't help with addressing it.

It didn't seem like Said was complaining about the reports being seen as evidence that it is worth figuring out whether things could be better. Rather, he was complaining about them being used as evidence that things could be better.
If we speak precisely... in what way would they be the former without being the latter? Like, if I now think it's more worth figuring out whether things could be better, presumably that's because I now think it's more likely that things could be better? (I suppose I could also now think the amount-they-could-be-better, conditional on them being able to be better, is higher; but the probability that they could be better is unchanged. Or I could think that we're currently acting under the assumption that things could be better, and I now think that's less likely, so it's more worth figuring out whether the assumption is wrong. Neither seems like it fits in this case.)

Separately, I think my model of Said would say that he was not complaining, he was merely asking questions (perhaps to try to decide whether there was something to complain about, though "complain" has connotations there that my model of Said would object to). So, if you think the mods are doing something that you think they shouldn't be, you should probably feel free to say that (though I think there are better and worse ways to do so). But if you think Said thinks the mods are doing something that Said thinks they shouldn't be... idk, it feels against-the-spirit-of-Said to try to infer that from his comment? Like you're doing the interpretive labor that he specifically wants people not to do.
My comment wasn't well written; I shouldn't have used the word "complaining" in reference to what Said was doing. To clarify: as I see it, there are two separate claims:

1. That the complaints prove that Said has misbehaved (at least a little bit).
2. That the complaints increase the probability that Said has misbehaved.

Said was just asking questions - but baked into his questions is the idea of the significance of the complaints, and this significance seems to be tied to claim 1. Jefftk seems to be speaking about claim 2. So his comment doesn't seem like a direct response to Said's comment, although the point is still a relevant one.

Here's a bit of metadata on this: I can recall offhand 7 complaints from users with 2000+ karma who aren't on the mod team (most of whom had significantly more than 2000 karma, and all of them had some highly upvoted comments and/or posts that are upvoted in the annual review). One of them cites you as being the reason they left LessWrong a few years ago, and ~3-4 others cite you as being a central instance of a pattern that means they participate less on LessWrong, or can't have particularly important types of conversations here.

I also think most of the mod team (at least 4 of them, maybe more) have had such complaints (as users, rather than as moderators).

I think there are probably at least 5 more people who complained about you by name, who I don't think have particularly legible credibility beyond being some LessWrong users.

I'm thinking about my reply to "are the complaints valid tho?". I have a different ontology here.

There are some problems with treating this as pointing in a particular direction: there is little opportunity for people to be prompted to express opposite-sounding opinions, and so only the above opinions are available to you.

I have a concern that Said and Zack are an endangered species that I want there to be more of on LW and I'm sad they are not more prevalent. I have some issues with how they participate, mostly about tendencies towards cultivating infinite threads instead of quickly de-escalating and reframing, but this in my mind is a less important concern than the fact that there are not enough of them. Discouraging or even outlawing Said cuts that significantly, and will discourage others.

(fyi I do plan to respond to this, although don't know how satisfying it'll be when I do)
Ruby (Moderator Comment, pinned):

Warning to Duncan

(See also: Raemon's moderator action on Said)

Since we were pretty much on the same page, Raemon delegated to me the writing of this warning to Duncan, and signed off on it.

Generally, I am quite sad if, when someone points/objects to bad behavior, they end up facing moderator action themselves. It doesn’t set a great incentive. At the same time, some of Duncan’s recent behavior also feels quite bad to me, and to not respond to it would also create a bad incentive – particularly if the undesirable behavior results in something a person likes.

Here’s my story of what happened, building off of some of Duncan’s own words and his endorsement of something I said in a previous exchange with him:

Duncan felt that Said engaged in various behaviors that hurt him (confident based on Duncan’s words) and were in general bad (inferred from Duncan writing posts describing why those behaviors are bad). Such bad/hurtful behaviors include strawmanning, psychologizing at length, and failing to put in symmetric effort. For example, Said argued that Duncan banned him from his posts because Said disagreed. I am pretty sympathetic to these accusations against Said (and endorse moderation action agains... (read more)

Just noting as a "for what it's worth"

(b/c I don't think my personal opinion on this is super important or should be particularly cruxy for very many other people)

that I accept, largely endorse, and overall feel fairly treated by the above (including the week suspension that preceded it).

Raemon (Moderator Comment, pinned):

Moderation action on Said

(See also: Ruby's moderator warning for Duncan)

I’ve been thinking for a week, and trying to sanity-check whether there are actual good examples of Said doing-the-thing-I’ve-complained-about, rather than “I formed a stereotype of Said and pattern match to it too quickly”, and such. 

I think Said is a pretty confusing case though. I’m going to lay out my current thinking here, in a number of comments, and I expect at least a few more days of discussion as the LessWrong community digests this. I’ve pinned this post to the top of the frontpage for the day so users who weren’t following the discussion can decide whether to weigh in.

Here’s a quick overview of how I think about Said moderation:

  • Re: Recent Duncan Conflict. 
    • I think he did some moderation-worthy things in the recent conflict with Duncan, but a) so did Duncan, and I think there’s a “it takes two-to-tango” aspect of demon threads, b) at most, those’d result in me giving one or both of them a 1-week ban and then calling it a day. I basically endorse Vaniver’s take on some object level stuff. I have a bit more to say but not much.
  • Overall pattern. 
    • I think Said’s overall pattern of commen
... (read more)

This sounds drastic enough that it makes me wonder, since the claimed reason was that Said's commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see if there is any sort of measurable increase in comment quality, site mood or good contributors becoming more active moving forward?

Also, is this thing an experiment with a set duration, or a permanent measure? If it's permanent, it has a very rubber room vibe to it, where you don't outright ban someone but continually humiliate them if they keep coming by and wish they'll eventually get the hint.

A background model I want to put out here: two frames that feel relevant to me are "harm minimization" and "taxing". I think the behavior Said does has unacceptably large costs in aggregate (and, perhaps to remind/clarify, I think a similar-in-some-ways set of behaviors I've seen Duncan do would also have unacceptably large costs in aggregate). And the three solutions I'd consider here, at some level of abstraction, are:

1. So-and-so agrees to stop doing the behavior (harder when the behavior is subtle and multifaceted, but doable in principle).
2. Moderators restrict the user such that they can't do the behavior to unacceptable degrees.
3. Moderators tax the behavior such that doing too much of it is harder overall (but it's still something of the user's choice if they want to do more of it and pay more tax).

All three options seem reasonable to me a priori; it's mostly a question of "is there a good way to implement them?". The current rate-limit proposal for Said is mostly option 2. All else being equal I'd probably prefer option 3, but the options I can think of seem harder to implement, and dev time for this sort of thing is not unlimited.
Quick update for now: @Said Achmiz's rate limit has expired, and I don't plan to revisit applying it again unless a problem comes up.

I do feel like there's some important stuff left unresolved here. @Zack_M_Davis's comment on this other post asks some questions that seem worth answering.

I'd hoped to write up something longer this week but was fairly busy, and it seemed better to explicitly acknowledge that. For the immediate future I think improving on the auto-rate-limits and some other systemic stuff seems more important than arguing or clarifying the particular points here.
It seems like the natural solution here would be something that establishes this common knowledge. Something like the Twitter "community notes" being attached to relevant comments, saying something like "There is no obligation to respond to this comment; please feel comfortable ignoring this user if you don't feel he will be productive to engage with. Discussion here."
Yeah, I did list that as one of the options I'd consider in the previous announcement. A problem I anticipate is that it's some combination of ineffective and, in some ways, a harsher punishment. But if Said actively preferred some version of this solution, I wouldn't be opposed to doing it instead of rate-limiting.
Said Achmiz:
Forgive me for making what may be an obvious suggestion which you’ve dismissed for some good reason, but… is there, actually, some reason why you can’t attach such a note to all comments? (UI-wise, perhaps as a note above the comment form, or something?) There isn’t an obligation, in terms of either the site rules or the community norms as the moderators have defined them, to respond to any comment, is there? (Perhaps with the exception of comments written by moderators…? Or maybe not even those?) That is, it seems to me that the concern here can be characterized as a question of communicating forum norms to new participants. Can it not be treated as such? (It’s surely not unreasonable to want community members to refrain from actively interfering with the process of communicating rules and norms to newcomers, such as by lying to them about what those rules/norms are, or some such… but the problem, as such, is one which should be approached directly, by means of centralized action, no?)
Ben Pace:
I think it could be quite nice to give new users information about what site norms are and give a suggested spirit in which to engage with comments. (Though I'm sure there's lots of things it'd be quite nice to tell new users about the spirit of the site, but there's of course bandwidth limitations on how much they'll read, so just because it's an improvement doesn't mean it's worth doing.)
Said Achmiz:
If it’s worth banning[1] someone (and even urgently investing development resources into a feature that enables that banning-or-whatever!) because their comments might, possibly, on some occasions, potentially mislead users into falsely believing X… then it surely must be worthwhile to simply outright tell users ¬X? (I mean, of all the things that it might be nice to tell new users, this, which (if this topic, and all the moderators’ comments on it, are to be believed) is so consequential, has to be right up at the top of the list?)

1. ^ Or rate-limiting, or applying any other such moderation action to. ↩︎
This is not what I said though.
Said Achmiz:
Now that you’ve clarified your objection here, I want to note that this does not respond to the central point of the grandparent comment: If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X? Communicating this to users seems like an obvious win, and one which would make a huge chunk of this entire discussion utterly moot.

If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?

Adding a UI element, visible to every user, on every new comment they write, on every post they will ever interface with, because one specific user tends to have a confusing communication style seems unlikely to be the right choice. You are a UI designer and you are well-aware of the limits of UI complexity, so I am pretty surprised you are suggesting this as a real solution. 

But even assuming we did add such a message, there are many other problems: 

  • Posting such a message would communicate a level of importance of this specific norm, which does not actually come up very frequently in conversations that don't involve you and a small number of other users, that is not commensurate with its actual importance. We have the standard frontpage commenting guidelines, and they cover what I consider the actually most important things to communicate, and they are approximately the maximum length I expect new users to r
... (read more)

First, concerning the first half of your comment (re: importance of this information, best way of communicating it):

I mean, look, either this is an important thing for users to know or it isn’t. If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way, where you make no real attempt to communicate it, but then when a single user very occasionally says something that sometimes gets interpreted by some people as implying the opposite of the thing, you ban that user. You’re saying “Said, stop telling people X!” And quite aside from “But I haven’t actually done that”, my response, simply from a UX design perspective, is “Sure, but have you actually tried just telling people ¬X?”

Have you checked that users understand that they don’t have an obligation to respond to comments?

If they don’t, then it sure seems like some effort should be spent on conveying this. Right? (If not, then what’s the point of all of this?)

Second, concerning the second half of your comment:

Frankly, this whole perspective you describe just seems bizarre.

Of course I can’t possibly create a formal obligation to respond to comments. Of course ... (read more)

(I am not planning to engage further at this point. 

My guess is you can figure out what I mean by various things I have said by asking other LessWrong users, since I don't think I am saying particularly complicated things, and I think I've communicated enough of my generators so that most people reading this can understand what the rules are that we are setting without having to be worried that they will somehow accidentally violate them. 

My guess is we also both agree that it is not necessary for moderators and users to come to consensus in cases like this. The moderation call is made, it might or might not improve things, and you are either capable of understanding what we are aiming for, or we'll continue to take some moderator actions until things look better by our models. I think we've both gone far beyond our duty of effort to explain where we are coming from and what our models are.)

Said Achmiz:
This seems like an odd response. In the first part of the grandparent comment, I asked a couple of questions. I can’t possibly “figure out what you mean” in those cases, since they were questions about what you’ve done or haven’t done, and about what you think of something I asked. In the second part of the grandparent comment, I gave arguments for why some things you said seem wrong or incoherent. There, too, “figuring out what you mean” seems like an inapplicable concept. You and the other moderators have certainly written many words. But only the last few comments on this topic have contained even an attempted explanation of what problem you’re trying to solve (this “enforcement of norms” thing), and there, you’ve not only not “gone far beyond your duty” to explain—you’ve explicitly disclaimed any attempt at explanation. You’ve outright said that you won’t explain and won’t try!
It's important for users to know when it comes up. It doesn't come up much except with you.

(I wrote the following before habryka wrote his message)

While I still have some disagreement here about how much of this conversation gets rendered moot, I do agree this is a fairly obvious good thing to do which would help in general, and help at least somewhat with the things I've been expressing concerns about in this particular discussion. 

The challenge is communicating the right things to users at the moments they actually would be useful to know (there are lots and lots of potentially important/useful things for users to know about the site, and trying to say all of them would turn into noise).

But, I think it'd be fairly tractable to have a message like "btw, if this conversation doesn't seem productive to you, consider downvoting it and moving on with your day [link to some background]" appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)

But, I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)

This definitely seems like a good direction for the design of such a feature, yeah. (Some finessing is needed, no doubt, but I do think that something like this approach looks likely to be workable and effective.)

Said Achmiz:
Do I understand you correctly as saying that the problem, specifically, is… that people reading my comments might, or do, get a mistaken impression that there exists on Less Wrong some sort of social norm which holds that authors have a social obligation to respond to comments on their posts?

That aside, I have questions about this rate limit:

* Does it apply to all posts of any kind, written by anyone? More specifically:
  * Does it apply to both personal and frontpage posts?
  * Does it apply to posts written by moderators? Posts written about me (or specifically addressing me)? Posts written by moderators about me?
  * Does it apply to this post? (I assume that it must not, since you mention that you’d like me to make a case that so-and-so, you say “I am interested in what Said actually prefers here”, etc., but just want to confirm this) EDIT: See below
  * Does it apply to “open thread” type posts (where the post itself is just a “container”, so to speak, and entirely different conversations may be happening under different top-level comments)?
  * Does it apply to my own posts? (That would be very strange, of course, but it wouldn’t be the strangest edge case that’s ever been left unhandled in a feature implementation, so seems worth checking…)
* Does it apply retroactively to existing posts (including very old posts), or only to new posts going forward?
* Is there any way for a post author to disable this rate limit, or opt out of it?
* Does the rate limit reset at a specific time each week, or is there simply a check for whether 3 posts have been written in the period starting one week before the current time?
* Is there any rate limit on editing comments, or only on posting new ones? (It is presumably not the intent to have the rate limit triggered by fixing a typo, for instance…)
* Is there a way for me to see th
Aww christ, I am very sorry about this. I had planned to ship the "posts can be manually overridden to ignore rate limiting" feature first thing this morning and apply it to this post, but I forgot that you'd still have made some comments less than a week ago, which would block you for a while. I agree that was a really terrible experience and I should have noticed it. The feature is getting deployed now and will probably be live within a half hour.

For now, I'm manually applying the "ignore rate limit" flag to posts that seem relevant. (I'll likely do a migration backfill on all posts by admins that are tagged "Site Meta". I haven't made a call yet about Open Threads.)

I think some of your questions are answered in the previous comment. I'll write a more thorough response after we've finished deploying the "ignoreRateLimits flag for posts" PR.
Site Meta posts contain a lot more than moderation, so I'm not sure we should do that.
Basically yes, although I note I said a lot of other words here that were all fairly important, including the links back to previous comments. For example, it's important that I think you are factually incorrect about there being "normatively correct general principles" that people who don't engage with your comments "should be interpreted as ignorant". (While I recall you explicitly disclaiming such an obligation in some other recent comments... if you don't think there is some kind of social norm about this, why did you previously use phrasing like "there is always such an obligation" and "Then they shouldn't post on a discussion forum, should they? What is the point of posting here, if you're not going to engage with commenters?"? Even if you think most of your comments don't have the described effect, I think the linked comment straightforwardly implies a social norm. And I think the attitude in that comment shines through in many of your other comments.)

I think my actual crux is "somehow, at the end of the day, people feel comfortable ignoring and/or downvoting your comments if they don't think they'll be productive to engage with."

I believe "Said's commenting style actively pushes against this in a norm-enforcing-feeling way", but, as noted in the post, I'm still kind of confused about that (and I'll say explicitly here: I am still not sure I've named the exact problem). I said a whole lot of words about various problems and caveats and how they fit together, and I don't think you can simplify it down to "the problem is X". I said at the end that a major crux is "Said can adhere to the spirit of 'don't imply people have an obligation to engage with your comments'," where "spirit" is doing some important work of indicating the problem is fuzzy.

We've given you a ton of feedback about this over 5-6 years. I'm happy to talk or answer questions for a couple more days if the questions look like they're aimed at 'actually figure out how to comply with the spirit of

Basically yes, although I note I said a lot of other words here that were all fairly important, including the links back to previous comments. For example, it’s important that I think you are factually incorrect about there being “normatively correct general principles” that people who don’t engage with your comments “should be interpreted as ignorant”.

Well, no doubt most or all of what you wrote was important, but by “important” do you specifically mean “forms part of the description of what you take to be ‘the problem’, which this moderation action is attempting to solve”?

For example, as far as the “normatively correct general principles” thing goes—alright, so you think I’m factually incorrect about this particular thing I said once.[1] Let’s take for granted that I disagree. Well, and is that… a moderation-worthy offense? To disagree (with the mods? with the consensus—established how?—of Less Wrong? with anyone?) about what is essentially a philosophical claim? Are you suggesting that your correctness on this is so obvious that disagreeing can only constitute either some sort of bad faith, or blameworthy ignorance? That hardly seems true!

Or, take the links. One of them is cl… […]

Ben Pace (7 points, 5 months ago):
The philosophical disagreement is related-to but not itself the thing I believe Ray is saying is bad. The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do, and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts. Enforcing norms on behalf of a space that you don't have buy-in for and that the space would reject tricks people into wasting their time and energy trying to be good citizens of the space in a way that isn't helping and isn't being asked of them.

If you did so, I think that behavior ought to be clearly punished in some way. I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm, and also regardless of whether you were fully aware that you were doing so. I think it's often correct to issue a blanket punishment of a costly behavior even on the occasions that it is done unknowingly, to ensure that there is a consistent incentive against the behavior — similar to how it is typically illegal to commit a crime even if you aren't aware what you did was a crime.

The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do

Is that really the claim? I must object to it, if that’s so. I don’t think I’ve ever made any false claims about what social norms obtain on Less Wrong (and to the extent that some of my comments were interpreted that way, I was quick to clearly correct that misinterpretation).

Certainly the “normatively correct general principles” comment didn’t contain any such false claims. (And Raemon does not seem to be claiming otherwise.) So, the question remains: what exactly is the relevance of the philosophical disagreement? How is it connected to any purported violations of site rules or norms or anything?

… and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts

I am not sure what this means. I am not a moderator, so it's not clear to me how I can enforce any norm. (I can exemplify conformance to a norm, of course, but that, in this case, would be me replying to comments on my posts, which is not what we're talking about here. And I can encourage or even demand conformance to some fa… […]

For a quick answer connecting the dots between "what does the recent Duncan/Said conflict have to do with Said's past behavior": I think your behavior in the various you/Duncan threads was bad in basically the same way as the behavior we gave you a mod warning about 5 years ago, and also similar to a preliminary warning we gave you 6 years ago (via Intercom, which ended in us deciding to take no action at the time), i.e. some flavor of aggressiveness/insultingness, along with demanding more work from others than you were bringing yourself. As I said, I cut you some slack for it because of some patterns Duncan brought to the table, but not that much slack.

The previous mod warning said "we'd ban you for a month if you did it again". I don't really feel great about that, since over the past 5 years there have been various comments that flirted with the same behavior, and the cost of evaluating it each time is pretty high.

I will think on whether this changes anything for me. I do think it's helpful; offhand, I don't feel that it completely (or obviously more than 50%) solves the problem, but I do appreciate it and will think on it.

… bad in basically the same way we gave you a mod warning about 5 years ago …

I wonder if you find this comment by Benquo (i.e., the author of the post in question; note that this comment was written just months after that post) relevant, in any way, to your views on the matter?

Yeah, I do find that comment/concept important. I think I was basically already counting that class of thing in the list of positive things I'd mentioned elsethread, but yes, I am grateful to you for that. (Benquo being the one to say it in that context is a bit more evidence of its weight, which I had missed before, but I do think I was already weighting the concept approximately the right amount for the right reasons, partly from having already generally updated on some parts of the Benquo worldview.)
Said Achmiz (5 points, 5 months ago):
Please note, my point in linking that comment wasn't to suggest that the things Benquo wrote are necessarily true and that the purported truth of those assertions, in itself, bears on the current situation. (Certainly I do agree with what he wrote—but then, I would, wouldn't I?) Rather, I was making a meta-level point.

Namely: your thesis is that there is some behavior on my part which is bad, and that what makes it bad is that it makes post authors feel… bad in some way ("attacked"? "annoyed"? "discouraged"? I couldn't say what the right adjective is, here), and that as a consequence, they stop posting on Less Wrong. And as the primary example of this purported bad behavior, you linked the discussion in the comments of the "Zetetic Explanation" post by Benquo (which resulted in the mod warning you noted).

But the comment which I linked has Benquo writing, mere months afterward, that the sort of critique/objection/commentary which I write (including the sort which I wrote in response to his aforesaid post) is "helpful and important", "very important to the success of an epistemic community", etc. (Which, I must note, is tremendously to Benquo's credit. I have the greatest respect for anyone who can view, and treat, their sometime critics in such a fair-minded way.) This seems like very much the opposite of leaving Less Wrong as a result of my commenting style.

It seems to me that when the prime example you provide of my participation in discussions on Less Wrong purportedly being the sort of thing that drives authors away actually turns out to be an example of exactly the opposite—of an author (whose post I criticized, in somewhat harsh terms) fairly soon (months) thereafter saying that my critical comments are good and important to the community and that I should continue…

… well, then regardless of whether you agree with the author in question about whether or not my comments are good/important/whatever, the fact that he holds this view casts very serious dou… […]
The reason it's not additional evidence to me is that I, too, find value in the comments you write, for the reasons Benquo states, despite also finding them annoying at the time. So Benquo's response seems like an additional instance of my viewpoint, rather than a counterexample. (Though I'm not claiming Benquo agrees with me on everything in this domain.)
[DEACTIVATED] Duncan Sabien (5 points, 5 months ago):
Said is asking Ray, not me, but I strongly disagree.

Point 1 is that a black raven is not strong evidence against white ravens. (Said knows this, I think.)

Point 2 is that a behavior which displeases many authors can still be pleasant or valuable to some authors. (Said knows this, I think.)

Point 3 is that benquo's view on even that specific comment is not the only author-view that matters; benquo eventually being like "this critical feedback was great" does not mean that other authors watching the interaction at the time did not feel "ugh, I sure don't want to write a post and have to deal with comments like this one." (Said knows this, I think.)

(Notably, benquo once publicly stated that he suspected a rough interaction would likely have gone much better under Duncan moderation norms specifically; if we're updating on benquo's endorsements then it comes out to "both sets of norms useful," presumably for different things.)

I'd say it casts mild doubt on the thesis, at best, and that the most likely resolution is that Ray ends up feeling something like "yeah, fair, this did not turn out to be the best example," not "oh snap, you're right, turns out it was all a house of cards."

(This will be my only comment in this chain, so as to avoid repeating past cycles.)
Said Achmiz (9 points, 5 months ago):
A black raven is, indeed, not strong evidence against white ravens. But that's not quite the right analogy. The more accurate analogy would go somewhat like this:

Alice: White ravens exist!

Bob: Yeah? For real? Where, can I see?

Alice (looking around and then pointing): Right… there! That one!

Bob (peering at the bird in question): But… that raven is actually black? Like, it's definitely black and not white at all.

Now not only is Bob (once again, as he was at the start) in the position of having exactly zero examples of white ravens (Alice's one purported example having been revealed to be not an example at all), but—and perhaps even more importantly!—Bob has reason to doubt not only Alice's possession of any examples of her claim (of white ravens existing), but her very ability to correctly perceive what color any given raven is. Now if Alice says "Well, I've seen a lot of white ravens, though", Bob might quite reasonably reply: "Have you, though? Really? Because you just said that that raven was white, and it is definitely, totally black."

What's more, not only Bob but also Alice herself ought rightly to significantly downgrade her confidence in her belief in white ravens (by a degree commensurate with how big a role her own supposed observations of white ravens have played in forming that belief).

Just so. But, once again, we must make our analysis more specific and more precise in order for it to be useful. There are two points to make in response to this.

First is what I said above: the point is not just that the commenting style/approach in question is valuable to some authors (although even that, by itself, is surely important!), but that it turns out to be valuable specifically to the author who served as an—indeed, as the—example of said commenting style/approach being bad. This calls into question not just the thesis that said approach is bad in general, but also the weight of any purported evidence of the approach's badness, which comes from the sam… […]
Answering some other questions:

By default, the rate limit applies to all posts, unless we've made an exception for it. There are two exceptions:

1. I just shipped the "ignore rate limits" flag on posts, which authors or admins can set so that a given post allows rate-limited users to comment without restriction.
2. I haven't shipped it yet, but expect within the next day to ship "rate-limited authors can comment on their own posts without restriction." (For the immediate future this just applies to authors; I expect to ship something that makes it work for coauthors.)

In general, we are starting by rolling out the simplest versions of the rate-limiting feature (which is being used on many users, not just you), and solving problems as we notice them. I acknowledge this makes for some bad experiences along the way. I think I stand by that decision because I'm not even sure rate limits will turn out to work as a moderator tool, and investing like 3 months of upfront work ironing out the bugs first doesn't seem like the right call.

For the general question of "whether a given such-and-such post will be rate limited", the answer will route through "will individual authors choose to set "ignoreRateLimit", and/or will site admins choose to do it?"

Ruby and I have some disagreements on how important it is to set the flag on moderation posts. I personally think it makes sense to be extra cautious about limiting people's ability to speak in discussions that will impact their future ability to speak, since those can snowball and I think people are rightly wary of that. There are some other tradeoffs important to @Ruby, which I guess he can elaborate on if he wants.

For now, I'm toggling on the ignoreRateLimits flag on most of my own moderation posts (I've currently done so for LW Team is adjusting moderation policy and "Rate limiting" as a mod tool).

Other random questions:

* Re: Open threads – I haven't made a call yet, but I'm leaving the flag disab… […]
A lot of this is that the set of "all moderation posts" covers a wide range of topics, and the potential set of "all rate limited users" might include a wide diversity of users, making me reluctant to commit upfront to having rate limits never apply across the board on moderation posts. The concern about excluding people from conversations that affect whether they get to speak is a valid consideration, but I think there are others too. Chiefly, people are likely rate limited primarily because they get in the way of productive conversation, and insofar as I care about moderation conversations going well, I might want to continue to exclude rate limited users there.

Note that there are ways, albeit with friction, for people to weigh in on moderation questions freely. If it seemed necessary, I'd be down with creating special un-rate-limited side-posts for moderation posts.

I am realizing that what seems reasonable here will depend on your conception of rate limits. A couple of conceptions you might have:

1. You're currently not producing stuff that meets the bar for LessWrong, but you're writing a lot, so we'll rate limit you as a warning with teeth, to up your quality.
2. We would have banned you / are close to banning you, but we think rate limits might serve either as:
   1. a sufficient disincentive against the actions we dislike, or
   2. a restriction that simply stops you getting into unproductive things, e.g. Demon Threads.

Regarding 2., a banned user wouldn't get to participate in moderation discussions either, so under that frame, it's not clear rate limited users should get to. I guess it really depends on whether it was more of a warning / light rate limit, or something more severe, close to an actual ban.

I can say more here; this is not exactly a complete thought. Will do so if people are interested.
I just shipped the "ignore rate limit" flag for posts, and removed the rate limit for this post. All users can set the flag on individual posts. Currently they have to set it for each individual post; I think it's moderately likely we'll make it possible for users to set it as a default setting, although I haven't talked it through with other team members yet, so I can't make an entirely confident statement on it. We might iterate on the exact implementation here (for example, we might only give this option to users with 100+ karma or equivalent).

I'm working on a longer response to the other questions.
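Pulling the exceptions described above together: a rate limit applies to a rate-limited user on a post unless the post has opted out via the flag, or the commenter is the post's author. A rough sketch of that check (a hypothetical model for illustration only, not the actual LessWrong implementation; the class and field names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    ignore_rate_limits: bool = False  # hypothetical per-post "ignore rate limit" flag

@dataclass
class User:
    user_id: str
    rate_limited: bool = False

def may_comment_freely(user: User, post: Post) -> bool:
    """Return True if no comment rate limit applies to this user on this post."""
    if not user.rate_limited:
        return True  # only rate-limited users are restricted at all
    if post.ignore_rate_limits:
        return True  # the author or an admin opted this post out
    if post.author_id == user.user_id:
        return True  # authors can comment freely on their own posts
    return False
```

Under this model, the author exemption applies even on posts where the flag is unset, matching the second exception described above.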

We might iterate on the exact implementation here (for example, we might only give this option to users with 100+ karma or equivalent)

I could be misunderstanding all sorts of things about this feature that you've just implemented, but…

Why would you want to limit newer users from being able to declare that rate-limited users should be able to post as much as they like on newer users' posts? Shouldn't I, as a post author, be able to let Said, Duncan, and Zack post as much as they like on my posts?

100+ karma means something like: you've been vetted for some degree of investment in the site and enculturation, reducing the likelihood that you'll do something with poor judgment or ill intent. I might worry about new users creating posts that ignore rate limits, and then attracting all the rate-limited new users who were not having good effects on the site to come comment there (I haven't thought about it hard, but it's the kind of thing we consider).

The important thing is that, the way the site currently works, any behavior on the site is likely to affect other parts of the site, such that to ensure the site is a well-kept garden, the site admins do have to consider which users should get which privileges. (There are similar restrictions on which users can ban other users from their posts.)
I expect Ray will respond more. My guess is that you not being able to comment on this specific post is unintentional, and it does indeed seem good to have a place where you can write more of a response to the moderation stuff. The other details will likely be figured out as the feature gets used. My guess is that how things behave is kind of random until we spend more time figuring out the details. My sense was that the feature was kind of thrown together and is now being iterated on more.
Said Achmiz (0 points, 4 months ago):
The discussion under this post is an excellent example of the way that a 3-per-week per-post comment limit makes any kind of useful discussion effectively impossible.
I continue to be disgusted with this arbitrary moderator harassment of a long-time, well-regarded user, apparently on the pretext that some people don't like his writing style. Achmiz is not a spammer or a troll, and has made many highly-upvoted contributions. If someone doesn't like Achmiz's comments, they're free to downvote (just as I am free to upvote). If someone doesn't want to receive comments from Achmiz, they're free to use already-existing site functionality to block him from commenting on their own posts. If someone doesn't like his three-year-old views about an author's responsibility or lack thereof to reply to criticisms, they're free to downvote or offer counterarguments. Why isn't that the end of the matter?

Elsewhere, Raymond Arnold complains that Achmiz isn't "corrigible about actually integrating the spirit-of-our-models into his commenting style". Arnold also proposes that awareness of frame control—a concept that Achmiz has criticized—become something one is "obligated to learn, as a good LW citizen". I find this attitude shockingly anti-intellectual. Since when is it the job of a website administrator to micromanage how intellectuals think and write, and what concepts they need to accept? (As contrasted to removing low-quality, spam, or off-topic comments; breaking up flame wars, &c.)

My first comment on Overcoming Bias was on 15 December 2007. I was at the first Overcoming Bias meetup on 21 February 2008. Back then, there was no concept of being a "good citizen" of Overcoming Bias. It was a blog. People read the blog, and left comments when they had something to say, speaking in their own voice, accountable to no authority but their own perception of reality, with no obligation to be corrigible to the spirit of someone else's models.

Achmiz's first comment on Less Wrong was in May 2010. We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard? Perhaps it will be… […]

I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here.

Sure, but... I think I don't know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more. 

Some quick thoughts: 

  • LessWrong totally has prerequisites. I don't think you necessarily need to be an atheist to participate in LessWrong, but if you straightforwardly believe in the Christian god, and haven't really engaged with the relevant arguments on the site, and you comment on posts that assume that there is no god, I will likely just ban you or ask you to stop. There are many other dimensions for which this is also true. Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.
  • Well-Kept Gardens Die by Pacifism is IMO one of the central moderation principles of LessWrong. I have huge warning flags around your language here and feel like it's doing something pretty similar to the outraged calls for "censorship" that Eliezer refers to in that post, but I might just be misunderstanding you. In-general, LessWr… […]

But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.

I affirm the importance of the distinction between defending a forum from an invasion of barbarians (while guiding new non-barbarians safely past the defensive measures) and the treatment of its citizens. The quote is clearly noncentral for this case.

Thanks, to clarify: I don't intend to make a "how dare the moderators moderate Less Wrong" objection. Rather, the objection is, "How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma." (That's why the grandparent specifies "long-time, well-regarded", "many highly-upvoted contributions", "We were here first", &c.) I'm saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though you don't want to accept literally any speech (which is why the grandparent mentions "removing low-quality [...] comments" as a legitimate moderator duty).

Note that "permanently restrict the account of" is different from "moderate". For example, on 6 April, Arnold asked Achmiz to stop commenting on a particular topic, and Achmiz complied. I have no objections to that kind of moderation. I also have no objections to rate limits on particular threads, or based on recent karma scores, or for new users. The thing that I'm accusing of being arbitrary persecution is specifically the 3-comments-per-post-per-week restriction on Said Achmiz... (read more)

Hmm, I am still not fully sure about the question (your original comment said "I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here", which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.

There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said's net contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here.

One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like "purchase him out of his right to use LessWrong" or something like that, by offering him like $10k-$100k to change his commenting style or to comment less in ce… […]

But second, and more importantly, there is a huge bias in karma towards positive karma.


I don't know whether it's good that there's a positive bias in karma, but I'm pretty sure the generator for it is a good impulse. I worry that calls to handle things with downvoting lead people to weaken that generator, in ways that make the site worse overall even if it is the best way to handle Said-type cases in particular.

I think I mostly meant "answer" in the sense of "reply" (to my complaint about rate-limiting Achmiz being an outrage, rather than to a narrower question); sorry for the ambiguity. I have a lot of extremely strong disagreements with this, but they can wait three months.
Cool, makes sense. Also happy to chat in-person sometime if you want. 
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them cleaning up some of their behavior? How is this even a reasonable-

Isn't this community close, in idea terms, to Effective Altruism? Wouldn't it be better to say "Said, if you change your commenting habits in the manner we prescribe, we'll donate $10k-$100k to a charity of your choice"?

I can't believe there's a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout. I've been a member of other sites with members who were both a) long-standing contributors and b) difficult to deal with in moderation terms, and the thought of any sort of payout, even $1, would not even have been entertained.

Seems sad! Seems like there is an opportunity for trade here.

Salaries in Silicon Valley are high and probably just the time for this specific moderation decision has cost around 2.5 total staff weeks for engineers that can make probably around $270k on average in industry, so that already suggests something in the $10k range of costs.

And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.

We can also donate instead, but I don't really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money more than most charities having more money, so I don't really get what this would improve. Also, not everyone cares about donating to charity, and that's fine.

The amount of moderator time spent on this issue is both very large and sad, I agree, but I think it causes really bad incentives to offer money to users with whom moderation has a problem. Even if only offered to users in good standing over the course of many years, that still represents a pretty big payday if you can play your cards right and annoy people just enough to fall in the middle between "good user" and "ban".

I guess I'm having trouble seeing how LW is more than a (good!) Internet forum. The Internet forums I'm familiar with would have just suspended or banned Said long, long ago (maybe Duncan, too, I don't know).

I do want to note that my problem isn't with offering Said money - any offer to any user of any Internet community feels... extremely surprising to me. Now, if you were contracting a user to write stuff on your behalf, sure, that's contracting and not unusual. I'm not even necessarily offended by such an offer, just, again, extremely surprised.
I think if you model things as just "an internet community" this will give you the wrong intuitions. I currently model the extended rationality and AI Alignment community as a professional community which, for many people, constitutes their primary work context, is responsible for their salary, and is responsible for a lot of daily infrastructure they use. Viewed through that lens, it makes sense that limiting someone's access to a piece of community infrastructure can be quite costly, and somehow compensating people for the considerable cost that lack of access can cause seems reasonable.

I am not too worried about this being abusable. There are maybe 100 users who seem to me to use LessWrong as much as Said and who have contributed a similar amount to the overall rationality and AI Alignment project that I care about. At $10k each, paying every one of them would only end up around $1MM, which is less than the annual budget of Lightcone, and so doesn't seem totally crazy.
This, plus Vaniver's comment, has made me update - LW has been doing some things that are pretty confusing if you look at it as a traditional Internet community, but that make more sense if you look at it as a professional community, perhaps akin to many of the academic pursuits of science and high-level mathematics. The high dollar figures quoted in many posts confused me until now.
I've had a nagging feeling in the past that the rationalist community isn't careful enough about the incentive problems and conflicts of interest that arise when transferring reasonably large sums of money (despite being very careful about incentive landscapes in other ways—e.g. setting the incentives right for people to post, comment, etc, on LW—and also being fairly scrupulous in general). Most of the other examples I've seen have been kinda small-scale and so I haven't really poked at them, but this proposal seems like it pretty clearly sets up terrible incentives, and is also hard to distinguish from nepotism. I think most people in other communities have gut-level deontological instincts about money which help protect them against these problems (e.g. I take Celarix to be expressing this sort of sentiment upthread), which rationalists are more likely to lack or override—and although I think those people get a lot wrong about money too, cases like these sure seems like a good place to apply Chesterton's fence.
It might help to think of LW as more like a small town's newspaper (with paid staff) than a hobbyist forum (with purely volunteer labor), which considers issues with "business expense" lenses instead of "personal budget" lenses. 
Yeah, that does seem like what LW wants to be, and I have no problem with that. A payout like this doesn't really fit neatly into my categories of what money paid to a person is for, and that may be on my assumptions more than anything else. Said could be hired, contracted, paid for a service he provides or a product he creates, paid for the rights to something he's made, paid to settle a legal issue... but the idea of a payout to change part of his behavior around commenting on LW posts was just, as noted in my reply to habryka, extremely surprising.
Exactly.  It's hilarious and awesome.  (That is, the decision at least plausibly makes sense in context; and the fact that this is the result, as viewed from the outside, is delightful.)

We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?

I endorse much of Oliver's replies, and I'm mostly burnt out from this convo at the moment, so I can't do the follow-through here I'd ideally like. But it seemed important to publicly state some thoughts here before the moment passed:

Yes, the bar for banning or permanently limiting the speech of a longterm member in Said's reference class is very high, and I'd treat it very differently from moderating a troll, crank, or confused newcomer. But to say you can never do such moderation proves too much – it would imply that longterm users can never have enough negative effects to warrant permanent action. My model of Eliezer-2009 believed and intended something similar in Well Kept Gardens.

I don't think the Spirit of LessWrong 2009 actually supports you on the specific claims you're making here.

As for “by what right do we moderate?” Well, LessWrong had died, no one was owning it, people spontaneously elected Vaniver as leader, Vaniver delegated to habryka, who founded the LessWrong team and got Eliezer's buy-in, and now we have 6 years of track of reco… […]

Not to respond to everything you've said, but I question the argument (as I understand it) that because someone has {been around a long time, is well-regarded, has many highly-upvoted contributions, has lots of karma}, this means they are necessarily someone who, at the end of the day, you want around / who is net positive for the site. Good contributions are relevant. But so are costs. Arguing against the costs seems valid; saying benefits outweigh costs seems valid; but assuming this is what you're saying, I don't think just saying someone has benefits means that obviously you want them as an unrestricted citizen.

(I think in fact how it's actually gone is that all of those positive factors you list have gone into moderators' decisions so far in not outright banning Said over the years, and why Ray preferred to rate limit Said rather than ban him. If Said were all negatives, no positives, he'd have been banned long ago.)

Correct me, though, if there's a deeper argument here that I'm not seeing.

In my experience (e.g., with Data Secrets Lox), moderators tend to be too hesitant to ban trolls (i.e., those who maliciously and deliberately subvert the good functioning of the forum) and cranks (i.e., those who come to the forum just to repeatedly push their own agenda, and drown out everything else with their inability to shut up or change the subject), while at the same time being too quick to ban forum regulars—both the (as these figures are usually cited) 1% of authors and the 9% of commenters—for perceived offenses against “politeness” or “swipes against the outgroup” or “not commenting in a prosocial way” or other superficial violations. These two failure modes, which go in opposite directions, somewhat paradoxically coexist quite often.

It is therefore not at all strange or incoherent to (a) agree with Eliezer that moderators should not let “free speech” concerns stop them from banning trolls and cranks, while also (b) thinking that the moderators are being much too willing (even, perhaps, to the point of ultimately self-destructive abusiveness) to ban good-faith participants whose preferences about, and quirks of, communicative style are just slightly to the side of the...

Said Achmiz (9 points, 5mo):
Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way? As far as I know, no such thing has been done. (Aella’s post on the subject was manifestly nonsensical, to say the least; if that’s the best explanation we’ve got, then I think that it’s safe to say that the concept is incoherent nonsense, and using it does more harm than good.) But perhaps I’ve missed it?
In the comment Zack cites, Raemon said the same when raising the idea of making it a prerequisite:
Also for everyone's awareness, I have since written up Tabooing "Frame Control" (which I'd hoped would be like part 1 of 2 posts on the topic), but the reception of the post, i.e. 60ish karma, didn't seem like everyone was like "okay yeah this concept is great", and I currently think the ball is still in my court for either explaining the idea better, refactoring it into other ideas, or abandoning the project.
Yep! As far as I remember, in that thread Ray said something akin to "it might be reasonable to treat this as a prerequisite if someone wrote a better explanation of it and there had been a bunch of discussion of it", but I don't fully remember. Aella's post did seem like it had a bunch of issues, and I would feel kind of uncomfortable having a canonical concept with that as its only reference (I overall liked the post and thought it was good, but I don't think a concept should reach canonicity just on the basis of that post, given its specific flaws).
Arnold says he is thinking about maybe proposing that in the future, after he has done the work to justify it and paid attention to how people react to it.

(Tangentially) If users are allowed to ban other users from commenting on their posts, how can I tell whether the lack of criticism in the comments of some post means that nobody wanted to criticize it (which is a very useful signal that I would want to update on), or that the author has banned some or all of their most prominent/frequent critics? In addition, I think many users may be misled by a lack of criticism if they're simply not aware of the second possibility or have forgotten it. (I think I knew it, but it hadn't entered my conscious awareness for a while, until I read this post today.)

(Assuming there's not a good answer to the above concerns) I think I would prefer to change this feature/rule to something like allowing the author of a post to "hide" commenters or individual comments, which means that those comments are collapsed by default (and marked as "hidden by the post author") but can be individually expanded, and each user can set an option to always expand those comments for themselves.
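For concreteness, here is a rough sketch of the rule I have in mind (all function and parameter names here are hypothetical, made up for illustration, and not anything from the actual LessWrong codebase):

```python
def is_collapsed_by_default(commenter: str,
                            hidden_commenters: set[str],
                            viewer_always_expands: bool = False) -> bool:
    """A comment is collapsed by default (but still individually expandable,
    and marked "hidden by the post author") when the post author has hidden
    its commenter -- unless the viewer has set the option to always expand
    such comments for themselves."""
    return commenter in hidden_commenters and not viewer_always_expands

# The author hides a frequent critic: collapsed for ordinary readers,
# but any reader can opt out of the collapsing entirely.
print(is_collapsed_by_default("critic", {"critic"}))                              # True
print(is_collapsed_by_default("critic", {"critic"}, viewer_always_expands=True))  # False
```

The point of the sketch is that, unlike a ban, the critic's comments remain one click (or one setting) away for every reader.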

Maybe a middle ground would be to give authors a double-strong downvote power for comments on their posts. A comment with low enough karma is already hidden by default, and repeated strong downvotes without further response would tend to chill rather than inflame the ensuing discussion, or at least push the bulk of it away from the author's arena, without silencing critics completely.
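The mechanics might look something like this (a hypothetical sketch; the names, the doubling factor, and the auto-collapse threshold are all assumptions for illustration, not LessWrong's real values):

```python
def effective_vote(strength: int, is_post_author: bool) -> int:
    """Sketch of the proposal: an author's strong vote on a comment under
    their own post counts double, letting them push an unwanted comment
    toward the existing "hidden by default" karma threshold without
    banning the commenter."""
    return strength * 2 if is_post_author else strength

def hidden_by_default(comment_karma: int, threshold: int = -5) -> bool:
    """Comments at or below the (assumed) threshold are auto-collapsed."""
    return comment_karma <= threshold

# An author's strong downvote of -7 lands as -14 on their own post...
print(effective_vote(-7, is_post_author=True))   # -14
# ...which is enough to collapse a comment that started at modest karma.
print(hidden_by_default(5 + effective_vote(-7, is_post_author=True)))  # True
```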
Wei Dai (2 points, 5mo):
I think a problem that my proposal tries to solve, and this one doesn't, is that some authors seem easily triggered by some commenters, and apparently would prefer not to see their comments at all. (Personally if I was running a discussion site I might not try so hard to accommodate such authors, but apparently they include some authors that the LW team really wants to keep or attract.)
Adam Zerner (5 points, 5mo):
To me it seems unlikely that there'd be enough banning to prevent criticism from surfacing. Skimming through https://www.lesswrong.com/moderation, the number of bans seems to be pretty small. And if there is an important critique to be made, I'd expect it to be something that more than the few banned users would think of and decide to post a comment on.

And if there is an important critique to be made I’d expect it to be something that more than the few banned users would think of and decide to post a comment on.

This may be true in some cases, but not all. My experience here comes from cryptography where it often takes hundreds of person-hours to find a flaw in a new idea (which can sometimes be completely fatal), and UDT, where I found a couple of issues in my own initial idea only after several months/years of thinking (hence going to UDT1.1 and UDT2). I think if you ban a few users who might have the highest motivation to scrutinize your idea/post closely, you could easily reduce the probability (at any given time) of anyone finding an important flaw by a lot.

Another reason for my concern is that the bans directly disincentivize other critics, and people who are willing to ban their critics are often unpleasant for critics to interact with in other ways, further disincentivizing critiques. I have this impression of Duncan myself, which may explain why I've rarely commented on any of his posts. I seem to remember once trying to talk him out of (what seemed to me like) overreacting to a critique and banning the critic on Faceb...

Adam Zerner (4 points, 5mo):
Hm, interesting points. My impression is that there are some domains for which this is true, but those are the exception rather than the rule. However, this impression is just based off of, err, vaguely querying my brain? I'm not super confident in it. And your claim is one that I think is "important if true". So then, it does seem worth an investigation. Maybe enumerating through different domains and asking "Is it true here? Is it true here?".

One thing I'd like to point out is that, being a community, something very similar is happening. Only a certain type of person comes to LessWrong (this is true of all communities to some extent; they attract a subset of people). It's not that "outsiders" are explicitly banned, they just don't join and thus don't comment. So then, effectively, ideas presented here currently aren't available to "outsiders" for critiques. I think there is a trade-off at play: the more you make ideas available to "outsiders", the lower the chance something gets overlooked, but it also has the downside of some sort of friction. (Sorry if this doesn't make sense. I feel like I didn't articulate it very well but couldn't easily think of a better way to say it.)

Good point. I think that's true and something to factor in.
While the current number of bans is pretty small, I think this is in part because lots of users don't know about the option to ban people from their posts. (See here, for example.)
Adam Zerner (2 points, 5mo):
That makes sense. Still, even if it were more well known, I wouldn't expect the number of bans to reach the point where it is causing real problems with respect to criticism surfacing.
One solution is to limit the number of banned users to a small fraction of overall commenters. I've written 297 posts so far and have banned only 3 users from commenting on them. (I did not ban Duncan or Said.) My highest-quality criticism comes from users who I have never even considered banning. Their comments are consistently well-reasoned and factually correct.
What exactly does "nobody wanted to criticize it" signal that you don't get from high/low karma votes?

Some UI thoughts as I think about this:

Right now, you see total karma for posts and comments, and total vote count, but not the number of upvotes/downvotes. So you can't actually tell when something is controversial.

One reason for this is that we (once) briefly tried turning this on, and immediately found it made the site much more stressful and anxiety-inducing. Getting a single downvote felt like "something is WRONG!", which didn't feel productive or useful. Another reason is that it can de-anonymize strong votes, because their voting power is a less common number.

But an idea I just had was that maybe we should expose that sort of information once a post becomes popular enough. Like maybe over 75 karma. [Better idea: once a post has a certain number of votes, maybe at least 25.] At that point you have more of a sense of the overall karma distribution, so individual votes feel less weighty, and also hopefully it's harder to infer individual voters.
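A minimal sketch of that visibility rule (names and the 25-vote threshold are just the assumptions above, not anything implemented):

```python
def vote_breakdown(upvotes: list[int], downvotes: list[int],
                   min_votes: int = 25):
    """Expose the up/down split only once a post has accumulated enough
    votes. Below the threshold, single votes feel too weighty, and a
    strong vote's distinctive karma weight could de-anonymize the voter."""
    total = len(upvotes) + len(downvotes)
    if total < min_votes:
        return None  # show only total karma, as the site does now
    return {"ups": len(upvotes), "downs": len(downvotes)}

# 24 votes: split stays hidden; 25 votes: split becomes visible.
print(vote_breakdown([1] * 20, [2] * 4))  # None
print(vote_breakdown([1] * 20, [2] * 5))  # {'ups': 20, 'downs': 5}
```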

Tagging @jp who might be interested.

I support exposing the number of upvotes/downvotes. (I wrote a userscript for GW to always show the total number of votes, which allows me to infer this somewhat.) However that doesn't address the bulk of my concerns, which I've laid out in more detail in this comment. In connection with karma, I've observed that sometimes a post is initially upvoted a lot, until someone posts a good critique, which then causes the karma of the post to plummet. This makes me think that the karma could be very misleading (even with upvotes/downvotes exposed) if the critique had been banned or disincentivized.

We've been thinking about this for the EA Forum. I endorse Raemon's thoughts here, I think, but I know I can't pass the ITT of a more transparent side here.

I don't keep track of people's posting styles and correlate them with their names very well. For most people who post on LW, even if they do it a lot, I have negligible associations beyond "that person sounds vaguely familiar" or "are they [other person], or am I mixing them up?".

I have persistent impressions of both Said and Duncan, though.

I am limited in my ability to look up any specific Said comment or things I've said elsewhere about him because his name tragically shares a spelling with a common English word, but my model of him is strongly positive.  I don't think I've ever read a Said comment and thought it was a waste of time, or personally bothersome to me, or sneaky or pushy or anything.

Meanwhile I find Duncan vaguely fascinating like he is a very weird bug which has not, yet, sprayed me personally with defensive bug juice or bitten me with its weird bug pincers.  Normally I watch him from a safe distance and marvel at how high a ratio of "incredibly suspicious and hackle-raising" to "not often literally facially wrong in any identifiable ways" he maintains when he writes things.  It's not against any rules to be incredibly suspicious and hackle-raising in a pu...

Meanwhile I find Duncan vaguely fascinating like he is a very weird bug

I don't know[1] for sure what purpose this analogy is serving in this comment, and without it the comment would have felt much less like it was trying to hijack me into associating Duncan with something viscerally unpleasant.

  1. ^

    My guess is that it's meant to convey something like your internal emotional experience, with regards to Duncan, to readers.

I think weird bugs are neat.

I wasn't sure if I should include the analogy.  I came up with it weeks ago when I was remarking to people in my server about how suspicious I find things Duncan writes, and it was popular there; I guess people here are less universally delighted by metaphors about weird bugs than people on my server, whoops!  For what it's worth I think the world is enriched by the presence of weird bugs.  The other day someone remarked that they'd found a weird caterpillar on the sidewalk near my house and half my dinner guests got up to go look at it and I almost did myself.  I just don't want to touch weird bugs, and am nervous in a similar way about making it publicly knowable that I have an opinion about Duncan.

I've tried for a bit to produce a useful response to the top-level comment and mostly failed, but I did want to note that

"Oh, it sort of didn't occur to me that this analogy might've carried a negative connotation, because when I was negatively gossiping about Duncan behind his back with a bunch of other people who also have an overall negative opinion of him, the analogy was popular!"

is a hell of a take. =/

Oh, no, it's absolutely negative.  I don't like you.  I just don't specifically think that you are disgusting, and it's that bit of the reaction to the analogy that caught me by surprise. "Oh, I'm going to impute malice with the phrase 'gossiping behind my back' about someone I have never personally interacted with before who talked about my public blog posts with her friends, when she's specifically remarked that she's worried about fallout from letting me know that she doesn't care for me!" is also kind of a take, and a pretty good example of why I don't like you.  I retract the tentative positive update I made when your only reaction to my comment had been radio silence; I'd found that really encouraging wrt it being safe to have opinions about you where you might see them, but no longer.

It is only safe for you to have opinions if the other people don't dislike them?

I think you're trying to set up a really mean dynamic where you get to say mean things about me in public, but if I point out anything frowny about that fact you're like "ah, see, I knew that guy was Bad; he's making it Unsafe for me to say rude stuff about him in the public square."

(Where "Unsafe" means, apparently, "he'll respond with any kind of objection at all."  Apparently the only dynamic you found acceptable was "I say mean stuff and Duncan just takes it.")


I won't respond further, since you clearly don't want a big back-and-forth, but calling people a weird bug and then pretending that doesn't in practice connote disgust is a motte and bailey.

I kind of doubt you care at all, but here for interested bystanders is more information on my stance.

  • I suspect you of brigading-type behavior wrt conflicts you get into.  Even if you make out like it's a "get out the vote" campaign where the fact that rides to the polls don't require avowing that you're a Demoblican is important to your reception, when you're the sort who'll tell all your friends someone is being mean to you and then the karma swings around wildly I make some updates.  This social power with your clique of admirers in combination with your contagious lens on the world that they pick up from you is what unnerves me.
  • I experience a lot of your word choices (e.g. "gossiping behind [your] back") as squirrelly[1] , manipulative, and more rhetoric than content.  I would not have had this experience in this particular case if, for example, you'd said "criticizing [me] to an unsympathetic audience".  Gossip behind one's back is a social move for a social relationship.  One doesn't clutch one's pearls about random people gossiping about Kim Kardashian behind her back.  We have never met.  I'd stand a better chance of recognizing Ms. Ka
...
Positive reinforcement for disengaging!
It doesn't seem like too many people had a reaction similar to mine, so I don't know that you were especially miscalibrated.  (On reflection, I think the "bug" part is maybe only half of what I found disagreeable about the analogy.  Not sure this is worth the derailment.)

For what it's worth, I had a very similar reaction to yours. Insects and arthropods are a common source of disgust and revulsion, and so comparing anyone to an insect or an arthropod, to me, shows that you're trying to indicate that this person is either disgusting or repulsive.

I'm sorry!  I'm sincerely not trying to indicate that.  Duncan fascinates and unnerves me but he does not revolt me.  I think the reason that "weird bug" made sense to my metaphor generator instead of "weird plant" or "weird bird" or something is that bugs have extremely widely varying danger levels - an unfamiliar bug may have all kinds of surprises in the mobility, chemical weapons, aggressiveness, etc. department, whereas plants reliably don't jump on you and birds are basically all just WYSIWYG; but many weird bugs are completely harmless, and I simply do not know what will happen to me if I poke Duncan.
What about "weird frog"? Frogs don't have the same negative connotations as bugs and they have the same wide range of danger levels.
I think most poisonous frogs look it and would accordingly pick up a frog that wasn't very brightly colored if I otherwise wanted to pick up this frog, whereas bugs may look drab while being dangerous.

Poisonous frogs often have bright colors to say "hey don't eat me", but there are also ones that use a "if you don't notice me you won't eat me" strategy. Ex: cane toad, pickerel frog, black-legged poison dart frog.

Welp, guess I shouldn't pick up frogs.  Not what I expected to be the main takeaway from this thread but still good to know.

M. Y. Zuo (5 points, 5mo):
Don't pick up amphibians, or anything else with soft porous skin, in general, unless you're sure.
...why do they bother being poisonous then tho?
I believe it: https://slatestarcodex.com/2017/10/02/different-worlds/
I liked the analogy and I also like weird bugs
Adam Zerner (7 points, 5mo):
Yup, I strongly agree with this. And it seems to me that the effort spent moderating this is mostly going to be consequential for Duncan and Said's future interactions instead of generalizing and being consequential to the interactions between other people on LessWrong, because these sorts of conflicts seem to be quite infrequent. If so, it doesn't seem worth spending too much time on. Maybe as a path forward, Duncan and Said can agree to keep exchanges to a maximum of 10 total comments and subsequently move the conversation to a private DM, see if that works, and if it doesn't re-evaluate from there?

First, my read of both Said and Duncan is that they appreciate attention to the object level in conflicts like this. If what's at stake for them is a fact of the matter, shouldn't that fact get settled before considering other issues? So I will begin with that. What follows is my interpretation (mentioned here so I can avoid saying "according to me" each sentence).

In this comment, Said describes as bad "various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on", without specifically identifying Duncan as proposing that norm (tho I think it's heavily implied).

Then gjm objects to that characterization as a straw man.

In this comment Said defends it, pointing out that Duncan's standard of "critics should do some of the work of crossing the gap" is implicitly a rule against "asking people for examples of their claims [without anything else]", given that Duncan thinks asking for examples doesn't count as doing the work of crossing the gap. (Earlier in the conversation Duncan calls it 0% of the work.) I think the point as I have written it here is correct and uncontroversial; I think there is an important difference between the point as I wrot...

Vaniver privately suggested to me that I may want to offer some commentary on what I could’ve done in this situation in order for it to have gone better, which I thought was a good and reasonable suggestion. I’ll do that in this comment, using Vaniver’s summary of the situation as a springboard of sorts.

In this comment, Said describes as bad “various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on”, without specifically identifying Duncan as proposing that norm (tho I think it’s heavily implied).

Then gjm objects to that characterization as a straw man.

So, first of all, yes, I was clearly referring to Duncan. (I didn’t expect that to be obscure to anyone who’d bother to read that subthread in the first place, and indeed—so far as I can tell—it was not. If anyone had been confused, they would presumably have asked “what do you mean?”, and then I’d have linked what I mean—which is pretty close to what happened anyway. This part, in any case, is not the problem.)

The obvious problem here is that “don’t ask people for examples of their claims”—taken literally—is, indeed, a strawman.

The question is, whose problem (to solve) is it?

There a...

In the response I would have wanted to see, Duncan would have clearly and correctly pointed to that difference. He is in favor of people asking for examples [combined with other efforts to cross the gap], does it himself, gives examples himself, and so on. The unsaid [without anything else] part is load-bearing and thus inappropriate to leave out or merely hint at. [Or, alternatively, using "ask people for examples" to refer to comments that do only that, as opposed to the conversational move which can be included or not in a comment with other moves.]

I agree that the hypothetical comment you describe as better is in fact better. I think something like ... twenty-or-so exchanges with Said ago, I would have written that comment?  I don't quite know how to weigh up [the comment I actually wrote is worse on these axes of prosocial cooperation and revealing cruxes and productively clarifying disagreement and so forth] with [having a justified true belief that putting forth that effort with Said in particular is just rewarded with more branches being created].

(e.g. there was that one time recently where Said said I'd blocked people due to disagreeing with me/criticizing me, and I s...

At the risk of guessing wrong, and perhaps typical-mind-fallacying, I'm imagining that you're [rightly?] feeling a lot of frustration, exasperation, and even despair about moderation on LessWrong. You've spent dozens of hours (more?) and tens of thousands of words trying to make LessWrong the garden you think it ought to be (and to protect yourself here against attackers), and just to try to uphold, indeed, basic standards for truthseeking discourse.  You've written that some small validation goes a long way, so this is me trying to say that I think your feelings have a helluva lot of validity.

I don't think that you and I share exactly the same ideals for LessWrong. PerfectLessWrong!Ruby and PerfectLessWrong!Duncan would be different (or heck, even just VeryGoodLessWrongs), though I also am pretty sure that you'd be much happier with my ideal, you'd think it was pretty good if not perfect. Respectable, maybe adequate. A garden.

And I'm really sad that the current LessWrong feels really really far short of my own ideals (and Ray of his ideals, and Oli of his ideals), etc. And not just short of a super-amazing-lofty-ideal, also short of a "this place is really under control" kind of ideal. I tak...

But sir, you impugn my and my site's honor

This is fair, and I apologize; in that line I was speaking from despair and not particularly tracking Truth.

A [less straightforwardly wrong and unfair] phrasing would have been something like "this is not a Japanese tea garden; it is a British cottage garden."

I have been to the Japanese tea garden in Portland, and found it exquisite, so I think I get your referent there. Aye, indeed it is not that.

I probably rushed this comment out the door in a "defend my honor, set the record straight" instinct that I don't think reliably leads to good discourse and is not what I should be modeling on LessWrong. 

I didn't make it to every point, but hopefully you find this more of the substantive engagement you were hoping for.

I did, thanks.

gjm specifically noted the separation between the major issue of whether balance is required, and this other, narrower claim.

I think gjm's comment was missing the observation that "comments that just ask for examples" are themselves an example of "unproductive modes of discussion where he is constantly demanding more and more rigour and detail from his interlocutors while not providing it himself", and so it wasn't cleanly about "balance: required or not?". I think a reasonable reader could come away from that comment of gjm's uncertain whether or not Said simply saying "examples?" would count as an example.

My interpretation of this section is basically the double crux dots arguing over the labels they should have, with Said disagreeing strenuously with calling his mode "unproductive" (and elsewhere over whether labor is good or bad, or how best to minimize it) and moving from the concrete examples to an abstract pattern (I suspect because he thinks the former is easier to defend than the latter).

I should also note here that I don't think you have explici...

[DEACTIVATED] Duncan Sabien (8 points, 5mo):
To clarify: If one starts out looking to collect and categorize evidence of their conversational partner not doing their fair share of the labor, then a bunch of comments that just say "Examples?" would go into the pile. But just encountering a handful of comments that just say "Examples?" would not be enough to send a reasonable person toward the hypothesis that their conversational partner reliably doesn't do their fair share of the labor.

"Do you have examples?" is one of the core, common, prosocial moves, and correctly so.  It is a bid for the other person to put in extra work, but the scales of "are we both contributing?" don't need to be balanced every three seconds, or even every conversation.  Sometimes I'm the asker/learner and you're the teacher/expounder, and other times the roles are reversed, and other times we go back and forth.

The problem is not in asking someone to do a little labor on your behalf. It's having 85+% of your engagement be asking other people to do labor on your behalf, and never reciprocating, and when people are like, hey, could you not, or even just a little less? being supercilious about it.

Said simply saying "examples?" is an example, then, but only because of the strong prior from his accumulated behavior; if the rule is something like "doing this <100x/wk is fine, doing it >100x/wk is less fine," then the question of whether a given instance "is an example" is slightly tricky.

Yeah, you may have pinned it down (the disagreement).  I definitely don't (currently) think it's sensible to read the second comment that way, and certainly not sensible enough to mentally dock someone for not reading it that way even if that reading is technically available (which I agree it is).

I perhaps have some learned helplessness around what I can, in fact, expect from the mod team; I claim that if I had believed that this would be received as defensible I would've done that instead. At the time, I felt helpless and alone*/had no expectati

The problem is not in asking someone to do a little labor on your behalf. It’s having 85+% of your engagement be asking other people to do labor on your behalf, and never reciprocating, and when people are like, hey, could you not, or even just a little less? being supercilious about it.

But why should this be a problem?

Why should people say “hey, could you not, or even just a little less”? If you do something that isn’t bad, that isn’t a problem, why should people ask you to stop? If it’s a good thing to do, why wouldn’t they instead ask you to do it more?

And why, indeed, are you still speaking in this transactional way?

If you write a post about some abstract concept, without any examples of it, and I write a post that says “What are some examples?”, I am not asking you to do labor on my behalf, I am not asking for a favor (which must be justified by some “favor credit”, some positive account of favors in the bank of Duncan). Quite frankly, I find that claim ridiculous to the point of offensiveness. What I am doing, in that scenario, is making a positive contribution to the discussion, both for your benefit and (even more importantly) for the benefit of other readers and com...

There is no good reason why you should resent responding to a request like “what are some examples”.

Maybe "resent" is doing most work here, but an excellent reason to not respond is that it takes work. To the extent that there are norms in place that urge response, they create motivation to suppress criticism that would urge response. An expectation that it's normal for criticism to be a request for response that should normally be granted is pressure to do the work of responding, which is costly, which motivates defensive action in the form of suppressing criticism.

A culture could make it costless (all else equal) to ignore the event of a criticism having been made. This is an inessential reason for suppressing criticism that can be removed, and therefore should, to make criticism cheaper and more abundant.

The content of criticism may of course motivate the author of a criticized text to make further statements, but the fact of criticism's posting by itself should not. The fact of not responding to criticism is some sort of noisy evidence of not having a good response that is feasible or hedonic to make, but that's Law, not something that can change for the sake of mechanism design.

Said Achmiz (5 points, 5mo):
It’s certainly doing a decent amount of work, I agree. Anyhow, your overall point is taken—although I have to point out that your last sentence seems like a rebuttal of your next-to-last sentence.

That having been said, of course the content of criticism matters. A piece of criticism could simply be bad, and clearly wrong; and then it’s good and proper to just ignore it (perhaps after having made sure that an interested party could, if they so wished, easily see or learn why that criticism is bad). I do not, and would not, advocate for a norm that all comments, all critical questions, etc., regardless of their content, must always be responded to. That is unreasonable.

I also want to note—as I’ve said several times in this discussion, but it bears repeating—that there is nothing problematic or blameworthy about someone other than the author of a post responding to questions, criticism, requests for examples, etc. That is fine. Collaborative development of ideas is a perfectly normal and good thing.

What that adds up to, I think, is a set of requirements for a set of social norms which is quite compatible with your suggestion of making it “costless (all else equal) to ignore the event of a criticism having been made”.
They are in opposition, but the point is that they are about different kinds of things, and one of them can't respond to policy decisions. It's useful to have a norm that lessens the burden of addressing criticism. It's Law of reasoning that this burden can nonetheless materialize. The Law is implacable but importantly asymmetric, it only holds when it does, not when the court of public opinion says it should. While the norms are the other way around, and their pressure is somewhat insensitive to facts of a particular situation, so it's worth pointing them in a generally useful direction, with no hope for their nuanced or at all sane response to details. Perhaps the presence of Law justifies norms that are over-the-top forgiving to ignoring criticism, or find ignoring criticism a bit praiseworthy when it would be at all unpleasant not to ignore it, to oppose the average valence of Law, while of course attempting to preserve its asymmetry. So I'd say my last sentence in that comment argues that the next-to-last sentence should be stronger. Which I'm not sure I agree with, but here's the argument.
5[DEACTIVATED] Duncan Sabien5mo
Said, above, is saying a bunch of things, many of which I agree with, as if they are contra my position or my previous claims. He can't pass my ITT (not that I've asked him to), which means that he doesn't understand the thing he's trying to disagree with, which means that his disagreement is not actually pointing at my position; the things he finds ridiculous and offensive are cardboard cutouts of his own construction. More detail on that over here.
2Said Achmiz5mo
This response is manifestly untenable, given the comment of yours that I was responding to.
BTW I was surprised earlier to see you agree with the 'relational' piece of this comment because Duncan's grandparent comment seems like it's a pretty central example of that. (I view you as having more of a "visitor-commons" orientation towards LW, and Duncan has more of an orientation where this is a place where people inhabit their pairwise relationships, as well as more one-to-many relationships.)
3Said Achmiz5mo
Sorry, I’m not quite sure I follow the references here. You’re saying that… this comment… is a central example of… what, exactly? That… seems like it’s probably accurate… I think? I think I’d have to more clearly understand what you’re getting at in your comment, in order to judge whether this part makes sense to me.
Sorry, my previous comment wasn't very clear. Earlier I said: and you responded with: (and a few related comments) which made me think "hmm, I don't think we mean the same thing by 'relational'". Then Duncan's comment had a frame that I would have described as 'relational'--as in focusing on the relationships between the people saying and hearing the words--which you then described as transactional.
2Said Achmiz5mo
Ah, I see. I think that the sense in which I would characterize Duncan’s description as “transactional” is… mostly orthogonal to the question of “is this a relational frame”. I don’t think that this has much to do with the “‘visitor commons’ vs. ‘pairwise relationships’” distinction, either (although that distinction is an interesting and possibly important one in its own right, and you’re certainly more right than wrong about where my preferences lie in that regard). (There’s more that I could say about this, but I don’t know whether anything of importance hinges on this point. It seems like it mostly shouldn’t, but perhaps you are a better judge of that…)
A couple quick notes for now: I agree with Duncan that it's kinda silly to start the clock at "Killing Socrates". Insofar as there's a current live fight that is worth tracking separately from overall history, I think it probably starts in the comments of LW Team is adjusting moderation policy, and I think the recent-ish back and forth on Basics of Rationalist Discourse and "Rationalist Discourse" Is Like "Physicist Motors" is recent enough to be relevant (hence me including them in the OP). I think Vaniver right now is focusing on resolving the point "is Said a liar?", but not resolving the "who did most wrong?" question. (I'm not actually 100% sure on Vaniver's goals/takes at the moment.) I agree this is an important subquestion but it's not the primary question I'm interested in. I'm somewhat worried about this thread taking in more energy than it quite warrants, and making Duncan feel more persecuted than really makes sense here. I roughly agree with Vaniver that "Liar!" isn't the right accusation to have levied, but also don't judge you harshly for having made it. I think this comment of mine summarizes my relevant opinions here. (tagging @Vaniver to make sure he's at least tracking this comment)
6[DEACTIVATED] Duncan Sabien5mo
Thanks. I note (while acknowledging that this is a small and subtle distinction, but claiming that it is an important one nonetheless) that I said that I now categorize Said as a liar, which is an importantly and intentionally weaker claim than Said is a liar, i.e. "everyone should be able to see that he's a liar" or "if you don't think he's a liar you are definitely wrong." (This is me in the past behaving in line with the points I just made under Said's comment, about not confusing [how things seem to me] with [how they are] or [how they do or should seem to others].) This is much much closer to saying "Liar!" than it is to not saying "Liar!" ... if one is to round me off, that's the correct place to round me off to. But it is still a rounding.
Nod, seems fair to note.

I interpret a lot of Duncan’s complaints here through the lens of imaginary injury that he writes about here.

I just want to highlight this link (to one of Duncan’s essays on his Medium blog), which I think most people are likely to miss otherwise.

That is an excellent post! If it was posted on Less Wrong (I understand why it wasn’t, of course EDIT: I was mistaken about understanding this; see replies), I’d strong-upvote it without reservation. (I disagree with some parts of it, of course, such as one of the examples—but then, that is (a) an excellent reason to provide specific examples, and part of what makes this an excellent post, and (b) the reason why top-level posts quite rightly don’t have agree/disagree voting. On the whole, the post’s thesis is simply correct, and I appreciate and respect Duncan for having written it.)

4[DEACTIVATED] Duncan Sabien5mo
It's not on LessWrong because of you, specifically. Like, literally that specific essay, I consciously considered where to put it, and decided not to put it here because, at the time, there was no way to prevent you from being part of the subsequent conversation.
-1Said Achmiz5mo
Hmm. I retract the “I understand why it wasn’t [posted on Less Wrong]” part of my earlier comment! I definitely no longer understand. (I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here.)
Said, as a quick note - this particular comment reminds me of the "bite my thumb" scene from Romeo and Juliet. To you, it might be innocuous, but to me, and I suspect to Duncan and others, it sounds like a deliberate insult, with just enough of a veil of innocence to make it especially infuriating. I am presuming you did not actually mean this as an insult, but were instead meaning to express your genuine confusion about Duncan's thought process. I am curious to know a few things:

1. Did you recognize that it sounded potentially insulting?
2. If so, why did you choose to express yourself in this insulting-sounding manner?
3. If not, does it concern you that you may not recognize when you are expressing yourself in an insulting-sounding way, and is that something you are interested in changing?
4. And if you didn't know you sounded insulting, and don't care to change, why is that?

There are some things which cannot be expressed in a non-insulting manner (unless we suppose that the target is such a saint that no criticism can affect their ego; but who among us can pretend to that?).

I did not intend insult, in the sense that insult wasn’t my goal. (I never intend insult, as a rule. What few exceptions exist, concern no one involved in this discussion.)

But, of course, I recognize that my comment is insulting. That is not its purpose, and if I could write it non-insultingly, I would do so. But I cannot.

So, you ask:

If so, why did you choose to express yourself in this insulting-sounding manner?

The choice was between writing something that was necessary for the purpose of fulfilling appropriate and reasonable conversational goals, but could be written only in such a way that anyone but a saint would be insulted by it—or writing nothing.

I chose the former because I judged it to be the correct choice: writing nothing, simply in order to avoid insult, would have been worse than writing the comment which I wrote.

(This explanation is also quite likely to apply to any past or future comments I write which seem to be insulting in similar fashion.)

But, of course, I recognize that my comment is insulting. That is not its purpose, and if I could write it non-insultingly, I would do so. But I cannot.

I want to register that I don't believe you that you cannot, if we're using the ordinary meaning of "cannot". I believe that it would be more costly for you, but it seems to me that people are very often able to express content like that in your comment, without being insulting.

I'm tempted to try to rephrase your comment in a non-insulting way, but I would only be able to convey its meaning-to-me, and I predict that this is different enough from its meaning-to-you that you would object on those grounds. However, insofar as you communicated a thing to me, you could have said that thing in a non-insulting way.

3Said Achmiz5mo
I believe you when you say that you don’t believe me. But I submit to you that unless you can provide a rephrasing which (a) preserves all relevant meaning while not being insulting, and (b) could have been generated by me, your disbelief is not evidence of anything except the fact that some things seem easy until you discover that they’re impossible.
My guess is that you believe it's impossible because the content of your comment implies a negative fact about the person you're responding to. But insofar as you communicated a thing to me, it was in fact a thing about your own failure to comprehend, and your own experience of bizarreness. These are not unflattering facts about Duncan, except insofar as I already believe your ability to comprehend is vast enough to contain all "reasonable" thought processes.

These are not unflattering facts about Duncan

Indeed, they are not—or so it would seem. So why would my comment be insulting?

After all, I didn’t write “your stated reason is bizarre”, but “I find your stated reason bizarre”. I didn’t write “it seems like your thinking here is incoherent”, but “I can’t form any coherent model of your thinking here”. I didn’t… etc.

So what makes my comment insulting?

Please note, I am not saying “my comment isn’t insulting, and anyone who finds it so is silly”. It is insulting! And it’s going to stay insulting no matter how you rewrite it, unless you either change what it actually says or so obfuscate the meaning that it’s not possible to tell what it actually says.

The thing I am actually saying—the meaning of the words, the communicated claims—imply unflattering facts about Duncan.[1] There’s no getting around that.

The only defensible recourse, for someone who objects to my comment, is to say that one should simply not say insulting things; and if there are relevant things to say which cannot be said non-insultingly, then they oughtn’t be said… and if anything is lost thereby, well, too bad.

And that would be a consistent point of view, certainly. But…

For what it's worth, I don't think that one should never say insulting things. I think that people should avoid saying insulting things in certain contexts, and that LessWrong comments are one such context. I find it hard to square your claim that insultingness was not the comment's purpose with the claim that it cannot be rewritten to elide the insult. An insult is not simply a statement with a meaning that is unflattering to its target - it involves using words in a way that aggressively emphasizes the unflatteringness and suggests, to some extent, a call to non-belief-based action on the part of the reader. If I write a comment entirely in bold, in some sense I cannot un-bold it without changing its effect on the reader. But I think it would be pretty frustrating to most people if I then claimed that I could not un-bold it without changing its meaning.
You still haven't actually attempted the challenge Said laid out.
I'm not sure what you mean - as far as I can tell, I'm the one who suggested trying to rephrase the insulting comment, and in my world Said roughly agreed with me about its infeasibility in his response, since it's not going to be possible for me to prove either point: Any rephrasing I give will elicit objections on both semantics-relative-to-Said and Said-generatability grounds, and readers who believe Said will go on believing him, while readers who disbelieve will go on disbelieving.
You haven't even given an attempt at rephrasing.
Nor should I, unless I believe that someone somewhere might honestly reconsider their position based on such an attempt. So far my guess is that you're not saying that you expect to honestly reconsider your position, and Said certainly isn't. If that's wrong then let me know! I don't make a habit of starting doomed projects.
I think for the purposes of promoting clarity this is a bad rule of thumb. The decision to explain should be more guided by effort/hedonicity and availability of other explanations of the same thing that are already there, not by strategically withholding things based on predictions of how others would treat an explanation. (So for example "I don't feel like it" seems like an excellent reason not to do this, and doesn't need to be voiced to be equally valid.)
I think I agree that this isn't a good explicit rule of thumb, and I somewhat regret how I put this. But it's also true that a belief in someone's good-faith engagement (including an onlooker's), and in particular their openness to honest reconsideration, is an important factor in the motivational calculus, and for good reasons.
The structure of a conflict and motivation prompted by that structure functions in a symmetric way, with the same influence irrespective of whether the argument is right or wrong. But the argument itself, once presented, is asymmetric, it's all else equal stronger when correct than when it's not. This is a reason to lean towards publishing things, perhaps even setting up weird mechanisms like encouraging people to ignore criticism they dislike in order to make its publication more likely.
If you're not even willing to attempt the thing you say should be done, you have no business claiming to be arguing or negotiating in good faith. You claimed this was low-effort. You then did not put in the effort to do it. This strongly implies that you don't even believe your own claim, in which case why should anyone else believe it? It also tests your theory. If you can make the modification easily, then there is room for debate about whether Said could. If you can't, then your claim was wrong and Said obviously can't either.

I think it's pretty rough for me to engage with you here, because you seem to be consistently failing to read the things I've written. I did not say it was low-effort. I said that it was possible. Separately, you seem to think that I owe you something that I just definitely do not owe you. For the moment, I don't care whether you think I'm arguing in bad faith; at least I'm reading what you've written.

3Said Achmiz5mo
I more or less agree with this; I think that posting and commenting on Less Wrong is definitely a place to try to avoid saying anything insulting. But not to try infinitely hard. Sometimes, there is no avoiding insult. If you remove all the insult that isn’t core to what you’re saying, and if what you’re saying is appropriate, relevant, etc., and there’s still insult left over—I do not think that it’s a good general policy to avoid saying the thing, just because it’s insulting. By that measure, my comment does not qualify as an insult. (And indeed, as it happens, I wouldn’t call it “an insult”; but “insulting” is slightly different in connotation, I think. Either way, I don’t think that my comment may fairly be said to have these qualities which you list. Certainly there’s no “call to non-belief-based action”…!) True, of course… but also, so thoroughly dis-analogous to the actual thing that we’re discussing that it mostly seems to me to be a non sequitur.
I think I disagree that your comment does not have these qualities in some measure, and they are roughly what I'm objecting to when I ask that people not be insulting. I don't think I want you to never say anything with an unflattering implication, though I do think this is usually best avoided as well. I'm hopeful that this is a crux, as it might explain some of the other conversation I've seen about the extent to which you can predict people's perception of rudeness. There are of course more insulting ways you could have conveyed the same meaning. But there are also less insulting ways (when considering the extent to which the comment emphasizes the unflatteringness and the call to action that I'm suggesting readers will infer). I believe that none was intended, but I also expect that people (mostly subconsciously!) interpret (a very small) one from the particular choice of words and phrasing. Where the action is something like "you should scorn this person", and not just "this person has unflattering quality X". The latter does not imply the former.
1Said Achmiz5mo
I think that, at this point, we’re talking about nuances so subtle, distinctions so fragile (in that they only rarely survive even minor changes of context, etc.), that it’s basically impossible to predict how they will affect any particular person’s response to any particular comment in any particular situation. To put it another way, the variation (between people, between situations, etc.) in how any particular bit of wording will be perceived, is much greater than the difference made by the changes in wording that you seem to be talking about. So the effects of any attempt to apply the principles you suggest is going to be indistinguishable from noise. And that means that any effort spent on doing so will be wasted.
3Jasnah Kholin5mo
I actually DO believe you can't write this in a not-insulting way. I find it the result of not prioritizing developing and practicing those skills in general. While I do judge you for this, I judge you for it one time, on the meta-level, instead of judging any instance separately, as I find this behavior orderly and predictable.
I'm not quite clear: are you saying that it's literally impossible to express certain non-insulting meanings in a non-insulting way? Or that you personally are not capable of doing so? Or that you potentially could, but you're not motivated to figure out how? Edit - also, do you mean that it's impossible to even reduce the degree to which it sounds insulting? Or are you just saying that such comments are always going to sound at least a tiny bit insulting? This is helpful to me understanding you better. Thank you.
2Said Achmiz5mo
I… think that the concept of “non-insulting meaning” is fundamentally a confused one in this context. Reduce the degree? Well, it seems like it should be possible, in principle, in at least some cases. (The logic being that it seems like it should be quite possible to increase the degree of insultingness without changing the substance, and if that’s the case, then one would have to claim that I always succeed at selecting exactly the least insulting possible version—without changes in substance—of any comment; and that seems like it’s probably unlikely. But there’s a lot of “seems” in that reasoning, so I wouldn’t place very much confidence in it. And I can also tell a comparably plausible story that leads to the opposite conclusion, reducing my confidence even further.) But I am not sure what consequence that apparent in-principle truth has on anything.

Here's a potential alternative wording of your previous statement.

Original: (I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here.)

New version: I am very confused by your stated reason, and I'm genuinely having trouble seeing things from your point of view. But I would genuinely like to. Here's a version that makes a little more sense to me [give it your best shot]... but here's where that breaks down [explain]. What am I missing?

I claim with very high confidence that this new version is much less insulting (or is not insulting at all). It took me all of 15 seconds to come up with, and I claim that it either conveys the same thing as your original comment (plus added extras), or that the difference is negligible and could be overcome with an ongoing and collegial dialog of a kind that the original, insulting version makes impossible. If you have an explanation for what of value is lost in translation here, I'm listening.

4Said Achmiz5mo
It’s certainly possible to write more words and thereby to obfuscate what you’re saying and/or alter your meaning in the direction of vagueness. And you can, certainly, simply say additional things—things not contained in the original message, and that aren’t simply transformations of the meaning, but genuinely new content—that might (you may hope) “soften the blow”, as it were. But all of that aside, what I’d actually like to note, in your comment, is this part: First of all, while it may be literally true that coming up with that specific wording, with the bracketed parts un-filled-in, took you 15 seconds (if you say it, I believe it), the connotation that transmuting a comment from the “original” to the (fully qualified, as it were) “new version” takes somewhere on the order of 15 seconds (give or take a couple of factors of two, perhaps) is not believable. Of course you didn’t claim that—it’s a connotation, not a denotation. But do you think it’s true? I don’t. I don’t think that it’s true even for you. (For one thing, simply typing out the “fully qualified” version—with the “best shot” at explanation outlined, and the pitfalls noted, and the caveats properly caveated—is going to take a good bit longer. Type at 60 WPM? Then you’ve got the average adult beat, and qualify as a “professional typist”; but even so just the second paragraph of your comment would take you most of a minute to type out. Fill out those brackets, and how many words are you adding? 100? 300? More?) But, perhaps more importantly, that stuff requires not just more typing, but much more thinking (and reading). What is worse, it’s thinking of a sort that is very, very likely to be a complete waste of time, because it turns out to be completely wrong. For example, consider this attempt, by me, to describe in detail Duncan’s approach to banning people from his posts. It seemed—and still seems—to me to be an accurate characterization; and certainly it was written in such a way that I quite…
This is the part I think is important in your objection - I agree with you that expanding the bracketed part would take more than 15 seconds. You're claiming somewhere on the implicit-explicit spectrum that something substantial is lost in the translation from the original insulting version by you to the new non-insulting version by me. I just straightforwardly disagree with that, and I challenge you to articulate what exactly you think is lost and why it matters.
8Said Achmiz5mo
I confess that I am not sure what you’re asking. As far as saying additional things goes—well, uh, the additional things are the additional things. The original version doesn’t contain any guessing of meaning or any kind of thing like that. That’s strictly new. As I said, the rest is transparent boilerplate. It doesn’t much obfuscate anything, but nor does it improve anything. It’s just more words for more words’ sake. I don’t think anything substantive is lost in terms of meaning; the losses are (a) the time and effort on the part of the comment-writer, (b) annoyance (or worse) on the part of the comment target (due to the inevitably-incorrect guessing), (c) annoyance (or worse) on the part of the comment target (due to the transparent fluff that pretends to hide a fundamentally insulting meaning). The only way for someone not to be insulted by a comment that says something like this is just to not be insulted by what it says. (Take my word for this—I’ve had comments along these lines directed at me many, many times, in many places! I mostly don’t find them insulting—and it’s not because people who say such things couch them in fluff. They do no such thing.)
  Ah, I see. So the main thing I'm understanding here is that the meaning you were trying to convey to Duncan is understood, by you, as a fundamentally insulting one. You could "soften" it by the type of rewording I proposed. But this is not a case where you mean to say something non-insulting, and it comes out sounding insulting by accident. Instead, you mean to say something insulting, and so you're just saying it, understanding that the other person will probably, very naturally, feel insulted. An example of saying something fundamentally insulting is to tell somebody that you think they are stupid or ugly. You are making a statement of this kind. Is that correct?
7Said Achmiz5mo
No, I don’t think so… But this comment of yours baffles me. Did we not already cover this ground?
Then what did you mean by this: My understanding of this statement was that you are asserting that the core meaning of the original quote by you, in both your original version and my rewrite, was a fundamentally insulting one. Are you saying it was a different kind of fundamental insult from calling somebody stupid or ugly? Or are you now saying it was not an insult?
1Said Achmiz5mo
Well, firstly—as I say here, I think that there’s a subtle difference between “insulting” and “an insult”. But that’s perhaps not the key point. That aside, it really seems like your question is answered, very explicitly, in this earlier comment of mine. But let’s try again: Is my comment insulting? Yes, as I said earlier, I think that it is (or at least, it would not be unreasonable for someone to perceive it thus). (Should it be insulting? Who knows; it’s complicated. Is it gratuitously insulting, or insulting in a way that is extraneous to its propositional meaning? No, I don’t think so. Would all / most people perceive it as insulting if they were its target? No / probably, respectively. Is it possible not to be insulted by it? Yes, it’s possible; as I said earlier, I’ve had this sort of thing said to me, many times, and I have generally failed to be insulted by it. Is it possible for Duncan, specifically, to not be insulted by that comment as written by me, specifically? I don’t know; probably not. Is that, specifically, un-virtuous of Duncan? No, probably not.) Is my comment thereby similar to other things which are also insulting, in that it shares with those other things the quality of being insulting? By definition, yes. Is it insulting in the same way as is calling someone stupid, or calling someone ugly? No, all three of these are different things, which can all be said to be insulting in some way, but not in the same way.
OK, this is helpful. So it sounds like you perceive your comment as conveying information - a fact or a sober judgment of yours - that will, in its substance, tend to trigger a feeling of being insulted in the other person, possibly because they are sensitive to that fact or judgment being called to their attention. But it is not primarily intended by you to provoke that feeling of being insulted. You might prefer it if the other person did not experience the feeling of being insulted (or you might simply not care) - your aim is to convey the information, irrespective of whether or not it makes the other person feel insulted. Is that correct?
4Said Achmiz5mo
Sounds about right.
Now that we've established this, what is your goal when you make insulting comments? (Note: I'll refer to your comments as "insulting comments," defined in the way I described in my previous comment). If you subscribe to a utilitarian framework, how does the cost/benefit analysis work out? If you are a virtue ethicist, what virtue are you practicing? If you are a deontologist, what maxim are you using? If none of these characterizes the normative beliefs you're acting under, then please articulate what motivates you to make them in whatever manner makes sense to you. Making statements, however true, that you expect to make the other person feel insulted seems like a substantial drawback that needs some rationale.

If you care more about not making social attacks than telling the truth, you will get an environment which does not tell the truth when it might be socially inconvenient. And the truth is almost always socially inconvenient to someone.

So if you are a rationalist, i.e. someone who strongly cares about truth-seeking, this is highly undesirable.

Most people are not capable of executing on this obvious truth even when they try hard; the instinct to socially-smooth is too strong. The people who are capable of executing on it are, generally, big-D Disagreeable, and therefore also usually little-d disagreeable and often unpleasant. (I count myself as all three, TBC. I'd guess Said would as well, but won't put words in his mouth.)

Yes, caring too much about not offending people means that people do not call out bullshit. However, are rude environments more rational? Or do they just have different ways of optimizing for something other than truth? -- Just guessing here, but maybe disagreeable people derive too much pleasure from disagreeing with someone, or offending someone, so their debates skew that way. (How many "harsh truths" are not true at all, but are just popular because they offend someone?) (When I tried to think about examples, I thought I found one: the military. No one cares about the feelings of their subordinates, and yet things get done. However, people in the military care about not offending their superiors. So, probably not a convincing example for either side of the argument.)

I'm sure there is an amount of rudeness which generates more optimization-away-from-truth than it prevents. I'm less sure that this is a level of rudeness achievable in actual human societies. And for whether LW could attain that level of rudeness within five years even if it started pushing for rudeness as normative immediately and never touched the brakes - well, I'm pretty sure it couldn't. You'd need to replace most of the mod team (stereotypically, with New Yorkers, which TBF seems both feasible and plausibly effective) to get that to actually stick, probably, and it'd still be a large ship turning slowly.

A monoculture is generally bad, so having a diversity of permitted conduct is probably a good idea regardless. That's extremely hard to measure, so as a proxy, ensuring there are people representing both extremes who are prolific and part of most important conversations will do well enough.

I am probably just saying the obvious here, but a rude environment is not only one where people say true things rudely, but also one where people say false things rudely. So when we imagine the interactions that happen there, it is not just "someone says the truth, ignoring the social consequences", which many people would approve of, but also "someone tries to explain something complicated, and people not only respond by misunderstanding and making fallacies, but they are also assholes about it", where many people would be tempted to say 'fuck this' and walk away. So the website would gravitate towards a monoculture anyway. (I wanted to give TheMotte as an example of a place that is further in that direction where the quality seems to be lower... but I just noticed that the place is effectively dead.)

a rude environment is not only one where people say true things rudely, but also where people say false things rudely

The concern is with requiring the kind of politeness that induces substantive self-censorship. This reduces efficiency of communicating dissenting observations, sometimes drastically. This favors beliefs/arguments that fit the reigning vibe.

The problems with (tolerating) rudeness don't seem as asymmetric, it's a problem across the board, as you say. It's a price to consider for getting rid of the asymmetry of over-the-top substantive-self-censorship-inducing politeness.

The Motte has its own site now. (I agree the quality is lower than LW, or at least it was several months ago and that's part of why I stopped reading. Though idk if I'd attribute that to rudeness.)
I do not think that is the usual result.
2[comment deleted]5mo
1M. Y. Zuo5mo
There's another example: frats. Even though the older frat members harass their juniors via hazing rituals and so on, the new members wouldn't stick around if they genuinely thought the older members were disagreeable people out to get them.
4Said Achmiz5mo
I write comments for many different reasons. (See this, this, etc.) Whether a comment happens to be (or be likely to be perceived as) “insulting” or not generally doesn’t change those reasons. I do not agree. Please see this comment and this comment for more details on my approach to such matters.
OK, I have read the comments you linked. My understanding is this:

* You understand that you have a reputation for making comments perceived as social attacks, although you don't intend them as such.
* You don't care whether or not the other person feels insulted by what you have to say. It's just not a moral consideration for your commenting behavior.
* Your aesthetic is that you prefer to accept that what you have to say has an insulting meaning, and to just say it clearly and succinctly.

Do you care about the manner in which other people talk to you? For example, if somebody wished to say something with an insulting meaning to you, would you prefer them to say it to you in the same way you say such things to others?

(Incidentally, I don't know who's been going through our comment thread downvoting you, but it wasn't me. I'm saying this because I now see myself being downvoted, and I suspect it may be retaliation from you, but I am not sure about that.)

You understand that you have a reputation for making comments perceived as social attacks, although you don’t intend them as such.

I have (it would seem) a reputation for making certain sorts of comments, which are of course not intended as “attacks” of any sort (social, personal, etc.), but which are sometimes perceived as such—and which perception, in my view, reflects quite poorly on those who thus perceive said comments.

You don’t care whether or not the other person feels insulted by what you have to say. It’s just not a moral consideration for your commenting behavior.

Certainly I would prefer that things were otherwise. (Isn’t this often the case, for all of us?) But this cannot be a reason to avoid making such comments; to do so would be even more blameworthy, morally speaking, than is the habit on the part of certain interlocutors to take those comments as attacks in the first place. (See also this old comment thread, which deals with the general questions of whether, and how, to alter one’s behavior in response to purported offense experienced by some person.)

Your aesthetic is that you prefer to accept that what you have to say has an insulting meaning, and to just

... (read more)
3[DEACTIVATED] Duncan Sabien5mo
Just a small note that "Said interpreting someone as [interpreting Said's comment as an attack]" is, in my own personal experience, not particularly correlated with [that person in fact having interpreted Said's comment as an attack]. Said has, in the past, seemed to have perceived me as perceiving him as attacking me, when in fact I was objecting to his comments for other reasons, and did not perceive them as an attack, and did not describe them as attacks, either.
0Said Achmiz5mo
The comment you quoted was not, in fact, about you. It was about this (which you can see if you read the thread in which you’re commenting). Note that in the linked discussion thread, it is not I, but someone else, who claims that certain of my comments are perceived as attacks. In short, your comment is a non sequitur in this context.
0[DEACTIVATED] Duncan Sabien5mo
No, it's relevant context, especially given that you're saying in the above ~[and I judge people for it]. (To be clear, I didn't think that the comment I quoted was about me. Added a small edit to make that clearer.)
I wrote about five paragraphs in response to this, which I am fine with sharing with you on two conditions. First, because my honest answer contains quite a bit of potentially insulting commentary toward you (expressed in the same matter of fact tone I've tried to adopt throughout our interaction here), I want your explicit approval to share it. I am open to not sharing it, DMing it to you, or posting it here. Secondly, if I do share it, I want you to precommit not to respond with insulting comments directed at me.
4Said Achmiz5mo
This seems like a very strange, and strangely unfair, condition. I can’t make much sense of it unless I read “insulting” as “deliberately insulting”, or “intentionally insulting”, or something like it. (But surely you don’t mean it that way, given the conversational context…?) Could you explain the point of this? I find that I’m increasingly perplexed by just what the heck is going on in this conversation, and this latest comment has made me more confused than ever…
Yes, it's definitely an unfair condition, and I knew that when I wrote it. Nevertheless - that is my condition. If you would prefer a vague answer with no preconditions: I am satisfying my curiosity about somebody who thinks very differently about commenting norms than I do.
4Said Achmiz5mo
Alright, thanks.
2Said Achmiz5mo
I did (weak-)downvote one comment of yours in this comment section, but only one. If you’re seeing multiple comments downvoted, then those downvotes aren’t from me. (Of course I don’t know how I’d prove that… but for whatever my word’s worth, you have it.)
I believe you, and it doesn't matter to me. I just didn't want you to perceive me incorrectly as downvoting you.
I like the norm of discussing a hypothetical interpretation you find interesting/relevant, without a need to discuss (let alone justify) its relation to the original statement or God forbid intended meaning. If someone finds it interesting to move the hypothetical in another direction (perhaps towards the original statement, or even intended meaning), that is a move of the same kind, not a move of a different and privileged kind.
4Said Achmiz5mo
I agree that this can often be a reasonable and interesting thing to do. I would certainly not support any such thing becoming expected or mandatory. (Not that you implied such a thing—I just want to forestall the obvious bad extrapolation.)
Do you mean that you don't support the norm under which hypothetical interpretations of statements need not justify themselves as being related to those statements? In other words, that (1) you endorse the need to justify discussion of hypothetical interpretations of statements by showing those interpretations to be related to the statements they interpret, or something like that? Or (2) that you don't endorse endless tangents becoming the norm, forgetting about the original statement? The daisy chain is too long. It's unclear how to shape the latter option with policy. For the former option, the issue is demand for particular proof. Things can be interesting for whatever reason; it doesn't have to be a standard kind of reason. Prohibiting arbitrary reasons is damaging to the results, in this case I think for no gain.

Do you mean that … (1) you endorse the need to justify discussion of hypothetical interpretations of statements by showing those interpretations to be related to the statements they interpret, or something like that?

No, absolutely not.

Or (2) that you don’t endorse endless tangents becoming the norm, forgetting about the original statement?


My view is that first it’s important to get clear on what was meant by some claim or statement or what have you. Then we can discuss whatever. (If that “whatever” includes some hypothetical interpretation of the original (ambiguous) claim, which someone in the conversation found interesting—sure, why not.) Or, at the very least, it’s important to get that clarity regardless—the tangent can proceed in parallel, if it’s something the participants wish.

EDIT: More than anything, what I don’t endorse is a norm that says that someone asking “what did you mean by that word/phrase/sentence/etc.?” must provide some interpretation of their own, whether that be a guess at the OP’s meaning, or some hypothetical, or what have you. Just plain asking “what did you mean by that?” should be ok!

Things can be interesting for whatever reason, doesn’t have to be a standard kind of reason. Prohibiting arbitrary reasons is damaging to the results, in this case I think for no gain.

Totally agreed.

7Said Achmiz5mo
(Expanding on this comment)

The key thing missing from your account of my views is that while I certainly think that “local validity checking” is important, I also—and, perhaps, more importantly—think that the interactions in question are not only fine, but good, in a “relational” sense.

So, for example, it’s not just that a comment that just says “What are some examples of this?” doesn’t, by itself, break any rules or norms, and is “locally valid”. It’s that it’s a positive contribution to the discussion, which is aimed at (a) helping a post author to get the greatest use out of his post and the process and experience of posting it, and (b) helping the commentariat get the greatest use out of the author’s post. (Of course, (b) is more important than (a)—but they are both important!)

Some points that follow from this, or depend on this:

First, such contributions should be socially rewarded to the degree that they are necessary. By “necessary”, here, I mean that if it is the case that some particular sort of criticism or some particular sort of question is good (i.e., it contributes substantially to how much use can be gotten out of a post), but usually nobody asks that sort of question or makes that sort of criticism, then anyone who does do that, should be seen as making not only a good but a very important contribution. (And it’s a bad sign when this sort of thing is common—it means that at least some sorts of important criticisms, or some sorts of important questions, are not asked nearly often enough!)

Meanwhile, asking a sort of question or making a sort of criticism which is equally good but is usually or often made, such that it is fairly predictable and authors can, with decent probability, expect to get it, then such a question or criticism is still good and praiseworthy, but not individually as important (though of course still virtuous!). In the limit, an author will know that if they don’t address something in their post, somebody will ask about it
6Said Achmiz5mo
Thank you for laying out your reasoning. I don’t have any strong objections to any of this (various minor ones, but that’s to be expected)… … except the last paragraph (#5, starting with “I think Said is trying to figure out …”). There I think you importantly mis-characterize my views; or, to be more precise, you leave out a major aspect, which (in addition to being a missing key point), by its absence colors the rest of your characterization. (What is there is not wrong, per se, but, again, the missing aspect makes it importantly misleading.) I would, of course, normally elaborate here, but I hesitate to end up with this comment thread/section being filled with my comments. Let me know if you want me to give my thoughts on this in detail here, or elsewhere. (EDIT: Now expanded upon in this comment.)
I would appreciate more color on your views; by that point I was veering into speculation and hesitant to go too much further, which naturally leads to incompleteness.
4[DEACTIVATED] Duncan Sabien5mo
By the way, I will note that I am both quite surprised and, separately, something like dismayed, at how devastatingly effective has been what I will characterize as "Said's privileging-the-hypothesis gambit."

Like, Said proposed, essentially, "Duncan holds a position which basically no sane person would advocate, and he has somehow held this position for years without anyone noticing, and he conspicuously left this position out of his very-in-depth statement of his beliefs about discourse norms just a couple of months ago" and if I had realized that I actually needed to seriously counter this claim, I might have started with "bro do you even Bayes?" (Surely a reasonable prior on someone holding such a position is very very very low even before taking into account the latter parts of the conjunction.)

Like, that Vaniver would go so far as to take the hypothesis and then go sifting through the past few comments with an eye toward using them to distinguish between "true" and "false" is startling to me. The observation "Duncan groused at Said for doing too little interpretive and intellectual labor relative to that which he solicited from others" is not adequate support for "Duncan generally thinks that asking for examples is unacceptable." This is what I meant by the strength of the phrase "blatant falsehood."

I suppose if you are starting from "either Mortimer Snodgrass did it, or not," rather than from "I wonder who did the murder," then you can squint at my previous comments—

(including the one that was satirical, which satire, I infer from Vaniver pinging me about my beliefs on that particular phrase offline, was missed)

—and see in them that the murderer has dark hair, and conclude from Mortimer's dark hair that there should be a large update toward his guilt. But I rather thought we didn't do that around here, and did not expect anyone besides Said to seriously entertain the hypothesis, which is ludicrous.

(I get that Said probably genuinely believed it

Again, just chiming in, leaving the actual decision up to Ray: 

My current take here is indeed that Said's hypothesis, taken fully literally and within your frame, was quite confused and bad. 

But also, like, people's frames, especially in the domain of adversarial actions, hugely differ, and I've in the past been surprised by the degree to which some people's frames, despite seeming insane and gaslighty to me at first, turned out to be quite valuable. Most concretely, I have in my internal monologue indeed basically fully shifted towards using "lying" and "deception" the way Zack, Benquo and Jessica are using it, because their concept seems to carve reality at its joints much better than my previous concept of lying and deception. This despite me telling them many times that their usage of those terms is quite adversarial and gaslighty. 

My current model is that when Said was talking about the preference he ascribes to you, there is a bunch of miscommunication going on, and I probably also have deep disagreements with his underlying model, but I have updated against trying to stamp down on that kind of stuff super hard, even if it sounds quite adversarial to me on first gl... (read more)

I think you are mistaken about the process that generated my previous comment; I would have preferred a response that engaged more with what I wrote.

In particular, it looks to me like you think the core questions are "is the hypothesis I quote correct? Is it backed up by the four examples?", and the parent comment looks to me like you wrote it thinking I thought the hypothesis you quote is correct and backed up by the examples. I think my grandparent comment makes clear that I think the hypothesis you quote is not correct and is not backed up by the four examples. 

Why does the comment not just say "Duncan is straightforwardly right"? Well, I think we disagree about what the core questions are. If you are interested in engaging with that disagreement, so am I; I don't think it looks like your previous comment.

4[DEACTIVATED] Duncan Sabien5mo
(I intended to convey with "by the way" that I did not think I had (yet) responded to the full substance of your comment/that I was doing something of an aside.)
3[DEACTIVATED] Duncan Sabien5mo
I plan to just leave/not post essays here anymore if this isn't fixed. LW is a miserable place to be, right now. ¯\_(ツ)_/¯ (I also said the following in a chat with several of the moderators on 4/8: > I spent some time wondering if I would endorse a LW where both Duncan and Said were banned, and my conclusion was "yes, b/c that place sounds like it knows what it's for and is pruning and weeding accordingly.")
0[DEACTIVATED] Duncan Sabien5mo
I note that this is leaving out recent and relevant background mentioned in this comment.

I have not read all the words in this comment section, let alone in all the linked posts, let alone in their comments sections, but/and - it seems to me like there's something wrong with a process that generates SO MANY WORDS from SO MANY PEOPLE and takes up SO MUCH PERSON-TIME for what is essentially two people not getting along. I get that an individual social conflict can be a microcosm of important broader dynamics, and I suspect that Duncan and/or Said might find my "not getting along" summary trivializing, which may even be true, as noted I haven't read all the words - just, still, is this really the best thing for everyone involved to be doing with their time?

It is already happening, so the choices are either one big thread, or a dozen (not much smaller) ones.
Or at least, if there's something so compelling-in-some-way going on for some people that they want to keep engaging, at least we could hope that somehow they could be facilitated in doing mental work that will be helpful for whatever broader things there are. Like, if it's a microcosm of stuff, if it represents some important trends, if there's something important but hard to see without trying really hard, then it might be good for them to focus on that rather than being in a fight. (Of course, easier said than done(can); a lot of the ink spilled will feel like trying to touch on the broader things, but only some of it actually will.)

This seems like a situation that is likely to end up ballooning into something that takes up a lot of time and energy. So then, it seems worth deciding on an "appetite" up front. Is this worth an additional two hours of time? Six? Sixty? Deciding on that now will help avoid a scenario where (significantly) more time is spent than is desirable.

Skimmed all the comments here and wanted to throw in my 2c (while also being unlikely to substantively engage further, take that into account if you're thinking about responding):

  • It seems to me that people should spend less time litigating this particular fight and more time figuring out the net effects that Duncan and Said have on LW overall. It seems like mods may be dramatically underrating the value of their time and/or being way too procedurally careful here, and I would like to express that I'd support them saying stuff like "idk exactly what went wrong but you are causing many people on our site (including mods) to have an unproductive time, that's plenty of grounds for a ban".
  • It seems to me that many (probably most) people who engage with Said will end up having an unproductive and unpleasant time. So then my brain started generating solutions like "what if you added a flair to his comments saying 'often unproductive to engage'" and then I was like "wait this is clearly a missing stair situation (in terms of the structural features not the severity of the misbehavior) and people are in general way too slow to act on those; at the point where this seems like a plausibly-net-
... (read more)

Wei Dai had a comment below about how important it is to know whether there’s any criticism or not, but mostly I don’t care about this either because my prior is just that it’s bad whether or not there’s criticism. In other words, I think the only good approach here is to focus on farming the rare good stuff and ignoring the bad stuff (except for the stuff that ends up way overrated, like (IMO) Babble or Simulators, which I think should be called out directly).

But how do you find the rare good stuff amidst all the bad stuff? I tend to do it with a combination of looking at karma, checking the comments to see whether or not there’s good criticism, and finally reading it myself if it passes the previous two filters. But if a potentially good criticism was banned or disincentivized, then that 1) causes me to waste time (since it distorts both signals I rely on), and 2) potentially causes me to incorrectly judge the post as "good" because I fail to notice the flaw myself. So what do you do such that it doesn't matter whether or not there's criticism?

My approach is to read the title, then if I like it read the first paragraph, then if I like that skim the post, then in rare cases read the post in full (all informed by karma). I can't usually evaluate the quality of criticism without at least having skimmed the post. And once I've done that then I don't usually gain much from the criticisms (although I do agree they're sometimes useful). I'm partly informed here by the fact that I tend to find Said's criticisms unusually non-useful.

Thanks for weighing in! Fwiw I've been skimming but not particularly focused on the litigation of the current dispute, and instead focusing on broader patterns. (I think some amount of litigation of the object level was worth doing but we're past the point where I expect marginal efforts there to help)

One of the things that's most cruxy to me is what people who contribute a lot of top content* feel about the broader patterns, so, I appreciate you chiming in here.

*roughly operationalized as "write stuff that ends up in the top 20 or top 50 of the annual review"

Makes sense. FYI I personally haven't had bad experiences with Said (and in fact I remember talking to mods who were at one point surprised by how positively he engaged with some of my posts). My main concern here is the missing stair dynamic of "predictable problem that newcomers will face".
7Said Achmiz5mo
You know, I’ve seen this sort of characterization of my commenting activity quite a few times in these discussions, and I’ve mostly shrugged it off; but (with apologies, as I don’t mean to single you out, and indeed you’re one of the LW members whom I respect significantly more than average) I think at this point I have to take the time to address it.

My objection is simply this: Is it actually true that I “comment pessimistically on lots of stuff”? Do I do this more than other people?

There are many ways of operationalizing that, of course. Here’s one that seems reasonable to me: let’s find all the posts (not counting “meta”-type posts that are already about me, or referring to me, or having to do with moderation norms that affect me, etc.) on which I’ve commented “pessimistically” in, let’s say, the last six months, and see if my comments are, in their level of “pessimism”, distinguishable from those of other commenters there; and also what the results of those comments turn out to be.

#1: https://www.lesswrong.com/posts/Hsix7D2rHyumLAAys/run-posts-by-orgs — Multiple people commenting in similarly “pessimistic” ways, including me. The most, shall we say, vigorous, discussion that takes place there doesn’t involve me at all.

#2: https://www.lesswrong.com/posts/2yWnNxEPuLnujxKiW/tabooing-frame-control — My overall view is certainly critical, but here I write multiple medium-length comments, which contain substantive analyses of the concept being discussed. (There is, however, a very brief comment from someone else which is just a request—or “demand”?—for clarification; such is given, without protest.)

#3: https://www.lesswrong.com/posts/67NrgoFKCWmnG3afd/you-ll-never-persuade-people-like-that — Here I post what can be said to be a critical comment, but one that offers my own take. Other comments are substantially more critical than mine.

#4: https://www.lesswrong.com/posts/Y4hN7SkTwnKPNCPx5/why-don-t-more-people-talk-about-ecological-psychology#JcADzrnoJjhFHWE5W
Not responding to the main claim, cos mods have way more context on this than me, will defer to them. Very plausibly. But pessimism itself isn't bad, the question is whether it's the sort of pessimism that leads to better content or the sort that leads to worse content. Where, again, I'm going to defer to mods since they've aggregated much more data on how your commenting patterns affect people's posting patterns.

Here is some information about my relationship with posting essays and comments to LessWrong. I originally wrote it for a different context (in response to a discussion about how many people avoid LW because the comments are too nitpicky/counterproductive) so it's not engaging directly with anything in the OP, but @Raemon mentioned it would be useful to have here.


I *do* post on LW, but in a very different way than I think I would ideally. For example, I can imagine a world where I post my thoughts piecemeal pretty much as I have them, where I have a research agenda or a sequence in mind and I post each piece *as* I write it, in the hope that engagement with my writing will inform what I think, do, and write next. Instead, I do a year's worth of work (or more), make a 10-essay sequence, send it through many rounds of editing, and only begin publishing any part of it when I'm completely done, having decided in advance to mostly ignore the comments.

It appears to me that what I write is strongly in line with the vision of LW (as I understand it; my understanding is more an extrapolation of Eliezer's founding essays and the name of the site than a reflection of discussion with current ... (read more)

I also have the sense that most posts don't get enough / any high-quality engagement, and my bar for such engagement is likely lower than yours. I suspect though that the main culprit here is not the site culture, but instead a bunch of related reasons: the sheer amount of words on the site and in each essay, which cause the readership to spread out over a gigantic corpus of work; standard Internet engagement patterns (only a small fraction of readers write comments, and only a small fraction of those are high-quality); median LW essays receive too few views to produce healthy discussions; high-average-quality commenters are rare on the Internet, and their comments are spread out over everything they read; imperfect karma incentives; etc. Are there ways for individuals to reliably get a number of comments sufficiently large to produce the occasional high-quality engagement? The only ways I've seen are for them to either already be famous essayists (e.g. the comments sections on ACX or Slow Boring are sufficiently big to contain the occasional gem), or to post in their own Facebook community or something. Feed-like sites like Facebook suffer from their recency bias, however, which is kind of antithetical to the goal of writing truth-seeking and timeless essays.
Strong agree. Though, on an uncharitable view of my own behavior, I also engage in the commenting behavior in question. One can dream of some genius cracking the filtering problem and creating a criss-crossing tesseract of subcultures that can occupy the same space (e.g. LW) but go off in their own shared-goals directions (those people who jam and analyze with each other; those people who carefully nitpick and verify; those people who gather facts; those people who just vibe; ...).

Okay, overall outline of thoughts on my mind here:

  • What actually happened in the recent set of exchanges? Did anyone break any site norms? Did anyone do things that maybe should be site norms but we hadn't actually made it an explicit rule and we should take the opportunity to develop some case law and warn people not to do it in the future?
  • 5 years ago, the moderation team issued Said a mod warning about a common pattern of engagement of his that a lot of people have complained about (this was operationalized as "demanding more interpretive labor than he has given"). We said if he did it again we'd ban him for a month. My vague recollection is he basically didn't do it for a couple years after the warning, but maybe started to somewhat over the past couple years; I'm not sure. (I think he may have not done the particular thing we asked him not to, but I've had a growing sense of his commenting making me more wary of how I use the site.) What are my overall thoughts on that?
  • Various LW team members have concerns about how Duncan handles conflict. I'm a bit confused about how to think about it in this case. I think a number of other users are worried about this too. We should prob
... (read more)

Maybe explicit rules against blocking users from "norm-setting" posts.

On blocking users from commenting 

I still endorse authors being able to block other users (whether for principled reasons, or just "this user is annoying"). I think a) it's actually really important for authors that the site be fun to use, b) there's a lot of users who are dealbreakingly annoying to some people but not others. Banning them from the whole site would be overkill. c) authors aren't obligated to lend their own karma/reputation to give space to other people's content. If an author doesn't want your comments on his post, whether for defensible reasons or not, I think it's an okay answer that those commenters make their own post or shortform arguing the point elsewhere. 

Yes, there are some trivial inconveniences to posting that criticism. I do track that in the cost. But I think that is outweighed by the effect on authors being motivated to post.

That all said...

Blocking users on "norm-setting posts"

I think it's more worrisome to block users on posts that are making major momentum towards changing site norms/culture. I don't think the censorship effects are that strong or distorting in most c... (read more)

This is exactly why I wrote Here's Why I'm Hesitant To Respond In More Depth. The purpose wasn't just to explain myself to somebody specific. It was to give myself an alternative resource when I received a specific type of common feedback that was giving me negative vibes. Instead of my usual behaviors (get in an argument, ignore and feel bad, downvote without explanation, or whatever), I could link to this post, which conveyed more detail, warmth, and charity than I would be able to muster reliably or in the moment. I advocate that others write their own versions tailored to their particular sensitivities, and I think it would be a step toward a healthier site culture.
2Jasnah Kholin5mo
"I do generally wish Duncan did more of this and less trying to set-the-record straight in ways that escalate in IMO very costly ways" strongly agree.
2[DEACTIVATED] Duncan Sabien5mo
I note for context/as a bit of explanation that Zack was blocked because of having shot from the hip with "This is insane" on what was literally a previous partial draft of that very post (made public by accident); I didn't want a repeat of a specific sort of interaction I had specific reason to fear.

Recap of mod team history with Said Achmiz

First, some background context. When LW2.0 was first launched, the mod team had several back-and-forths with Said over complaints about his commenting style. He was (and I think still is) the most-complained-about LW user. We considered banning him. 

Ultimately we told him this:

As Eliezer is wont to say, things are often bad because the way in which they are bad is a Nash equilibrium. If I attempt to apply it here, it suggests we need both a great generative and a great evaluative process before the standards problem is solved, at the same time as the actually-having-a-community-who-likes-to-contribute-thoughtful-and-effortful-essays-about-important-topics problem is solved, and only having one solved does not solve the problem.

I, Oli and Ray will build a better evaluative process for this online community, that incentivises powerful criticism. But right now this site is trying to build a place where we can be generative (and evaluative) together in a way that's fun and not aggressive. While we have an incentive toward better ideas (weighted karma and curation), it is far from a finished system. We have to build this part as well as the

... (read more)

I think some additional relevant context is this discussion from three years ago, which I think was 1) an example of Said asking for definitions without doing any interpretive labor, 2) appreciated by some commenters (including the post author, me), and 3) reacted to strongly by people who expected it to go poorly, including some mods. I can't quickly find any summaries we posted after the fact. 

Death by a thousand cuts and "proportionate"(?) response

A way this all feels relevant to current disputes with Duncan is that the thing that is frustrating about Said is not any individual comment, but an overall pattern that doesn't emerge as extremely costly until you see the whole thing. (i.e. if there's a spectrum of how bad behavior is, from 0-10, and things that are a "3" are considered bad enough to punish, someone who's doing things that are bad at a "2.5" or "2.9" level doesn't quite feel worth reacting to. But if someone does them a lot, it actually adds up to being pretty bad.)

If you point this out, people mostly shrug and move on with their day. So, to point it out in a way that people actually listen to, you have to do something that looks disproportionate if you're just paying attention to the current situation. And, also, the people who care strongly enough to see that through tend to be in an extra-triggered/frustrated state, which means they're not at their best when they're doing it.

I think Duncan's response is out of proportion to some degree (see the Vaniver thread for some reasons why. I have some more reasons I ... (read more)

Personally, the thing I think should change with Said is that we need more of him, preferably a dozen more people doing the same thing. If there were a competing site run according to Said's norms, it would be much better for pursuing the art of rationality than modern LessWrong is; disagreeable challenges to question-framing and social moves are desperately necessary to keep discussion norms truth-tracking rather than convenience-tracking.

But this is not an argument I expect to be able to win without actually trying the experiment. And even then I would expect at least five years would be required to get unambiguous results.

It would definitely be an interesting experiment. Different people would make different predictions about its outcome, but that's exactly what experiments are good for. (My bet would be that the participants would only discuss "safe" topics, such as math and programming.)
3[DEACTIVATED] Duncan Sabien5mo
When Said was spilling thousands of words uncharitably psychoanalyzing me last week, I asked for mod help, and got none. I did, in fact, try the strategy of "don't engage much" (I think I left like three total comments to Said's dozens) and "get someone else to handle the conflict," and the moderators demurred. If you don't want me to defend myself my way, please make it not necessary to defend myself.

I am not sure what you mean; didn't Ray respond on the same day that you tagged him?

I haven't read the details of all of the threads, but I interpreted your comment here as "the mod team ignored your call for clarification" as opposed to "the mod team did respond to your call for clarification basically immediately, but there was some <unspecified issue> with it".

0[DEACTIVATED] Duncan Sabien5mo
He responded to say ~"I don't like this much but we're not gonna do anything." EDIT: to elaborate, Ray actually put quite a bit of effort into a back-and-forth with Said, and eventually asked him to stop commenting/put a pause on the whole conversation. But there wasn't any "this thing that Said was doing before I showed up is not clearing the bar for LW."