Thanks for making this post!
One of the reasons I like rate-limits instead of bans is that they allow people to complain about the rate-limiting and to participate in discussion on their own posts. So a harsh rate-limit of something like "1 comment per 3 days" is not equivalent to a general ban from LessWrong, but should instead be interpreted as "please comment primarily on your own posts", though of course it shares many important properties of a ban.
Things that seem most important to bring up in terms of moderation philosophy:
Moderation on LessWrong does not depend on effort
Another thing I've noticed is that almost all the users are trying. They are trying to use rationality, trying to understand what's been written here, trying to apply Bayes' rule or understand AI. Even some of the users with negative karma are trying, just having more difficulty.
Just because someone is genuinely trying to contribute to LessWrong, does not mean LessWrong is a good place for them. LessWrong has a particular culture, with particular standards and particular interests, and I think many people, even if they are genuinely trying, don't fit well within that culture and those standards.
In making rate-limiting decisions like this, I don't pay much attention to whether the user in question is "genuinely trying" to contribute to LW. I am mostly just evaluating the effects I see their actions having on the quality of the discussions happening on the site, and the quality of the ideas they are contributing.
Motivation and goals are of course a relevant component to model, but that mostly pushes in the opposite direction, in that if I have someone who seems to be making great contributions, and I learn they aren't even trying, then that makes me more excited, since there is upside if they do become more motivated in the future.
Signal to Noise ratio is important
Thomas and Elizabeth pointed this out already, but just because someone's comments don't seem actively bad doesn't mean I don't want to limit their ability to contribute. We do a lot of things on LW to improve the signal-to-noise ratio of content on the site, and one of those things is to reduce the amount of noise, even if the average item we remove doesn't look actively harmful.
Removing lower-signal content is of course not the only thing we do to improve the signal-to-noise ratio. Voting does a lot, how we sort the frontpage does a lot, subscriptions and notification systems do a lot. But rate-limiting is also a tool I use for the same purpose.
Old users are owed explanations, new users are (mostly) not
If you've been around for a while on LessWrong and I decide to rate-limit you, then I think it makes sense for me to make some time to argue with you about that, and give you the opportunity to convince me that I am wrong. But if you are new and haven't invested a lot in the site, then I think I owe you relatively little.
I think in doing the above rate-limits, we did not do enough to give established users the affordance to push back and argue with us about them. I do think most of these users are relatively recent or are users we've been very straightforward with since shortly after they started commenting that we don't think they are breaking even on their contributions to the site (like the OP Gerald Monroe, with whom we had 3 separate conversations over the past few months), and for those I don't think we owe them much of an explanation. LessWrong is a walled garden.
You do not by default have the right to be here, and I don't want to, and cannot, accept the burden of explaining my decisions to everyone who wants to be here but whom I don't want here. As such, a moderation principle that we've been aspiring to for quite a while is to let new users know as early as possible if we think their being on the site is unlikely to work out, so that if you have been around for a while you can feel stable, and also so that you don't invest in something that will end up being taken away from you.
Feedback helps a bit, especially if you are young, but usually doesn't
Maybe there are other people who are much better at giving feedback and helping people grow as commenters, but my personal experience is that giving users feedback, especially the second or third time, rarely tends to substantially improve things.
I think this sucks. I would much rather be in a world where the usual reasons why I think someone isn't positively contributing to LessWrong were of the type that a short conversation could clear up and fix, but it alas does not appear so, and after having spent many hundreds of hours over the years giving people individualized feedback, I don't really think "give people specific and detailed feedback" is a viable moderation strategy, at least more than once or twice per user. I recognize that this can feel unfair on the receiving end, and I also feel sad about it.
I do think the one exception here is if people are young or are non-native English speakers. Do let me know if you are in your teens or are a non-native English speaker who is still learning the language. People really do get a lot better at communication between the ages of 14-22, and people's English does get substantially better over time, and this helps with all kinds of communication issues.
We consider legibility, but it's only a relatively small input into our moderation decisions
Making it easy to know which actions will cause you to be removed from a space is a valuable and precious public good. However, that legibility also comes at great cost, especially in social contexts. Every clear bright-line rule you outline will have people butting right up against it, and de-facto, in my experience, moderation of social spaces like LessWrong is not the kind of thing you can do while being legible in the way that, for example, modern courts aim to be legible.
As such, we don't have laws. If anything we have something like case-law which gets established as individual moderation disputes arise, which we then use as guidelines for future decisions, but also a huge fraction of our moderation decisions are downstream of complicated models we formed about what kind of conversations and interactions work on LessWrong, and what role we want LessWrong to play in the broader world, and those shift and change as new evidence comes in and the world changes.
I do ultimately still try pretty hard to give people guidelines and to draw lines that help people feel secure in their relationship to LessWrong, and I care a lot about this, but at the end of the day I will still make many decisions that seem arbitrary from the outside in order to keep LessWrong the precious walled garden that it is.
I try really hard to not build an ideological echo chamber
When making moderation decisions, it's always at the top of my mind whether I am tempted to make a decision one way or another because the user disagrees with me on some object-level issue. I try pretty hard to not have that affect my decisions, and as a result have what feels to me a subjectively substantially higher standard for rate-limiting or banning people who disagree with me than for people who agree with me. I think this is reflected in the decisions above.
I do feel comfortable judging people on the methodologies and abstract principles that they seem to use to arrive at their conclusions. LessWrong has a specific epistemology, and I care about protecting that. If you are primarily trying to...
then LW is probably not for you, and I feel fine with that. I feel comfortable reducing the visibility or volume of content on the site that is in conflict with these epistemological principles (of course this list isn't exhaustive, in-general the LW sequences are the best pointer towards the epistemological foundations of the site).
If you see me or other LW moderators fail to judge people on epistemological principles but instead see us directly rate-limiting or banning users on the basis of object-level opinions that even if they seem wrong seem to have been arrived at via relatively sane principles, then I do really think you should complain and push back at us. I see my mandate as head of LW to only extend towards enforcing what seems to me the shared epistemological foundation of LW, and to not have the mandate to enforce my own object-level beliefs on the participants of this site.
Now some more comments on the object-level:
I overall feel good about rate-limiting everyone on the above list. I think it will probably make the conversations on the site go better and make more people contribute to the site.
Us doing more extensive rate-limiting is an experiment, and we will see how it goes. As kave said in the other response to this post, the rule that suggested these specific rate-limits does not seem like it has an amazing track record, though I currently endorse it as something that calls things to my attention (among many other heuristics).
Also, if anyone reading this is worried about being rate-limited or banned in the future, feel free to reach out to me or other moderators on Intercom. I am generally happy to give people direct and frank feedback about their contributions to the site, as well as how likely I am to take future moderator actions. Uncertainty is costly, and I think it's worth a lot of my time to help people understand to what degree investing in LessWrong makes sense for them.
I very much appreciate @habryka taking the time to lay out your thoughts; posting like this is also a great example of modeling out your principles. I've spent copious amounts of time shaping the Manifold community's discourse and norms, and this comment has a mix of patterns I find true out of my own experiences (eg the bits about case law and avoiding echo chambers), and good learnings for me (eg young/non-English speakers improve more easily).
Re: post/comment quality, one thing I do suspect helps which I didn't see anyone mention (and imo a potential upside of rate-limiting) is that age-old forum standard, lurking moar. I think it can actually be hugely valuable to spend a while reading the historical and present discussion of a site and absorbing its norms of discourse before attempting to contribute; in particular, it's useful for picking up illegible subtleties of phrasing and thought that distinguish quality from non-quality contributors, and for getting a sense of the shared context and background knowledge that users expect each other to have.
So I'm one of the rate limited users. I suspect it's because I made a bad early April fools joke about a WorldsEnd movement that would encourage people to maximise utility over the next 25 years instead of pursuing long term goals for humanity like alignment. Made some people upset and it hit me that this site doesn't really have the right culture for those kinds of jokes. I apologise and don't contest being rate limited.
feature proposal: when someone is rate limited, they can still write comments. their comments are auto-delayed until the next time they'd be unratelimited. they can queue up to k comments before it behaves the same as it does now. I suggest k be 1. I expect this would reduce the emotional banneyness-feeling by around 10%.
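The mechanics of this proposal can be sketched in a few lines. This is purely illustrative; all class and method names here (`DelayedCommentQueue`, `submit`, `release`) are my inventions, not anything in the LessWrong codebase:

```python
from collections import deque
from datetime import datetime, timedelta

class DelayedCommentQueue:
    """Sketch of the proposal: a rate-limited user can queue up to
    `max_queued` comments, which post automatically once the limit
    lifts. Beyond that, submission behaves as it does today."""

    def __init__(self, max_queued: int = 1):
        self.max_queued = max_queued
        self.pending: deque[str] = deque()

    def submit(self, text: str, limited_until: datetime, now: datetime) -> str:
        if now >= limited_until:
            return "posted"            # not rate-limited: post immediately
        if len(self.pending) < self.max_queued:
            self.pending.append(text)  # hold until the rate limit expires
            return "queued"
        return "rejected"              # queue full: same as current behavior

    def release(self, limited_until: datetime, now: datetime) -> list[str]:
        """Publish all queued comments once the limit has expired."""
        if now < limited_until:
            return []
        released = list(self.pending)
        self.pending.clear()
        return released
```

With `max_queued=1` as suggested, the user always has one comment "in flight", which is plausibly where most of the reduction in felt banned-ness would come from.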
feature proposal: when someone is ratelimited, the moderators can give a public reason and/or a private reason. if the reason is public, it invites public feedback as well as indicating to users passing by what things might get moderated. ...
I strongly suspect that spending time building features for rate limited users is not valuable enough to be worthwhile. I suspect this mainly because:
> what feels to me a subjectively substantially higher standard for rate-limiting or banning people who disagree with me
Positions that are contrarian or wrong in intelligent ways (or within a limited scope of a few key beliefs) provoke valuable discussion, even when they are not supported by legible arguments on the contrarian/wrong side. Without them, there is an "everybody knows" problem where some important ideas are never debated or fail to become common knowledge. I feel there is less of that than optimal on LW; it's possible to target a level of disruption.
Have you or anyone else on the LW team written anywhere about the effects of your new rate-limiting infrastructure, which was IIRC implemented last year? E.g. have some metrics improved which you care about?
I think one outcome is ‘we’re actually willing to moderate at all on ambiguous cases’. For years we would accumulate a list of users that seemed like they warranted some kind of intervention, but banning them felt too harsh and they would sit there in an awkwardly growing pile and eventually we’d say ‘well I guess we’re not really going to take action’ and click the ‘approve’ button.
Having rate limits made it feel more possible to intervene, but it still required writing some kind of message which was still very time consuming.
Auto-rate-limits have done a pretty good job of handling most cases in a way I endorse, in a way that helps quickly instead of after months of handwringing.
The actual metric I’d want is ‘do users who produce good content enjoy the site more’, or ‘do readers, authors and/or commenters feel comment sections are better than they used to be?’. This is a bit hard to judge because there are other confounding factors. But it probably would be good to try checking somehow.
> users we've been very straightforward with since shortly after they started commenting that we don't think they are breaking even on their contributions to the site
Why don't you just say this? Also, give a general description of what "ROI" means from the point of view of your site. Reddit has no concept of this. I was completely unaware you had a positive goal.
There's thousands of message boards, this is literally the first one I have ever seen that has even the idea of ROI. Also while negative rules may not exist, positive ones are ...
We have really given you a lot of feedback and have communicated that we don't think you are breaking even. Here are some messages we sent to you:
April 7th 2023:
> You've been commenting fairly frequently, and my subjective impression as well as voting patterns suggest most people aren't finding your comments sufficiently helpful.
And from Ruby:
> ...In the "wrong" category, some of your criticisms of the Time piece post seemed to be failing to operate probabilistically, which is a fundamental basic I expect from LW users. "May not" is not sufficient argument. You need to talk about probabilities and why yours are different from others. "It's irrational to worry about X because it might not happen" does not cut it. That's just something that stuck out to me.
In my mind, the 1 contribution/day is better than a ban because it gives you a chance to improve your contributions and become unrestricted.
Regarding your near-1000 karma, this is not a great sign given you have nearly 900 comments, meaning your average comment is not getting much positive engagement. Unfortunately karma is an imperfect measure and captures the combination of "is good" and "engages a lot" and engaging a
I'd also want to add LW Team is adjusting moderation policy as a post that laid out some of our thinking here. One section that's particularly relevant/standalone:
...LessWrong has always had a goal of being a well-kept garden. We have higher and more opinionated standards than most of the rest of the internet. In many cases we treat some issues as more "settled" than the rest of the internet, so that instead of endlessly rehashing the same questions we can move on to solving more difficult and interesting questions.
What this translates to in terms of moderation policy is a bit murky. We've been stepping up moderation over the past couple months and frequently run into issues like "it seems like this comment is missing some kind of 'LessWrong basics', but 'the basics' aren't well indexed and easy to reference." It's also not quite clear how to handle that from a moderation perspective.
I'm hoping to improve on "'the basics' are better indexed", but meanwhile it's just generally the case that if you participate on LessWrong, you are expected to have absorbed the set of principles in The Sequences (AKA Rationality A-Z).
In some cases you can get away without doing that while pa
To answer, for now, just one piece of this post:
We're currently experimenting with a rule that flags users who've received several downvotes from "senior" users (I believe 5 downvotes from users with above 1,000 karma) on comments that are already net-negative (I believe counting comments posted in the last year).
We're currently in the manual review phase, so users are being flagged and then having the rate limit applied if it seems reasonable. For what it's worth, I don't think this rule has an amazing track record so far, but all the cases in the "rate limit wave" were reviewed by me and Habryka, and he decided to apply a limit in those cases.
(We applied some rate limit in 60% of the cases of users who got flagged by the rule).
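As I understand the rule described above, a minimal sketch of the flagging heuristic might look like the following. Everything here is my reconstruction from the description, not the actual implementation: the thresholds match the "I believe" figures above, and I'm assuming senior downvotes are aggregated across a user's recent net-negative comments:

```python
from dataclasses import dataclass

SENIOR_KARMA = 1000     # assumed: a downvoter counts as "senior" above this karma
SENIOR_DOWNVOTES = 5    # assumed: senior downvotes needed to flag for manual review
WINDOW_DAYS = 365       # assumed: only comments from the last year count

@dataclass
class Comment:
    net_karma: int               # comment's overall score
    downvoter_karmas: list[int]  # karma of each user who downvoted it
    age_days: int

def is_flagged(comments: list[Comment]) -> bool:
    """Flag a user for manual review if their recent net-negative comments
    collectively drew enough downvotes from high-karma ("senior") users."""
    senior_downvotes = 0
    for c in comments:
        if c.age_days > WINDOW_DAYS or c.net_karma >= 0:
            continue  # skip old comments and comments that aren't net-negative
        senior_downvotes += sum(1 for k in c.downvoter_karmas if k >= SENIOR_KARMA)
    return senior_downvotes >= SENIOR_DOWNVOTES
```

Note that flagging here only queues a user for manual review; per the 60% figure above, a flag did not automatically result in a rate limit.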
People who get manually rate-limited don't have an explanation visible when trying to comment (unlike users who are limited by an automatic rule, I think).
We have explained this to users that reached out (in fact this answer is adapted from one such conversation), but I do think we plausibly should have set up infrastructure to explain these new rate limits.
I have been in the position of trying to moderate a large and growing community - it was at 500k users last I checked, although I threw in the towel around 300k - and I know what a thankless, Sisyphean task it is.
I know what it is to have to explain the same - perfectly reasonable - rule/norm again and again and again.
I know what it is to try to cultivate and nurture a garden while hordes of barbarians trample all over the place.
But...
If it ain't broke, don't fix it.
I would argue that the majority of the listed people penalized are net contributors to lesswrong, including some who are strongly net positive.
I've noticed y'all have been tinkering in this space for a while. I think you're trying super hard to protect LessWrong from the Eternal September, and you actually seem to be succeeding, which is no small feat, buuut...
I do wonder if the team needs a break.
I think there's a thing that happens to gardeners (and here I'm using that as a very broad archetype), where we become attached to and identify with the work of weeding - of maintaining, of day after day holding back entropy - and cease to take pleasure in the garden itself.
As that sets in, even new growth begins to seem like a weed.
Are you able in my case to link the comment?
Doesn't this "in the last year" equate to retroactively creating a rule and applying it?
A year ago the only rule I saw enforced was positive karma. It was fine to get into arguments, fine to post as often as you felt like. Seems like I have been punished a lot retroactively.
Hey, I'm just some guy but I've been around for a while. I want to give you a piece of feedback that I got way back in 2009 which I am worried no one has given you. In 2009 I found lesswrong, and I really liked it, but I got downvoted a lot and people were like "hey, your comments and posts kinda suck". They said, although not in so many words, that basically I should try reading the sequences closely with some fair amount of reverence or something.
I did that, and it basically worked, in that I think I really did internalize a lot of the values/tastes/habits that I cared about learning from lesswrong, and learned much more so how to live in accordance with them. Now I think there were some sad things about this, in that I sort of accidentally killed some parts of the animal that I am, and it made me a bit less kind in some ways to people who were very different from me, but I am overall glad I did it. So, maybe you want to try that? Totally fair if you don't, definitely not costless, but I am glad that I did it to myself overall.
The CCP once ran a campaign asking for criticism and then purged everyone who engaged.
I'd be super wary of participating in threads such as this one. A year ago I participated in a similar thread and got the rate limit ban hit.
If you talk about the very valid criticisms of LessWrong (which you can only find off LessWrong) then expect to be rate limited.
If you talk about some of the nutty things the creator of this site has said that may as well be "AI will use Avada Kedavra" then expect to be rate limited.
I find it really sad honestly. The group think here is restrictive and bound up by verbose arguments that start with claims that someone hasn't read the site. Or that there are subjects that are settled and must not be discussed.
Rate limiting works to push away anyone even slightly outside the narrow view.
I think the creator of this site is like a bad L. Ron Hubbard quite frankly except they never succeeded with their sci-fi and so turned to being a doomed prophet.
But hey, don't talk about the weird stuff he has said. Don't talk about the magic assumption that AI will suddenly be able to crack all encryption instantly.
I stopped participating because of the rate limit. I don't think a read of my comments shows that I was participating in bad faith or ignorance.
I just don't fully agree...
Forums that do this just die eventually. This place will because no new advances can be made so long as there exists a body of so-called knowledge that you're required to agree with to even start participating.
Better conversations are happening elsewhere and have been for a while now.
https://en.m.wikipedia.org/wiki/Hundred_Flowers_Campaign is the source. Re-education camps to execution were the punishment.
Thank you for telling me about the rate limit a year ago. I thought I was the only one. Were you given any kind of feedback from the moderators for the reason you were punished such or an advance warning to give you the opportunity to change anything?
I am not a moderator, just sharing my hunches here.
I was only ratelimited for a day because I got in this fight.
re: Akram Choudhary - the example you give of a post by them is an exemplar of what habryka was talking about, the "you have to be joking". this site has very tight rules on what argumentation structure and tone is acceptable: generally low-emotional-intensity words and generally arguments need to be made in a highly step-by-step way to be held as valid. I don't know if that's the full reason for the mute.
you got upvoted on april 1 because you were saying the things that, if you said the non-sarcastic version about ai, would be in line with general yudkowskian-transhumanist consensus. you continue to confuse me. it might be worth having the actual technical discussions you'd like to have about ai under the comments of those posts. what would you post on the april fools posts if you had thought they were not april fools at all? perhaps you can examine the step by step ways your reactions to those posts differ from ai in order to extract cruxes?
Victor Ashioya was posting a high ratio of things that sounded like advertisements, which I and likely others would then downvote on the homepage, and which would then disappear. Presumably Victor would delete them when they got downvotes. some still remain, which should give you a sense of why they were getting downvotes. Or not, if you're so used to such things on twitter that they just seem normal.
I am surprised trevor, shminux, and noosphere are muted. I expect it is temporary, but if it is not, I would wonder why. I would require more evidence about the reasoning before I got pitchforky about it. (Incidentally, my willingness to get pitchforky fast may be a reason I get muted easily. Oh well.)
I don't have an impression of the others in either direction on this topic.
But in general, my hunch is that since I was on this list and my muting was only for a day, the same may be true for others as well.
I appreciate you getting defensive about it rather than silently disappearing, even though I have had frustrating interactions with you before. I expect this post to be in the negatives. I have not voted yet, but if it goes below zero, I will strong upvote.
> this site has very tight rules on what argumentation structure and tone is acceptable: generally low-emotional-intensity words and generally arguments need to be made in a highly step-by-step way to be held as valid.
I actually love this norm. It prevents emotions from affecting judgement, and laying out arguments step by step makes them easier to understand.
I spent several years moderating r/changemyview on Reddit which also has this rule. Having removed at least hundreds of comments that break it, I think the worst thing about it is that it rewards aloofness and punishes sincerity. That's acceptable to trade off to prevent the rise of very sincere flame wars, but it elevates people pretending to be wise at the expense of those with more experience who likely have more deeply held but also informed opinions about the subject matter. This was easily the most common moderation frustration expressed by users.
I don't really know, the best I can offer is sort of vaguely gesturing at LessWrong's moderation vector and pointing in a direction.
LW's rules go for a very soft, very subjective approach to definitions and rule enforcement. In essence, anything the moderators feel is against the LW ethos is against the rules here. That's the right approach to take in an environment where the biggest threat to good content is bad content. Hacker News also takes this approach and it works well - it keeps HN protected against non-hackers.
ChangeMyView is somewhat under threat of bad content - if too many people post on a soapbox, then productive commenters will lose hope and leave the subreddit. However it's also under threat of loss of buy-in - people with non-mainstream views, or those that would be likely to attract backlash elsewhere need to feel that the space is safe for them to explore.
When optimising for buy-in, strictness and clarity is desirable. We had roughly consistent standards in terms of numbers of violations, to earn a ban, and consistently escalating bans (3 days, 30 days, permanent) in line with behavioural infractions. When there were issues, buy-in seemed present that we were at l...
That fight (when I scanned over it briefly yesterday) seemed to be you and one other user (Shankar Sivarajan) having a sort of comment tennis game where you were pinging back and forth, and (when I saw it) you both had downvotes that looked like you were downvoting each other, and no one else was participating. I imagine that neither of you was learning from or having fun in that conversation. Ending that kind of chain might be the place the rate-limit has a use case. Whether it is the right solution I don't know.
I'm rate limited? I've heard about this problem before, but somehow I can still post despite being much less careful than other new users. I just posted two quick takes (which aren't that quick, I will admit that. But the rules seem more relaxed for quick takes than posts).
Edit: Rate limited now, lol. By the way, I enjoy your kind words of non-guilt. And I agree, I haven't done anything wrong. Can I still be a "danger" to the community in a way which needs to be gatekept? Only socially, not intellectually. I'm correct like Einstein was correct, stubbornly.
My comments are too long and ranty, and they're also hard to understand. But I don't think they're wrong or without value. Other than downvotes, there's not much engagement at all.
Self-censorship doesn't suit me. If it's required here then I don't want to stay. I could communicate easier and simpler ideas, but such ideas wouldn't be worth much. My current ideas might look like word salad to 90% of users, but I think the other 10% will find something of value. (exact ratio unknown)
Edit: Also, my theories are quite ambitious. Anything of sufficiently high level will look wrong, or like noise to those who do not understand it. Now, it may actually be noise, but the "attacker" only has to find one flaw whereas the defender has to make no mistakes. This effort ratio makes it a little pathetic when something gets, say -15 karma but zero comments, surely somebody can point out a mistake instead? Too kind? But banning isn't all that kind.
[disclaimer here]
> My comments are too long and ranty, and they're also hard to understand. But I don't think they're wrong or without value.
I don't think "wrong or without value" is or should be the bar. My personal bar is heavily based on the ratio of effort:value[1], and how that compares to other ways I could spend my time. Assuming arguendo that your posts are correct, they may still fall below the necessary return-on-effort.
That said, I think (and I believe the real mod team agrees) that Short Form should have a much lower bar than posts or comments...
First of all, thank you, this was exactly the type of answer I was hoping for. Also, if you still have the ability to comment freely on your short form, I’m happy to hop over there.
You've requested people stop sugarcoating so I'm going to be harsher than normal. I think the major disagreement lies here:
> But the entire point of punishment is teaching
I do not believe the mod team's goal is to punish individuals. It is to gatekeep in service of keeping lesswrong's quality high. Anyone who happens to emerge from that process making good contributions is a bonus, but not the goal.
How well is this signposted? The new user message says
Followed by a cripplingly long New User Guide.
I think that message was put in last summer but am not sure when. You might have joined before it went up (although then you would have been on the site when the equivalent post went up).
Going against the consensus is *probably* enough to get one rate-limited, even if they're correct
For issues interesting enough to have this problem, there is no ground source of truth that humans can access. There is human judgement, and a long process that will hopefully lead to better under...
> Another thing I've noticed is that almost all the users are trying.
I haven't thought about whether these rate-limits are justified (I currently think at least 1/4 of them are unjustified and 1/2 are okay), but I want to point out that post/comment quality is real. That is, some users have higher quality comments than others (due to reasoning in the comment, combativeness, how often this leads to good discussion, etc.) often for illegible reasons, this substantially affects the value readers get, and this is predictive of their future content. It follows that if moderators want to reduce the incidence of low-quality content beyond what is caught by simple rules, then they cannot defend themselves perfectly against accusations of arbitrariness. The signal-to-noise ratio of LW is very important, and IMO this justifies mods making judgment calls.
Take MiguelDev, who posts extremely long posts consisting mostly of LLM output. My guess is that the experiments are mediocre due to lack of rigor, with a small possibility that they are good. They are not egregiously bad. But as evidenced by the low karma, few people get value from reading these extremely long posts. I would like to see much less of this content on the frontpage because it decreases the SNR; maybe three posts per year is okay. Therefore I'm fine with this user being rate-limited by moderator fiat to something like one post per month. If moderators started rate-limiting Nora Belrose or someone else whose work I thought was particularly good, they would lose my confidence, but this hasn't happened yet.
I agree about providing explanations for bans or ratelimits that are functionally bans though.
[note: I am technically a mod but in practice that means I give the team my opinions and occasionally curate something. The following is my opinion. I think it has some overlap with team members' opinion but I don't know how much, or how much of that opinion has made it into policy]
A counterintuitive effect is that bad comments are often worse for SNR than bad posts. Bad posts seem like they'd be worse, because posts are more important than comments and are placed higher. But karma works better (on average) on posts: bad or mediocre posts get enough downvotes, or at least not enough upvotes to compete, and gently disappear. But comments' views scale with the viewership of the original post, so a mediocre comment on a popular post will get lots of attention regardless of its karma. If a post gets enough comments that low karma comments can't get much attention, they still compete with new high-quality comments, and cut into the attention for the latter.
And even if no one else sees a bad comment, they are still likely to be read by the author and annoy them. If this gets bad enough, authors may stop reading their comment sections or stop posting altogether.
If a post gets enough comments that low karma comments can't get much attention, they still compete with new high-quality comments, and cut into the attention for the latter.
Seems like this could be addressed by changing the comment sorting algorithm to favor recent comments more?
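For what it's worth, one simple version of that is Hacker-News-style time decay: divide a comment's karma by a power of its age, so a newer comment can outrank an older, higher-karma one. A minimal sketch of the idea; the `gravity` parameter and the exact formula are my own illustration, not anything LessWrong actually uses:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    karma: int
    age_hours: float

def recency_weighted_score(c: Comment, gravity: float = 1.5) -> float:
    # Karma divided by a power of age: higher gravity means older
    # comments fall faster regardless of karma. The +2 offset keeps
    # brand-new comments from dividing by ~zero.
    return c.karma / (c.age_hours + 2) ** gravity

def sort_comments(comments: list[Comment], gravity: float = 1.5) -> list[Comment]:
    # Highest score first.
    return sorted(comments, key=lambda c: recency_weighted_score(c, gravity), reverse=True)
```

Under this scoring, a day-old comment at +40 can be outranked by a fresh comment at +5, which is exactly the "favor recent comments" behavior being proposed.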
I disagree. Negative comments often provide feedback to the author that they wouldn't get elsewhere. And if you are annoyed by them, you can filter them out (settings -> hide low votes).
Sure, but literally bad ones will quickly get downvoted and the poster banned. This is about the less clearcut cases, right?
It sounds like you don't think there should be any user-focused mod response between "nothing" and "banned". Is that correct?
No. There are many possible measures in between, or different ones altogether.
But back to my original comment: it was about the unstated question of what to do with bad comments. I agree that the dynamics for posts and comments are different. But I disagree with what I read as a push for negative comments to be more strongly discouraged because they carry more weight.
But when rereading, I see that you don't say what to do about these comments. You only point out negative effects. What is your proposal?
Note: I'm in favor of tending the garden and discouraging orcs and banning trolls. But I'm also in favor of critical and negative remarks. Reduce their visibility maybe, but don't completely prevent them.
But when rereading, I see that you don't say what to do about these comments. You only point out negative effects. What is your proposal?
Rate limiting. If I were pope I'd make a few tweaks, but I think the concept is fundamentally sound and the implementation good enough.
Then we agree about the general moderation of LW.
Did your comment also apply to the latest automated bans?
AFAIK there was a wave of rate limits, not bans. I think it's a huge error to conflate those. Most importantly, you can complain on-site about being rate limited in a way you can't complain about being banned.
I have complaints about implementation but the theory seems sound. I'd like the team to put more work into implementation or treat false positives as more costly, but that's easy for me to say since I'm not the one that has to do it.
Complaints:
If moderators started rate-limiting Nora Belrose or someone else whose work I thought was particularly good
I actually did get rate-limited today, unfortunately.
I'm not seeing any active rate limits. Do you know when you observed it? It's certainly the case that an automatic rate limit could have kicked in and then, as voting changed, been removed.
Yeah, I am also not seeing anything. Maybe it was something temporary, but I thought we had set it up to leave a trace if any automatic rate limits got applied in the past.
Curious what symptom Nora observed (GreaterWrong has been having some problems with rate-limit warnings that I've been confused by, so I can imagine that looking like a rate-limit from our side).
I still agree with myself above and think this is a bad moderation decision. Although I don't know the full story and don't see you on the moderation log.
Any published examples of work product by the moderators or any moderators of a similar site? Even if the reasoning is black box the output should be inspectable.
"This user I will give a 5 for quality, and this a 3, and see here this user made an attack here and see how 10 comments later everyone is arguing in in this series of exchanges".
I wasn't aware anything like this existed. I thought Reddit was the state of the art, and that's automated karma visibility, stateless, and mods usually only ban for bright-line rules or abuse their power and ban any dissent.
I was accused of degrading whole comment sections, I just want to see it. Show how you know this happens. Or if not me, any examples of this.
All I see is there's maybe a back and forth with 1 user. Unremarkable on reddit, what's the harm?
Also, I wanted you to note that the karma system is a hair from auto-banning Nora. She seems to be staying afloat by only a few points recently.
[ not a moderator, and my base position is that this is going to suck no matter how good most people's intentions are. ]
I overall think the site admins are doing a very good job of maintaining LessWrong quality over time. They're doing the right thing with an impossible job, implementing rate limits rather than bans, being somewhat responsive to questions and challenges about it, and being fairly open about their decisions.
It also sucks for many users who get throttled, especially for reasons they don't agree with or for illegible reasons that can't be evaluated objectively.
And that illegibility is what makes it an impossible balance - there are some adversarial interactions, where users will do their best to skirt any legible rules (and where admins overreact to violations of a legible rule, for that matter). These are not the majority at all, but they're enough to require some amount of handwaving and judgement that can't be rigorously defended.
I'm sorry you're throttled. I personally don't find your comments problematic, though I haven't studied closely enough to be sure. I'm glad that it's not a full ban (or worse, a shadowban, where you can post, but nobody else sees it), and I wish there were a bit of a sliding scale in terms of N per day that ramps up as you get upvotes (and down if not).
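That sliding scale could be as simple as a per-day allowance that grows with recent net karma. A hypothetical sketch with made-up numbers - `base`, `max_per_day`, and the 5-karma-per-extra-comment divisor are all illustrative, not site policy:

```python
def comments_per_day(recent_karma: int, base: int = 1, max_per_day: int = 10) -> int:
    # Hypothetical sliding scale: downvoted users keep a minimal daily
    # allowance rather than a hard cutoff, and every ~5 points of recent
    # net karma buys one extra comment per day, up to a cap.
    if recent_karma <= 0:
        return base
    return min(max_per_day, base + recent_karma // 5)
```

The design choice here is that the limit degrades gracefully in both directions: upvotes relax it incrementally, and downvotes never reduce it below the floor needed to reply on one's own posts.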
Just wanted to give some validation. I left a comment on this post a while ago pointing out how one (or apparently a few) users can essentially downvote you however they like to silence opinions they don't agree with. Moderation is tricky, and it is important to remember why: most users on a website forum are lurkers, which means trying to gather feedback on moderation policies has a biased sampling problem. The irony of likely not being able to leave another comment or engage in discussion is not lost on me.
At first, I thought getting soft-banned meant my "contributions" weren't valuable. For context, I study AI and integrate it into my thinking, which hasn't been received well on this site. Ironically, not being able to interact with other people pushed me to explore deeper discussions with AI. For example, I gave this entire thread to Claude 3, and it agreed there were some changes to be made to the rate-limiting system.
It does seem concerning that, as a PhD student studying AI alignment, I was effectively pushed out of participating in discussions on LessWrong and the AI Alignment Forum due to the automatic rate-limiting system and disagreements with senior users whose downvotes carry much more weight. On the other hand, compared to a few years ago during COVID, I now have colleagues and AI that I share a lot more context with than users on this forum, so this just matters less to me. I return only because I am taking a class on social computing and am revisiting what makes for good/bad experiences.
Anyway, hopefully this gives you some solace. I would encourage you to seek other sources of validation. There are so many more options than you think! :)
FWIW, my sense is that the rate-limit system triggering was a mistake on your account, and we tweaked the numbers to make that no longer happen. Still sucks that you got rate-limited for a while, but the numbers are quite different now, and you almost certainly would not have been caught in the manual review that is part of these rate limits.
Strong upvoted for visibility and because this sort of post contributes to creating a healthy culture of free speech and rational discussion.
I'm conflicted. I appreciate the effort put into the post, but it seems like a lot of the posters are genuinely creating lots of low-quality content, and I'd much rather have a small amount of good content than a large amount of meh-or-bad content to sift through to find the good stuff.
I've settled on a net downvote, but would probably do an upvote and a disagree vote if that were an option.
Summary: the moderators appear to be soft banning users with 'rate-limits' without feedback. A careful review of each banned user reveals it's common to be banned despite earnestly attempting to contribute to the site. Some of the most intelligent banned users have mainstream instead of EA views on AI.
Note how the punishment lengths are all the same, I think it was a mass ban-wave of 3 week bans:
Gears to ascension was here but is no longer, guess she convinced them it was a mistake.
Have I made any like really dumb or bad comments recently:
https://www.greaterwrong.com/users/gerald-monroe?show=comments
Well I skimmed through it. I don't see anything. Got a healthy margin now on upvotes, thanks April 1.
Over a month ago, I did comment this stinker. Here is what seems to be the same take by a very high reputation user here, @Matthew Barnett, on X: https://twitter.com/MatthewJBar/status/1775026007508230199
Must be a pretty common conclusion, and I wanted this site to pick an image that reflects its vision. Like flagpoles with all the world's flags (from coordination to ban AI) and EMS using cryonics (to give people an alternative to medical ASI).
I asked the moderators:
@habryka says:
I skimmed all the comments I made this year and can't find anything that matches this accusation. Which comment did this happen on? Did this happen once, or twice, or 50 times, or...? Any users want to help here? It surely must be obvious.
You can look here: https://www.greaterwrong.com/users/gerald-monroe?show=comments if you want to help me find what habryka could possibly be referring to.
I recall this happening once, Gears called me out on it, and I deleted the comment.
Given that this didn't happen this year, why wasn't I informed or punished or something back then?
Skimming the currently banned user list:
Let's see why everyone else got banned. Maybe I can infer a pattern from it:
Akram Choudhary: -2 per comment and 1 post at -25. Takes the doomer view here.
frankybegs: +2.23 karma per comment. This is not bad. Does seem to make comments personal. Decided to enjoy the site and made 16 comments 6-8 days ago. Has some healthy karma on the comments, +6 to +11. That's pretty good by LessWrong standards. No AI views. Ban reason is???
Victor Ashioya
His negative karma doesn't add up to -38, not sure why. AI view is in favor of red teaming, which is always good.
@Remmelt
Doomer view, good karma (+2.52 karma per comment), hasn't made any comments in 17 days... why rate limit him? Skimming his comments, they look nice and meaty and well written... what? All I can see is that over the last couple of months he's not getting many upvotes per comment.
green_leaf
Ok, at least I can explain this one: one comment at -41 in the last 20, and green_leaf rarely comments. Doomer view.
PeteJ
Tries to use humanities knowledge to align AI; apparently the readerbase doesn't like it. Probably won't work, but banned for trying.
@StartAtTheEnd
1.02 karma per comment, a little low, but may still be above the bar. Not sure what he did wrong; comments are a bit long?
doomer view, lots of downvotes
omnizoid
Seems to just be running a low vote total. People didn't like a post justifying religion.
@MiguelDev
Why rate limited? This user seems to be doing actual experiments. Karma seems a little low but I can't find any big downvote comments or posts recently.
@RomanS
Overall karma isn't bad: 19 upvotes on the most recent post. Seems to have a heavily downvoted comment that's the reason for the limit.
@shminux: this user has contributed a lot to the site. One comment heavily downvoted, and the algorithm looks at the last 20.
It certainly feels that way from the receiving end.
2.49 karma per comment, not bad. Cube tries to apply Bayes' rule in several comments; I see a couple barely hit -1. I don't have an explanation here.
M. Y. Zuo
possibly just karma
@Noosphere89
One heavily downvoted comment for AI views. I also noticed the same and I also got a lot of downvotes. It's a pretty reasonable view, we know humans can be very misaligned, upgrading humans and trying to control them seems like a superset of the AI alignment problem. Don't think he deserves this rate limit but at least this one is explainable.
Has anyone else experienced anything similar? Has anyone actually received feedback on a specific post or comment by the moderators?
Finally, I skipped several negative overall karma users not mentioned, because the reason is obvious.
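From the cases above, the automatic trigger looks like a net-karma threshold over a user's last 20 comments, which would explain why a single -41 comment can sink an otherwise fine account. To be clear, this is my inference from the pattern in this thread, not the actual LessWrong code, and the window size and threshold are guesses:

```python
def should_rate_limit(recent_comment_karmas: list[int],
                      window: int = 20, threshold: int = 0) -> bool:
    # Inferred rule (not the real implementation): sum the net karma of
    # the user's most recent `window` comments and trigger a rate limit
    # when it falls to `threshold` or below. A single -41 comment can
    # dominate the entire window, as in the green_leaf case above.
    return sum(recent_comment_karmas[-window:]) <= threshold
```

If this inference is right, it also explains the escape dynamics I describe below: a rate-limited user has to post ~20 new, positively received comments before the bad one ages out of the window.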
Remarks :
I went into this expecting the reason to have to do with AI views, because the site owners are very much of the 'doomer' faction. But no, there are plenty of rate-limited people in that faction. I apologize for the 'tribalism', but it matters:
https://www.greaterwrong.com/users/nora-belrose Nora Belrose is one of the best posters this site has in terms of actual real-world capabilities knowledge. Remember, the OAI contributors we see here aren't necessarily specialists in 'making a real system work'. Look at the wall of downvotes.
vs
https://www.greaterwrong.com/users/max-h Max is very worried about AI, but I have seen him write things that I think disagree with current mainstream science and engineering. He writes better than everyone banned, though.
But no, that doesn't explain it. Another thing I've noticed is that almost all the users are trying. They are trying to use rationality, trying to understand what's been written here, trying to apply Bayes' rule or understand AI. Even some of the users with negative karma are trying, just having more difficulty. And yeah, it's a soft ban from the site: I'm seeing that a lot of rate-limited users simply never contribute the 20 more comments needed to climb out of the sump from one heavily downvoted comment or post.
Finally, what rationality principles justify "let's apply bans to users of our site without any reason or feedback or warning; let's make up new rules after the fact"?
Specifically, every time I have personally been punished, there was no warning: @Raemon first rate-limited me by making up a new rule (he could have just messaged me first), then issued a 3-month ban and gave some reasons I could not substantiate after carefully reviewing my comments for the past year. I've been enthusiastic about this site for years now; I absolutely would have listened to any kind of warning or feedback. The latest moderator limit is the 3rd time I have been punished, with no reason I can validate given and no content cited.
I asked for, in a private email to the moderators, any kind of feedback or specific content I wrote to justify the ban, and was not given it. All I wanted was a few examples of the claimed behavior, something I could learn from.
Is there some reason the usual norms - having rules, not punishing users for conduct that predates a new rule, and informing users when they broke a rule and which submission violated it - aren't rational? Just asking: every mainstream site does this, laws do this. What is the evidence justifying doing it differently?
There's this:
well-kept-gardens-die-by-pacifism
Does a community have abusive moderators if it gives no reason for a decision, and never informs the user or issues a lesser punishment before going straight to the maximum one? In other online communities, I can say: absolutely. Sites have split over one wrongful ban of a popular user.