Below is a message I just got from jackk. Some specifics have been redacted 1) so that we can discuss general policy rather than the details of this specific case, and 2) to preserve the presumption of innocence, just in case there happens to be an innocuous explanation for this.

Hi Kaj_Sotala,

I'm Jack, one of the Trike devs. I'm messaging you because you're the moderator who commented most recently. A while back the user [REDACTED 1] asked if Trike could look into retributive downvoting against his account. I've done that, and it looks like [REDACTED 2] has downvoted at least [over half of REDACTED 1's comments, amounting to hundreds of downvotes] ([REDACTED 1]'s next-largest downvoter is [REDACTED 3] at -15).

What action to take is a community problem, not a technical one, so we'd rather leave that up to the moderators. Some options:

1. Ask [REDACTED 2] for the story behind these votes
2. Use the "admin" account (which exists for sending scripted messages, &c.) to apply an upvote to each downvoted post
3. Apply a karma award to [REDACTED 1]'s account. This would fix the karma damage but not the sorting of individual comments
4. Apply a negative karma award to [REDACTED 2]'s account. This makes him pay for false downvotes twice over. This isn't possible in the current code, but it's an easy fix
5. Ban [REDACTED 2]

For future reference, it's very easy for Trike to look at who downvoted someone's account, so if you get questions about downvoting in the future I can run the same report.

If you need to verify my identity before you take action, let me know and we'll work something out.

-- Jack

So... thoughts? I have mod powers, but when I was granted them I was basically just told to use them to fight spam; there was never any discussion of any other policy, and I don't feel like I have the authority to decide on the suitable course of action without consulting the rest of the community.


240 comments

As one of those targeted, I thought about what I would change if I could. All I came up with was posting mass-downvoting stats periodically. If people knew their actions would be detected and made public, they would probably refrain from doing it in the first place.

I am not familiar with the LW database schema, but it is probably trivial to write a SELECT statement which finds users who have been downvoted more than, say, 100 times in the last month, along with each such user's most prolific downvoter. Hopefully this can be a roughly O(n) task, so that the server is not overloaded. I'm sure Jack can come up with something sensible.
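I don't know the real schema either, so this is only a sketch against a made-up single-table layout (`votes(voter, target, direction, cast_at)` is invented for illustration), but the report could look something like:

```python
import sqlite3

# Hypothetical, simplified schema -- the real LW database surely differs.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE votes (
    voter TEXT, target TEXT,
    direction INTEGER,   -- -1 for a downvote, +1 for an upvote
    cast_at TEXT)""")

# Made-up sample data: one bulk downvoter, one ordinary downvoter, some fans.
sample = (
    [("bulk_downvoter", "victim", -1, "2014-01-15")] * 120
    + [("other_user", "victim", -1, "2014-01-20")] * 15
    + [("fan", "victim", 1, "2014-01-10")] * 30
)
conn.executemany("INSERT INTO votes VALUES (?, ?, ?, ?)", sample)

# For users downvoted more than 100 times in the window, list each
# downvoter's count, heaviest first.
query = """
SELECT target, voter, COUNT(*) AS downvotes
FROM votes
WHERE direction = -1 AND cast_at >= '2014-01-01'
  AND target IN (SELECT target FROM votes
                 WHERE direction = -1 AND cast_at >= '2014-01-01'
                 GROUP BY target HAVING COUNT(*) > 100)
GROUP BY target, voter
ORDER BY downvotes DESC
"""
top = conn.execute(query).fetchone()
print(top)  # ('victim', 'bulk_downvoter', 120)
```

A single pass over the vote table plus a grouped subquery, so roughly linear in the number of votes, as hoped.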

Minimally invasive and might be effective. I like it.
Thanks! However, judging by the anti-trolling discussions a year and a half ago, simple automated solutions are not very popular here.
Isn't downvoting a valid signal? Why should it necessarily be discouraged? Is there anything that keeps sock puppets from voting? Wouldn't the offenders just switch to those? I think a better algorithm would be to look at who has given the most downvotes to a single person. It just seems to me that downvoting per se is not necessarily a bad thing.
Yes, I believe that this is similar to what I have suggested. A mass downvoter would be a strong outlier on the 30-day downvote histogram (# users who downvoted vs # downvotes they gave) of a given user.
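For illustration, with entirely invented counts, the outlier check on such a histogram could be as simple as:

```python
from collections import Counter

# Invented 30-day downvote counts against a single user, keyed by voter.
downvotes_by_voter = Counter({"suspect": 120, "a": 2, "b": 1, "c": 3, "d": 1})

total = sum(downvotes_by_voter.values())
# Flag anyone responsible for more than half of all downvotes received --
# the threshold is arbitrary; any sensible outlier test would do.
outliers = [v for v, n in downvotes_by_voter.items() if n > total / 2]
print(outliers)  # ['suspect']
```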
I also see downvoting as a valid signal - esp. as it is limited to x4 karma. See my comment here [].
The limit on total downvotes proportional to karma gives you more than you'll ever need unless you're planning to downvote the world, but it does make it significantly harder to manage a sockpuppet army. You could potentially use sockpuppets to vote more than once on someone's posts, if you feel so inclined, but all your socks would individually have to be productive contributors in good standing, and you're limited by your total contributions in the same way. If we're talking hundreds of total downvotes, pushing socks' individual contributions into undetectable territory would entail tedious account management and some pretty serious compromises in terms of status on your main account. I can think of a couple ways of finessing this with automated help, but they're pretty fragile and easily detected.
Sockpuppets boost one another. If you have, say, five sockpuppets, each post by one of them immediately gets +4 karma.
That'd work, but I feel voting your own stuff up, especially in a systematic way across several accounts, is much more clearly a violation of community fair-play norms than systematic downvoting or running sockpuppets is. It's also pretty easily detectable.
Once you spin up a few sock puppets for karma manipulation, I don't think the community fair-play norms bind you much.

Healthy gardens have moderation. If Eliezer doesn't want to do it I think someone else should have the authority to moderate. I consider you (Kaj Sotala) to be trustworthy for that role. Having somebody who's in charge helps.

It's usually a debacle when moderators start punishing people, particularly when the moderators are also members of the forum. God's wrath should be reserved for significant issues. But I'd be in favor of God sending a vision to the perpetrator "You're causing me a problem that I don't want to have to figure out. Do you really need to do this? Can you knock it off?"

I have one of these too. Someone is slowly working back through my comments systematically downvoting them. Given the rate, I think they're actually doing it by hand, and must have a browser window they've kept open for months just for this task. It's like they're trolling themselves for me, without me having to actually lift a finger. Some LW karma is cheap for such entertainment.

It was/is the same for me and others, too - small blocks of downvotes on old comments until they reach your first one, and then periodic block downvotes on your recent comments.

I also suspected at first that it was done by hand, but now I am leaning towards it being done with a bot/script (most likely something adapted from Reddit), since it happens to many users and the pattern is quite regular over a long time.

Oh, leave me my illusions. I want to picture them FURIOUSLY DOWNVOTING ME COMMENT BY COMMENT, in UNQUENCHABLE NERD RAGE.

With your and David's karma, it seems like you must have a fair number of comments. The 4x karma limitation on downvotes suggests that it's someone who's got a fair amount of karma (or several accounts with a fair amount of karma, if you're getting multiple downvotes per comment) doing the mass downvoting. That's just weird. It's hard to imagine which high-karma person on LW would engage in individual persecution like that.

I have around 10,000, almost entirely from commenting on posts over three and a half years; it's not hard. I would assume someone with a long-running grudge. It's difficult to think of a worse (appropriate) punishment for them than continuing to be someone who would think this was a worthwhile way to spend their life, however.

Assuming they currently have 1 karma/post on average, which seems low to me, it would only take ~2500 karma to downvote all of David, Tenoke and falenas' comments. That isn't tiny, but for example I'm not particularly prolific and I have ~1500 karma, which I'd expect to be more than sufficient.

It's hard to imagine which high karma person on LW would engage in individual persecution like that.

One can get sufficiently high karma rather easily. We are not necessarily speaking about the "top contributor" level here.

For example, if someone gets 10 karma points in a month, which is easy if they write regularly, they have 120 karma points in a year. If they don't downvote regularly, and only decide to drop the whole bomb on one person, that's 4×120 = 480 downvotes. Even if they spend half of it on regular downvoting, and the other half on a bomb, that's still "hundreds" of downvotes.

We've traced the call, and it turns out it was Eliezer Yudkowsky the whole time!
Interesting. I didn't know about the x4 limitation. As that puts a natural limit on downvoting, I do not see any problem in principle with the 'mass' downvoting. If you do not have the freedom to actually spend your karma on (mass) downvotes, then the problem is not the downvoting but the limit. The limit ensures that your downvotes need to be compensated by correspondingly valued contributions. If more people exercised their downvoting share, this 'mass downvoting' wouldn't even have been noticeable.

The problem may be that it is applied to individuals. But even though that can be perceived as unfair, it is still strictly a choice available to the voter (not much different than voting on the popularity of people rather than of comments, which is seldom done nowadays).

My proposal would be to either a) reduce the limit to x2 or b) change the limit to x1 ''per person'' (if that is easily possible). This is conditional on attackers not artificially accumulating karma by upvoting themselves via multiple accounts. Such self-voting can in principle be either detected or prevented by network-flow algorithms like Advogato's ( [] ), but that requires significant changes to the karma logic. Note: I'm not affiliated with Advogato, but I'd really like to see the basic principle (the network flow) applied more to voting algorithms in general.
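To make the flow idea concrete, here is a toy sketch (not Advogato's actual algorithm, and with entirely invented trust edges): voting capacity is taken to be the max flow from a trusted seed account through the "who upvoted whom" graph, so sockpuppets that only upvote each other gain nothing.

```python
from collections import defaultdict, deque

def max_flow(cap, source, sink):
    """Edmonds-Karp max flow on a dict-of-dicts capacity graph (mutates cap)."""
    flow = 0
    while True:
        # BFS for an augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck and push flow along the path.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] = cap[v].get(u, 0) + bottleneck
        flow += bottleneck

def trust_graph():
    # Edge weights = upvotes given; all numbers are made up.
    cap = defaultdict(dict)
    cap["seed"]["alice"] = 5
    cap["alice"]["bob"] = 3
    cap["sock1"]["sock2"] = 100   # socks upvoting each other, heavily
    cap["sock2"]["sock1"] = 100
    return cap

print(max_flow(trust_graph(), "seed", "bob"))    # 3: trust reaches bob
print(max_flow(trust_graph(), "seed", "sock1"))  # 0: no independent trust
```

However many upvotes the socks trade among themselves, no flow from the seed reaches them, which is the detection/prevention property the comment is after.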
I tend to think of downvoting as a mechanism to signal and filter low-quality content rather than as a mechanism to 'spend karma' on some goal or another. It seems that mass downvoting doesn't really fit the goal of filtering content -- it just lets you know that someone is either trolling LW in general, or just really doesn't like someone in a way that they aren't articulating in a PM or response to a comment/article.

[REDACTED 2], your behavior is bad and you should feel bad.

I'm also one of those targets. Literally every comment I have ever made has been downvoted - 10 downvotes a day, for a few months. This continued until whoever was doing it reached my oldest comment. Recent comments are also downvoted.

Not only does mass downvoting feel pretty terrible, it also undermines the purpose of voting. Voting is meant to be a signal of how useful the community thinks a person's comments are, and that's no longer true of my scores, or of those of any other victim of mass downvoting.


My own view is :

  1. Mass downvoting of most/all of what a user wrote, regardless of content, defeats the purpose of the karma/score system and therefore is harmful to the community.

  2. Mass downvoting is rude and painful for the target, and therefore is harmful to the community.

So we should have an official policy forbidding it. For the current case, I would support first using option 1 (it's always good to ask for the reasons behind an act before taking coercive action), and then applying option 4 or 5 depending on the answer (or lack of one).

I would rather see mods take matters into their own hands than see a tribunal or other bureaucracy.

I think it is vital that any moderator action be public. If you ban them, fine - but let's see a great big USER WAS BANNED FOR THIS POST.

I think that if we believe mass downvoting is wrong then there should be a public ex cathedra statement that this is so and any practical technical measures to prevent it should be applied.

Well, here I am again, this time providing a paper backing up my claim that having a downvote mechanism at all is just pure poison.

It doesn't make any sense for this type of community. This isn't Digg. We're not trying to rate content so an algorithm can rank it as a news aggregation service.

Look at Slate Star Codex, where everybody is spending their time now - no aversive downvote mechanism, relaxed, cordial atmosphere, extremely minimal moderation. Proof of concept.

Just turn off the downvote button for one week and if LessWrong somehow implodes catastrophically ... I'll update.

For what it's worth I find the SSC comment section pretty unreadable, since it is just a huge jumble of good and bad comments with no way to find the good ones.

There's also a significant amount of astroturfing from various sources that muddies the water further.
?? Such as?
Presumably p-m primarily means the neoreactionaries.
I don't think that's astroturfing; I think it's just that Scott's one of the few semi-prominent writers outside their own sphere who'll talk to NRx types without immediately writing them off as hateful troglodytic cranks. Which is to his credit, really.
That's fair, but I think it was probably what paper-machine was referring to.
More or less. They're not the only ones, of course, but perhaps they're the most obvious.
I wouldn't call that astroturfing; I'd say that's more wanting anyone to talk to. The lack of a rating system means people don't get downvoted to oblivion; instead they get banned if they break the house rules badly enough. (I'm surprised James A. Donald lasted as long as he did there.)
I don't know what "that" you and Nornagest are referring to, so I have no way of knowing if "that" is really astroturfing or not. On the other hand, six comments about the appropriateness of a single word seems like overkill. On the gripping hand, it appears the community wants more of it, so by all means, continue.
I mean the neoreactionaries on SSC.
I meant that I haven't seen any strong evidence of astroturfing on SSC (by the conventional definition of "a deceptive campaign to create the appearance of popular support for a position, usually involving sockpuppets or other proxies"), and that the presence of an unusually large and diverse neoreactionary contingent is more easily explained by the reasons I gave. What did you mean by it? NRx, sure, but what about them, and who're the others you alluded to upthread? If we're just arguing over definitions, giving them explicitly seems like the best way to drive a stake into the argument's heart -- and if you've noticed some bad behavior that I haven't, I'd like to know about that too.
I appreciate your skepticism, but I doubt I can find enough evidence to convince you that NRs do this intentionally. Most of the trouble comes from not being able to find tweets from months ago unless you know exactly what you're looking for, provided they still even exist (e.g., Konk). I'm looking into the PUAs for examples, but I don't know their community as well. If it's the word you object to, perhaps "meatpuppetry" is better? I don't really see much of a difference, as they both involve manufacturing the appearance of support through multiple accounts. So, uh, sorry. I really thought this would be easier to show than it turned out to be.
So if I'm following this correctly, you think that the neoreactionary activity on SSC is thanks to an organized effort to create the appearance of support, but not by deceptive means? That is, Scott posts something relevant to their interests, the first neoreactionary to find it tweets "hey, come back me up", and suddenly half the NRx sphere is posting in the comments under their standard noms de blog? I'm still not convinced, but I'd find that more plausible than astroturfing by my understanding of the word. Not sure what I'd call it, though; "brigading" is close, but not quite it. And I'm not even sure where I'd draw the line; the distinction between "check out this cool thing" and "help me burn this witch" is awfully fine, especially when the cool thing is (e.g.) an anti-FAQ.
"Dogpiling" is the word I've seen.
Swarming? As an aside, I have doubts that the neoreactionaries are *that* interested in gaming Yvain's blog...
They're massively interested in controlling their presence on the Internet.
So one example of a pattern that I saw worked like this:

1. Someone writes a comment critical of NR.
2. Someone else posts a tweet calling the above names and linking to their comment.
3. Suddenly multiple NRs come out of the rafters to reply to #1.

I'd give you actual links, but I can't trick Twitter into showing me tweets from months ago anymore, and they've probably been deleted anyway. The MRAs and PUAs have been known to do the same thing. I call this astroturfing because an unrelated bystander reading the comment thread interprets the multiple responses in #3 as coming from independent sources, when in reality they're confounded by the call to arms in #2. I suppose Wikipedia calls it "meatpuppetry", which amounts to the same thing, IMO.

I think people go to Slate Star Codex because that's where Scott writes his articles, not because of the voting mechanism.

From the paper:

authors of negatively evaluated content are encouraged to post more, and their future posts are also of lower quality

Seen that at LW a few times. At some moment the user's karma became so low they couldn't post anymore, or perhaps an admin banned them. From my point of view, problem solved.

I think it would be useful to distinguish between systems where the downvoted comments remain visible, and where the downvoted comments are hidden.

I am reading another website where the downvoted comments remain proudly visible, with the number of downvotes, and yes, it seems to enrage the users into writing more and more of the same stuff. My hypothesis is that some people perceive downvotes as rewards (maybe they love to make people angry, or they feel they are on a crusade and the downvotes mean they have successfully hurt the enemy), and these people are encouraged by downvoting. Hiding the comment, and removing the ability to comment - now that is a punishment.

A bog-standard troll wants attention and drama. Downvotes are evidence of attention and drama.
When I think others are wrong, and in particular, the groupthink is wrong, I take downvotes as a greater indication that someone needs to get their head straight, and it could be them or me. Let's see. I can think of at least one case where I criticized someone for something I thought was disgraceful, after his post was massively upvoted. I was massively downvoted in turn, but eventually convinced the original poster that they had crossed a line in their original post. Or at least he so indicated. Maybe he was just humoring the crazy person. Downvotes are a signal. Big downvotes are a big signal. Maybe it's not about hurting people. Maybe it's about identifying contradiction as the place to look for bad ideas that need fixing.
"some people perceive downvotes as rewards" Is this just a dig at people vehemently defending downvoted posts or are you serious in calling this a hypothesis?
Completely serious. Just realise that different people have different goals and/or different models of the world. A downvote is merely a signal that "some people here don't like this". If you care about the opinions of LW readers, and you want to be liked by them, then downvotes hurt. Otherwise, they don't. For some sick person, making other people unhappy may be inherently desirable, and downvotes are evidence that they succeeded. Imagine some kind of psychopath who derives pleasure from frustrating strangers on the internet. (Some people suggest that this actually explains a lot of internet trolling.) Or someone may model typical LW users -- or, on another forum, typical users of forum X -- as enemies whose opinions have to be opposed, and downvotes are evidence that they succeeded in writing an "inconvenient truth". Imagine a crackpot, or a heavily mindkilled person. Or a spammer.
To trolls any attention (including downvotes) is a reward.

Tricky one. I had a look at the Facebook group and was slightly horrified. You know all the weird extrapolations-from-sequences lunacy we don't get any more at LW? Yeah, it's all there. I think because there are no downvotes there.

That's true, but there are other salient differences between Facebook and LessWrong. Like the fact that Facebook has a picture of your real face right there, incentivizing everyone to play nice, while we are hobbled with only aliases here. Or the absence of a nested discussion threading system on Facebook. Or the fact that Eliezer posts on Facebook all the time now and rarely here anymore. But I tend to agree that the aversiveness of karma drives people away.

Like the fact that Facebook has a picture of your real face right there, incentivizing everyone to play nice, while we are hobbled with only aliases here.

My impression is that real-names-and-faces systems incentivize everyone to play to their expected audience's biases, not to be nice. If the audience enjoys being nasty to someone, real-names-and-faces systems strongly disincentivize expressions of toleration.

The very nastiest trolls I've encountered really just do not give a shit. Name, address, phone number, all publicly available.

Like the fact that Facebook has a picture of your real face right there, incentivizing everyone to play nice

This is the "real names make people nicer online" claim, which is one of those ideas people keep putting forth and for which there is no evidence it works this way. I say there is no evidence because every time it comes up I ask for some (and particularly during the G+ nymwars) and don't get any, but if you have some I'd love to see it.

edit: and by the way, here's my "photo".

Using a photograph of yourself on Facebook is optional.

I'd rather kill karma entirely than refactor it into an upvote-only system. If you're trying to do anything more controversial than deciding which cat picture is the best, upvote-only systems encourage nasty factional behavior that I don't want to see here: it doesn't matter how many people you piss off as long as you're getting strong positive reactions, so it's in your interests to post divisive content. That in turn leads to cliques and one-upmanship and other unpleasantness. It's a common pattern on social media, for example.

The other failure mode you get from it is lots of content-free feel-good nonsense, but we have strong enough norms against that that I don't think it'd be a problem in the short term.

I'd be fine with that. I feel a bit silly repeating the same arguments, but we're supposed to be striving to be, like, the most rational humans as a community, yet the social feedback system we are using was chosen ... because it came packaged with Reddit and Reddit is what was chosen as the LessWrong platform because it was the hot thing of its day. There was no clever Quirrell-esque design behind our karma system designed to bring out the best in us or protect us from the worst in us. It's a relic. Let's be rid of it.

No Karma 2014



By applying our methodology to four large online news communities for which we have complete article commenting and comment voting data (about 140 million votes on 42 million comments), we discover that community feedback does not appear to drive the behavior of users in a direction that is beneficial to the community, as predicted by the operant conditioning framework. Instead, we find that community feedback is likely to perpetuate undesired behavior. In particular, punished authors actually write worse in subsequent posts, while rewarded authors do not improve significantly.

In a footnote, they discuss what they meant by "write worse":

One important subtlety here is that the observed quality of a post (i.e., the proportion of up-votes) is not entirely a direct consequence of the actual textual quality of the post, but is also affected by community bias effects. We account for this through experiments specifically designed to disentangle these two factors.

They measure post quality based on textual evidence by running a Mechanical Turk task on 171 comments and using that data to train a binomial regression model. So cool!

When comparing the fraction of

...

The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.

If you eliminate the downvotes, what will replace them to prune the bad content?

Well, if this is really the goal, then maybe disentangle downvotes from both post/comment karma and personal karma while leaving the invisibility rules in place? Make it more of a "mark as non-constructive" button that if enough people hit it, the post becomes invisible. If we want to make it more comprehensive, it could be made to weigh these votes against upvotes to make the show/hide decision.
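A sketch of what that decoupled rule might look like (the threshold and the names are invented for illustration; none of this is in the LW codebase):

```python
# "Mark as non-constructive" visibility rule that leaves karma untouched:
# a comment is hidden from the default view once hide-votes sufficiently
# outweigh upvotes. The threshold is arbitrary.
def visible(upvotes: int, hide_votes: int, threshold: int = 3) -> bool:
    return hide_votes - upvotes < threshold

print(visible(upvotes=0, hide_votes=2))  # True: still shown
print(visible(upvotes=1, hide_votes=5))  # False: collapsed by default
```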

Could be done, though it makes karma even more irrelevant to anything.
Negative externalities []. Something else? The above study is sufficient evidence for me (and hopefully others) to start finding another solution.
I am aware of the concept. What exactly do you mean? It says "This paper investigates how ratings on a piece of content affect its author's future behavior." I don't think LW should be in the business of re-educating its users to become good 'net citizens. I'm more interested in effective filtering of trolling, stupidity, aggression, drama, dick waving, drive-by character assassination, etc. etc. It's not like the observation that downvoting a troll does not magically convert him into a hobbit is news.
I do not like the voting and commenting system at Slate Star Codex.
It is seriously broken in many ways; I was mainly highlighting the tone, the fact that it doesn't have a voting mechanism, and the fact that people still use it in droves despite its huge flaws.

I think that has way more to do with it being a blog with interesting posts on it than anything to do with the commenting system or lack of "like" buttons.

Digging into the paper, I give them an A for effort--they used some interesting methodologies--but there's a serious problem with it that destroys many of its conclusions. Here are 3 different measures they used of a post's quality:

* q': Quality as determined by blinded users given instructions on how to vote.
* p: upvotes / (upvotes + downvotes)
* q: Prediction for p, based on bigram frequencies of the post, trained on known p for half the dataset

q is the measure they used for most of their conclusions. Note that it is supposed to represent quality, but is based entirely on bigrams. This doesn't pass the sniff test. Whatever q measures, it isn't quality. At best it's grammaticality. It is more likely a prediction of rating based on the user's identity (individuals have identifiable bigram counts) or politics ("liberal media" and "death tax" vs. "pro choice" and "hate crime").

q is a prediction for p. p is a proxy for q'. There is no direct connection between q' and q -- no reason to think they will have any correlation not mediated by p. R-squared values:

* q to p: 0.04 (unless it is a typo when it says "mean R = 0.22" and should actually say "mean R^2 = 0.22")
* q to q': 0.25
* q' to p: 0.12

First, the R-squared between q' (quality scores by judges) and p (community rating) is 0.12. That's crap. It means that votes are almost unrelated to post quality. Next, the strongest correlation is between q and q', but the maximum possible causal correlation between them is 0.04 * 0.12 = 0.0048, because there is no causal connection between them except p. That means that q, the machine-learned prediction they use for their study, has an acausal correlation with q' (post quality) that is 50 times stronger than the causal correlation.

In other words, all their numbers are bullshit. They aren't produced by post quality, nor by user voting patterns. There is something wrong with how they've processed their data that has produced an artifactual correlation.
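The bound in that last step, spelled out with the reported numbers (taking the commenter's product-of-R-squared reasoning at face value):

```python
# If q and q' are linked only through p, the causal part of their shared
# variance is bounded by the product of the two mediating R^2 values.
r2_q_to_p  = 0.04   # q (bigram model) vs p (vote fraction)
r2_qp_to_p = 0.12   # q' (judged quality) vs p
max_causal_r2 = r2_q_to_p * r2_qp_to_p    # 0.0048
observed_r2   = 0.25                      # q vs q', as reported

print(observed_r2 / max_causal_r2)  # ~52x: the "50 times stronger" claim
```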
It would be interesting to run the voting data for LW through the analyses they made.
This paper seems to say exactly the opposite of complaints I've heard from people about how posting on LessWrong is scary because they don't want to get downvoted.

Remember to think like an attacker in what you recommend.

If the offender really is at fault (which should be quite easy to tell in most cases), then they should probably be banned since this is a pretty disruptive behaviour.

At any rate, have you checked with Eliezer - he used to claim that it is impossible to check a user's voting history, so he might have some other plans that you are not aware of.

I'm figuring that he'll see this post sooner or later.

So... thoughts? I have mod powers, but when I was granted them I was basically just told to use them to fight spam; there was never any discussion of any other policy, and I don't feel like I have the authority to decide on the suitable course of action without consulting the rest of the community.

I just wanted to comment that I trust you to take thoughtful action with your mod powers. Part of being The Rationalist Community (tm) should be some group coordination abilities, and deferral of the ultimate power of decision and action to an appointed trusted and trustworthy designee seems like a good solution here.

yeah, a mod who cares and has time is just the thing.

I don't consider banning a good option if the person wasn't warned beforehand. People can re-register, and it can get messy. Speaking with the person and convincing them to behave differently in the future should be the first choice. Karma punishment sounds like a good tool.

Unless this is a different person from the person who has been the cited mass downvoter every other time it's come up, they have very definitely been warned.
In some sense yes, in a practical sense I don't think so. Talking with the person more directly could be enough to get them to stop.

Downvotes are bad. They decrease trust and cause defection spirals. I am confident that the existence of downvotes makes the community less enjoyable, less welcoming and less productive on net.

That said, I'm not sure we should do anything to punish people using them in an extra-bad way.

"being welcoming" is not actually good for a community if you want standards to be high.

I'd agree that it's a two-edged sword, but 1) Keeping standards high is not our only goal, and being welcoming is good for other purposes, and 2) I think there are better ways to be unwelcoming to low-quality people that cause less collateral unwelcomingness to good people.
I assume they mean "you downvoted me so I downvote you, and every subsequent comment in this discussion, thus ruining any chance we had at maintaining a cordial tone." Happens all. the. time.
This is why I don't generally downvote people I'm talking to, unless I'm commenting specifically to explain a downvote.

This is also why Hacker News disables downvoting on replies to your comments.

Not a bad feature. It wouldn't solve the main problem we're discussing, but I do think it'd make LW a slightly more pleasant place to be. You know, modulo the usual problems with getting the feature into production.
Yeah. Having basically no code contributors emerge from the community (given how many good programmers there are here) is odd.
Have you seen the LW code? I looked at it once, and gave up immediately. Rewriting the whole thing from scratch would probably be easier, although this could be just some bias speaking.
Heh. That's a quite plausible explanation :-)
Actually, now that I think about it, it would increase the cost of doing this without giving yourself away, since now you'd need a sockpuppet to downvote their replies to you. One potential problem is that you could frame someone, but it would be fairly easy for them to clear their name.

How easy is it to change the ratio of required upvotes to allowed downvotes? As an example, I very rarely downvote, so I probably have quite a lot of spare downvotes. If you were to change the ratio to require receiving 10 upvotes per 1 downvote, I don't even think I'd notice, and I imagine that a lot of people with this type of voting pattern would be in a similar position.

On the other hand, someone who mass downvotes presumably is going to burn through their downvotes faster than even someone who downvotes fairly, but finds themselves generally more incl...

Make your downvoting ability proportional to upvotes in the past month rather than upvotes ever?
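Both variants floated above — a fixed ratio of upvotes received to downvotes allowed, and a recency window — could be combined into one quota rule. A minimal sketch, assuming a made-up 10:1 ratio and 30-day window (neither is LW's actual mechanism, and the data model is invented):

```python
from datetime import datetime, timedelta

def downvotes_remaining(upvote_times, downvotes_cast, now,
                        window_days=30, ratio=10):
    """Hypothetical quota: earn 1 downvote per `ratio` upvotes received
    within the last `window_days` days. All parameters are assumptions."""
    cutoff = now - timedelta(days=window_days)
    recent_upvotes = sum(1 for t in upvote_times if t >= cutoff)
    return max(0, recent_upvotes // ratio - downvotes_cast)

# 25 recent upvotes at a 10:1 ratio earn 2 downvotes; 1 already cast leaves 1.
now = datetime(2014, 1, 31)
ups = [now - timedelta(days=2)] * 25 + [now - timedelta(days=60)] * 10
print(downvotes_remaining(ups, 1, now))  # 1
```

A rule like this would barely register for rare downvoters, while a mass downvoter would exhaust the quota quickly — which is the asymmetry the comment above is after.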

Soooo... The #0 issue is that votes are supposed to be for ranking content, but people take them to be for rewarding/punishing writers. I'd test whether no longer calculating users' total and last-30-days karma would ameliorate this.

Back in the stone ages, I believe the Extropian list had extensive configurable collaborative filtering mechanisms. I didn't use them much, but that seems to me the actual solution. Let people trust who they want, and follow who they want. I see a Karma Score configured by me.

People who mass downvote have an effect only if people choose to let them. Done.

Not to say that the implementation would be trivial, only that there are solutions.

And I like griping about how the web has gone backwards in significant ways. I can say "yay" or "boo" to a post. Oooh baby, that's high tech. The Singularity must surely be just around the corner.

Back in the stone ages, I believe the Extropian list had extensive configurable collaborative filtering mechanisms. I didn't use them much, but that seems to me the actual solution. Let people trust who they want, and follow who they want. I see a Karma Score configured by me.

The failures of old mailing lists and Usenet were why social media universally abandoned killfiles and similar filtering mechanisms: the balance of costs was all wrong - a large number of people had to take affirmative action to ignore a small number of bad apples. It turned out to be better to actively curate the default than to thrust the burden of filtering signal from noise onto each and every user.

To give an Extropian-list-specific example: determined harassment was why Nick Szabo stopped posting there. The filters didn't help there.

I'm curious: Can you tell me/link me more please?

No; a lot of the materials are now private, I don't think Nick wants to drag old stuff up, and if the harasser was the same Detweiler dude who did some later harassing, he may well have been mentally ill and not really responsible for his actions.

Thanks! I guess the main thing I wanted to check was that you meant Nick was the one being harassed rather than the other way round, which you have indeed answered.
Evidence? Aren't such filters still available in Usenet readers? My theory is that such code was just never implemented in the shiny new web. And with collaborative filtering, everyone doesn't need to make every adjustment themselves. That's the point. You delegate ratings to others, or combinations of others. But is plopping someone in an ignore file supposed to be so difficult? Should be easier than ever. Have a plonk button on every post to add the guy to your kill file. "Hmmm, this guy is a dick. Plonk." Couldn't be easier. Just as easy as clicking a point of karma. What was the nature of the harassment, and how would it be prevented in the current list software?

Evidence? Aren't such filters still available in Usenet readers?

I didn't specify 'failure of Usenet readers'. I specified failure of Usenet.

And with collaborative filtering, everyone doesn't need to make every adjustment themselves

Still a serious UI burden which doesn't scale. Torture vs dust specks.

But is plopping someone in an ignore file supposed to be so difficult?

It's difficult in the way that constant strain and vigilance is so difficult. Trivial inconveniences on every post.

What was the nature of the harassment, and how would it be prevented in the current list software?

By flat-out banning the harasser.

Usenet fails, therefore killfiles suck? I still don't see evidence. Collaborative filtering is about the only way to scale. No more strain or vigilance necessary than a click. I don't find that so taxing. Ok, so the current list software is no better. How is that an indictment of collaborative filtering or killfiles? Yeah, they can't solve all problems.
Usenet's failure is often attributed to its defaulting to allowing everyone in and expecting users to killfile their way to a good experience, which doesn't work for keeping communities vibrant or dealing with spam. Hence the decline of Usenet as alternatives opened up and Usenet failed to scale as Internet access got wider. Or tons of moderation and voting. Seems to work for Reddit. Trivial inconvenience. The question is whether they solve any problems. If they're so great, why are they so rare?
Got a source? Having previously pretty much lived on Usenet and now not having fired up a newsreader in years - while frequenting reunions of two Usenet groups I used to be on, one on Facebook and one on G+ - I'm interested in anything written on the subject; I think it's one there's not enough well-written post-mortems of. I don't think killfiles were a significant factor myself, but I admit I'm basing that opinion just on "it sounds wrong", not any actual data. I'd have attributed the decline of Usenet and mailing lists to (1) not being on the Web (that's the biggie) (2) barrier to entry to create a new discussion forum (even alt.* had process). Mostly (1) - the wine-users list (for Wine, the Windows compatibility layer for Linux) has a two-way gateway to a web forum, and immediately the forum was available the volume was 10x. I also posted some hypothesising as to why there are no good Web-based Usenet readers - and why forums aren't backed by NNTP - here [], with a bunch of people I met on Usenet commenting. tl;dr that the unit of NNTP is the message, but the unit of forums is the thread. Same applies to mailing lists, which is why GMane seems weird considered as a "forum".

Got a source?

Not really. This is my own lived experience comparing Usenet to Google Groups, Reddit, web forums, and Wikipedia, and noting the explosion of user contribution in the shift from Overcoming Bias to LessWrong. You could easily prove Usenet has declined, but I'm not sure what research you could do to prove that the incentives were structured wrong or that features like killfiles fostered complacency & reluctance to change, other than to note how all of Usenet's replacements were strikingly different from it in similar ways.

I don't think killfiles were a significant factor myself, but I admit I'm basing that opinion just on "it sounds wrong", not any actual data.

My read is that killfiles were a major aspect of the systematically bad design of Usenet which made it uncompetitive and unscalable: it increased user costs it should not have, adding friction and trivial inconveniences. Killfiles express a fundamental contempt for user time: if there are 100 readers and 1 spammer, it should not take 100 reader actions to deal with the 1 spammer, yet killfiles inherently tilt matters that way. What would be much better is if 10 readers take an action like downvoting and spar... (read more)

Another experience here from a long-time former user of Usenet, overlapping yours to some extent.

comp.sources.* was made obsolete by the web and cheap disc space. The binaries newsgroups also, except for legally questionable content that no-one wanted the exposure of personally hosting. (I understand the binaries groups still play this role to some extent.)

I dropped sci.logic and sci.math years before I dropped Usenet altogether, and for the same reason that if I was looking today for discussion on such topics, I wouldn't look there. There's only so long you can go on skipping past the same old arguments over whether 0.999... equals 1.

rec.arts.sf.* took a big hit when LiveJournal was invented. Many of its prominent posters left to start their own blogs. Rasf carried on for years after that, but it never really recovered to its earlier level, and slowly dwindled year by year. Some rasf stalwarts mocked those who left, accusing them of wanting their own little fiefdom where they could censor opposing viewpoints. They spoke as if this was a Bad Thing. It's certainly a different thing from Usenet, but if you want a place on the net for pleasant conversation among friends, a blog under... (read more)

As I recall, at least the parts of usenet where I hung out (fandom and composition) weren't that badly plagued by spam (there were volunteers dealing with spam for usenet), but trolls were a problem.
I think it has more to do with the fact that Overcoming Bias didn't allow users to post.

OB allowed users to send in emails and they would be posted, which is not a high bar (lower than, say, learning a Usenet reader) and a fair number of people contributed. It's just that LW made it much easier and unsurprisingly got way more contributions. This apparently came as a big surprise to Eliezer (but not me, because of my long experience with Wikipedia; it was a bit of a Nupedia vs Wikipedia scenario to my eyes).

vBulletin, which is very popular, has an "ignore" mechanism: put a user on ignore and you don't see his posts. Yep, it's just as easy as pressing a button.
I like ignore buttons. Cleans out the crap very quickly. And provides useful feedback to people joining lists who want to talk to people. As grown up after grown up plonks you, those who might get the message do.
Most ignore functions send no information to the ignored. No one ever gets the message because no message is sent.
If I'm engaged with someone, I tend to plonk publicly, so the fellow knows I won't be responding any longer, and others get the idea as well. But I'll plonk silently too.
No, I don't think that's true. You're arguing that internet user interfaces become better at hosting debates over time. If I believed that, I'd also believe that the user interfaces for holding rational discussion have gradually improved, from Usenet, to bulletin boards, to Facebook and Wordpress, to Twitter and Tumblr.

You're arguing that internet user interfaces become better at hosting debates over time.

No, I'm not. I'm saying the interfaces got better at certain features of UX, like dealing with spam and trolls. Usenet could be intrinsically better at debate (in the hypothetical universe where it had a restricted userbase and wasn't dying of spam and other issues).

eg. imagine a forum where all comments had to be accompanied by an argument map but the forum didn't have any way of banning/deleting accounts. I have little doubt that the debates would be of higher quality, since argument maps have been shown repeatedly to help, but would anyone use that forum for very long? I have much doubt.

Apply a negative karma award to [REDACTED 2]'s account. This makes him pay for false downvotes twice over.

They don't seem false to me. That's pretty clearly his opinion.

I'm assuming "false" here is based on the assumption that upvotes/downvotes should be a reflection of the voter's opinion of the particular comment being voted on, not his or her opinion of the user making the comment without regard to the content of the comment itself. Mass downvoting seems like a strategy for conveying a message about a user, not a comment, and that is plausibly a subversion of the karma system's intent.
That rationale for the karma system would be the rankest hypocrisy. To facilitate the upvoting of particular commenters—regardless of content—LW records karma totals.
It does no such thing. It tracks how much people have been upvoted to estimate their contribution to the community; it tracks monthly totals to estimate how much of that was recent.

As a Bayesian, you should count not a user's downvote, but P(downvote | user, facts about the post). If user X downvotes half of all posts, each downvote is 1 bit of evidence. If user X downvotes one out of 16 posts, each downvote is 4 bits of evidence.

The tricky part is how you combine facts about the post with the prior over all posts in cases where user X hasn't voted on many of user Y's posts. What if user X downvotes 1 comment in 50, and they've only voted on one of Y's comments before, and down-voted it? I could talk about how to do that correctly, b... (read more)
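The per-voter weighting described above is just surprisal: a downvote from someone who rarely downvotes carries more information than one from a habitual downvoter. A sketch of the arithmetic (an illustration of the comment's proposal, not anything LW actually implements):

```python
import math

def downvote_bits(downvotes, total_votes):
    """Bits of evidence per downvote from a voter whose empirical
    downvote rate is downvotes / total_votes: -log2(rate)."""
    return -math.log2(downvotes / total_votes)

print(downvote_bits(8, 16))  # 1.0 -- downvotes half of everything they vote on
print(downvote_bits(1, 16))  # 4.0 -- downvotes 1 post in 16
```

The unsolved part the comment gestures at — combining this with a prior when voter X has barely interacted with author Y — is where the real modeling work would be.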

I vote for public shaming of the mass downvoter. "Banning" them is fine but creating extra accounts is fairly trivial.

I kind of strongly disagree with this. What kind of community are we, if you have to worry about being publicly shamed for an offence that gets banned at some later date? Creating new accounts is trivial, but since it requires a high karma rating to mass-downvote people, it's likely that the downvoter was at least somewhat invested in their membership here.
What exactly are you trying to get me to pattern-match to with that rhetorical question? We're a totally normal kind of community with the ability to express social disapproval when people within the community act like dicks.
Well, I don't like the idea that I'm a member of a community that might do that. Maybe I'm suffering from the typical mind fallacy here? Still, I think I'll refrain from any public shaming myself [] - unless you have arguments otherwise, which I'd be interested to hear.
A reminder of what downvotes are for. This is what we would get more of on a website without downvotes or banning; but the banning could be circumvented by creating new accounts.
This is a bit hyperbolic, no? I expect downvotes had little to do with the grandparent getting moderated.
Sometimes I wish that comments were made wiki style with tracked edits and the like so that you could always see what someone was responding to.
Thanks for the tip. That was...something.

I have seen advice that you can vote however you want. If concentrating your downvotes on one user is an action that gets punished, then a use of voting has effectively been prohibited. Thus I am thinking there is a line drawn in the water on accepted voting policies.

For those who have a beef with users rather than posts, maybe a separate channel could be developed, such as a votable user karma (maybe require a reason for user-downvotes?). Mass downvoters go for the posts as a proxy for the user.

For what kinds of legit use is the association between a username and a post needed? Could we do withou... (read more)

I guess it was silently assumed that you would read the things, and then vote, not just execute a content-independent voting mechanism.
From other comments, that's not actually true. You can only downvote 4 times your own karma. I'm guessing few knew that.

(1) is clearly the appropriate action to take in the first instance.

It's harmless and could be beneficial. It doesn't close the case, though.

Whatever happened to "no penalty without a law"(nulla poena sine lege)? How did we go from "what should our policy on this be" to "let's do a public spectacle, come up with some rules and apply them retroactively"? LW, I am disappointed.

This isn't a legal system; it's a blog forum. Legal systems impose themselves on non-consenting participants, and therefore are properly subject to procedural and moral restrictions that do not apply to consensual social systems.

Trying to apply the proper restrictions of a legal system to an informal, consensual social system leads to all sorts of weirdly biased results. Another example is the popular notion that "innocent until proven guilty" applies to conversation or personal opinion about a person who is believed to have done something wrongful — at least, when the accused is a member of my tribe, and thus someone who I empathize with.

This isn't really very retroactive - mass downvoting has always been disallowed/looked down upon, it is just that [it was claimed that] there was no way to tell who is an offender in order to punish them.

It really is, though. There is a large difference between looking down upon a behavior, and punishment (public shaming or whatever else, though the first is particularly distasteful). (Not that there has been a clear consensus on the topic*, anyways. I, for one, can see circumstances under which it is warranted, and circumstances under which it's not. Of course, once there's an official policy on it, I'd defer to that.) There is plenty of behavior I observe every day (in "real life") which I look down upon / which is generally looked down upon. That is not at all the same as those people being fined / thrown in jail / whatever analogue you envision. * Case in point: This discussion exists.
Go on
I'm sure you're imaginative enough to come up with such circumstances yourself. But since you asked, Sherlock, enjoy a hypothetical villain's soliloquy: 'Well kept gardens die by pacifism' can be applied here (though it cuts both ways). This community's unique characteristic is its high signal-to-noise ratio. Consider if someone consistently flooded the board with perceived-no-value-ergo-noise comments. Given the low frequency of comments on this board overall, such a dribble would easily dominate/drown out the daily comment feed. Consistent downvotes to tell that commenter to, in effect, "go away", could be one response, especially given the near-trivial effort, the use of provided mechanisms and the lack of clear guidelines against (as evident by the frequent discussions on the issue, which did not focus on the technical aspect alone). Note that outright telling someone "this is the wrong place for you, go away" has also occurred. Is this subjective? Of course it is, what else would it be? Note that this was just one example. I could provide many more (as, I hope, could you), depending on time constraints and how finely we partition the categories. That being said, I see more circumstances under which it is not warranted.
Can you provide an example where this wouldn't be obvious to the moderator examining the case? I honestly put a very low probability on the occurrence of even a single punishment due to a false positive.
I'm not going to call out specific users here. You know, privacy concerns, a strong preference against public shaming, and all that.
To suggest that a user whose comments you'd find both ubiquitous and worthless would also be so judged by a moderator "examining the case" seems like folly to me. Do you by extension suggest that people always vote the exact same, too? When you downvote a comment, would you expect everyone else to also downvote that comment, because the downvote would be "obvious"? Why would it be different with a moderator? Such things are evidently subjective. There is a difference between using your own voting to convey a message, and bringing in some authority figure to "examine the case". All these courses of action are not equal. I'm sure you can imagine comments that you yourself find interesting, while others find worthless. I myself have written many such comments, little puns in particular. Just imagine a long string of them. There you go.
If a user's history is controversial (both upvoted and downvoted) rather than only downvoted, then punishing you for downvoting all (90%+) of their comments (if they have more than a few) is completely justified. At any rate, here is an extra filter to prevent false positives even further - if you look at the comments where only the offender has downvoted and you see neutral comments (those which would have neither been downvoted nor upvoted normally) there, then you know there is a problem.
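The two filters described — a near-total downvote rate against one author, plus solo downvotes on comments nobody else touched — could be sketched as follows. The tuple data model, the names, and the 0.9 threshold are all invented for illustration; a real report would run against the site's vote tables.

```python
def looks_like_mass_downvoting(votes, suspect, target, threshold=0.9):
    """votes: iterable of (voter, author, comment_id, value) tuples with
    value +1 or -1. Hypothetical data model; the threshold is an assumption."""
    votes = list(votes)
    target_comments = {c for _, a, c, _ in votes if a == target}
    suspect_downs = {c for v, a, c, val in votes
                     if v == suspect and a == target and val == -1}
    if not target_comments:
        return False
    ratio_flag = len(suspect_downs) / len(target_comments) >= threshold
    # Second filter: downvoted comments where the suspect is the only voter
    # at all -- comments that would otherwise have sat at neutral karma.
    solo = [c for c in suspect_downs
            if all(v == suspect for v, _, c2, _ in votes if c2 == c)]
    return ratio_flag and bool(solo)

# Suspect 'X' downvotes every one of 'Y's ten comments; only one of them
# has any other vote, so both filters fire.
votes = [('X', 'Y', 'c%d' % i, -1) for i in range(10)] + [('Z', 'Y', 'c0', 1)]
print(looks_like_mass_downvoting(votes, 'X', 'Y'))  # True
```

Combining the two filters keeps a single ordinary disagreement (downvoting one bad comment that others also disliked) from ever tripping the flag.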
Can I ask for some reasoning underlying this? In particular, I'm interested in what, in general, justifies punishment and who gets to decide whom to punish for what.
There is a discussion on this exact topic here [].
I see no discussion on what justifies punishment, I see people saying "I would punish the guy", "I would not punish the guy". And the issue on who gets to decide is mostly absent, too, there is only that faceless "we".
You're missing the point: you can downvote a comment because you want to see fewer such comments. There is no reason -- absent rules -- that couldn't extend to all of a poster's comments, if you want to see fewer comments by that person. There are benefits to sending clear signals. This isn't some official process -- it's simply an expression of your personal preferences, not anyone else's. Your fixation on "offender" and "punishment", conflating "I don't like this behavior" with "this behavior must be punished", is a bit frustrating, given we have rules on a host of issues but no rule yet on this one. It's simply using your own karma. Anyways, I'm signing off on this.
You're missing the point that we are not talking about downvotes of specific comments, but downvotes of most comments by an user.
The only time I tried this I quickly gave up, because the poster I didn't want to see used the monthly quotes threads to karma boost. A malicious user that's intelligent can pretty much troll however they want and still have positive karma simply by copy/pasting quotes into the quote threads. This doesn't really apply to the topic at hand, but the quote threads are a serious karma problem, because they provide an effort-free way to generate huge amounts of karma.
Yeah, I'm arguing against that mindset as a whole. I'd much rather people only downvote the content they disagree with, and leave the other comments alone. Anyway, so you think it is fine if, for example, I got annoyed at you during this discussion and went and downvoted all your comments unrelated to this, in order to see fewer comments by you in the future?
I wouldn't like it, but there is a difference (a rather large one) between not liking it and thinking that person should be punished for it. I'm not in control of his/her downvote button, he/she is. This topic has come up many times, and yet no consensus has ever been reached. And it wasn't only because of technical problems, either. Your preferences are your own, just leave me out of them, in a nutshell. (I did sign off on this, but didn't want to leave your question unanswered. Stupid red letter symbol!)

Even if we don't apply the rules retroactively to whoever this is, it's a perfect opportunity to come up with some rules and then apply them in the future.

The basic rule of all social spaces is "don't be a dick"; more detailed rules are elaborations of this. This seems to be considered a pretty clear violation.
One person's pedantry is another person's dickishness. One person's nitpicking is another person's "what a jerk". One person's pruning the weeds is another person's harassment. We all frequent social spaces all the time. You say the basic rule of all social spaces was "don't be a dick", and yet [] ... which is fine, some situations call for decisive signals (which may include "being a dick"). I've always appreciated the no-sugar-coating clear feedback signals this community sends, while others have bemoaned exactly that. I don't see signals using provided feedback mechanisms as out of bounds, absent a clear rule.
Well yeah, but that's why it requires discussion. (More "constitutional article" than "rule", maybe.) OTOH, this appears to be the sort of behaviour that causes new rules, and may cause retrospective ones.
I haven't seen any mention of that principle in any of the CEV or TDT articles. If you want to argue that the principle should be a substantial part of a decision framework I invite you to write an article laying out your reasoning in detail.
Let sleeping basilisks lie!
Eh, it seems somewhat self-evident that it does not make a lot of sense to expect agents to avoid punishments which do not exist at the time, as such. CEV, to the point that CEV(mankind) wouldn't be an empty-set anyways, would probably include it just by fiat of it being one of the pillars of the rule of law, and since it's about "the people we'd want to be", presumably your CEV at least would contain it, as would mine. There is no relation to TDT, since we're talking about preferences of groups of agents, not general instrumental rationality.
You mean we aren't talking about the choice whether or not to punish someone? I don't see how that holds. If you only discuss a decision theory in the abstract but are not willing to use it for practical decisions, then you are likely going to have a bad decision theory. Don't compartmentalize, and don't stop using your decision theory when things get political. In this case, punishing people for doing something that's bad for the community can discourage other people from doing something bad for the community against which we don't have explicit rules. Even if I agree that a nation state should only punish in a court of law based on explicit rules, that doesn't mean I think the same is true for privately owned online communities. If I throw a party and someone misbehaves, I can throw that person out even without him violating a previously explicitly stated rule. A lot of social interaction works by people simply observing implicit rules and focusing on being nice to each other.
TDT can't tell you how to optimally arrange the flavors of an ice cream cone if you don't input which flavors you like. "But how can that be, that is a choice too?" Decision theory tells you which decisions are optimal, given your preferences. My preference is rule of law (which also makes sense as an instrumental value), I suspect yours is too. TDT doesn't care. It can't tell you your preferences (though it can tell you which instrumental values would make sense). I don't understand. TDT isn't my practical decision theory (I'm a meatbag, not an abstract agent), nor did I bring it up. Nor is it applicable anyways. Optimality is viewpoint dependent.
For nation states with a monopoly on power I consider the rule of law to be valuable, but I don't consider it to be a terminal value for online communities or when I host a party. In most social interaction, punishing people for violating implicit community norms is quite common. The person who engages in the block downvoting might even think of themselves as punishing someone else for doing something bad. If it isn't applicable, then what's wrong with TDT? How do we fix it? Even if you don't personally follow TDT, you are here on LW, and while you are here, arguing that the policies you advocate make sense under TDT has merit if you want to convince others.
Let's stop with the reference class tennis. This community does have established and explicit rules, such as "no proposing violence, not even hypothetically". It is not like one of your parties, I suspect. And while you may tell someone to leave you alone, or to get out, I wouldn't say that official punishments for breaking unofficial "norms" are the rule. At least hopefully nowhere I'd like to be. Note how this community has grappled time and again with coming up with a clearly defined norm on this, which would decohere the congruence even if LW were like a party. Meet-ups have resorted to clip-on notes indicating whether hugging is ok with that person. So much for implicit norms for social interaction. Someone who engages in block downvoting would be sending a signal, using the tools provided. What's the obsession with the punishment concept? (Warning, flippant aside: Do we need a good public beating, or what?) In a word, nothing. If you use TDT going off of "I don't want agents to be punished for actions against which there are no rules", then TDT will include that when giving you the optimal course of action. If you use TDT going off of "I don't care whether agents are punished for actions against which there are no rules", then it won't include that. It's the reason why paperclippers and anti-paperclippers can both use TDT. TDT doesn't judge your preferences :-).
Those rules are not in a place where a new member would easily find them. Some people even think there's a rule against politics when there's no such thing on LW. You were the person who started speaking about punishment. For my part, when I was a forum moderator I didn't think of myself as punishing weeds when I tried to rip them out of my healthy garden. I did ban people, but not to punish them; rather, because I thought the forum would be healthier without them.
That doesn't tell you everything there is to know about hugging. There are still issues, like the length of the hug, that aren't fixed by the rules. Especially in a community of munchkins, you don't want to allow people to game the rules by moving exactly within them but violating their spirit.
The rules also explicitly include a no harassment of individual users [] clause.
At least read the explanation of that rule first, would you? There you go: Your leading OP title, including the phrase "mass-downvote harassment" is insincere reasoning, because it is circular. It has never been established whether mass-downvoting should always be considered "harassment". You'd consider it so. I don't. Come now, be so courteous as to assume other people have reasons for their behavior. Not even the wiki, which does include an example, makes mention of mass downvoting even though the topic has come up many times. The reason for that is not "well, we can't list everything, we don't list hacking a server, for example". That would be a ridiculous argument. One is using established feedback mechanisms, one isn't. New rule: You must always give reasons for each and every vote, otherwise you'll be publicly shamed for harassment. Downvotes are a user's individual and private choice. He/She can use it to confer whatever message he/she so chooses. Don't like it? Make a rule against it. Such as an upper bound on allowed downvotes. Oh wait, such an upper-bound has already been implemented? And it doesn't disallow downvoting most of a user's comments? Maybe your moral intuitions on the matter aren't as general as you'd like them to be. Signing off on the topic, though I'll leave you the last word, if you so choose.
Indeed. How is banning anyone going to provide a stronger signal than an announcement saying "this is a banworthy offence starting now"? It seems to me that all we can possibly accomplish here is throwing away possibly-constructive commenters. It's highly probable that anyone with enough karma to do any sort of damage with this is a high- or medium-value user; downvotes have a cap based on one's own karma total. One could argue that this sort of behavior is antisocial and implies the perpetrator is probably not someone we want on the site. But that's exactly the logic that leads to downvoting everything a person has posted! As one of the people who was downvoted, I find it highly probable that whoever was responsible (in my case, and probably others) was acting in good faith. How could they have known to abide by a rule we are just now introducing?
I don't see a public spectacle - the names were redacted, etc. And Kaj's post seems to be asking "what should our policy on this be" to me.
I was referring to an upvoted (at the time) comment calling for public shaming []. I thought this community especially would be more sensitive to the whole public shaming thing. Also, OP should a) have messaged other editors first and b) not presumed that a valid reason for redacting private information is the "presumption of innocence". The reason for not disclosing private information is that it's private. D'uh.

Also, OP should a) have messaged other editors first

I'm sure that the infamously antiauthoritarian LW community would just have loved it if the editors had just decided on a course of action behind closed doors.

I've been actively modding /r/DebateReligion (not exactly a topic which preempts drama) over on Reddit (not exactly a community which dislikes drama) for years, and at least from my experiences there I wouldn't dream of putting such questions to the community (especially with delicious "redacted" drama bait) before coming to some sort of consensus with my fellow moderators. You could of course argue (and I'd agree) that this is a more mature community. Also, I wouldn't cite "presumption of innocence" when apparently unaware of much more pertinent principles (no retroactively applied punishments, not even hinting at a disclosure of legitimately presumed-private data). I do agree that a specific rule going forward would be a good idea, given how often this topic crops up. To establish such a rule -- via public discussion, if you so choose --, dangling (however unwittingly) the allure of a witch-hunt would have best been left out entirely.
At the moment nobody is actively modding LW so the comparison doesn't really hold. The community mostly mods itself by downvoting posts it doesn't like.
But they could have just pinged the guy and said he was causing a problem they didn't want to deal with. Maybe he would have let it go. The best solution is to have the problem just go away.
No: the best solution is for the problem to go away and never come back. Signs point to there being multiple sources of mass downvotes.
Yup. Can confirm there were at least two. [Cite.] []

I was referring to an upvoted (at the time) comment calling for public shaming.

Would you also object if I said (which I am not saying, just asking hypothetically) that I suggest the public shaming only for the downvotes that will happen in the future, after this rule is agreed upon? In other words, is retroactivity your true rejection?

I don't consider a ban on retroactive punishment a good rule for a website, because a creative person can always find more behaviors that are obviously wrong, but still not forbidden yet. For example, is there an official rule against hacking the server and deleting someone else's account? (Or, as an extreme example, finding the other user in real life and hurting them?) If someone did it, would it be okay to defend them by saying: "well, it wasn't said explicitly that such behavior is forbidden, therefore we should protect their privacy"?

Rules against retroactive punishment and the like are made for countries, which have more time and resources to debug their laws, more power to enforce them, et cetera. LW is not a country; it does not have to follow the same rules.

This raises the question: why do we bother posting rules at all, then? And the answer, of course, is that such unwritten "rules" are not immediately obvious to everybody.
It's a good remark, but the answer is yes, I would still object. Public shaming, near-regardless of whether there was an overstepping of an explicit, an implicit, a retroactively applied or a (insert attribute) rule, is a topic I have very strong opinions on. It can cause large amounts of mental anguish, especially given a susceptible population, as I suspect LW's INTJ crowd tends to be. It's simply not worth it; it's toxic, especially when there are so many other options left to resort to (PMs, technical limitations, etc.). If there were ever one public shaming of anyone condoned by the editors (providing private information for the purpose of punishment), I'd leave this community, never looking back. Rule or no rule. Also, I object to your slippery slope argument. I see a fundamental difference between using tools as provided (downvote buttons) and hacking a server.
And getting mass-downvoted isn't stressful? Someone hounding another person through all their comments isn't stressful? Someone doing that should be ashamed. We can't make them ashamed without public shaming. It's either that or banning them. I don't care about whether what they're doing is technically allowed by the system. They're doing something bad for the community, and they should be stopped.
Ahem. It's not fun, but having a single, anonymous individual express dislike through such an abstract means is nowhere near comparable to public shaming by a community you identify with, I assure you. I'm sorry, was that a rhetorical question intended to slip in an unsupported hypothesis? (For the record, in case it isn't clear: if it weren't for the fact that being mass-downvoted means I'm currently unable to, I would definitely have downvoted your above comment.)
Sure, that's why it works. Public shaming is supposed to be stressful, in order to get that person to STOP. One is a socially mediated system of enforcing how the ingroup behaves, whereas mass downvoting someone on your own is an individual attempt to enforce how the group behaves. My point was that its being stressful is not a good reason not to do it. If someone identifies with your ingroup and you think they're ruining it, then there is a mismatch between group identities. No group is obligated to associate with anyone who wants to be in it.
To be clear: It being stressful is a reason not to do it, but it may be outweighed by the benefits, right? Two points: one, you pretty openly compared the two. Since they are different by several orders of magnitude, I think it impacts your point somewhat: should we do A Very Bad Thing to punish/disincentivize something far less unethical or harmful? Two, I'm having a conversation with a mass-downvote-er in another tab. They seem pretty ... corrected. I seriously doubt they will do this again. And yet, amazingly, this happened without me choosing to so much as hint who they were, let alone "publicly shaming" them.
That sounds like a budding bromance. Hopefully not some kind of Stockholm syndrome.
I'm not sure why you think your own personal definitions of what's an order of magnitude more or less x, or your anecdote about getting someone to change their ways, are helpful. I personally think punishing someone for fucking with the community is less bad than someone taking it on themselves to scare people away. But you clearly disagree. I don't know who you're having this conversation with, but multiple people approached Eugine Nier and tried to talk to him about it. So clearly that's not a solution that will always work. Side note: "Funny how that works" is pure rhetorical shit. It has no place in trying to convince someone of anything. All it does is show how "superior" you are to people who already agree with you.
I assumed that some evidence would be more useful than speculation in a vacuum. Are you seriously alleging that the "personal" opinion of someone who has relevant evidence is weaker Bayesian evidence than your own personal opinion? I think it's reasonable to suggest that different punishments fall into different reference classes. If Eugine had been told that the moderators were aware of his actions, and that repeating them would result in a ban then either he would have stopped, or he would have been banned. Most LessWrong users have no desire to break the rules, as evidenced by the fact that they are still here. The rules were at best ... unclear ... in this case. You know, you have a point there. It doesn't add enough to the comment, and it's somewhat discourteous to you. I'll change it.
Eugine DID get told that. If you look at the recent thread with Kaj, he allegedly talked to Eugine, and told him to stop. You're right that my opinion isn't really more valid than yours.
Huhm. I got the impression that Eugine was asked to explain his actions, then banned when he did not reveal mitigating circumstances: I don't think he was asked to stop and refused - although I admit it would be nice to see the relevant messages, rather than Kaj's secondhand description.
Just stop with downvotes altogether then, since even smaller amounts can be stressful. Allowing spammy no-value posters to drown out the few valuable comments is also bad for the community, but whatever. What's with the whole "shut up" routine (in your other comment)? You're shaming yourself, here. Not going to engage with you anymore.
That's better, at least. People should know what they're in for. It would be a large breach of trust for the moderators to make public what had been assumed private.
Is there an explicit law against publicly and retroactively applying rules to someone? No? Shut up.

Can any user downvote, or is some karma needed? It would be good if only users with karma of at least, say, 20 could downvote, because that would prevent creating a new account for safe mass downvoting. (A similar system is used at Stack Exchange.) I'm saying this because if we adopt a policy of detecting and punishing mass downvoters, their logical next step would be to mass-downvote using a different account.
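For what it's worth, the threshold-plus-cap idea is easy to sketch. This is purely illustrative code, not LW's actual vote-handling logic; the function and constant names are made up. The 20-karma threshold is the suggestion above, and the 4x-karma budget is the existing cap on total downvotes mentioned elsewhere in this thread:

```python
# Illustrative sketch only -- not LW's real code.
MIN_KARMA_TO_DOWNVOTE = 20  # proposed threshold from this comment

def can_downvote(user_karma: int, downvotes_cast: int) -> bool:
    """Allow a downvote only for established users who haven't
    exhausted their downvote budget (4x their karma, per site rules)."""
    if user_karma < MIN_KARMA_TO_DOWNVOTE:
        return False  # fresh sockpuppets can't downvote at all
    return downvotes_cast < 4 * user_karma  # existing budget cap

print(can_downvote(5, 0))     # new account with 5 karma: blocked
print(can_downvote(100, 50))  # established user under budget: allowed
```

Under this scheme a throwaway account can never downvote, and even an established account runs out of ammunition eventually.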

My opinion (but I have low confidence in my ability to correctly handle these situations) is the following:

If an obvious case of mass-downvoting is detect...

I agree with most of your points, but there is absolutely no way to prevent discussion. Even if it is somehow blocked on LW, it will happen elsewhere.

Yeah, blocking topics of discussion on LW is one of those things that doesn't work out so well.

Understatement of the year! :D

One of my proudest stupid moments on the Internet was when I was chatting to Mike Godwin (I know him through Wikimedia, he was their lawyer for a while) and I compared someone to Neville Chamberlain. ... talking to Mike Godwin. He just said "don't talk to me about WWII stuff, there's no happy ending to that discussion."

I didn't mean: "you are not allowed to discuss this". I meant: "this is our decision, and it's final; you can discuss it if you wish, but it won't change the outcome". In other words, I recommend against deciding a penalty for a specific case by a community vote. Because it could easily become a poll about whether the offender's faction is more powerful than the victim's faction, or vice versa.
You can't cast more than 4x your karma in total downvotes. This will waste too much of their time, and it is a bit too subjective.
That would be a lot of downvotes for someone who has been around a while. I'd get bored with downvoting long before I used up my quota.
That's exactly why I use the downvoting scripts. :-D Sorry, couldn't resist.
Only if mass downvoting is frequent. (Not sure if that's the case.)

It may be too late to influence the consensus decision here, but I'm one of the people downvoted, so what the hell.

As one of the users who was targeted - I don't know who by yet, it may have been a different person to [REDACTED 2] - I'm much more interested in being able to ask why and get new information. That is, after all, the purpose of the karma system, no? To provide information?

Sure, it feels shitty to be suddenly subject to various anti-troll measures because some anonymous individual apparently finds you sub-par. Seriously, it does. It messes up ...

As far as harming the goals of the community, is mass downvoting of a single user any different from mass upvoting of a single user?

Yes. Comments with -1 or lower karma are remarkable and generally passed over as provisionally bad unless the reader takes care not to; comments with 1-3 karma are not notable.
Has mass upvoting of a single user (by a single user -- that is what we are talking about here) ever happened?
I don't know, but mass upvoting is less likely to be complained about.
I wouldn't complain about a mysterious leap in my karma, but I'd find it unusual enough to wonder what was happening -- as when the scoring rules were changed to make votes on top-level posts in Main count for 10 points instead of 1.

What's the point of the up/down votes in the first place? If the object is reducing bias, doesn't making commenting a popularity contest run counter to this purpose?

Quality control. Ideally, people should not upvote/downvote based on conclusions they disagree with. I recall hearing that the highest-karma comment ever was a criticism of MIRI, which would suggest that this works as intended. I'm not sure how to check this, though. ETA: found it. []

Might a half-point penalty for downvoting change the incentives enough to prevent mass downvoting? Perhaps combined with Viliam_Bur's minimum-karma suggestion. Generally I favor ideas that don't make more work for the moderators.

(I am not imagining having half karma points; rather, docking one karma point for every two (or n) downvotes.)

It'd make it somewhat more salient at the very least, but technical patches like these often come with unintended side effects. The moderation burden here is pretty light as it stands; as long as the tools exist to do the analysis I don't feel it's an undue burden on the mods to empower them to deal with things like this. I'll also note that it's historically been a lot easier to get mod time than to get dev time.

A simple-ish solution is for a mod to PM the offender and ask for an explanation, and figure out a corrective (if necessary) and retributive (if necessary and appropriate for deterrent) solution. Then implement it, make a public note, and be done with it. Very imperfect, mostly due to personal impatience with forum meta, but mostly gets the job done.


If the direct victims (the mass-downvoted) are not calling for blood, warning [REDACTED 2] seems sufficient and closer to minimal recourse than, say, bannination. If an explicit policy is made against mass-downvoting, treat future offences by [REDACTED 2] or others with harsher punishment for violating an explicit policy. Do any of the mass-downvoted feel that that would be insufficient?

[This comment is no longer endorsed by its author]

I suggested in another thread that successive downvotes on (1) one person's account (2) over a certain number of downvotes (3) within a set period of time should prompt the system to tell the user that they have to sacrifice personal karma until (x) days later in order to use up/downvotes.
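As a sketch of what the detection side of that could look like, here is a sliding-window rate limiter. The window size and threshold are placeholder values standing in for "a set period of time" and "a certain number", not a concrete proposal, and none of this is existing LW code:

```python
from collections import deque

WINDOW_SECONDS = 3600         # placeholder: "a set period of time"
MAX_PER_TARGET_IN_WINDOW = 5  # placeholder: "a certain number"

class DownvoteLimiter:
    """Flag a voter who downvotes one target too often within the
    window, so the system can demand a karma sacrifice."""
    def __init__(self):
        self.events = {}  # (voter, target) -> deque of vote timestamps

    def record(self, voter: str, target: str, now: float) -> bool:
        q = self.events.setdefault((voter, target), deque())
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()  # drop votes that fell outside the window
        return len(q) > MAX_PER_TARGET_IN_WINDOW  # True => throttle
```

With these numbers, `record` returns True once the sixth downvote on the same target lands within an hour, at which point the proposed karma-sacrifice rule would kick in.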

Something like this is already in place, where a person has to sacrifice karma in order to comment on a post that itself is below a certain karma threshold.

Make all downvotes cost one karma point, and make it so downvotes are weaker - perhaps 5 or 10 downvotes needed to cancel a single upvote. This really disincentivizes downvoting, but you'll do it anyway if something is just too over the top.

The only reason I'd keep downvoting in general is for use against things like the pedophilia posts from a few months ago - if downvoting tells someone that we don't want them around, there are cases where I'm ok with that.

People already rarely downvote. (Well, except for those who do the mass downvoting.) Making the downvotes 5 times weaker, that's almost like removing them completely. Almost no one would bother.

What's your source for this? Regarding making votes weaker and more expensive, I was thinking about that from the standpoint of 'downvoting in general is bad', and I would still bother. One other possibility for downvoting might be the 'conservation of ninjutsu' trope: a person's downvoting power might decrease as the number of downvotes increases, so that one person can really only nuke 10-20 karma by block downvoting instead of an arbitrary amount.
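One way to make 'conservation of ninjutsu' concrete is a geometric decay on successive downvotes against the same target, which puts a hard ceiling on the total karma one voter can remove. A sketch, where the decay rate is an arbitrary choice of mine tuned so the ceiling lands at 20 points:

```python
DECAY = 0.95  # arbitrary; total damage is bounded by 1/(1 - DECAY) = 20

def downvote_weight(prior_downvotes_on_target: int) -> float:
    """Each successive downvote by one voter against one target
    counts for less; the geometric series caps the total damage."""
    return DECAY ** prior_downvotes_on_target

# Even a thousand block-downvotes sum to just under the 20-point cap.
total_damage = sum(downvote_weight(i) for i in range(1000))
```

Unlike a flat cap, this never fully silences a voter's legitimate downvotes; it just makes piling onto one person yield diminishing returns.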
Look at a random LW thread, or perhaps this one. Comments with positive karma are many; comments with negative karma are rare. (Someone could make a script to look at the latest N articles and determine the exact ratio, but I'm too lazy.) Maybe that just means that we have a smart and civilized discussion here, so the system is working as intended -- people upvote more than they downvote because they are satisfied more often than dissatisfied.

The more I think about it, the more it seems to me that the problem is that the karma system was not designed to prevent this kind of abuse (downvote-bombing an enemy), so it is vulnerable here... but the proposed solutions would be vulnerable to other kinds of abuse. (What happened to holding off on proposing solutions []?) Perhaps we should start by declaring the properties we want the system to have, listing a few examples of possible abuse, and then try designing a system that has the desired properties and can resist the abuse. Maybe we don't even agree on what those properties are. For example:

- Really bad content (disliked by most people) should be hidden before everyone has to read it.
- People who write really bad content should be prevented from writing much.
- On the other hand, the system should not allow one person to "destroy" their enemy, if other people have no problem with what that person writes.
- It shouldn't be possible to get more power merely by creating a dozen sockpuppet accounts.
- Etc.

The current system is not perfect, but it seems to be close to these properties (more than many other websites). For example, even the downvote-bomber can give you only one downvote per comment, so if your average comment karma is greater than one, you will survive. And if your comments are good, then perhaps instead of the person who stupidly downvoted them, we should blame the people who liked the comments but didn't upvote. -- I see an analogy to a country w

Has Redacted2 broken any explicit site rules? I personally feel that unwritten rules of etiquette are not punishable. For that reason I strongly oppose options 4 and 5. For comparison, if Redacted2 had hacked the site to get around the karma requirements for downvoting that would be very different. As it is Redacted2 clicked the readily-available thumbs down button while following karma requirements. This is not a punishable offense.

That doesn't make it correct, and it doesn't mean Less Wrong's policies can't change. If the policies change, then options 4 and 5 can be considered for future use.

Strongly disagree. I've been involved in user-facing administration before, and binding yourself to a narrow set of policy rules (especially on a site like LW, where they aren't well documented) is about as useful as drinking antifreeze. It's tempting, sure, since we've all been socialized to believe in the rule of law and no ex post facto punishment and all that good stuff. But the truth is that that only works in government because government runs a well-developed legal framework that's had centuries to fill in its loopholes and smooth its rough edges. And it still requires a lot of discretion on the part of its various enforcers.

You can't make loophole-free policy that's more specific than "don't be a jerk", not if users are going to be interacting with each other in a reasonably natural way. You don't have the time or the expertise. That means you'll occasionally need to extend or invent policy to deal with cases that aren't well covered, and that means you'll occasionally piss people off. It's okay. It comes with the territory.

That said, block downvoting is common enough behavior that we probably should have policy to deal with it. Ideally policy and code, but that's probably not going to happen.

I think unwritten rules of etiquette probably are punishable, in the sense that "don't be an ass" is essentially rule zero of every site ... but surely they were acting in good faith here, to discourage low-quality submissions (in however misguided a manner) as the karma system is supposed to? I really don't see why this can't simply be punished going forward. Ideally, the people responsible will know to stop and no-one will ever need to be banned for it. After all, isn't the main problem that it was driving away users?
