I've gotten sufficient evidence from support that voiceofra has been doing retributive downvoting. I've banned them without prior notice because I'm not giving them more chances to downvote.

I'm thinking of something like not letting anyone give more than 5 downvotes/week for content which is more than a month old. The numbers and the time period are tentative-- this isn't my ideal rule. This is probably technically possible. However, my impression is that highly specific rules like that are an invitation to gaming the rules.

I would rather just make spiteful down-voting impossible (or maybe make it expensive) rather than trying to find out who's doing it. Admittedly, putting up barriers to downvoting for past comments doesn't solve the problem of people who down-vote everything, but at least people who downvote current material are easier to notice.

Any thoughts about technical solutions to excessive down-voting of past material?


Nancy, I support Scott's (Yvain's) approach. Just say you are a dictator and ban at a whim (or perhaps ban "virtue-ethically" rather than "deontologically" -- "we don't want your type around here.") Publishing rules just invites people to bend them.

Just say you are a dictator and ban at a whim

There is a slight problem in that LW is not Nancy's personal blog to be shaped by her whims.

Voting for a new CEO is dramatically more effective than the board trying to micromanage the current CEO with rules. Find a reasonable person and let them be flexibly reasonable.

LW is not a corporation and I don't think it needs a Great Leader, especially of the CEO type.
That only works if there is a mechanism for getting rid of CEOs who abuse their power. See comment above. Also note, that the victims of said abuse are generally not in a position to defend themselves.
Ben Pace:
In Eliezer's post about gardens dying through pacifism, he says that in online gardens, you should either trust the moderators, or garden. A place where moderators get really worried about who to moderate is a place where trolls get in.
There is a mechanism. It will be fruitless in this case, as Nancy is not abusing her power.
I believe that Nancy is conservative enough with management that this is not a real danger.
As Romeo noted, Nancy was appointed roughly by popular acclaim (more like, a small number of highly dedicated and respected users appointing her, and no one objecting). I think it's reasonable in general to give mods a lot of discretionary power, and trust other veteran users to step in if things take a turn for the worse.
That makes sense for visible behavior, like trolling, but this ban is about the invisible behavior of mass downvoting. I don't think anyone is worried about Nancy's judgment of such actions, but she is worried about the difficulty of discovering it. Algorithms might be useful for enforcing rules on voting or discovering patterns therein.
I don't think it's going to be very difficult to discover Eugene's new account once he makes it. The real difficulty is making it not worth his while to keep coming back.

--------------------------------------------------------------------------------

I don't count myself as either a rationalist or a community member here, so this is an opinion of a somewhat sympathetic outsider (take it for what it is). But I think you guys should find a way to throw the nrx out, and let them start their own community. I think they are going to do more harm than good in the long run. Yvain started to clean house already on his blog, because he noticed the same.
If we have a "no politics" rule it should apply to nrx. Nrx people can participate if they are able to do so apolitically.
...nor do anything else as dickish as downvoting the hell out of somebody's every single comment because they disagree with one of them or use the anonymous account to vote.
We don't have a no politics rule, though we may have a no politics custom. It's difficult to talk rationally about politics, but that doesn't mean it's impossible.
Seconded. FWIW, I think of anyone who posts here regularly as a Wronger! (I know, I know, you disagree with other people here about how to do causal inference and about the insightfulness/worthiness of academics — but disagreeing with the rest of the gang on some specific topic is pretty common, I reckon, and not nearly enough to get you kicked out of the treehouse.)

This I disagree with. The only neoreactionaries I remember being obnoxious enough here to raise a real stink are Eugine_Nier and Jim, and Jim hasn't posted here since 2012. That's too thin a basis for kicking out a particular political group, especially since Eugine_Nier being here has had some benefit. (I have occasionally seen them shake people out of misconceptions.) It's just that Eugine_Nier's abuse of the voting system outweighed/outweighs that benefit. (That wasn't Eugine_Nier's only downside, but it was the big one.)
I don't think I have substantive disagreements with folks here who know about the topic. I try to do outreach with others, not the same as disagreement :).
Why? Because you'd rather have an echo-chamber than a rationalist community? Because you secretly suspect the NRx's are correct and are worried their arguments will persuade more people to agree with them?
Because nrx attracts the type of people like you or "Jim."
You mean people willing to say things likely to be true even if it isn't socially acceptable to admit they are. Yes, I can see why people who are uncomfortable with reality would have a problem with that.

You mean people willing to say things likely to be true even if it isn't socially acceptable to admit they are.

People holding similar positions to yours but expressing them in much less dickish ways have included, off the top of my head, Konkvistador (whose total karma is 88% positive), nydwracu (91% positive), Vladimir M (93% positive). Nyan Sandwich and Moss Piglet appear to have deleted their accounts, but I don't recall them being downvoted much either -- nor can I recall many people lamenting the presence of any of said commenters.

For comparison, advancedatheist is 59% positive and sam0345 (most likely James Donald) is 53% positive; also eridu, who expressed radical feminist opinions in a way almost as obnoxious as Jim expresses his, has since deleted his account, but IIRC his % positive was also in the mid 50s.

So no, the social acceptability of a statement does not just depend on its factual content.

This may sound like an intricate Song of Ice and Fire fan theory, but has anybody checked whether eugine_nier and Jim Donald are the same person? For example, can we compare the IP of sam0345 and Eugine's accounts? Alternatively, Yvain probably has access to the IP address for both posters. (I am not the same username2 as above. This is my first post using the anonymous username2 account)

It seems very unlikely (< 10%) to me, given they have regularly commented on the same Armed and Dangerous threads for years, with no obvious reason for one person to use two aliases at the same time (also, Eugine has commented on Jim's blog e.g.).

Very unlikely, I think. Eugine is, or claims to be, from somewhere ex-Soviet, and writes like a non-native. Jim seems straightforwardly American. I don't see any obvious reason for either of those to be fake.
Consider the possibility that Ilya doesn't mean what you say he means.
Or because you and Jim are being tedious assholes nobody likes to hang out with, while going on about the same predictable set of not socially acceptable stuff for years and years without having anything new and interesting to say after a while.
The universal counterargument of crackpots.
First, stop putting words in people's mouths. Second, as rationalists, we'd convert to NRx in an instant if we had any sufficiently strong reason to believe NRx is correct.
This isn't obvious to me, or at least would benefit from a separation between NRx critiques and NRx proposals / attitudes. One can think that the NRx view of liberal democracy is much more correct than the liberal democracy view of liberal democracy without thinking that the NRx prescriptions are correct.

Any guesses on what his next handle will be? I'm thinking CthuluWillEatYourBabies.

Someone has upvoted and downvoted a lot of comments in this thread using this account. I have manually reverted them. Please don't vote using this account.

Thank you. I could ban that account. Thoughts about whether that would be a good idea?

Please don't ban the anonymous account; at least a couple of people use it regularly. It is rare that anyone uses it for voting. This was the first time I logged in and noticed so much upvoting and downvoting in a single thread. Sometimes I find a couple of votes in a thread, and I often revert them, but that's it. Maybe there were previous incidents, but I haven't noticed them. Of course, things like this rely on goodwill. If someone started abusing it, there would be no choice except to ban it.

By the way, thank you Nancy. You do a job that is often unpleasant, but necessary.

Definitely don't unless strong evidence emerges of an actual serious problem. Having an effectively anon account is valuable.

OK. It won't be banned.

I don't know whether it would be possible to make it into a non-voting account, but if so, would that be a good idea?

It would certainly be a good idea. The account has no business of casting votes.

Counterexample: someone uses this account to ask a question and upvotes people who give helpful replies.
I'm concerned you're underestimating VoiceofRa. Consider the worst-case scenario: that he is the same person behind the several antisocial accounts he has been associated with. His streaks of valid contributions indicate a degree of sophistication and high function. The harm he causes, in spite of several unsuccessful interventions in the past, marks his malignance. Most importantly, his persistence in contributing here indicates that he has found a sense of community here. This issue will not easily be resolved by banning every account he makes, or catching every workaround. It will be resolved by helping him resolve the underlying issue: the alleged incompatibility of his style of forum usage with that of the broader community. Has anyone suggested alternative communities he may prefer, for instance? Forgotten the sequences so quickly? Worst case, this is only the beginning. Do not make enemies of annoying gnats, for they can be a million times more annoying than dragons.
Likelier explanation: this is a trolling space too fun for him to abandon.
He isn't really into trolling. He just reiterates his views and heavily downvotes his opponents.
Troll is one of those words tailor made for tabooing.
And, while we're at it, to make it impossible to change its password as someone did to the Username account.
Alternatively, make a script for (a) undoing all votes, and (b) resetting the password, and run it every midnight.
(b) can't be done if someone has changed the password before the script ran... unless the script has access to the e-mail the account was created with so it can do the "password forgotten" thing, now that I think about it... Does anybody know the e-mail of the Username account?
A script can talk directly to the database. It doesn't need passwords to access an account.
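A minimal sketch of such a nightly job, assuming the script has direct database access as described. Every table and column name here (`votes`, `comments`, `accounts`, etc.) is hypothetical, invented for illustration; the real schema would differ.

```python
import sqlite3


def nightly_reset(db_path, account="Username2", fresh_password_hash="..."):
    """Undo all votes cast by the shared account and reset its password.

    Because this talks to the database directly, it works even if
    someone has changed the account's password through the UI.
    """
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()

    # (a) Reverse every vote the shared account has cast: subtract each
    # vote's direction (+1/-1) from the target's score, then delete it.
    cur.execute("SELECT target_id, direction FROM votes WHERE voter = ?",
                (account,))
    for target_id, direction in cur.fetchall():
        cur.execute("UPDATE comments SET score = score - ? WHERE id = ?",
                    (direction, target_id))
    cur.execute("DELETE FROM votes WHERE voter = ?", (account,))

    # (b) Reset the password hash, regardless of what it was changed to.
    cur.execute("UPDATE accounts SET password_hash = ? WHERE name = ?",
                (fresh_password_hash, account))

    conn.commit()
    conn.close()
```

Scheduled from cron at midnight, this would make both password hijacking and vote abuse of the shared account self-reverting within a day.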
+1 to the anon who used the "username2" account to post the parent comment. IIRC, there was an original "username" account, the use of which was collectively discontinued several months ago because its value to the community was co-opted by abusive anonymous users logging into it. I would hate to see that happen a second time, to "username2". If it did happen a second time, that would disincentivize necessary, honest, and respectful use of the anonymous account. If that became the norm, even if there were a "username3", the perception might become that there isn't a general anonymous account on LessWrong for users. If people stopped trusting the resource completely, that would be sad. Also, it might send the signal to disrespectful anonymous users that they can make or use as many anonymous accounts as they want, and the moderators would do little or nothing to stop them. This last point strikes me as unlikely, though, especially if LW mods have the power to block specific IP addresses.
Actually, someone just changed the password. Kind of incredible it lasted the 2-3 years it did without that happening previously.

Nancy, thank you for the hard work you do and the tough calls you have to make. The admin's job is a lonely one, and not sufficiently appreciated. As someone who has done and is currently doing lots of admin stuff, I know that from personal experience. So thank you!

Any thoughts about technical solutions to excessive down-voting of past material?

Stack Overflow attempts to discover and reverse serial voting with a script that runs daily. It seems likely we can do a similar thing.

Very much this. Undo the problem. I think I would just zero all the votes on any account caught doing this. If you're going to game the system, you can't play, and your prior votes get consigned to the bit bucket.
The Stack Overflow script basically looks like a rate limiter. If we don't want to bother with reversing votes, we can just put the rate limiter upfront, similar to how many login programs would start to throw in delays and time limits for repeated bad-login attempts. Something along the lines of "You are allowed 8 votes within one minute, 32 votes within one hour, 64 votes within 24 hours, and 128 votes within a week". These numbers are arbitrary, of course, and the real limits should come out of the statistical analysis of actual voting patterns.
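Such a multi-window limiter is simple to sketch. The window sizes and caps below are the arbitrary numbers from the comment above, not tuned values, and `VoteRateLimiter` is a hypothetical name, not an existing LW component:

```python
import time
from collections import deque

# (window in seconds, max votes allowed in that window).
# Arbitrary placeholder numbers; real limits should come out of
# statistical analysis of actual voting patterns.
LIMITS = [(60, 8), (3600, 32), (86400, 64), (7 * 86400, 128)]


class VoteRateLimiter:
    """Tracks one user's recent votes against several sliding windows."""

    def __init__(self, limits=LIMITS):
        self.limits = limits
        self.timestamps = deque()  # times of this user's recent votes

    def try_vote(self, now=None):
        """Record a vote if every window's cap allows it; return success."""
        now = time.time() if now is None else now
        # Discard timestamps older than the largest window.
        horizon = now - max(window for window, _ in self.limits)
        while self.timestamps and self.timestamps[0] < horizon:
            self.timestamps.popleft()
        # Reject if any window is already at its cap.
        for window, cap in self.limits:
            recent = sum(1 for t in self.timestamps if t > now - window)
            if recent >= cap:
                return False
        self.timestamps.append(now)
        return True
```

The per-user-pair limit mentioned in the replies would just be a second limiter keyed on the (voter, target) pair, with much smaller caps.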
SO also has direct rate limiting (40 votes a day). I do think that it makes sense to have a separate rate limit for user-user links; maybe I can vote 100 times a day and have it be normal, but voting even 10 times a day on a particular user might mean something funny is going on.
Yes, it would be reasonable to have separate per-user/user link limits. Though the limits could be a function of the number of comments that user made recently -- if someone gets into a manic mode and posts dozens of trash comments in a few hours...
Right now, users with sufficiently high karma have access to the vote buttons on the userpages of users with sufficiently low upvote percentage, as far as I can tell to enable this sort of downvoting. It seems likely that exemptions could be baked into these rate limits just as easily.

I think that impact of an upvote or a downvote should be inversely proportional to how often that person votes.

New accounts would get more heavily weighted votes under this scheme, so it would need a fix for the case of new users.
It should also be proportional to a voter's karma. An upvote from established users like gwern or NancyLebovitz should carry more weight than an upvote from a lurker.
sounds like the road to an echo chamber.
Depends. If the initial set of high karma users isn't homogeneous or if they aren't willing to upvote comments just because they agree with them then not necessarily.
those sound like difficult maxims to impart.
That would be simple to implement in the form of giving each user, say, 10 votes per day, non-accumulating.

I've banned them without prior notice because I'm not giving them more chances to downvote.

I think a "We've observed X. It appears to be bad behavior. Do you have an alternative explanation?" discussion should be started in any case. Otherwise there will be no justice for false positives.

Is the "because I'm not giving them more chances to downvote" a real argument? It won't be if it's technically possible to prohibit downvoting (maybe by temporarily taking away their Karma, so that the Karma-based voting limits would kick in), or if it's possible to eventually retract their (recent) votes, so that current votes won't matter as much.

I don't think you get effective forum moderation by having public discussions about every moderation action. Are you aware of a functioning online community which does things like that?
No, not public of course. The currently 122 comments to the present post illustrate how it's very distracting to announce moderation actions in a way that invites public discussion.

The users most likely to engage in retributive downvoting are those who engage in hostile debate and consequently have low karma ratios themselves (VoiceofRa has 68% favourability). Perhaps you could disable downvoting functionality for those with a karma ratio lower than 80%? Since poor quality of contributions is another big factor in low karma ratios, this measure would have the added benefit that our most competent users would have more power.

Unless your goal is exclude folks like me (which could be your goal - I could be considered a marginal user), 80% is too high.
You are coming to this conclusion on the basis of a single data point, right? "Please provide proof of your complete integration with the hive-mind before being allowed near the downvote button" X-D Do you want LW to become an echo chamber?
Lumifer, your karma ratio is 80%.
I'm only at 70% because of massive downvoting from Eugene Nier, who may very well also be VoiceOfRa. I'd be at over 80% just without Nier's downvotes, even not also excluding VoiceOfRa's downvotes. The solution to downvoting is not to make it easier to hurt people like me with downvotes.
Think about how much effort everyone spends talking about karma, and trying to fix karma, and protect karma from abuse. Is all that effort worth it for the low signal karma provides? Who cares about amassing internet points, let quality speak for itself.
Suppose some comment thread has a thousand comments. Without a karma system (as e.g. on Slate Star Codex), I can either waste several hours reading them all, or quickly scroll the page looking for something catching my eye and hope I don't miss something interesting. With a karma system, I can entrust the readership with the task of telling me which comments are the most worth reading, and read them first.
Entrusting the readership to tell you what's good is what you want the karma system to do, but it's not what the karma system is actually doing. Aside from sockpuppets, tribal voting, etc., it's just not possible to generate good recommendation systems from karma systems (otherwise everyone would be doing it). Even more sophisticated "recursive" systems like pagerank don't really work, due to collusion, link farms (sockpuppets, basically), and related issues. Google moved to a more complex system and has a full-time police force to try to make that more complex thing work under the constant threat of sabotage. I think in practice you have to go by name or by direct judgement of content. Karma gives you the illusion of a rank, but it's a pretty terrible rank.

--------------------------------------------------------------------------------

Slate Star Codex's comment system has lots of other problems aside from lacking karma that make it difficult to follow what's happening.
Yes, but it's not as terrible as ranking comments chronologically.
That's a false dilemma. You don't have to rank comments either chronologically or karmically. You can just look at what the comments say (or go by name, if people have made a name for themselves). In other words, have a you-specific karma in your own brain. I mean, what did we expect? It's not so easy to have a ranking of quality.

--------------------------------------------------------------------------------

At least chronological order is objective; karma's reliability is inversely proportional to how busy the idiots are.
I don't always have that much time on my hands.
Well, you know what they say, for every problem there is a solution that is simple, obvious, and wrong.
Maybe karma should be hidden. Hacker News doesn't show it.
What is the point of having it at all?
I mean, there are sound psychological reasons that having karma would increase participation and quality. That's why reddit overtook classic newsboards.
That sounds like a causal claim to me! Are you sure reddit took over newsboard due to karma? Or is it accident + rich-get-richer (power law) effects? Something else? How do you know how much karma helps?
It is! No, but I would be willing to bet that it had an effect (Digg also over took newsboards, and it had karma in common with reddit). No, I think karma had something to do with it. No, I think karma had something to do with it. I don't.
Slashdot had Karma years before Reddit and was not nearly as successful. Granted it didn't try to do general forum discussions but just news articles, but this suggests that karma is not the whole story.
Slashdot was very successful... at least enough that I know its name.
There's already too much of a pull towards the consensus opinions here; this would punish us NRxers quite a bit.

Downvotes on posts/comments older than X time affect the downvoter's karma the same way as they do the downvoted's.

Downvotes made after X time from the original posting affect karma at a rate of y%

Downvotes from users with karma below X don't affect the downvoted's karma score

All of these are made on the assumption that malicious downvoters are engaged in a E-Peen measuring contest using Karma as the measuring tool.

I would rather just make spiteful down-voting impossible

That would require presumably automatic distinguishing between "spiteful" and "non-spiteful" :-/

A very simple solution is to follow Reddit and block any kind of voting on old (="archived") content.

my impression is that highly specific rules like that are an invitation to gaming the rules.

LOL. Would you like to apply this generally, e.g. as in "The principle of Rule of Law is a bad idea because it's an invitation to gaming the laws. Much better to have a tyrant...err... benevolent philosopher-king decide matters because it's harder to game him".

"Spiteful" was vague. "Mass down-voting" (I assume it to be spiteful) would be better. How fast does reddit archive content? Given my druthers, I'd permit upvoting on old content-- we don't seem to have a big problem with it being abused.
Reddit archives threads after six months. At that point, you can't comment or vote, but you can edit and delete your own comments.
That's still not a technical definition. I don't like introducing asymmetries into voting. That "so what is your affirmative answer?" slope is quite slippery.
I favor more of a polycentric legal system. Call on someone agreeable to all parties to solve disputes when they happen on a mostly case-by-case basis with some generally agreed guidelines.
The mono- or polycentricity of the legal system doesn't have much to do with the Rule of Law, aka how hard the rules are. If the rules are soft and are bent on a regular basis, it doesn't matter how many people are doing the bending.

Same technical solution I always offer: An upvote or downvote should add or subtract the number of bits of information conveyed by that vote, conditioned on the identity of the voter and the target.

In the simplest version, this would mean that if person X upvotes or downvotes everything written by person Y, those votes count for nothing. If X upvotes half of every comment by person Y, and never downvotes anything by Y, those votes count for nothing (if we assume X missed the comments he didn't vote on), or up to 1 bit (if we assume X saw all the other comments).

Better would be to use a model that blended X's voting pattern overall with X's voting on Y's posts and comments.

I'm not sure what the exact mathematical proposal here is, but I shall guess the following rule: if X has voted positively on Y p times out of n votes so far, X's next upvote will confer a karma score of -log((p+1)/(n+2)), and the next downvote log((n-p+1)/(n+2)). X voting positively on each of Y's n posts will give a total karma of log(n+1); negatively on everything gives -log(n+1). Logs are base 2.

Votes never count for nothing, because X's votes on Y so far are only a sample from which we cannot conclude that X will vote with certainty either way. This actually rates newbies' votes above everyone else's in importance: X's first vote on Y is always worth the maximum possible, +/- 1.

The general principle of the proposal is that to the extent that you can predict an opinion, you are less incrementally informed by finding out what it is, which as a matter of information theory is true. How far might one take this? For example, it suggests ignoring anyone's political views once one has identified them. SJWs and NRXs alike would be the first to be tuned out. If they want to be paid attention to they would have to find ways of saying new things, although (since they Have Views that determine all their views on individual things) this is likely to converge on finding new ways to say old things, i.e. writing clickbait.

On the reader's side, one should primarily read people one knows nothing about, at least until one has "solved" them and can predict all their further output well enough to get diminishing returns. Personal relationships likewise: they can't last if they're based on novelty. Once you have solved a potential partner, then you can decide whether you want to continue to spend time with them for what they are, rather than what they may be. This is the purpose of the rituals of dating and courtship.

I'm not expressing an opinion for or against this, just following the idea.

ETA: Some mathematical simulation shows that if half of X's vote
I didn't think of that, but do you think karma shouldn't depend on the order in which votes are made? Shouldn't a person who gets 20 downvotes followed by 20 upvotes have higher karma at the end than a person who had 20 upvotes followed by 20 downvotes? The first indicates improvement; the second indicates getting less interesting over time.

I am confused by how you're doing the computation, though. If half of X's votes on Y are positive and half are negative, I would expect to compute X's total contribution to Y as zero. I wouldn't keep a running sum of X's contribution to Y's karma on each thing Y has said. We can also go back and recompute the contribution to previous comments as X makes more comments. But I'd probably rather have an adaptive algorithm, so that the score on individual comments reflects the situation at the moment the rating was made.

Even if we did it that way, though, this sensitivity is not a real problem. Nearly every adaptive algorithm or learning algorithm has that kind of sensitivity. It never matters in practice when there's enough data. Text compression algorithms don't have drastically different compression ratios if you swap text input blocks around.
This doesn't work for new posters.
There's no good reason for the votes of new posters to count for much. If they don't, there are fewer sockpuppet problems.
Why do you think that? When you have no prior, either assume P(up) = P(down), or (better) use the priors gotten by averaging all votes by all users. That's standard practice.
So if someone pops up that everyone thinks posts utter dreck and votes accordingly, those votes would count for nothing?
What are the odds that every person on LessWrong will see and vote on every comment made by this one person? This is not a real scenario. If you're worried about it, though, "a model that blended X's voting pattern overall with X's voting on Y's posts and comments" will solve that problem.
(Note: my earlier comment was nonsense, based on a misreading of what Richard wrote.) That does seem to be what Phil says, but in the scheme I have in my head after reading Phil's proposal, things go a little differently. For the avoidance of doubt, I am claiming neither that Phil would want this nor that it's the right thing to do.

* Suppose A votes on something B wrote. They have some history: A has voted +1, 0, -1 on u, v, w of B's things in the past. Here u+v+w is the total number of things B has ever written.
* I think we probably want to ignore the ones A hasn't voted on. So we care only about u and w.
* What should our prediction be? One simple answer: we assign probabilities proportional to u+1, w+1 to votes +1, -1 on A's next vote. (This is basically Laplace's rule of succession, or equivalently it's what we get if we suppose A's votes are independently random with unknown fixed probabilities and start with a flat prior on those probabilities.)
* We might actually want to start with a different prior on the probabilities, which would mean offsetting u and w by different amounts.
* Now along comes A's vote, which is (let's call it) a, either +1 or -1. The score it produces is -a log(Pr(A votes a | history)); that is, -log((u+1)/(u+w+2)) if A votes +1, and +log((w+1)/(u+w+2)) if A votes -1. This is added to the score for whatever it is B wrote, and to B's overall total score.

With this scheme, an upvote always has a positive effect and a downvote always has a negative effect, but as you make the same vote over and over again it is less and less effective. For instance, suppose A upvotes everything B posts. Then A's first upvote counts for -log(1/2); the next for -log(2/3); the next for -log(3/4); etc. The total effect of n upvotes (and nothing else) is to contribute log(n+1) to B's score.

There are some things about this that feel a little unsatisfactory. I will mention three. First: although "vote counts
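For concreteness, the scoring rule can be written out as a short function. This is only a sketch of the scheme as described (Laplace-smoothed probabilities, base-2 logs as in Richard's version); the function names are made up for illustration.

```python
from math import log2


def vote_score(u, w, a):
    """Karma contributed by A's next vote on B under the
    surprise-weighted scheme: score = -a * log2(Pr(vote = a)),
    with Laplace-smoothed probabilities from A's voting history
    on B (u past upvotes, w past downvotes; non-votes ignored).
    """
    if a == +1:
        p = (u + 1) / (u + w + 2)
        return -log2(p)   # positive, shrinking as upvotes repeat
    if a == -1:
        p = (w + 1) / (u + w + 2)
        return log2(p)    # negative, shrinking as downvotes repeat
    raise ValueError("a must be +1 or -1")


def upvote_run_total(n):
    """Total effect of A upvoting B's first n posts and nothing else.

    The sum telescopes: -log2(1/2) - log2(2/3) - ... = log2(n+1).
    """
    return sum(vote_score(k, 0, +1) for k in range(n))
```

A first vote is worth the full +/- 1; a voter who only ever upvotes one target sees each successive upvote count for less, with n upvotes totalling log2(n+1).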
That seems pretty reasonable to me. [EDITED to add: except that what I was saying "seems pretty reasonable" was not in fact what Richard wrote; I misread. See comments below.]
Why should a posting by someone who everyone else agrees has never had anything useful to say be judged less bad than the same posting by someone who does on occasion post upvote-worthy things?
Oh, I beg your pardon -- I misread what you wrote as "... that thinks everyone posts ..." rather than "... that everyone thinks posts ...", and answered accordingly. Having now (I hope) read the words you actually wrote, my intuition agrees with yours, but I suspect that it may only be artificial extreme cases that produce such counterintuitive outcomes. I will think about it some more.

Making voting public would go a long way.

Making voting public would go a long way

...towards LOTS of drama, enemy lists, political intrigue, etc.

Stack Overflow makes cast vote counts public, so one could get an idea of who is doing the voting. Figuring out who is voting on what would only be possible if you're watching everyone always (and even then it won't work if user displays only update periodically). I do think that there's value in someone looking at the vote graph, be it humans or automated processes.
That value depends on how much value there is in karma to start with. Preoccupation with karma is a bad sign.
The map is not the territory, but that doesn't mean there's no value in cartography!
Cartography is valuable, but if an explorer spends all his time reading and redrawing maps without venturing outside, something is wrong :-/

+1 for something like "no more than 5 downvotes/week for content which is more than a month old", but be careful that new comment on an old article is not old content.

Does he have any known or suspected sockpuppets?

If some people are able to see who downvoted a specific person the most, maybe they can also see who upvoted someone the most?

Messaging you privately, as my own solution works best if people are unaware of the specifics. (Posting this here to remind others with potential suggestions to consider whether the same issue applies to their ideas.)

Ah, security through obscurity.
All security is through obscurity: obscurity of a bitting pattern, of a private key, of a password, of an algorithmic optimization strategy.

That phrase, and the sneer encoded in it, does have some wisdom, but it's more complex than "If you're relying on obscurity you're doing it wrong". It is a criticism of a much more specific fault than relying on obscurity, which is, after all, the only thing that has ever worked. That criticism is this: suppressing knowledge of security faults provides only a false sense of security, in a system whose faults are thus held unknown to everybody except those with the strongest specific interest in possessing knowledge of those faults. Which is to say, if your strategy for a security fault is to hide it rather than fix it, you're setting yourself up for failure later.

But that is all moot, because my suggestion had more in common with bypassing a lock than with protecting one. Secrecy is desired because of something most like other people's poor security, and the desire that they not improve it.

I'm thinking of something like not letting anyone give more than 5 downvotes/week for content which is more than a month old.

No likey.

People should not be discouraged from actively reading older posts and voting on them. Quite the opposite.

I've gotten sufficient evidence from support that voiceofra has been doing retributive downvoting.

Roughly how many downvotes are we talking here? Seeing a proposed limit of 5 in a week makes me wonder. 5 seems quite low to get exercised about.

People should not be discouraged from actively reading older posts and voting on them. Quite the opposite.

My feeling is that people should be able to reply to older posts. And I think upvoting helps bring attention to good comments and posts. I'm inclined to think that there's enough downvoting in some modest number of months to give an adequate signal.

Voiceofra cast over 800 downvotes against just three posters. I'm sick of dealing with this stuff. I want it to not happen. 5 downvotes per week on old posts doesn't seem like a really onerous restriction, but I don't downvote a tremendous amount, so I might be typical-minding things.

5 downvotes per week is well below trouble, I think. 15 starts looking like karma-vampirism to me if someone is doing a vendetta.

Some people get dispirited if their karma is dropping, especially if there's no apparent reason for it.

I think this is a reasonable rule.
I agree that 800 is too much, and grounds for banning.

Since you can "unvote" any particular karma vote you've made, wouldn't it be easy enough to implement limits on downvotes of a particular person per day, week, month, or year? You reach your max, and the next time you try, you are prevented and get a message saying "It is a bannable offense to karma bomb other users". That could be a rollover plus a triggered message sent to your account.

(Note that the limits could be parameterized in increasingly complicated ways, scaled to the karma of the "victim", perhaps. The point is not "the perfect set of limits", but to find something better than the current limits. The problem can be ameliorated, not annihilated in all hypothetical cases. Life is full of tradeoffs.)

Problem limited, and offenders who try to game the system are warned (I think the second part is important too). That should take care of all but the most committed douchebags without any required intervention from you. As one of the Powers That Be Who Does Something Useful Around Here, I'd hope that your needs in your chosen useful duties would have pull with the feature development queue.

(EDIT: It might be easier to run a nightly scan notifying people when they have gone over their limits.)
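A sketch of the per-target limit described above: block the vote once a voter hits their weekly cap against a given author, and return the warning message. The cap of 15 comes from the "karma-vampirism" threshold suggested upthread; the class and names are hypothetical, not part of any real codebase.

```python
from collections import Counter

MAX_PER_TARGET_PER_WEEK = 15  # assumed threshold, tune to taste
WARNING = "It is a bannable offense to karma bomb other users."

class DownvoteLimiter:
    """Track each voter's weekly downvotes against each target author."""

    def __init__(self, cap=MAX_PER_TARGET_PER_WEEK):
        self.cap = cap
        self.counts = Counter()  # (voter, target) -> downvotes this week

    def try_downvote(self, voter, target):
        """Return (allowed, message); warn and block once the cap is hit."""
        key = (voter, target)
        if self.counts[key] >= self.cap:
            return False, WARNING
        self.counts[key] += 1
        return True, None

    def reset_week(self):
        """Clear all counts; run weekly (or via a nightly scan, as the
        edit above suggests)."""
        self.counts.clear()
```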

If I didn't already trust Nancy, and were unaware of VoiceOfRa's post/comment history and the long-running discussion of his forum behaviour, this post would frighten me.

Or to put that another way, if you had less knowledge of the situation you would draw incorrect conclusions. What of it?
Most non-regular users of LessWrong would draw that incorrect conclusion. For instance, prospective users, journalists and critics.
What kind of fear do you think it would produce in you?

I think it is better if banning decisions are not made public, even (especially) to the banned user.

The banned user would not notice anything, but their posts, messages, and votes would not appear to anyone else.

Wouldn't it impose a huge load on the servers to maintain multiple versions of the website for each banned user?
No. This is called 'shadowbanning' and is a standard practice.
Where is it standard practice? I'm surprised that people don't notice and come back under new names.
Here's a description of it on Reddit. Apparently they also now do account suspensions. Shadowbanning is also used on Hacker News and Craigslist, according to wikipedia.
The folks at reddit weren't happy with shadow-banning humans (as distinct from spammers), and eventually started suspending accounts.
That would depend on how many banned users there were. Also, I don't think there would need to be whole versions of the site for each banned user; there would just be different versions computed on the fly.

I'm quite uncomfortable with the suggestion for another reason: it wouldn't work. An active user would probably notice something was wrong in less than a day, and if they were banned for excessive hostility, they'd presumably come back under another name.

My first thought was that it might be bad for the group to have people disappear for no apparent reason, but then it occurred to me that people stop posting for all sorts of reasons.
I actually think it would work pretty well. The banned user sees all of their contributions and any IP used by the banned user also sees their contributions. All other users and IPs do not see it.
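A sketch of the visibility rule just described, filtering comments per viewer at render time rather than storing separate site versions: the banned author, and any viewer coming from one of that author's known IPs, still sees the content; everyone else doesn't. This is an illustrative sketch only; every name in it is hypothetical.

```python
def visible_comments(comments, viewer, shadowbanned, viewer_ips=None, banned_ips=None):
    """Return the comments a given viewer should see under shadowbanning.

    comments:    list of dicts with an "author" key
    viewer:      username of the person viewing the page
    shadowbanned: set of shadowbanned usernames
    viewer_ips:  IPs the current viewer is connecting from
    banned_ips:  map of banned username -> set of their known IPs
    """
    banned_ips = banned_ips or {}
    viewer_ips = viewer_ips or set()
    out = []
    for c in comments:
        author = c["author"]
        if author not in shadowbanned:
            out.append(c)  # normal content, visible to all
        elif viewer == author or viewer_ips & banned_ips.get(author, set()):
            out.append(c)  # banned author (or their IPs) still sees it
    return out
```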