Just a short post to highlight an issue with debate on LW; I have recently been involved in the debate on covid-19 origins here. User viking_math posted a response which I was keen to reply to, but it is not possible for me to respond to that debate (or any other), because the LW site has rate-limited me to one comment per 24 hours since my recent comments are at -5 karma or less.

So, I feel that I should highlight that one side of the debate (my side) is simply not going to be here. I can't prosecute a debate like this. 

This is, funnily enough, an example of brute-force manufactured consensus: there will be a debate, people will make points on their side, and the side I am arguing for will be missing, so observers will conclude that there are no valid counterarguments rather than that there were, but they were censored.

I think this is actually quite a good model of how the world has reached the wrong conclusion about various things (which may include covid-19 origins, assuming covid-19 was actually a lab leak, which is not certain). This is perhaps even more interesting than whether covid-19 came from a lab or not: we already knew before 2019 that bioerror was a serious risk. But I feel that we underestimate just how powerful multiple synergistic brute-force consensus mechanisms are at generating an information cascade toward the incorrect conclusion.

I'm sure these automated systems were constructed with good intentions, but they do constitute a type of information cascade mechanism - people choose to downvote, so you cannot reply, so it looks like you have no arguments, so people choose to downvote more, etc. 

 

52 comments
[-]Dagon

I've complained before about the ridiculous over-impact of strong votes on low-total comments.  A single voter having a bad reaction can EASILY take one or more comments from positive to negative with zero accountability or repercussions.  

And, of course, now that we're using this measure as a control, that causes undesirable impact - throttling of otherwise-positive members.  Also, the recency bias (karma on recent posts used for throttling) can cause very outsized impact on "bursty" posters, who can go a week or three with very few comments, then get into a conversation and have a dozen.

On the flip side, part of the motivation for it was to reduce debate-style comment cascades.  They're often annoying, and only rarely better than having an explicit dialog(ue) that's posted at the top level, or just fewer and more complete comments and postings.  I have to admit that repeated accusations of missing the point or not reading the material (and the indignant responses to them) don't add much value to me, and I'm happy to have less of that on LW.

[-]habryka

In order for a rate limit to trigger the user needs to be downvoted by at least 4 different users for users below 2000 karma, and 7 different users for users above 2000 karma (relevant line of code is here). 

This failsafe I think prevents most occasional commenters and posters from being affected by one or two people downvoting them.

I do think the failsafe fails for Roko here, since I think we only check for "total downvoter count", which helps with new users, but of course over the hundreds of comments Roko has written over the years he has acquired more than 7 downvoters. I think replacing that failsafe with "downvoters in the last month" is a marginal improvement, and I might make a PR with that.

[-]RobertM

(We check for "downvoter count within window", not all-time.)

Oh, I am an idiot, you are right. I got misled by the variable name.

Then yeah, this seems pretty good to me (and seems like it should prevent basically all instances of one or two people having a grudge against someone causing them to be rate-limited).

[-][anonymous]

Update: -2 disagree on this. Extremely frustrating to receive anonymous general feedback.

Trying my best here but I get downvotes a lot and often it feels like it's based on the opinion expressed in a comment. I seem to get upvoted a lot when I put pictures of evidence in the comment with simple empirical cites.

Downvoters never reply. I suspect because they are obviously afraid I will retaliate their downvotes with my own...

Your moderators have also disciplined me several times, giving general tips where sometimes I can't find a single message of mine that satisfies their claim. Guess I can't see my own mistakes. It would really help to have a policy of citing, or making some kind of list of, the unsatisfactory comments and providing it to the user. You could also provide this list before punishing...

If that's too labor intensive, you could expand the reaction system to send a separate notification to the user each time a mod reacts to a user's content.

Actually ok now that I am thinking, why don't downvoters have to select the text and provide the negative feedback in order to issue a downvote? Contributing to a temporary ban without feedback is cruel...

[This comment is no longer endorsed by its author]

Actually ok now that I am thinking, why don't downvoters have to select the text and provide the negative feedback in order to issue a downvote?

Forcing people to write a whole sentence or multiple paragraphs to signal that they think some content is bad would of course have enormous chilling effects on people's ability to express their preferences over content on the site, and reduce the signal we have on content-quality a lot.

Downvoters never reply. I suspect because they are obviously afraid I will retaliate their downvotes with my own...

I would be quite surprised if it's about vote-retaliation. I think it's usually because then people ask follow-up questions and there is usually an asymmetric burden of proof in public communication where interlocutors demand very high levels of precision and shareable evidence, when the actual underlying cognitive process was "my gut says this is bad, and I don't want to see more of this". 

I would think that everyone with 2000 karma has been downvoted by at least 7 users. That's a lot of posts and comments.

It seems like maybe this algorithm deserves a little rethinking. Maybe the past month is all you need to change, but I don't know what the rest of the algorithm is. -5 is a very low bar for limiting a high-net-karma user, since that can be produced by one big angry downvote from another high-karma user.

It's net karma of your last 20 comments or posts. So in order for one person to rate limit you, you would have needed to write 20 comments in a row that got basically no votes from anyone but you, at which point, I probably endorse rate-limiting you (though the zero vote case is a bit tricky, and indeed where I think a lot of the false-positives and false-negatives of the system come from).

I do think the system tends to fire the most false-positives when people are engaging in really in-depth comment trees and so write a lot of comments that get no engagement, which then makes things more sensitive to marginal downvotes. I do think "number of downvoters in the last month" or maybe "number of downvoters on your last 20 comments or posts" would help a bunch with that.

I would love to see it switch from being based on votes on your most recent n comments to being votes in a time window. If someone has posted one comment a month for 20 months and none of them got votes except for one strong downvote six months ago, that doesn't seem like it should get rate limited.

[-]habryka

Yeah, it's not crazy, but I currently am against it. I think if a user only comments occasionally, but always comments in a way that gets downvoted, then I think it's good for them to maintain a low rate-limit. I don't see how calendar time passing gives me evidence that someone's comments will be better and that I now want more of them on the site again.

the purpose of the system is to give people a breather if they get upset, yeah? that emotional activation fades with time.

That's a nice-to-have, and I do think it reduces the correlation across time and so is a case for having the rate-limit decay with just time. But mostly the point of the rate-limit is to increase the average comment quality on the site without banning a bunch of people (which comes with much stronger chilling effects, since their perspectives are then not represented on the site at all), while still allowing them to complain about the moderation and make its costs known.

...okay, but there have in fact been quite a number of people who make high quality comments normally, who have complained and made the costs to them known, expressed that time based decay would have been better... and you haven't changed it.

in particular, someone whose name escapes me right now who was new to the site and wrote carefully reasoned comments every time, but who was saying things highly critical of most things she commented on - and who was quite careful to not use emotive language - was getting downvoted consistently, got rate limited, and nearly immediately left the site.

We definitely need to separate "some types of comments get incorrectly downvoted" from "throttling is harmful in some cases".  It drives me nuts that some kinds of criticism get downvoted, even when they're well-made and relevant.  But I don't see any solution that doesn't have very large reduction in the overall information content of voting.

There's no software solution but when you actually see such criticism you can vote it up strongly. If we have enough experienced people in this community who have the karma to cast strong votes and willingness to do it, the problem is solvable.

Not sure who you are referring to, but we made some tweaks to various parts of the system over the last few months, so there's a decent chance it wouldn't happen again.

I currently am reasonably happy when I review who gets rate limited when, though it's definitely not easy to see the full effects of it. I think a time decay would make it a lot worse.

That makes more sense, thanks.

This is placing a high bar on the tone of comments. But the culture of collegiality is valuable in a subtle and powerful way, so I'd probably endorse it.

I would like to see what Roko has to say about my post, so now I'm very curious how this works. Is this saying that you get rate-limited if you have at least 7 people downvoting you in the past 20 comments, regardless of how many people upvote you or how many times those 7 people vote? Also, does this count both overall and agreement karma? 

No, it's if at least 7 people downvote you in the past 20 comments (on comments that end up net-negative), and the net of all the votes (ignoring your self-votes) on your last 20 comments is below -5 (just using approval-karma, not agreement-karma).

7 people downvoting you in the past 20 comments

… and a net negative (in the last 20 comments). See details: link

[-]Raemon

And notably, the 7 people have to have downvoted you on a comment that got below 0. 
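Putting together the details scattered across this thread (net karma of the last 20 comments below -5, at least 7 distinct downvoters, and only downvoters on comments that ended up net-negative counting toward that threshold), the trigger condition can be sketched roughly as follows. This is a hypothetical reconstruction from the comments above, not the actual LessWrong source; the class, function names, and default thresholds are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    karma: int                                    # net approval karma, excluding self-votes
    downvoters: set = field(default_factory=set)  # ids of users who downvoted this comment

def is_rate_limited(comments, karma_threshold=-5, downvoter_threshold=7):
    """Rough sketch of the rate-limit trigger as described in this thread.

    Triggers when the net karma of the user's last 20 comments is below
    `karma_threshold` AND at least `downvoter_threshold` distinct users
    have downvoted comments that ended up net-negative.
    """
    window = comments[-20:]
    net_karma = sum(c.karma for c in window)
    # Only downvoters on comments that ended up below zero count
    # toward the distinct-downvoter failsafe.
    downvoters = set()
    for c in window:
        if c.karma < 0:
            downvoters |= c.downvoters
    return net_karma < karma_threshold and len(downvoters) >= downvoter_threshold
```

On this sketch, a single grudge-holder cannot trigger the limit on their own: even if one user strong-downvotes every comment, the distinct-downvoter count stays at 1.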

I love the mechanism of having separate karma and agree/disagree voting, but I wonder if it's failing in this way: if I look at your history, many of your comments have 0 for agree/disagree, which indicates people are just being "lazy" and only voting on karma, not touching the agree/disagree vote at all (I find it doubtful that all your comments are so perfectly balanced around 0 agreement).  So you're possibly getting splash damage from people simply disagreeing with you, but not using the voting mechanism correctly.

I wonder if we could do something like force the user to choose one of [agree, disagree, neutral] before they are allowed to karma vote? In being forced to choose one, even if neutral, it forces the user to recognize and think about the distinction. 

(Aside: I think splitting karma and agree/disagree voting on posts (like how comments work) would also be good) 

Also, I see most of your comments are actually positive karma. So are you being rate limited based on negative karma on just one or a few comments, rather than your net? This seems somewhat wrong. 

But I could also see an argument for wanting to limit someone who has something like 1 out of every 10 comments with negative karma; the hit to discourse norms (assuming karma is working as intended and not stealing votes from agree/disagree), might be worth a rate limit for even a 10% rate. 

It's a pity we don't know the karma scores of their comments before this post was published. For what it's worth, I only see two of his comments with negative karma: this and this. The first of these is the one recent comment of Roko's that I strong-downvoted (though also strong agree-voted), but I might not have done so had I known that a few comments with slightly negative karma are enough to silence someone.

(People upvoted Roko's comments after making this post, so presumably he is no longer being rate-limited. I think there were more negative comments a few hours ago)

On the topic of improving the voting mechanism, I propose that strong votes, up or down, be public, like reactions are.

Sounds reasonable -- with greater (voting) power comes greater responsibility.

(And there is always the option to only use the normal votes.)

I typically use the karma button to express that I think the comment is generally good or generally bad, and the second button when I want to send a more nuanced signal -- for example, if I disagree with your opinion, but there is nothing wrong about the fact that you wrote it, that would be "×".

My opinion is that the "lazy" upvote/downvote system is useful, because the more costly you make it, instead of voting more carefully, most people will simply vote less.

force the user to choose one of [agree, disagree, neutral]

I bet even just flipping the order of the buttons would do it.

How about a dialogue on this, with no (asymmetric) posting rate limits?

[-]habryka

Dialogues don't run into any rate limits, so that is definitely always an option (and IMO a better way to have long conversations than comment threads).

You can still write posts; it doesn't look like brute-force manufactured consensus to me. Your original post got over 200 karma, which seems pretty high for a censorship attempt (whether intentional or not).

I agree that the automated rate limiting system is extremely overeager and frustrating. Also, dude, chill a little. But yeah, I've told them before it's turned up too high and needs to decay in terms of hours rather than in terms of posts.

Why not post your response the same way you posted this? It's on my front page and has attracted plenty of votes and comments, so you're not exactly being silenced.

So far you've made a big claim with high confidence based on fairly limited evidence and minimal consideration of counter-arguments. When commenters pointed out that there had recently been a serious, evidence-dense public debate on this question which had shifted many people's beliefs toward zoonosis, you 'skimmed the comments section on Manifold' and offered to watch the debate in exchange for $5000. 

I don't know whether your conclusion is right or wrong, but it honestly doesn't look like you're committed to finding the truth and convincing thoughtful people of it.

Looking at your profile, I can only see a single comment which is below -5 karma in the past 2 weeks. Are they hidden, or did some people take a corrective measure after reading this post?

It's funny how I've never experienced this, despite having less than 1 karma per comment. I remember seeing some of my comments at like -12 karma and then having them flip to being positive, which is surprising.

Anyway, I suggest changing this system for long-term users with high net karma counts. For accounts like mine, it's fine if they're limited, but long-term users don't suddenly go rogue unless their accounts get stolen or something.

Am I allowed to point out that the negative response you've gotten is likely due to propaganda? The consensus about conspiracies is not based on science; it's actually anti-scientific. It's fabricated by the news media and amplified by the masses. Even if a set of facts is supported by science, the popular attitude is not. Who said questioning the consensus was a crime? Who decided that skepticism of popular beliefs should be associated with low social status and persecuted mercilessly? Who decided that "misinformation" was better combated by censorship than by discussion, and who decided that it should be treated as intentional deception and not innocent ignorance? These are all steps in the wrong direction, especially if the goal is to eliminate wrong information.

If you're bold and more concerned with the contents of your comments than with their appearance, they will invoke a bad feeling in people, which results in downvotes. There seems to be a popular bias which says "painful = harmful = bad = unpopular = immoral = incorrect". These are all different in reality, but in "social reality" they're basically the same.

[-]Viliam

Looking at your profile, I can only see a single comment which is below -5 karma in the past 2 weeks. Are they hidden, or did some people take a corrective measure after reading this post?

I am similarly confused. Either the downvotes are very rare, in which case I think we do not need to change the automated moderation system. Or they were frequent before this was posted, which means it is trivial to manipulate LW readers into giving you lots of karma -- you just have to accuse them of censorship.

This post would be way more convincing for me with specific examples of comments that were downvoted but shouldn't be. We could agree or disagree, because there would be something specific to agree or disagree about. As it is now, it is basically just begging for more karma or having the karma restrictions lifted.

It could also be that only recent votes are counted, so that a negative karma delta over the past week triggers the rate-limiting.

I don't see the problem with his comments though. Roko's commenting guidelines are "Easy Going", meaning that his words likely have less emotional weight to himself than to the average person. It's the norm to interpret comments on their own (in isolation), using a shared understanding of their 'weight', meaning, and implications, but I personally dislike this way of doing things (it smells like conformity and social instincts). By my own interpretation, I don't dislike his comments.

If you check the moderation logs, Roko deleted a recent comment, which probably garnered the downvotes that led to the rate-limiting.

you just have to accuse them of censorship.

You think he faked the screenshot of being rate-limited? If not, it seems a perfectly reasonable characterization. But more importantly, your observation suggests that most people here don't want to participate in censorship, but are doing so accidentally, and actively work to undo it when it's brought to their notice.

You think he faked the screenshot of being rate-limited?

No.

most people here don't want to participate in censorship, but are doing so accidentally

I think that many people simultaneously (1) have their preferences about what content they want to see more of, and what content they want to see less of, and (2) do not want to be accused of "censorship".

So they upvote the things they like and downvote the things they don't like, but if you accuse them of doing "censorship", it hurts their self-image, so they go and give a few upvotes to feel better about themselves. Not because they realize that they actually like the content they accidentally downvoted, but because they want to protect their self-image from an association with "censorship", even if it makes the website slightly less fun.

because they want to protect their self-image from an association with "censorship"

Aren't there a bunch of Litanies (Tarski, Gendlin, Hodgell) denouncing precisely this kind of self-deception?

If they engage in censorship and believe they are the kind of people who don't, they ought to either stop, or change their belief. 

Yea, that's what I tried to say. If you want to have a debate better than 4chan, but also feel bad whenever someone accuses you of censorship, you need to think about it and find a solution you would be satisfied with (while accepting that it may be imperfect), considering both sides of the risk.

Disabling the voting system, or giving someone a dozen "balancing" upvotes whenever they accuse you of censorship / manipulation / hive mind, only incentivizes people to keep accusing you of censorship / manipulation / hive mind. And maybe I am overreacting, but I think I already see a pattern:

  • Zack cannot convince us of his opinions on the object level, so he instead keeps writing about how the rationalists are not sufficiently rational to accept his politically incorrect opinions (if you disagree with him, that only proves his point);
  • Trevor keeps writing about how secret services are trying to manipulate the AI safety community, and how they like to use "clown attacks" i.e. manipulate people to associate the beliefs they want to suppress with low status (if you tell him this is probably crazy, that only proves his point);
  • now Roko joined the group by writing a few comments that got downvoted (possibly rightfully), and then complaining that if you downvote him, you participate in the system of censorship (so if you downvote him, that only proves his point).

We have a long history of content critical of Less Wrong getting highly upvoted on Less Wrong. Which alone is a good thing -- if that criticism makes sense, and if the readers understand the paradoxes involved (such as: more tolerant groups will often get accused of intolerance more frequently, simply because they do not suppress such speech). Famously, Holden Karnofsky's criticism of Singularity Institute (previous name of Yudkowsky's organization) was among the top upvoted articles in 2012. And that was a good thing, because it allowed an honest and friendly debate between both sides.

But recently it seems to me that this is devolving into people upvoting cheap criticism, which seems optimized to exploit this pattern. Instead of writing a well-reasoned article whose central idea disagrees with the current LW consensus and letting the readers appreciate the nuances of the fact that such article was posted on LW, the posts are lower-effort and directly include some form of "if you disagree with me, that only proves my point". And... it works.

I would like to see less of this.

Zack cannot convince us [...] if you disagree with him, that only proves his point

I don't think I'm doing this! It's true that I think it's common for apparent disagreements to be explained by political factors, but I think that claim is itself something I can support with evidence and arguments. I absolutely reject "If you disagree, that itself proves I'm right" as an argument, and I think I've been clear about this. (See the paragraph in "A Hill of Validity in Defense of Meaning" starting with "Especially compared to normal Berkeley [...]".)

If you're interested, I'm willing to write more words explaining my model of which disagreements with which people on which topics are being biased by which factors. But I get the sense that you don't care that much, and that you're just annoyed that my grudge against Yudkowsky and a lot of people in Berkeley is too easily summarized as being with an abstracted "community" that you also happen to be in, even though this has nothing to do with you? Sorry! I'm not totally sure how to fix this. (It's useful to sometimes be able to talk about general cultural trends, and being specific about which exact sub-sub-clusters are and are not guilty of the behavior being criticized would be a lot of extra wordcount that I don't think anyone is interested in.)

Sorry for making this personal -- I had only 3 examples in mind, couldn't leave one out.

Would you agree with the statement that your meta-level articles are more karma-successful than your object-level articles?

Because if that is a fair description, I see it as a huge problem. (Not exactly as "you doing the wrong thing" but rather "the voting algorithm of LW users providing you a weird incentive landscape".) Because the object level is where the ball is! The meta level is ultimately there only to make us more efficient at the object level by indirect means. If you succeed at the meta level, then you should also succeed at the object level, otherwise what exactly was the point?

(Yours is a different situation from Roko's, who got lots of karma for an object-level article, and then wrote a few negative-karma comments, which was what triggered the censorship engine.)

The thing I am wondering about is basically this: If you write an article, saying effectively "Yudkowsky is silly for denying X", and you get hundreds of upvotes, what would happen if you consequently abandoned the meta level entirely, and just wrote an article saying directly "X". Would it also get hundreds of upvotes? What is your guess?

Because if it is the case that the article saying "X" would also get hundreds of upvotes, then my annoyance is with you. Why don't you write the damned article and bask in the warmth of rationalist social approval? Sounds like win/win to everyone concerned (perhaps except for Yudkowsky, but I doubt that he is happy about the meta articles either, so this still doesn't make it worse for him, I guess). Then the situation gets resolved and we all can move on to something else.

On the other hand, if it is the case that the article saying "X" would not get so many upvotes, then my annoyance is with the voters. I mean, what is the meaning of blaming someone for not supporting X, if you do not support X yourself? Then, I suspect the actual algorithm behind the votes was something like "ooh, this is so edgy, and I identify as edgy, have my upvote brother" without actually having a specific opinion on X. Contrarianism for contrarianism's sake.

(My guess is that the article saying "X" would indeed get much less karma, and that you are aware of that, which is why you didn't write it. If that is right, I blame the voters for pouring gasoline into fire, supporting you to fight for something they don't themselves believe in, just because watching you fight is fun.)

Of course, as is usual when psychologising, this all is merely my guess and can be horribly wrong.

Would you agree with the statement that your meta-level articles are more karma-successful than your object-level articles? Because if that is a fair description, I see it as a huge problem.

I don't think this is a good characterization of my posts on this website.

If by "meta-level articles", you mean my philosophy of language work (like "Where to Draw the Boundaries?" and "Unnatural Categories Are Optimized for Deception"), I don't think success is a problem. I think that was genuinely good work that bears directly on the site's mission, independently of the historical fact that I had my own idiosyncratic ("object-level"?) reasons for getting obsessed with the philosophy of language in 2019–2020.[1]

If by "object-level articles", you mean my writing on my special-interest blog about sexology and gender, well, the overwhelming majority of that never got a karma score because it was never cross-posted to Less Wrong. (I only cross-post specific articles from my special-interest blog when I think they're plausibly relevant to the site's mission.)

If by "meta-level articles", you mean my recent memoir sequence which talks about sexology and the philosophy of language and various autobiographical episodes of low-stakes infighting among community members in Berkeley, California, well, those haven't been karma-successful: parts 1, 2, and 3 are currently[2] sitting at 0.35, 0.08 (!), and 0.54 karma-per-vote, respectively.

If by "meta-level articles", you mean posts that reply to other users of this website (such as "Contra Yudkowsky on Epistemic Conduct for Author Criticism" or "'Rationalist Discourse' Is Like 'Physicist Motors'"), I contest the "meta level" characterization. I think it's normal and not particularly meta for intellectuals to write critiques of each other's work, where Smith writes "Kittens are Cute", and Jones replies in "Contra Smith on Kitten Cuteness". Sure, it would be possible for Jones to write a broadly similar article, "Kittens Aren't Cute", that ignores Smith altogether, but I think that's often a worse choice, if the narrow purpose of Jones's article is to critique the specific arguments made by Smith, notwithstanding that someone else might have better arguments in favor of the Cute Kitten theory that have not been heretofore considered.

You're correct to notice that a lot of my recent work has a cult-infighting drama angle to it. (This is very explicit in the memoir sequence, but it noticeably leaks into my writing elsewhere.) I'm pretty sure I'm not doing it for the karma. I think I'm doing it because I'm disillusioned and traumatized from the events described in the memoir, and will hopefully get over it after I've got it all written down and out of my system.

There's another couple posts in that sequence (including this coming Saturday, probably). If you don't like it, I hereby encourage you to strong-downvote it. I write because I selfishly have something to say; I don't think I'm entitled to anyone's approval.


  1. In some of those posts, I referenced the work of conventional academics like Brian Skyrms and others, which I think provides some support for the notion that the nature of language and categories is a philosophically rich topic that someone might find significant in its own right, rather than being some sort of smokescreen for a hidden agenda.

  2. Pt. 1 actually had a much higher score (over 100 points) shortly after publication, but got a lot of downvotes later after being criticized on Twitter.

Who decided that skepticism of popular beliefs should be associated with low social status and persecuted mercilessly? Who decided that "misinformation" was better combated by censorship than by discussion

Eliezer Yudkowsky: Well-Kept Gardens Die By Pacifism

I've read that in the past, it's a good point and I largely agree with it. I support gatekeeping, but what that allows you to do is to have a pocket of something which is different from its surroundings, an isolated reality so to speak. Without gatekeeping, everything tends towards the average.

But the qualities that you're here attributing to the garden are actually filth from the outside. Appeal to authority and popularity is a kind of status game and intellectual laziness. And it's a fact that censorship is less effective than free discussion at arriving at the correct belief. It's a good way to minimize conflict, but I don't think that's what this community is actually about.

Don't confuse politics/culture war and rationality. You may not know this, but the "ostracization is free speech" worldview has only been popular for perhaps 10 years, and it was made popular solely by online liberal communities as they became the local majority (due to the parent companies becoming political). "Tolerance" is also being redefined to support bias rather than being a metric for the absence of bias, all so that a majority can bully a minority.

I understand that you feel censored, but I think it is indeed better called "manufactured consensus". 

An automated moderation mechanism that doesn't look at content (as indeed it doesn't) but only at agreeability is not exactly censorship, but it is a different problem.

[-]Roko

I not only feel censored, I am censored in the sense that my ability to speak is being taken away. The causality seems to be people downvoting --> negative karma --> algo prevents posting, but that's still censorship.

OK. In a wider sense of censorship I agree. You can't speak. 

[-]aphyer

On the object-level of your particular case, I don't see how you've ended up rate-limited.  The post of yours that I think you're talking about is currently at +214 karma, which makes it quite strange that your related comments are being rate-limited - I don't understand how that algorithm works but I think that seems very odd.  Is it counting downvotes but not upvotes, so that +300 and -100 works out to rate-limiting?  That would be bizarre.

In the general case, however, I'm very much on board with rate-limiting people who are heavily net downvoted, and I think that referring to this as 'censorship' is misleading.  When I block a spam caller, or decide not to invite someone who constantly starts loud angry political arguments to a dinner party, it seems very strange to say that I am 'censoring' them.  I agree that this can lead to feedback loops that punish unpopular opinions, but that seems like a smaller cost than communities having to listen to every spammer/jerk who wants to rant at them.

(The algorithm aggregates karma over the last 20 comments or posts a user has written. Roko has written 20 comments since publishing that post, so it's no longer in the averaging window.)