The recent implementation of a -5 karma penalty for replying to comments that are at -3 or below has clearly met with some disagreement and controversy. At the same time, it seems that Eliezer's observation that trolling and related problems have gotten worse here over time may be correct. It may be that this is an inevitable consequence of growth, but it may be that it can be handled or reduced with some solution or set of solutions. I'm starting this discussion thread for people to propose possible solutions. To minimize anchoring bias and related problems, I'm not going to include my ideas in this header but in a comment below. People should think about the problem before reading proposed solutions (again, to minimize anchoring issues).
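For concreteness, the mechanic being discussed amounts to something like this (the -3 threshold and -5 penalty are from the feature as announced; the function and constant names are my own invention, not LW's actual code):

```python
# Sketch of the reply-penalty mechanic described above.
# The -3 threshold and -5 penalty come from the announcement; the
# function and field names are hypothetical, not LW's real implementation.

REPLY_THRESHOLD = -3   # replying to comments at or below this score...
REPLY_PENALTY = 5      # ...costs the replier this much karma

def reply_cost(parent_score: int) -> int:
    """Karma cost charged for replying to a comment with this score."""
    return REPLY_PENALTY if parent_score <= REPLY_THRESHOLD else 0
```

So replying to a comment at -4 costs 5 karma, while replying to one at -2 is free, which is exactly the cliff that several comments below object to.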


Can someone please provide hard data on trolling, to assess its kind and scale? I can only remember a single example of repeated apparent trolling - comments made by private_messaging and presumed sockpuppets. I'm not very active though, and miss many discussions while they're still unvoted-on.

Seconded. The first time that I saw any indications of LW having a troll problem was a couple of days back, when people started complaining about us having one.

Well now that you guys have started talking about a trolling problem, I'm quite happy to provide one for you. (Eliezer, this is what happens when you use words whose meanings you don't understand.)
Point taken. But you're a cute troll, so you only count partially.
Can someone please provide hard data on Will_Newsome's cuteness, to assess its kind and scale?
According to Google Images & Twitter, he's dangerously Bieber-licious!
Yes, that surprised me. For some reason, my mental image of him was that of a man in his late forties.
I'm not even sure he's a troll--my impression is that he isn't very fluent in English and inadvertently got himself anti-LW-mindkilled.
Can anyone "name that troll"? (Rumplestiltskin?)

This rule is asinine.

If I see a post at -3 that I desire to reply to, I am incentivized to upvote it so that I may enter my comment.

Furthermore, it stifles debate.

Look at this post of Eliezer's at -19. In the new system, the worthwhile replies to that post are not encouraged.

In the new system, instead of people expressing their disagreement, they will not want to reply. The negatives of this system grossly override any upsides.

I have not noticed a worsening trolling problem. Does anyone have any evidence of such a claim?

[anonymous] (11y, 25 points)


General signal to noise. Tags on articles are used very badly. This makes finding interesting content on a topic harder than it needs to be.


Let's let users with enough karma edit tags on articles! Seriously, why aren't we doing this already?

This seems to assume that having correct tags will reduce noise, actual or perceived. That's not clear to me. I never look at the tags when deciding what to read, only the titles. Why would accurate tags be useful?
[anonymous] (11y, 13 points)

It would make research for writing new articles easier. It would help people interested in a particular topic read more about it. I use tags a lot, and even as poorly used as they currently are, I've found a lot of interesting material through them.

Also, most importantly, it would be a step towards better indexing.

It allows people to read only the things they are interested in; essentially it could provide multiple topic-based discussion areas. I don't think one can look at how tags have previously been used on LW to answer "why would accurate tags be useful?", since the tagging situation is pretty horrible.

On StackOverflow the tags can be edited (and are, all the time) by "high"[1] reputation users, and they are used to filter content extensively. However, the infrastructure there is a little different, since users can select "Favorite Tags" and "Ignored Tags" to control what shows up on their front page. (Granted, the comparison isn't necessarily a good one, since StackOverflow is much higher volume and has a slightly different purpose.)

[1]: 500 reputation, but reputation is basically received 5 or 10 per vote (depending on the situation), so it is actually quite low. But the possible tags come from a finite set, and one needs 1500 reputation to add a new tag to this set.
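For what that kind of favorite/ignored tag filtering amounts to, here is a toy sketch (all names and post data are invented for illustration; this is not StackOverflow's actual logic):

```python
# Toy sketch of tag-based front-page filtering in the StackOverflow style
# described above: hide anything carrying an ignored tag, and float posts
# matching favorite tags to the top. Post data is made up.

def front_page(posts, favorite_tags, ignored_tags):
    """Return posts without ignored tags, favorites first."""
    visible = [p for p in posts if not (p["tags"] & ignored_tags)]
    return sorted(visible,
                  key=lambda p: len(p["tags"] & favorite_tags),
                  reverse=True)

posts = [
    {"title": "Decision theory open problem", "tags": {"decision-theory"}},
    {"title": "Meetup in Berkeley",            "tags": {"meetup"}},
    {"title": "Anchoring bias in forecasting", "tags": {"biases"}},
]

print(front_page(posts,
                 favorite_tags={"decision-theory"},
                 ignored_tags={"meetup"}))
```

The point is that none of this works unless the tags themselves are accurate, which is the editing problem the comment above is about.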
I have several times tried to search for something by tag and failed because most articles are poorly tagged, including my own.
One thing we can do now is use the wiki to index articles.

Instead of trying to stop noise, you can filter it. Instead of designing to prevent errors, you can design to be robust to them.

I'll repeat something I said in the other thread:

To the extent that all the griping over signal to noise is about a desire to control what you see, and not to control what others see or say, there are decades-old solutions to discussion filtering. The fancy-schmancy Web has been a marked devolution of capabilities in this regard. It's pitiful. No web discussion forum I know of has filtering capabilities even in the ballpark of Usenet, which was available in the 80s. Pitiful.

I also suggest that any solution which is not fundamentally about user customization is a failure under my assumption above, because one man's noise is another man's signal.
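For reference, the per-user filtering a Usenet killfile gave you is tiny to express (names here are mine; real newsreaders matched on message headers, often with regexes):

```python
# Minimal killfile in the Usenet spirit described above: each reader keeps
# their own filter, and nothing is hidden from anyone else. The data and
# field names are invented for illustration.

killfile = {"authors": {"known_troll"}, "subjects": {"politics"}}

def visible(article, kf):
    """True if this reader's killfile does not suppress the article."""
    return (article["author"] not in kf["authors"]
            and article["subject"] not in kf["subjects"])

thread = [
    {"author": "alice",       "subject": "karma design", "body": "..."},
    {"author": "known_troll", "subject": "karma design", "body": "..."},
]
print([a["author"] for a in thread if visible(a, killfile)])
```

This is the "user customization" point in code: one man's noise goes in his own killfile, and everyone else's view is untouched.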

You've made me understand the root of one of my own dissatisfactions with the current system. If I look through my post history and roughly group my posts into bins based on how I would summarize them, this is what I see:

  • Silly posts in the HP:MOR threads: ~ +20 karma

  • Posts of mine having little content except to express agreement with other high-karma posts: ~ +10 karma

  • Important information or technical corrections in serious discussions: ~ +1 karma

  • Posts which I try to say something technical which I retrospectively realize were poorly worded but could have been clarified if someone pointed out an issue instead of just downvoting: ~ -5 karma

Perhaps I exaggerate slightly but my point is that if I were to formulate a posting strategy aimed at obtaining karma, then I would avoid saying anything technical or discussing anything serious and stick to applause lights and fluff.

On top of this, I tend to watch how the karma of my most recent comments behaves, and so I notice that, for example, a comment might have +5 upvotes and -3 downvotes, with no replies. This is just baffling to me. Was there something wrong with the post that three people noticed? Were the three separa...

Perhaps I exaggerate slightly but my point is that if I were to formulate a posting strategy aimed at obtaining karma, then I would avoid saying anything technical or discussing anything serious and stick to applause lights and fluff.

That's about right. Also, stick to high traffic threads. Hit the HPMOR threads hard!

As I pointed out, people want different things out of the list, and you finish by pointing out that the karma votes themselves are clearly used differently by different people. They're also used to a different extent by different people.

One nice thing that Slashdot does is limit your karma votes. That keeps individual Karma Kops from having a disproportionate effect on total score. But I don't think the Slashdot system of multiple scores is that helpful.

From my experience in the grand old days of Usenet, the most useful filters were on people, and the important ease of use features were a single screen view of all threads, expand and contract, sort by date or thread, and sort by date for a subset of threads.
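The Slashdot-style vote limit mentioned above could be as simple as a per-user daily allowance; a rough sketch (the cap value and all names are invented, not Slashdot's real mechanism):

```python
# Sketch of a capped voting allowance in the Slashdot spirit described
# above: each user gets a small number of karma votes per day, so no
# single "Karma Kop" can dominate a score. Numbers are illustrative.

class VoteAllowance:
    """At most `cap` votes per day per user."""

    def __init__(self, cap=5):
        self.cap = cap
        self.used_today = 0

    def try_vote(self) -> bool:
        """Spend a vote if any remain today; return whether it counted."""
        if self.used_today >= self.cap:
            return False
        self.used_today += 1
        return True

    def new_day(self):
        self.used_today = 0

user = VoteAllowance(cap=2)
print([user.try_vote() for _ in range(3)])  # [True, True, False]
```

A real implementation would replenish on a schedule rather than via an explicit `new_day()` call, but the rate-limiting idea is the same.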

I think you might be falling prey to a sort of fundamental attribution error for comments... thinking of all votes on a comment as being about the internal traits of the comment itself. I generally vote to enact a total ordering on all current content, aiming to raise valuable/unique/informative/pro-social content to reader-serving prominence. This involves determining an ideal arrangement of content and voting everything down that is too high, and voting up everything that is too low... except, I try to keep the floor at zero total except where content is (sadly) above the sanity waterline of LW, as with some discussions of gender relations and politics. About the only "pure knee jerk voting" I engage in is upvoting of content that isn't mind-killing in itself but that has a negative total. Sometimes I upvote a comment simply because someone said something really awesome as a rebuttal to it, and that comment/rebuttal pair is worth reading, and the way to give the conversation the order-of-reading-attention it deserves is to upvote the parent.
(+1) I rarely downvote, but from now on, I will accompany any downvote with a reply stating "-1: reason for downvote."
I will downvote every such comment because I oppose this being used as a general policy. I don't want to see the spam, and sometimes it just isn't useful to criticize explicitly. Even when downvoting is accompanied by such a criticism, it is sometimes better to just speak directly rather than dragging in talk about your downvotes as part of the conversation.
This seems like a reasonable approach. The reason for the downvote could force a defamatory statement, which I prefer to avoid. Otherwise, you are right that dragging in a downvote mention doesn't add anything to just saying what you want to say. Thanks for the comment, by the way. I was thinking that upon downvoting, maybe an option (not a requirement) should be given to state a reason why. Then I realized that there is no need to program such a thing; this option exists already.
Too much information can be ignored; too little information is sometimes annoying. I'd always welcome your reason for explaining your downvote, especially if it seems legitimate to me.

If we were going to get highly technical, a somewhat interesting thing to do would be to allow a double click to differentiate your downvote, and divide it into several "slider bars." People who didn't differentiate their downvotes would be listed as a "general downvote"; those who did would be listed as a "specific reason downvote." A small number of "common reasons for downvoting that don't merit an individualized comment" on LessWrong would be present, plus an "other" box. If you clicked on the light gray "other", it would be replaced with a dropdown selection box, one whose default position you could type into, limited to 140 characters. Other options could be "poorly worded, but likely to be correct," "poorly constructed argument," "well-worded but likely incorrect," "ad hominem attack," "contains logical fallacies," "bad grammar," "bad formatting," "ignores existing body of thought, seems unaware of existing work on the subject," "anti-consensus, likely wrong," "anti-consensus, grains of truth."

There could also be a "reason for upranking," including polar opposite options that were the opposites of the prior options, so one need only adjust one slider bar for "positive and negative" common reasons. This would allow a + and - value to be associated with comments, to obtain a truer picture of the comment more quickly. "Detailed rankings" (listed next to the general ranking) could give commentators a positive and a negative for various reasons, dividing up two possible points, and adjusting remaining percentages for remaining portions of a point as the slider bar was raised. "General argument is true" could be the positive "up" value; "general argument is false" could be its polar opposite. It also might be interesting to indicate how long people took to write their commen...
I strongly share your opinion on this. LW is actually one of the better fora I've come across in terms of filtering, and it still is fairly primitive. (Due to the steady improvement of this forum based on some of the suggestions that I've seen here, I don't want to be too harsh.)

It might be a good idea to increase comment-ranking values for people who turn on anti-kibitzing. (I'm sure other people have suggested this, so I claim no points for originality.) ...What a great feature! (Of course, that option of "stronger karma for enabled anti-kibitzers" would give an advantage to the malevolent people who want to "game the system," who could turn it on and off, or turn it on on another device, see the information necessary to "send out their political soldiers," and use that to win arguments at a higher-ranking karma. Of course, one might want to reward malevolent players, because they are frequent users of the site, who thus increase the overall activity level, even if they do so dishonestly. They then become "invested players," for when the site is optimized further. Also, robust sites should be able to filter even malevolent players, emphasizing constructive information flow. So, even though I'm a "classical liberal" or "small-L libertarian," this site could theoretically be made stronger if there were a lot of paid government goons on it, purposefully trying to prevent benevolent or "friendly" AGI that might interfere with their plans for continuing domination.)

A good way to defeat this would be to "mine" for "anti-kibitzing" karma. Another good idea would be to allow users to "turn off karma." Another option would be to allow those with lots of karma to turn off their own karma, and show a ratio of "possible karma" next to "visible karma," as an ongoing vote for what system makes the most sense, from those in a position of power to benefit from the system. This still wouldn't tell you if it was a good system, but everyone exercising the option would indica...

The recent implementation of a -5 karma penalty for replying to comments that are at -3 or below has clearly met with some disagreement and controversy.

How about we wait a couple of weeks to try the new feature instead of jumping up in outrage and proposing even more complicated schemes?

I'd be in favor of an official "no complaining about feature X for the first two weeks" rule, after which a post could be created for discussion. That way the discussion could be about what actually happened, and not about what people imagine might happen.

It's not as if two weeks of using an experimental feature was some unbearable burden.

I'm much more comfortable with this sort of intervention as a "We think this will improve the forums, let's test this for a month or two" rather than a "LessWrong sucks, but this will fix it guys, trust us."

It's not a huge burden, but we're already seeing some negative effects worth discussing. For example, I have now twice paid the 5 karma penalty replying to downvoted comments which were not trolling at all; they were downvoted because people disagreed with what they were proposing.

If we decide to wait two weeks, we need to decide on specific criteria that we will judge in two weeks' time to decide whether to modify or remove the new feature. If the new feature stays anyway because a few people decide unilaterally, then we might as well discuss it now.

[anonymous] (11y, 19 points)


We would benefit from more and better wiki articles since they seem the best way to compress information that is often scattered across several articles and dozens of comments. This should help us maintain our level of discussion by making it easier to bring users up to speed on topics.

I used to think the most straightforward fix would be:

  • Eliminate the trivial inconvenience of creating a separate account for the wiki. Let's just make it so you use your LW login. Also, perhaps limit edits to people with more than 100 karma, since I hear they had some problems with spamming.

  • Let people up and down vote edits. Let karma whoring work for us!

But when I talked about this on IRC with gwern, he thought it probably wouldn't do much good and isn't worth the effort to implement. What do fellow rationalists think might be a good way to encourage more quantity and quality in the wiki?

If we're seriously having troll issues, then wiki + trolls = edit wars. Also polarization over schools of editing styles, the way Wikipedia has its Deletionists.
I think we have much more of a problem with the signal-to-noise ratio than with trolls.
I'm not saying this is wrong, in fact I haven't yet seen significant evidence of either problem and was about to ask. But I hadn't gone looking for evidence either.
If most of the topics will generate disagreement, growing a wiki makes this site less dynamic. Or maybe it's fine to change (or improve) definitions all the time.

(as I noted in the buried thread)

The mental model being applied appears to be sculpting the community in the manner of sculpting marble with a hammer and chisel. Whereas how it'll work will be rather more like sculpting human flesh with a hammer and chisel. Giving rather a lot of side effects and not quite achieving the desired aims. Sculpting online communities really doesn't work very well. But, geeks keep assuming the social world is simple, even when they've been in it for years.

"Technical solutions for social problems almost never work."

Why on Earth do people keep saying this? Sending out a party invite via email is a technical solution to a social problem, and it's great! For God's sake, taking the train to see a friend is a technical solution to a social problem. This phrase seems to have gained currency through repetition despite being trivially, obviously false on the face of it.

How about if you substitute "nontrivial social conflict" with "social problem"?

Burglar alarms, voting, Pagerank? Pagerank is definitely a very technological solution to a serious conflict of interest problem, and its effectiveness is a key driver of Google's initial success. Why would you expect technology not to be helpful here?

OK, those are pretty good examples. Though none of them are quite complete, in the sense that there's still a bunch of human messiness with circumvention and countermeasures involved. Burglar alarms need human security personnel to back up the threat, voting is being gamed with gerrymandering and who knows what, and PageRank is probably in a constant arms race between SEO operators and Google engineers tweaking the system. They don't work in a way where you just drop in the tech, go to sleep, and have the tech solve the social conflict, though they obviously help manage the conflict, possibly to a very large degree.

The idea with discussion forums, where people spout the epigram, often seems to be that the technical solution would just tick away without human supervision and solve the social conflict. Stuff that does that is extremely hard. Stuff that's more a tool than a complete system will need a police department or a Google or full-time discussion forum moderators to do the actual work while being helped by the tool.

Modern Bayesian spam filters are another example of a well-working technical solution to a social conflict, though. I don't know how much of an arms race something like Gmail's filter is. This is something that gives me the vibe of a standalone system actually solving the problem, even more than PageRank, though I don't know the inner details of either very well.
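For what such a filter does at its core, here's a toy naive-Bayes scorer (the word counts and corpus sizes are made up; real filters like Gmail's are vastly more elaborate):

```python
# Toy Bayesian spam scorer of the kind mentioned above: combine per-word
# spam/ham frequencies via Bayes' rule under a naive independence
# assumption. All counts below are invented for illustration.

import math

# word -> (occurrences in spam corpus, occurrences in ham corpus)
counts = {"free": (40, 2), "meeting": (1, 30), "viagra": (25, 1)}
n_spam, n_ham = 100, 100  # corpus sizes

def p_spam(words, prior=0.5):
    """Posterior probability the message is spam, via log-odds."""
    log_odds = math.log(prior / (1 - prior))
    for w in words:
        s, h = counts.get(w, (1, 1))  # crude smoothing for unseen words
        log_odds += math.log((s / n_spam) / (h / n_ham))
    return 1 / (1 + math.exp(-log_odds))

print(round(p_spam(["free", "viagra"]), 3))  # high
print(round(p_spam(["meeting"]), 3))         # low
```

Notably, the human work hasn't vanished here either: the corpora come from users marking messages as spam, which is part of why it feels like a social solution carried by technology.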

When I hear people say "you're proposing a technical improvement to a social problem", they are not cheering on the effort to continually tweak the technology to make it more effective at meeting our social ends; they are calling for an end to the tweaks. From what you say above, that's the wrong direction to move in. Pagerank got worse as it was attacked and needed tweaking, but untweaked Pagerank today would still be better than untweaked AltaVista. "This improvement you're proposing may be open to even greater improvement in the future!" doesn't seem like a counter argument.

In many instances, the technology doesn't directly try to determine the best page, or candidate; it collects information from people. The technology is there to make a social solution to a social problem possible. That's what we're trying to do here.

I mostly agree with you that the statement against technical solutions is false on the face of it. How about this: if you want to prevent certain types of discussion and interaction in an online community, the members need to have some kind of consensus against it (the "social" part of the solution). Otherwise technical measures will either be worked around (if plenty of communication can still happen) or the community will be damaged (if communication is blocked enough to achieve the stated aim). Technical measures can change the required amount of consensus needed from complete unanimity to something more achievable. In our case, we may not have had the required amount of consensus against feeding trolls, or of what counts as a troll to avoid feeding.
Paul Crowley (11y, 9 points)
Because this involves conflict of interest, it is a security issue, and people aren't very good at thinking about those. Often they fail to take the basic step of asking "if I were the attacker, how would I respond to this?". See Inside the twisted mind of the security professional.
When you think of discussion forum design as a security issue, determining just what should be considered an attack can get pretty tricky. Trying to hack other people's passwords, sure. Open spamming and verbal abuse in messages, most likely. Deliberate trolling, probably, but how easy is it to tell what the intent of a message was? Formalizing "good faith discussion" isn't easy. What about people sincerely posting nothing but "rationalist lolcat" macro pictures on the front page and other people sincerely upvoting them? Is a clueless commenter a 14-year-old who is willing to learn forum conventions and is a bit too eager to post in the meantime, or a 57-year-old who would like to engage you in a learned debate to show you the error of your ways of thought and then present you the obvious truth of the Space Tetrahedron Theory of Everything?
Paul Crowley (11y, 0 points)
I'm not sure how what you say above is meant to influence what we recommend wrt possible changes to LW.
Basically that discussion forum failure modes seem to be very complex compared to what an autonomous technical system can handle, and the discussion on improving LW seems to often skirt around the role of human moderators in favor of trying to make a forum work with simple autonomous mechanisms.
ADBOC. Email and trains might be “technical solutions for social problems” in the literal sense, but that's not what that phrase normally means.
Paul Crowley (11y, 5 points)
What does the phrase normally mean? Risto_Saarelma had one go in reply to me at restricting it to the relevant domain but that didn't work. Could you describe what the phrase does normally mean? I'm not asking for a perfect, precise definition, but just a pointer to a cluster of correlations that it identifies.
I think it's a misidentification of the reason why a certain class of proposed solutions to social problems do not work. The class consists of solutions which fail to take into account that people will change their behaviour as necessary to achieve whatever their purposes are, and will simply step around any easily-avoided obstacles that may be placed in their way. The famous picture that Bruce Schneier once posted as an allegory of useless security measures, of car tracks in the snow going around barriers across the road, is an excellent example. "The Internet routes around censorship" is another.
Paul Crowley (11y, 9 points)
(from The Weakest Link) That seems like a plausible story! And was also the message I was pointing at here.
“I know it when I see it”, but I'd say that e-mail and trains enable people to do what they want to do (namely, communicate and travel), whereas the prototypical “technical solutions for social problems” try to discourage people from doing what they want to do (e.g. Prohibition).
Making Light and Ta-Nehisi Coates have notably good comment sections, and they have strong moderation by humans.
LessWrong could use better and/or more moderation by humans.
This seems intuitively likely, and is likely true in many cases. In the end, if you don't have good commenters, there may not be much to be done about it on a technical level. However, it's not obvious to me it applies here. For example, the entire karma system is a technical solution that seems to be, if not ideal, better than nothing in dealing with the social problem of filtering content on this site and Reddit.

Consider the prior art, here. The first place I saw the "reply to any negatively-scored comment inherits the parent's score" concept was at SensibleErection. That policy has been in place there longer than LW has been a forum (possibly longer than reddit), so it seems to work for them.

Prior art for special "oldschool/karma-proven" sections: Hacker News. Paul Graham is intensely interested in keeping a high-quality forum going, and is very willing to experiment. Here's the normal front page, here's the oldschool view, and here's the recent-members view. Hacker News also has several thresholds for voting privileges.

One more step HN took is hiding comment scores, while continuing to sort comments from highest to lowest. It's dramatic, almost draconian, but it definitely had an effect on the karma tournament system.

As far as I understand, the "noobstories" list is all the articles submitted by new accounts in reverse chronological order, while HN Classic is the current front page sorted according to the votes of "old" members.

Could someone please point out some examples of trolling to me? I find this discussion surprising because I perceived the trolling rate as low to non-existent. Perhaps I've frequented the wrong threads.

The most obvious example of trolls right now is this post and some of its comments, although as far as trolling goes, neither is very effective.
That post wouldn't exist if the karma penalty hadn't been implemented.
Agree denotationally... but I hope you are not proposing a policy of avoiding things that could make trolls unhappy with this site.
And even so, it's only 15 comments.
Oh, yeah... nevertheless, the history of the post & comment authors contains some "trolling".

I'm not proposing a solution. I'm thinking about the problem for five minutes.

edit: Well, it didn't even take five minutes!

We need a reliable predictor of troll-nature. I mean, I'm not even sure that P( troll comment | at -3 ) is above, say, 0.25 - much less anywhere high enough to be comfortable with a -5 penalty.

Of course, I'd be comfortable with asserting that P( noise comment | at -3 ) is pretty high, like 0.6 or something. Still not high enough to justify a penalty, in my opinion, but high enough that I can see how another's opinion might be that it justifies a penalty. If that is the case, well, the discussion is being severely negatively impacted by conflating noise and trolling.

I might go and figure out how to get some data off of LessWrong's commenting system, to try to determine a good indicator for troll-nature. (I don't plan to try to figure out noise-nature. That's the problem the Internet has faced for the last 15 years; I'm not that hubristic.) That in turn would put some numbers into this discussion. I don't know that arguing over how many genuine comments can be inadvertently caught in a filter is any better than arguing over whether there should be a filter at all, but to my mind it's more constructive.
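Before scraping any data, the shape of the calculation is just Bayes' rule; plugging in made-up base rates shows why the conditional probability matters (all three input numbers below are placeholders, not measurements):

```python
# Back-of-the-envelope Bayes for the estimate discussed above. The base
# rates are invented placeholders; real LW data could replace them once
# scraped. The point is the shape of the calculation, not the numbers.

p_troll = 0.02            # assumed fraction of comments that are trolling
p_low_given_troll = 0.80  # assumed P(score <= -3 | troll)
p_low_given_not = 0.04    # assumed P(score <= -3 | not troll)

p_low = (p_troll * p_low_given_troll
         + (1 - p_troll) * p_low_given_not)
p_troll_given_low = p_troll * p_low_given_troll / p_low

print(round(p_troll_given_low, 2))  # ~0.29 under these made-up numbers
```

Even with a detector that catches 80% of trolls, a low troll base rate keeps P(troll | at -3) well under one half, which is exactly the conflation-of-noise-and-trolling worry above.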

Master, you have meditated on this for under five minutes, so I wish to ask two things:

  • Does not asking about what has the troll-nature bring one closer to the troll-nature?

  • If you meet a Socrates on the road, does it have the troll-nature?
  • No - I know you aren't serious, but... seriously?

  • If you meet a Socrates anywhere, it has troll-nature. That's why he got permabanned from the universe. It also has other less irritating natures.
I have often seen trolls trolling by discussing the troll-nature.
Trolls can troll on any topic at hand. Where there are trolls, trolling will often be a topic at hand. That doesn't make the nature of trolls a trollish topic. You're going to have to do a lot better than a correlation.
Asking what has the troll-nature brings one closer to being a noisemaker. Asking what distinguishes troll-nature from noisemaker brings one closer to having the troll-nature. Ask not what separates noise from trolling; instead ask for that which makes a thing neither.
The proposals here exist outside the space of people who will "solve" any problems that they decide are problems. Therefore, they can still follow that advice, and this is simply a discussion area discussing potential problems and their potential solutions, all of which can be ignored.

My earlier comment, to the effect that I'm more happy with LessWrong's forum than I am unhappy with it, but that it still falls far short of an ideally-interactive space, should be construed as meaning that doing nothing to improve the forum is definitely a valid option. "If it ain't broke, don't fix it." I don't view it as either totally broken or totally optimal. Others have indicated similar sentiments. Likely, improvements will be made when programmers have spare time, and we have no idea when that will be.

Now, if I was aggressively agitating for a solution to something that hadn't been clearly identified as a problem, that might be a little obnoxious. I hope I didn't come off that way.

Proposed solution: remove the karma penalty and do exactly the same thing we were doing before. That is, if someone is pretty sure that they will not benefit from reading the replies to a particular thread, they don't read them. No disincentives from posting such a reply needed. What is the problem with that system?

Edit: As of this edit, if one more person decides they don't like my comment, then no one can tell me why they don't like my comment without losing 5 karma. One of many reasons the new system is terrible.

To address the Big Bad Threads problem specifically (as opposed to other problems), what we need is ability to close threads in some sense, but not necessarily as a side effect of voting on individual comments.

For example, moderators could be given the power to declare a thread (or a post) a Toll Thread, so that making a comment within that thread would start costing 5 Karma or something like that, irrespective of what you reply to or what the properties of your user account are. This would work like a mild variant of the standard (but not on LW) closed thread feature.

Nesov, you are a particularly active and helpful moderator. I'm less familiar with how much effort is invested by other moderators. I believe you could do this well, but I'm not sure this solution can be scaled, or even run without you (right now).

I'm active (I read literally everything on Less Wrong, or at least skim) but I'm timid. I don't know what I am and am not supposed to be banning/editing, so I confine banning to spam and editing to obvious errors of formatting or spelling/grammar.

In June I asked Eliezer for moderation guidelines, since there has been an uptick in trolling, or just in time-wasting, poorly-informed ranters, but he just said that he thought it needed a software fix (the recent controversial one).

Thanks for your contributions. I scanned this whole thread and am talking to Eliezer about possible solutions. Right now the troll toll isn't enough, but maybe that's because nothing will deter a SuperTroll like Will Newsome. ETA: I should clarify that I like Will Newsome in person, but on Less Wrong his comments very often seem to be deliberately obscurantist, unhelpful, and misleading.

You don't deter SuperTrolls. You ban them and move on. This is a very simple problem that you guys are vastly over-complicating.

Ban him and ostracize him socially.

Ban him and ostracize him socially.

You're right. It seems silly to say that nothing, with emphasis, will stop Will when banning him and any obvious sockpuppets hasn't even been tried. (This isn't particularly advocating that course of action, just agreeing that Luke's prediction is absurd.)

"Ostracize" does not work well online. You don't get direct feedback on how many people read what. (Even the downvotes are evidence that someone did read the comment, and expended some of their energy to downvote it -- which supposedly is part of what the trolls want.) There is no online equivalent of a group turning their backs on someone in ice-cold silence. Just "not answering" is not the same thing... that happens to many normal comments too.
As far as I can tell, Will Newsome hangs out in Berkeley with SI folks.
The distinction between "toll threads" and "closed threads" was an attempt to make the action easier, bear less responsibility and provoke less agitation if applied in controversial cases (it could be un-applied by a moderator as well), so that the button could be more easily given to more people.

Right now the only tool anyone has is the banhammer, which either destroys a post with all its comments completely (something I strongly disapprove of being at all possible, but my discussion in the tickets didn't evoke much support) or needs to be applied in a whac-a-mole manner to each new comment, and neither decision should be made lightly. Since there are no milder measures, nothing at all can be done in milder or more controversial cases.

I don't believe there is much of a fixable-by-moderation signal-to-noise problem right now except for the occasional big bad threads, so most of the motivation for this tool is to make their inhibition more effective than it currently is. Since big bad threads are rare, you don't need a lot of moderators to address them. (It's probably not worth the effort to implement it right now, so bringing it up is mostly motivated as being what I see as a better alternative to the punish-all-subcomments-automatically measure Eliezer was suggesting, although I still expect the current punish-direct-replies to suffice on its own.)
By whom?
Why vote down this simple question? Is it a point of sensitivity--sufficient to drive Nesov to the passive voice? Don't other readers want to know who decides forum policies?

Warning: a rant follows!

The general incompetence of the replies to the OP is appalling. Fantastically complicated solutions with many potential harmful side effects are offered and defended. My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down. This reminds me of the many pieces of software I had a misfortune to browse the source code of: half-assed solutions patched over and over to provide some semblance of the desired functionality without breaking into pieces.

For comparison, I have noted a trivial low-risk one-line patch that would fix a potential exploit in the recent (and also easy-to-implement) anti-troll feature: paying with 5 karma to reply to comments downvoted to -3 or lower (patch: only if the author has negative 30-day karma). Can you do cheaper and better? If not, why bother suggesting something else?

After a long time in the software business, one of the lessons I have learned (thanks, Steve McConnell) is that every new feature can be implemented cheaply or expensively, with very little effect on its utility. Unfortunately, I have not heard of any university teaching design simplification beyond using some Boolean a... (read more)
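For concreteness, the one-line guard proposed above might look something like this (a minimal Python sketch; the function and field names are hypothetical, not taken from LW's actual codebase):

```python
# Sketch of the proposed patch: charge the 5-karma toll only when the parent
# comment is at -3 or below AND its author's karma over the trailing 30 days
# is negative. Names are illustrative, not LW's real code.
TOLL = 5
THRESHOLD = -3

def reply_toll(parent_score: int, author_karma_30d: int) -> int:
    """Return the karma cost of replying to a given comment."""
    if parent_score <= THRESHOLD and author_karma_30d < 0:
        return TOLL
    return 0
```

Under this rule, established users whose recent karma is positive would never pay the toll, while the downvoted-parent condition alone would no longer suffice to trigger it.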

Changes to a functioning system that is in use should be done with care akin to pruning a bonsai tree, not by introducing sweeping changes that are then potentially scaled back. I very much agree with your proposal; it should have been an obvious first step (and if there had been some public discourse, in which you would probably have suggested it, it might well have been). Upon unsatisfactory results, it could have been escalated to a more profound change.

How long is it going to take for some forum regulars to pick up a dedicated downvoter with two sockpuppets, strongly impeding the discussion on their new +0 -> -3 comments, at least temporarily? The karma hit is a negligible inconvenience for old-timers, but the temporary obstruction of their conversational flow isn't. Empowering trolls inadvertently, how ironic. Who would have thought there are such unforeseen consequences when non-professionals implement such changes without discussion? Why, I suspect you would have, as someone experienced with software design failure modes.

It is a bit disconcerting that such an a-priori type proposal was implemented not only without considering expertise from LW's very own base of professionals, but without first gathering some evidence on its repercussions as well. From the resident Bayesianists, you'd think that there'd be some more updating on evidence through trials first, e.g. by implementing similar (such as yours) but less impactful changes first.

Concerning your software development paradigm: with the caveat of not having spent a long time in the software industry, there is an argument for the converse case as well. While I'm all for using k-CNFs, DNFs and all sorts of normal forms, penalising lines of code can easily lead to a shorter but much more opaque piece of software, regardless of doxygen or other documentation. Getting rid of unnecessary conjuncts in a boolean formula sounds good in theory, but just spelling out each case, with e.g. throwing an except
Yes indeed, hence the weighting:
It seems like it's your estimate of the programming knowledge of the commenters that should go down. Most of the proposed solutions have in common that they sound really simple to implement, but would in fact be complicated - which someone with high general intelligence and rationality, but limited domain-specific knowledge, might not know. Should people who can't program refrain from suggesting programming fixes? Maybe. But maybe it's worth the time to reply to some of the highly-rated suggestions and explain why they're much harder than they look. (I agree with your proposed solution to attempt simplifications.)
There seem to be two functions of this discussion:

* come up with practical solutions
* diagnose the problem

Terrible-to-implement comment tweaks can still spur helpful discussion. A poster may have articulated a problem or an incentive in a way most of us haven't considered yet. Not everyone who has an interesting description of the problem may have that much coding-fu. Better to throw up your diagnosis without worrying too much about the cure and let everyone critique, counter-suggest, and fix the implementation.
Why was it so high to begin with? What mistakes do you think you made? (Honest questions.)
I never said it was very high to begin with :) Though the level of discourse here is much higher than on most other online forums I followed, and that threw me off.

The proposals I have (not all of which are mutually exclusive) are :

  1. Make comments within highly downvoted subthreads not appear on recent comments. Since the main problem with trolling is drowning out of recent comments, this will solve many of the issues. Moreover, it will discourage continued replies.

  2. Have a separate section of the website where threads can be moved to or have a link to continue to. This section would have its own recent changes section. Moderators could move threads there or make it so that replies went to that section, and would be used for subthreads that are fairly downvoted. This has the advantage of quarantining the worst threads. This is a variation of an old system used at The Panda's Thumb which works well for that website.

  3. Use the -5 penalty system but adjust either the trigger level or the penalty size. It isn't obvious that -3 and -5 are the best values for such a system, if it is a good idea at all. The fact is that -3 isn't that negative as comment scores go, so something like -3 can be reached without saying that much about a comment's quality. -5 and -5, or -5 and -1, may be better values. The second would offer softer discouragement for more

... (read more)

Downvoted for putting more than one suggestion in a single comment.

Punish me for this anti-social act if you must, but as one of the dudes who tries to act after reading these suggestions (and tries hard to discount his own opinion and be guided by the community), this practice makes it much harder for me to judge community support for ideas. Does your comment having a score of 10 suggest 2.5 points per suggestion? ~10 points per suggestion? 15 points each for 3 of your suggestions and -35 for one of them (and which one is the -35?)?

Can we please adopt a community norm of atomicity in suggestions?

Sorry, yes, that was obviously a bad action on my part. (Kindly's highly upvoted comment suggests that #1 is of the four the only one of these that people seem to like.)

I think #1 is the way to go here, and the only method that will have any effect in most cases.

As long as there is a separate "uncensored" recent comments page where they still show up.
The point is to get people to stop replying to those threads.
Although if one did have such a separate recent comments page, people who cared more about the signal/noise ratio could simply not read the full comment page.
I agree that if I try to extract coherent beliefs from Eliezer's claim, particularly the claim that people are fleeing the invisible threads, this is what I must conclude. But I'm not sure I should try to extract coherent beliefs from Eliezer's claims. Do you directly claim this? Do you agree with his claim that trolling has increased in recent months? Do you think invisible comments are a good proxy for trolls? I stopped looking at recent comments long ago for reasons of volume. I think it keeps large threads going. But I think large threads tend to be equally worthless, regardless of average karma or karma of the initial comment.
I guess that I agree with Eliezer that the signal to noise ratio is not as good as it used to be, at least not as good as when I first joined here. But I'm not that highly active a user, and my karma is only around 9000, so my impression may not be that important in this context.
9000 is more karma than I have... and I've been here since the beginning! (I mostly write comments rather than posts, though, and you get more karma from posts, especially posts to Main.)

The recent implementation of a -5 karma penalty for replying to comments that are at -3 or below has clearly met with some disagreement and controversy. See .

Be sure to distinguish between the controversy surrounding Eliezer's provocative comments and the policy as he declares he wishes it, and actual disagreement with the implementation as it currently stands. I, for example, am tentatively in favour of the current implementation, but not in favour of the policy as he intends it to be implemented.

Under the new rule, if I reply to a post that is later downvoted to -3, am I docked the 5 points when that happens? No chance to opt in to the deduction at that point.

No. The toll applies to commenting on posts that are below -3, not having comments on posts that are below -3.

Surely this makes it very tough for a non-trolling user to figure out what was wrong with his post? Few people are going to explain it to him. You need to be familiar with LW jargon before you can expect to write a technical comment and not be downvoted for it, so this would very easily deter a lot of new users. "These guys all downvoted my post and nobody will explain it to me. Jerks. I'll stick to rationalwiki."

As it's currently implemented it appears that replies to the -3 comments still start at a rating of 0. Why not match the -5 karma and set the new comment's rating to -5 as well? This would be a strong disincentive to others replying to the new 0-rated comment and extending the thread at no cost.

This would be an improvement since then one's karma would still remain in principle obtainable by summing the karma of all one's comments and posts. But then, why have the arbitrary numbers -3 and -5? Wouldn't it be better if a reply to a negatively rated comment started at the same karma as the parent comment? Smooth rewarding schemes usually work better than those with thresholds and steps.

(I still don't support karma penalties for replies in general.)

Reading troll comments has negative utility. Replying to a troll means causing that loss of utility to each reader who wants to read the reply (times the probability that they read the troll when reading the reply). Perhaps giving the reply rating the same rating as the troll would be a more equitable utility cost to karma.

Reading troll comments has negative utility. Replying to a troll means causing that loss of utility to each reader who wants to read the reply (times the probability that they read the troll when reading the reply)

That's exactly the kind of consideration that should lead people to downvote responses to "trolls." If you think someone is stupidly "feeding trolls," you should downvote them.

It seems that E.Y. is miffed that readers aren't punishing troll feeders enough and that he's personally limited to a single downvote. As an end-run around this sad limitation, he seeks to multiply his downvote by 6 by instituting an automatic penalty for this class of downvotable comment.

Nothing is so outrageously bad about troll feeding that it can't be controlled by the normal means of karma allocation. The bottom line is that readers simply don't mind troll feeding as much as E.Y. minds it; otherwise they'd penalize it more by downvotes. E.Y. is trying to become more of an autocrat.

Thank you. The last paragraph perfectly articulates why I disagree with this feature.
It sounds like the real fix is a user-defined threshold. Anyone who only likes the highest rated comments can browse at +3 or whatever, and anyone who isn't bothered by negatively rated comments can browse at a lower threshold.
Isn't it already there?
Thanks, I had only looked on the article's page for something like the "sort by" dropdown, but found the setting in the preferences.
(Now, if it also hid replies to downvoted comments in the Recent Comments page, it'd fully solve the ‘problem’, IMO.)
As your comment stands now, you are just one point above the reply penalty threshold. You aren't a troll. I think it illustrates well that the problem with reply penalties isn't particularly strongly related to trolling. Since the penalty was introduced, I have already twice refrained from answering a fairly reasonable comment because the comment had less than -3 karma. I have seen no trollish comments for weeks.
Also, the thresholds for "simple majoritarianism" are usually required to be much higher in order to obtain intelligent results. No threshold should be reachable by just three people. Three people could be goons who are being paid to interfere with the LW forum. That then means that if people are uninterested, or those goons are "johnny on the spot" (the one likely characteristic of the real-life agents provocateurs I've encountered), then legitimate karma is lost. Of course, karma itself has been abused on this site (and all other karma-using sites), in my opinion. I really like the intuitions of Kevin Kelly, since they're highly emergence-optimizing, and often genius when it comes to forum design. :) Too bad too few programmers have implemented his well-spring of ideas!
There you go.
Intelligently replying to trolls provides useful "negative intelligence." If someone has a witty counter-communication to a troll, I'd like to read it, the same way George Carlin slows down for auto wrecks. Of course, I'm kind of a procrastinator. I know: a popup window could appear that asks [minutes spent replying to this comment] x [hourly rate you charge for work] x .016r = "[$###.##] is the money you lost telling us how to put down a troll. We know faster ways: don't feed them." Of course, any response to a troll MIGHT mean that a respected member of the community disagrees with the "valueless troll comment" assessment. A great characteristic to have: one who selflessly provides protection against the LW community becoming an insular backwater of inbred thinking. Our ideas need cross-pollination! After all, "Humans are the sex organs of technology." -Kevin Kelly
In some situations it may be worth replying to a comment with negative value. Imagine a comment which was made in good faith, but just happens to be incredibly stupid, or is heavily downvoted for some other reason. Now imagine a reply that contains just a word or two and a hyperlink to an article which explains why the parent comment was wrong. Does this reply deserve an automatic downvote? Generally: it is a bad idea to think about one specific example and use it to create a rule for all examples. For instance, not all negative-karma comments are trolling; yet we create a rule for all negative-karma comments based on our emotional reaction to trolling.

Solution: Ban their IP addresses. This actually works, I'll tell you why. Not because they can't get new ones, but because they can't infinitely get new ones. If you've ever sought an unsecured proxy (a key way of obscuring your IP address) you'll know that it's tough to find a good proxy, they're slow, and they frequently leak your IP address regardless. Even programs like Tor only have so many IP addresses. To make it worse, (for them) it's no fun to use proxies that are far away - they're slow as all get out. This technique worked on spammers on a... (read more)

There should be a different discussion forum which is readable by all but can only be posted in by those with over, say, 1000 karma. This solution seems "obvious" as we already have a robust karma system, and it's very difficult to acquire that much karma without actually being a good poster (I don't have that much and I've been here for over a year).

This system could lead to the use of the "open" discussion forum as a kind of training grounds where you prove your individual signal-to-noise ratio and "rationality" is high enou... (read more)

[anonymous], 11y, 24 points

There should be a different discussion forum which is readable to all but can only be posted in by those with over, say 1000 karma.

Actually I'd find restrictions on who can or can't on vote on the comments to be a more interesting option. What would a forum look like if only those with over 1000 karma on LW could vote?

The Stack Exchange sites provide new users with an increasing amount of privileges based on their karma (for example). In principle, something similar could be implemented here, with separate privileges such as (in no particular order):

* Vote comments up
* Vote comments down
* Vote posts up
* Vote posts down
* Create Discussion posts
* Create Main posts
* Create meetups
Meetup creation doesn't seem to need a barrier. Perhaps a useful privilege that could come with enough karma would be allowing users to edit tags on articles. Separating voting on comments and Main posts seems reasonable, but I don't quite see why separating downvoting and upvoting would do any good.
This would make it very difficult for people who aren't already over 1000 to get there, because there would be so much less upvoting happening.
[anonymous], 11y, 15 points

I didn't originally propose this for LW in general, but for a different forum or section. People can earn their LW karma elsewhere. But let us, for the sake of this exchange, suppose we make this a general rule here. I actually like it much more than what I had in mind at first!

It should be emphasised that the reverse of what you describe is constantly happening. It is becoming easier and easier to amass 1000 karma as LessWrong grows. Comparing older to newer articles shows clear evidence of ongoing karma inflation.

There aren't that few people with karma over 1000; I'd guesstimate there are at least 100 of them. Many of those are currently active. But again, making it harder to get over 1000 karma in order to vote might be a good thing. A key feature of the Eternal September problem is that when newcomers to a community interact mostly with other new members, old norms have a hard time taking root. And yes, since users take the karma mechanism, especially negative votes, so seriously, it is a very strong kind of interaction. Putting the karma mechanism in the hands of proven members should produce better poster quality. It somewhat alleviates the problems of rapid growth.

It also further subsidizes the creation of new articles. Recall that your karma from writing a Main article is boosted 10-fold.

It especially controls how easy it is to post to Main. 20 karma from users with 1000+ karma is worth way more than 20 random karma.
Getting about 10 karma from introductory posts in the Welcome to LW threads wouldn't be hard. Also, people can publish a draft in comment form or just ask for karma in order to write a particular article. What do you think of the idea in general, for some other karma limit? Perhaps 500, which is probably close to what the average LWer has.
I like it but then again I have around a thousand karma so it wouldn't impact me very hard. On the other hand, I don't think it does a lot of work to actually fix the Monkeymind situation that EY and company seem to be so distressed by.
I'm not at all convinced the Monkeymind situation is nearly as serious a problem as EY and company seem to think.
Ah, okay. Never mind then, sounds like an interesting idea.
I hope this says what I wanted it to say: your interpretation was an interesting question in itself. So please offer criticism of this modified idea!
Okay, in regards to the misinterpretation: The reverse is happening precisely because there are so many new users who are voting. I'd say that the way LW started out could be used as an estimate of what that would look like. It was very rare for a comment to reach as many as 5 upvotes, and if you see an old comment that has more than that, most likely it had help from someone more recently upvoting it. Obviously, it would not be entirely the same, and I would place more weight on the up and downvotes being more accurate if this were put into place now, but it would make it much more difficult to get to that point.
[anonymous], 11y, 16 points

The reverse is happening precisely because there are so many new users who are voting. I'd say that the way LW started out could be used as an estimate of what that would look like. It was very rare for a comment to reach as many as 5 upvotes, and if you see an old comment that has more than that, most likely it had help from someone more recently upvoting it.

I agree LW in, say, 2010 seems an OK proxy for what it would be like, with one key difference: posting Main articles is much more karma-rewarding than it was back then. Articles did get over 10 or 20 karma even back then.

We should remember that we don't really care how many of the lurkers become posters. Growing the number of users is not a goal in itself, though I think for some communities it becomes a lost purpose. What we actually care about is having as much high quality content that has as many readers as possible.

I would argue the median high-quality comment is already made by a 1000+ karma user. In any case, the limit is something we can easily change based on experience, and isn't something that should be set without at least first seeing a graph of karma distribution among users.

There's probably a corollary to Löb's theorem that says a community of rationalists can't add new members to the community and guarantee that it remains a rational community indefinitely. Karma from ratings is probably an especially poor way to indicate a judgement of rationality because it's also used to signal interest in humor (to the point that slashdot doesn't even grant karma for Funny moderations), eloquence, storytelling, and other non-rational things. Any karma-increasing behavior will be reinforced and gain even more karma, and the most efficient ways of obtaining karma will prosper contrary to the goal of creating high quality content. Does every user with more than 1000 karma understand that concept sufficiently to never allow a user who does not understand it to reach 1000 karma? To be honest I didn't fully grasp the concept until just now. I was ready to start talking economics with karma as the currency until I realized that economics can not solve the problem.
I agree. This idea is better than my originally proposed idea. Easier to implement too, and with fewer drawbacks.
This seems like one of the best ideas in this thread to me. It's a simple rule (low drama, low meta), and is a bit like a distributed sponsorship system (where instead of needing to be sponsored by one member, you get partial sponsorship by several).
Hmmm. My unease with this idea would be entirely resolved if the upvotes were cached until the user reached 1000 karma, rather than merely prohibited/lost. Consider EY's article on how we fail to co-operate; I'd like to be able to stand up and say "yes, more of this please". I don't mind at all if the effect of that upvoting is delayed, but if I reach 1000 karma I don't expect to find the energy to go back over all the old threads to upvote those I liked in the past, so in that world my expression of support will be forever missing. That said, something really is necessary: on more recent posts the comments have had such a disheartening effect that I was beginning to decide that I should only read articles.
The thing is, your early upvotes and downvotes are probably different from your later ones.
My expectation is that there would be a significant degree of similarity. This may be a testable hypothesis, but we'd have to be gathering the data.
I'm all in favour of that.
Edited: I got the wrong impression from reading too quickly. Corrected comment: If needed we can choose a different level than 1000 karma, and change it over time in response to experience, so it's a flexible system. However, I'm not certain the idea itself is sound. I don't have the feeling that mutual upvoting by new users is a real problem that needs solving. Can you give links to example comments where you think the proposed rule would have helped?
I feel like if implemented generally, this would punish lurkers who have presumably been contributing to voting patterns for quite some time.
It would limit or remove the voting capability of lurkers. I'm not sure this is a bad thing (even though some of the people who do not comment probably do have good judgement). Either way, "punishment" isn't the right word.

On prior forums I have been on, attempts to split into separate restricted-poster and all-poster forums have ended badly.

When there are enough high-class posters, everything goes into the high-class forum and the open forum collapses, leaving no worthwhile "in" for new users. When there are too few high-class users, everyone double-posts to both forums in order to get discussion, and you wind up with a functional one-forum system, except with lots of links, more burden, and extra top-level menus.

I have not seen an open / closed forum system with exactly the goldilocks number of high class users to maintain stable equilibria in both forums.

The main objection I'd have to that is that standards might drop in the "open" section to the point where some users might just start ignoring it, fracturing the community.
Alternately: a (very low) karma requirement for comments and posts to the discussion subreddit (~20?), a somewhat higher karma requirement for comments and posts to Main (~200?), and a newbie subreddit for new posters to post in. Regular newbie forum topics would be welcome threads and stupid-question threads; anybody could post in the newbie forum, so there would be responses from LW veterans, and anybody would still be able to ask Stupid Questions. (Meetup threads possibly should go there too.) (Depending on how many other topics there are, it might be worthwhile for every introduction, meetup, and question to get its own thread there.)

I think we need a better way to separate true trolls (an admittedly loose category) from people who can be reasoned with and/or raise interesting points but are being downvoted for other reasons (like poor grammar/writing). Once we have this, we need to convince people to stop feeding the trolls. One way that I have seen proposed is tagged karma (e.g. +1 Insightful, -1 Trolling). Additionally, sockpuppets haven't been a major problem here, but given the role they tend to play in website decline, we should have a strategy for dealing with them in advance.

Two more variations on the penalty system:

  1. Have the penalty dependent on history, e.g. replies to a comment with -3 score are penalised only if the original commenter has negative karma over the last 30 days (or maybe if the commenter has total karma less than 10 or 50 etc.). (also suggested by shminux)

  2. Use comment score to compute the penalty, so a comment with -2 only takes 2 karma to reply to, while one with -10 takes 10. (Obviously some other proportionality factor could be used, or even a different relationship between comment karma and reply penalty.)

... (read more)
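The proportional variant (suggestion 2 above) could be sketched as follows; this is a hypothetical illustration with an adjustable proportionality factor, not anything from LW's actual code:

```python
# Sketch of the proportional-penalty idea: the reply toll equals however far
# below zero the parent comment sits, scaled by an adjustable factor.
def proportional_toll(parent_score: int, factor: float = 1.0) -> int:
    """Karma cost of replying; zero for parents at or above zero."""
    if parent_score >= 0:
        return 0
    return round(-parent_score * factor)
```

One nice property is that the penalty grows smoothly with community disapproval instead of jumping at an arbitrary threshold like -3.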
I like the proportionality idea but still want to be able to express a vote that says "I want fewer posts like this" without also saying "I don't want replies to this". My proposal is that replies to comments at -5 or better are free, and replies to other comments start at a score 5 better than the parent.
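The starting-score rule in the comment above could be sketched like this (a hypothetical illustration, not LW's real implementation):

```python
# Sketch of the proposed rule: replies to comments at -5 or better start at 0;
# below that, the reply itself starts 5 points above the parent's score,
# discouraging replies in proportion to how badly the parent was received.
def reply_starting_score(parent_score: int) -> int:
    """Initial score of a new reply under the proposed rule."""
    if parent_score >= -5:
        return 0
    return parent_score + 5
```

This replaces the flat 5-karma charge to the replier with a visible penalty on the reply itself, so readers see the disincentive directly.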

So, you read all the spam in your email inbox?

This may already have been suggested, but wouldn't it be better to have the comment itself automatically lose five karma, rather than the user who posted it?

EDIT: link

Please keep your suggestions programmatically very simple. There are all sorts of bright ideas we could be trying, but the actual strong filter by which only a very few are implemented is that programming resources for LW are very scarce and very expensive.

(This filter is so strong that it's the main reason why discussion of potential LW features didn't in-advance-to-me seem very publicky - most suggestions are too complicated, and the critical discussion is the one where SIAI/CFAR decides what we can actually afford to pay for. It hadn't occurred to me that anyone would dislike this particular measure, and I'll try to update more in that direction in the future.)

Do we really need another layer of verbal obfuscation, particularly of the cutesy variety?

Can't you just not read the replies to downvoted comments? How is it hurting anybody when someone replies to a comment with a score at or below -3? I don't see a reason to disincentivise it.

Many people use the recent comments page to see what is being discussed, so off-topic comments or replies to trolls that show up there make it more difficult to use efficiently.

If the problem is with spam in the 'recent comments' sidebar, then it seems like we should fix that. I would be on board with a rule that posts in hidden sub-threads don't show up on the 'recent comments' sidebar. If we can remove posts from the sidebar, then perhaps posts that drop to -3 should be removed from the 'recent comments' sidebar as well.

Bump. ;-)

If you want to nuke trolling, use the Metafilter strategy: new accounts have to pay $5 (once). Troll too much, lose your account and pay $5 for a new one. Hurts a lot more than downvotes.

This will deter some (a lot?) of non-trolls from making new accounts. It will slow community growth. On the other hand, it will tighten the community and align interests. Casual users don't contribute to Less Wrong's mission: we need more FAI philanthropist/activists. Requiring a small donation will make it easier for casual users to make the leap to FAI philanthropist/ac... (read more)

If you want to nuke trolling, use the Metafilter strategy: new accounts have to pay $5 (once).

I don't know if I would have made my account here if I had to pay $5 to do so. I would pay $5 now to remain a member of the community- but I've already sunk a lot of time and energy into it. I mean, $5 is less cost to me than writing a new post for main!

I am deeply reluctant to endorse any strategy that might have turned me away as a newcomer.

What if you had to associate your account with a mobile phone number, by getting an activation code by text message? It still has the effect of requiring some real resource to make an account, but the first one is effectively free. There may be some concern about your number being sold to scammers.

If I encountered an unfamiliar blog or forum and wanted to leave a comment, I wouldn't give my phone number to do so, even if it seemed quite interesting. Then I would probably leave the site.

I suspect getting that to work in all countries would be a bit of a hassle.

Hard to say. So far, I've only given out my phone number to online services like gmail (woo 2 factor authentication!) or banks, but that's because my email and bank accounts are more powerful than my phone number and because very few services ask for it. I think there's a chance I wouldn't give out my phone number, and I can't clearly feel whether that chance is larger or smaller than my reluctance to pay $5. (Modeling myself from over a year ago is tough.) This also runs into the trouble that instead of getting resources from users, you're spending them on users- texting activation codes is cheap but not free.
I'd consider $5 but I would not have an account here if I had to buy a new phone in order to do so.
Do you know how many offers of free SIMs I get here in the UK? Really quite a lot. Phone numbers are as easy as email accounts.
Paul Crowley (+7, 11y):
Err, really? I'd like to make some sort of bet on this - how many phone numbers you can receive texts from versus how many email addresses I can receive texts from by some deadline. Interested? You wouldn't have to actually receive on them all, of course; we'll both use sampling to check.
You are, of course, correct. There'd be a bit of a delay - I was thinking of different email providers, not creating lots on one domain. And SIMs are sorta slow to turn over. But accumulating a pile of phone numbers for trolling would not be hard.
Paul Crowley (+2, 11y):
"A pile", sure, but not millions. The "different email providers" thing is an interesting caveat, but how are you proposing to make use of that caveat in software? It's not that it's impossible on the face of it, but any software that wanted to make use of it would AFAICT have to have a painstakingly hand-crafted database of domain rules, so that you accept lots of addresses from some domains but not lots of addresses from others.
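The hand-crafted database of domain rules being pointed at shows up even in a toy address normalizer. The per-provider rules below are illustrative assumptions (Gmail's well-known dot- and plus-insensitivity), not a complete or authoritative rule set:

```python
def canonical_email(address):
    """Collapse known provider aliases so duplicate signups can be spotted.

    Each branch below is one entry in exactly the kind of hand-crafted,
    per-domain rule database the comment above describes.
    """
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        # Gmail ignores dots and anything after '+' in the local part,
        # and googlemail.com is an alias for gmail.com.
        local = local.split("+", 1)[0].replace(".", "")
        domain = "gmail.com"
    return local + "@" + domain
```

Every provider needs its own rule, and the rules change under you, which is why this approach scales poorly compared to a per-account cost like $5 or a phone number.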
It's not like that in all countries. In Italy (unless the law has recently changed) you have to provide an identity document in order to activate a new SIM.
[anonymous] (+23, 11y):

An unintended side-effect: readers without credit/debit cards may find it harder to join the site. This disproportionately affects younger people, a demographic that may be more open to LW ideas.

Another unintended side-effect is that it may increase phyg pattern-matching. Now new recruits have to pay to join the site, and surely that money is being secretly funneled into EY's bank account.

That said, I think that on balance this is a good policy proposal. I also think that the similar proposal using phone verification is plausible, and doesn't run into the above two problems.

Heck, there's no pattern-matching about it. It will increase phyg.

No, it pattern matches Metafilter, like the top post said. Also SomethingAwful.
I never said anything about what it pattern-matches to. (Edit: did you mean to reply to ParagonProtege's comment?)

I don't think anyone at SI agrees with you about Less Wrong's mission. The site is supposed to be about rationality. There is hope (and track record) of the Less Wrong rationality community having helpful spinoffs for SI's mission, but those benefits depend on it having its own independent life and mission. An open forum on rationality and a closed board for donors to a charity aren't close substitutes for one another.

we need more FAI philanthropist/activists

Who is "we"?

I think the percentage of "casual" users who participate on this site because they enjoy intelligent conversations on rationality-related topics while having no FAI agenda is non-negligible. I suspect that reinforcing the idea of equality between LW and FAI activism will make many of them leave. It may be a net negative even if LW's mission is FAI activism as there are positive externalities of greater diversity of both discussion topics and participant opinions (less boredom, more new ideas, better critical scrutiny of ideas, less danger of community evaporative cooling, greater ability to attract new readers...)

Also, I don't like the idea of LW's mission being FAI activism. The header still says "A community blog devoted to refining the art of human rationality", and I'd appreciate it if I could continue believing that description. Of course I realise that the owners of the site are FAI enthusiasts, but that's not true of the community as a whole. LW is a great rationality blog even without all its FAI/philanthropy stuff, not only for the texts already written, but also for the productive debating standards used here and the many intelligent people around. I would regret having to leave, which I would do if LW turned into a solely FAI-activist webpage.


Casual users don't contribute to Less Wrong's mission: we need more FAI philanthropist/activists.

The tagline is still "A community blog devoted to refining the art of human rationality". If you want FAI and philanthropy, you should I suspect be asking for those specifically up front.


If you want to nuke trolling, use the Metafilter strategy: new accounts have to pay $5 (once). Troll too much, lose your account and pay $5 for a new one. Hurts a lot more than downvotes.

It's a good idea. Some variations, like associating accounts with mobile phone numbers, may slow good growth less. Maybe it would help to have multiple options to signal being a legitimate new user.

Casual users don't contribute to Less Wrong's mission: we need more FAI philanthropist/activists.

I would like to see more x-risk philanthropists/activists, but I don't want to make that a requirement for LW users. It would be good to have more users who want to be stronger because they have something to protect, rather than thinking rationality is shiny.

associating accounts with mobile phone numbers

I don't have a phone, and if I did I would refuse to give it out in case someone did something horrible like call me. I'm not the only phone-hater around; we overlap with phone-hater demographics a fair amount.

How would you feel about the $5 per account option? Any other ideas on how someone could signal that the account they are creating is not yet another sock puppet or identity reset that you would be comfortable with? Maybe associating your account with your website? I'm thinking the phone idea, if it is used at all, should be one of several options, so the user can choose one that works for them.
By strong default, I do not pay money for Internet intangibles, but $5 is low enough that I think we might see people buying accounts for their likely-valuable-commenter friends or something, so I'm not quite so opposed (but I think it would sharply slow community growth, and prevent people who we'd love to have around - like folks whose books get reviewed here - from dropping in to just say a few things). I wouldn't mind associating my website with my account - I already do, now that that's an available field. But even fewer people have websites than phones. Wouldn't some kind of IP address thing suffice to rule out casually created socks?
This is an important point: we should be welcoming to people we talk about, and I'm not sure how that fits into any scheme. Send out preemptive invitations when we talk about people? Who would keep on top of that? Well, that was the result of me trying to find a mechanism that wouldn't exclude you. But if we let people associate their account with a phone or a website, we include more people. It would be better to have more options, to be more inclusive, if we can think of more specific ones. Yes, for certain values of casual. You can hide your IP address by going through proxies.
It would have false positives due to people sharing public IPs (but not computers) on workplace or campus networks.
And due to e.g. family members sharing IPs and computers.
That would show up on my credit card bill, which may cause certain inconveniences and I suspect we have people for whom that would cause a lot more than an inconvenience.
Strongly agree. There are also several objections raised on that comment [].
I think this is a VERY BAD IDEA. Charging $5 would have kept me out. It also keeps out everyone who doesn't have a credit card, which includes basically every high school student.
What percentage of current posters do you estimate are FAI philanthropists/activists? Can you give a couple of specific examples of what distinguishes them from casual users? (donates to SI? works in a relevant field? volunteers for SI? etc.)
Now that I think about that, neither poll asked takers how much they had donated for existential risk mitigation. (In case you're wondering, the answer in my case would be “zero”.)
Where did you get that idea about "Less Wrong's mission" from? Actually, when LW was created, discussing AI wasn't even allowed [].
Also discourages sockpuppetry
[anonymous] (+2, 11y):


Karma inflation due to more users means old articles aren't as upvoted as they should be. Also, because they are old, they don't get read or updated as much as they should. We tried to at least correct for people not reading the sequences with reruns. It didn't exactly work.


Currently, karma earned from posting a Main article is boosted by a factor of 10. Let's boost the value of karma by a factor of 2, or some other low value, for any new comments on articles older than 2 years.

[anonymous] (+22, 11y):

We tried to at least correct people not reading the sequence with reruns. It didn't exactly work.

I also didn't like how they fragmented the commentary. Many found that a feature rather than a bug. I found it plain annoying that, when reading an old article, I had to do a search to see if there was any recent discussion in the rerun threads too.

I mean, surely some of the things we wrote back in 2007 or 2009 will eventually turn out to have been plain wrong, obsolete, or incomplete, right? It would be neat to see that noted, at least in their comment sections.

Many people read through the sequences much like they would a textbook. We practically encourage them to do so. New, well-written comments on old articles might be very useful.

I liked this idea until I realized I only liked it because I've been around longer than lukeprog.
Yes, tolerating tolerance is important. But also well-kept gardens and all that. Indeed, part of this discussion is an attempt to find a way to improve the signal-to-noise ratio without methods that run afoul of your concern.

I am soon to post a well thought out solution to "endless September" that will cover this. It's nearly finished in my drafts right now.

[anonymous] (+0, 11y):


There are many, many polite or on-topic posts that are not very good, or even inane, which hover at 0 or 1 karma. For many readers they simply aren't worth the opportunity cost.


Set the default visible level not to 0 but to 2 karma or some such number, much like people can currently set negative comments to unhidden. The exception to this should be when "Sort By" is set to "New".

I think that could severely inhibit the growth of discussion. Every new comment starts at 0 karma, and if some people didn't see those comments they wouldn't get a chance to upvote them, and they would stay at 0.
And when it is set to "Old".
