How can we get more and better LW contrarians?

I'm worried that LW doesn't have enough good contrarians and skeptics, people who disagree with us or like to find fault in every idea they see, but do so in a way that is often right and can change our minds when they are. I fear that when contrarians/skeptics join us but aren't "good enough", we tend to drive them away instead of improving them.

For example, I know a couple of people who occasionally had interesting ideas that were contrary to the local LW consensus, but were (or appeared to be) too confident in their ideas, both good and bad. Both people ended up being repeatedly downvoted and left our community a few months after they arrived. This must have happened more often than I have noticed (partly evidenced by the large number of comments/posts now marked as written by [deleted], sometimes with whole threads written entirely by deleted accounts). I feel that this is a waste that we should try to prevent (or at least think about how we might). So here are some ideas:

  • Try to "fix" them by telling them that they are overconfident and give them hints about how to get LW to take their ideas seriously. Unfortunately, from their perspective such advice must appear to come from someone who is themselves overconfident and wrong, so they're not likely to be very inclined to accept the advice.
  • Create a separate section with different social norms, where people are not expected to maintain the "proper" level of confidence and niceness (on pain of being downvoted), and direct overconfident newcomers to it. Perhaps through no-holds-barred debate we can convince them that we're not as crazy and wrong as they thought, and then give them the above-mentioned advice and move them to the main sections.
  • Give newcomers some sort of honeymoon period (marked by color-coding of their usernames or something like that), where we ignore their overconfidence and associated social transgressions (or just be extra nice and tolerant towards them), and take their ideas on their own merits. Maybe if they see us take their ideas seriously, that will cause them to reciprocate and take us more seriously when we point out that they may be wrong or overconfident.
I guess these ideas sounded better in my head than written down, but maybe they'll inspire other people to think of better ones. And it might help a bit just to keep this issue in the back of one's mind and occasionally think strategically about how to improve the person you're arguing against, instead of only trying to win the particular argument at hand or downvoting them into leaving.
P.S., after writing most of the above, I saw this post:
OTOH, I don’t think group think is a big problem. Criticism by folks like Will Newsome, Vladimir Slepnev and especially Wei Dai is often upvoted. (I upvote almost every comment by Dai or Newsome if I don’t forget to. Dai always makes very good points, and Newsome is often wrong but also hilariously funny or just brilliant and right.) Of course, folks like this Dymytry guy are often downvoted, but IMO with good reason.
To be clear, I don't think "group think" is the problem. In other words, it's not that we're refusing to accept valid criticisms, but more that our group dynamics (and other factors) cause there to be fewer good contrarians in our community than is optimal. Of course what is optimal might be open to debate, but from my perspective, it can't be right that my own criticisms are valued so highly (especially since I've been moving closer to the SingInst "inner circle" and my critical tendencies have been decreasing). In the spirit of making oneself redundant, I'd feel much better if my occasional voice of dissent were just considered one amongst many.

I have significantly decreased my participation in LW discussions recently, partly for reasons unrelated to whatever is going on here, but I have a few issues with the present state of this site, and perhaps they are relevant:

  • LW seems to be slowly becoming self-obsessed. "How do we get better contrarians?" "What should our debate policies be?" "Should discussing politics be banned on LW?" "Is LW a phyg?" "Shouldn't LW become more of a phyg?" Damn. I am not interested in endless meta-debates about community building. Meta-debates would be fine if they were rare; as it is, I feel I am losing sight of the purpose. Object-level topics should form an overwhelming majority, both in the main section and in the discussion section.
  • Too narrow a set of topics. Somewhat ironically, the explicitly forbidden topic of politics is debated quite frequently, but many potentially interesting areas of inquiry are left out completely. You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted. But there is only so much one can say about AI and ethics and Bayesian epistemology and self-improvement at a level accessible to a general internet audience. When I discovered Overcoming Bias (half of which later evolved into LW), it was overflowing with revolutionary and inspiring (from my point of view) ideas. Now I feel saturated, as the majority of new articles seem devoid of new insights (again, from my point of view).

If you are afraid that LW could devolve into a dogmatic, narrow community without enough contrarians to maintain a high level of epistemic hygiene, don't try to spawn new contrarians by methods of social engineering. Instead, try to encourage debate on a diverse set of topics, mainly those which haven't already been addressed by 246 LW articles. If there is no consensus, people will disagree naturally.

I'm not trying to spawn new contrarians for the sake of having more contrarians, nor do I want to encourage debate for the sake of having more disagreements. What I care about is that I personally, and this community as a whole, have correct beliefs on the topics that I think are most important, namely the core rationality and Singularity-related topics, and I think having more contrarians who disagree about these core topics would help with that. Your suggestion doesn't seem to help with my goals, or at least it's not obvious to me how it would.

(BTW, I note that you've personally made 2 meta/community posts out of 7, whereas I've only made about 3 out of 58 (plus or minus a few counting errors). So maybe you can give me a pass on this one? :)

I note that you've personally made 2 meta/community posts out of 7, whereas I've only made about 3 out of 58

I plead guilty and promise to avoid making meta posts in the future. (Edit: I don't object specifically to your meta-posts but to the overall relative number of meta discussions lately.)

Nevertheless, I doubt calling for more contrarians is helpful with respect to your purposes. The question of how to increase the number of contrarians is naturally answered by proposals to create a more contrarian-friendly environment, which, if implemented, would attract a disproportionately high number of people who like to be contrarians, whatever the local orthodoxy is. My suggestion is, instead, to try to attract a more diverse set of people, even those who are not interested in the topics you consider important. You would profit indirectly, since some of them would eventually get engaged in your favourite discussions and bring fresh ideas. Incidentally, they will also somewhat lower the level of discourse, but I am afraid that is an inevitable side effect of any anti-cult policy.

Do you also think that having more contrarians who disagree that "2+2=4" would increase our likelihood of having correct beliefs? I mean, if they are wrong, we will see the weakness in their arguments and refuse to update, so there is no harm; but if they are right and we are wrong, it could be very helpful.

More generally, what is your algorithm for deciding for which values of X we need more contrarians who disagree with X?

If people come to LessWrong thinking "2+2 != 4" or "computer manufacturing isn't science", is saying "You're stupid" really raising the sanity waterline in any way? In short, we should distinguish between punishing disagreement and punishing obstinate behavior/contrarianism.

"computer manufacturing isn't science"

Well, computer manufacturing isn't science; it's engineering.

If someone says, "I believe in computers and GPS, but not quantum mechanics or science" then they are deeply confused.

Has there been a glut of those on LessWrong?

This. It's obviously very possible that this was a troll, but that's not my read.

Edit: There were one or two others talking a lot without contributing much that seemed to be the impetus for this discussion post. Wei Dai's post seems to be a reaction to that post.

LW seems to be slowly becoming self-obsessed.

It waxes and wanes. Try looking at all articles labeled "meta"; there were 10(!) in April of 2009 that fit your description of meta-debates (arguing about the karma system, the proper use of the wiki, the first survey, and an Eliezer post about getting less meta).

Granted, that was near the beginning of Less Wrong... but then there was another burst with 5 such articles in April 2010 as well. (I don't know what it is about springtime...) Starting the Discussion area in September 2010 seems to have siphoned most of it off of Main; there have been 3-5 meta-ish posts per month since then (except for April 2011, in which there were 9... seriously, what the hell is going on here?)

Maybe April Fools' Day gets people's juices flowing?

LW seems to be slowly becoming self-obsessed.

I don't see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it's what we're best at, and that's always been so.

You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted.

Whut?

Links or it didn't happen.

LW seems to be slowly becoming self-obsessed.

I don't see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it's what we're best at, and that's always been so.

Yes, but the real question is why we love going meta. What is it about going meta that makes it worthwhile to us? Some have postulated that people here are actually addicted to going meta because it is easier to go meta than to actually do stuff, and yet despite the lack of real effort, you can tell yourself that going meta adds significant value because it helps change some insight or process once but seems to deliver recurring payoffs every time the insight or process is used again in the future...

...but I have a sneaking suspicion that this theory was just a pat answer offered as a status move, because going meta on going meta puts one in a position of objective examination of mere object-level meta-ness. To understand something well helps one control the thing understood, and the understanding may have required power over the thing to learn the lessons in the first place. Clearly, therefore, going meta on a process would pattern-match to being superior to the process or the people who perform it, which might push one's buttons if, for example, one were a narcissist.

I dare not speculate on the true meaning and function of going meta on going meta on going meta, but if I were forced to guess, I think it might have something to do with a sort of ironic humor over the appearance of mechanical repetitiveness as one iterates a generic "going meta" operation that some might naively have supposed to be the essence of human mental flexibility. Mental flexibility from a mechanical gimmick? Never!

Truly, we should all collectively pity the person who goes meta on going meta on going meta on going meta, because their ironically humorous detachment is such a shallow trick, and yet it is likely to leave them alienated from the world, and potentially bitter at its callous lack of self-aware appreciation for that person's jokes.

Related question: If the concept of meta is drawn from a distribution, or is an instance of a higher-level abstraction, what concept is best characterized by that distribution itself / that higher-level abstraction itself? If we seek whence cometh "seek whence", is the answer just "seek whence"? (Related: Schmidhuber's discussion about how Goedel machines collapse all the levels of meta-optimization into a single level. (Related: Eliezer's Loebian critique of Goedel machines.))

I laughed this morning when I read this, and thought "Yay! Theism!" which sort of demands being shortened to yaytheism... which sounds so much like atheism that the handful of examples I could find mostly occur in the context of atheism.

It would be funny to use the word "yaytheism" for what could be tabooed as "anthropomorphizing meta-aware computational idealism", because it frequently seems that humor is associated with the relevant thoughts :-)

But going anthropomorphic seems to me like playing with fire. Specifically: I suspect it helps with some emotional reactions and pedagogical limitations, but it seems able to cause non-productive emotional reactions and tenacious confusions as a side effect. For example, I think most people are better off thinking about "natural selection" (mechanistic) over either "Azathoth, the blind idiot god" (anthropomorphic with negative valence) or "Gaia" (anthropomorphic with positive valence).

Edited To Add: You can loop this back to the question about contrarians, if you notice how much friction occurs around the tone of discussion of mind-shaped-stuff. You need to talk about mind-shaped-things when talking about cogsci/AI/singularity topics, but it's a "mindfield" of lurking faux pas and tribal triggers.