I'm worried that LW doesn't have enough good contrarians and skeptics, people who disagree with us or like to find fault in every idea they see, but do so in a way that is often right and can change our minds when they are. I fear that when contrarians/skeptics join us but aren't "good enough", we tend to drive them away instead of improving them.

For example, I know a couple of people who occasionally had interesting ideas that were contrary to the local LW consensus, but were (or appeared to be) too confident in their ideas, both good and bad. Both people ended up being repeatedly downvoted and left our community a few months after they arrived. This must have happened more often than I have noticed (partly evidenced by the large number of comments/posts now marked as written by [deleted], sometimes with whole threads written entirely by deleted accounts). I feel that this is a waste that we should try to prevent (or at least think about how we might). So here are some ideas:

  • Try to "fix" them by telling them that they are overconfident and give them hints about how to get LW to take their ideas seriously. Unfortunately, from their perspective such advice must appear to come from someone who is themselves overconfident and wrong, so they're not likely to be very inclined to accept the advice.
  • Create a separate section with different social norms, where people are not expected to maintain the "proper" level of confidence and niceness (on pain of being downvoted), and direct overconfident newcomers to it. Perhaps through no-holds-barred debate we can convince them that we're not as crazy and wrong as they thought, and then give them the above-mentioned advice and move them to the main sections.
  • Give newcomers some sort of honeymoon period (marked by color-coding of their usernames or something like that), where we ignore their overconfidence and associated social transgressions (or just be extra nice and tolerant towards them), and take their ideas on their own merits. Maybe if they see us take their ideas seriously, that will cause them to reciprocate and take us more seriously when we point out that they may be wrong or overconfident.
I guess these ideas sounded better in my head than written down, but maybe they'll inspire other people to think of better ones. And it might help a bit just to keep this issue in the back of one's mind and occasionally think strategically about how to improve the person you're arguing against, instead of only trying to win the particular argument at hand or downvoting them into leaving.
P.S., after writing most of the above, I saw this post:
OTOH, I don’t think group think is a big problem. Criticism by folks like Will Newsome, Vladimir Slepnev and especially Wei Dai is often upvoted. (I upvote almost every comment of Dai or Newsome if I don’t forget it. Dai always makes very good points and Newsome is often wrong but also hilariously funny or just brilliant and right.) Of course, folks like this Dymytry guy are often downvoted, but IMO with good reason.
To be clear, I don't think "group think" is the problem. In other words, it's not that we're refusing to accept valid criticisms, but more like our group dynamics (and other factors) cause there to be fewer good contrarians in our community than is optimal. Of course what is optimal might be open to debate, but from my perspective, it can't be right that my own criticisms are valued so highly (especially since I've been moving closer to the SingInst "inner circle" and my critical tendencies have been decreasing). In the spirit of making oneself redundant, I'd feel much better if my occasional voice of dissent is just considered one amongst many.


I have significantly decreased my participation in LW discussions recently, partly for reasons unrelated to whatever is going on here, but I have a few issues with the present state of this site and perhaps they are relevant:

  • LW seems to be slowly becoming self-obsessed. "How do we get better contrarians?" "What should be our debate policies?" "Should discussing politics be banned on LW?" "Is LW a phyg?" "Shouldn't LW become more of a phyg?" Damn. I am not interested in endless meta-debates about community building. Meta debates could be fine, but only if they are rare - otherwise I feel I am losing sight of the purpose. Object-level topics should form an overwhelming majority both in the main section and in the discussion.
  • Too narrow a set of topics. Somewhat ironically, the explicitly forbidden politics is debated quite frequently, but many potentially interesting areas of inquiry are left out completely. You post a question about calculus in the discussion section and get downvoted, since it is "off topic" - ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of t
... (read more)

I'm not trying to spawn new contrarians for the sake of having more contrarians, nor do I want to encourage debate for the sake of having more disagreements. What I care about is (me personally as well as this community as a whole) having correct beliefs on the topics that I think are most important, namely the core rationality and Singularity-related topics, and I think having more contrarians who disagree about these core topics would help with that. Your suggestion doesn't seem to help with my goals, or at least it's not obvious to me how it would.

(BTW, I note that you've personally made 2 meta/community posts out of 7, whereas I've only made about 3 out of 58 (plus or minus a few counting errors). So maybe you can give me a pass on this one? :)

8prase12y
I plead guilty and promise to avoid making meta posts in the future. (Edit: I don't object specifically to your meta-posts but to the overall relative number of meta discussions lately.) Nevertheless, I doubt calling for more contrarians is helpful with respect to your purposes. The question of how to increase the number of contrarians is naturally answered by proposals to create a more contrarian-friendly environment, which, if implemented, would attract a disproportionately high number of people who like to be contrarians, whatever the local orthodoxy is. My suggestion is, instead, to try to attract a more diverse set of people, even those who are not interested in topics you consider important. You would profit indirectly, since some of them would eventually get engaged in your favourite discussions and bring fresh ideas. Incidentally they would also somewhat lower the level of discourse, but I am afraid that is an inevitable side effect of any anti-cult policy.
1Viliam_Bur12y
Do you also think that having more contrarians who disagree that "2+2=4" would increase our likelihood of having correct beliefs? I mean, if they are wrong, we will see the weakness in their arguments and refuse to update, so there is no harm; but if they are right and we are wrong, it could be very helpful. More generally, what is your algorithm for deciding for which values of X we need more contrarians who disagree with X?
7TimS12y
If people come to LessWrong thinking "2+2 != 4" or "computer manufacturing isn't science", is saying "You're stupid" really raising the sanity waterline in any way? In short, we should distinguish between punishing disagreement and punishing obstinate behavior/contrarianism.
5Eugine_Nier12y
Well, computer manufacturing isn't science, it's engineering.
1TimS12y
If someone says, "I believe in computers and GPS, but not quantum mechanics or science" then they are deeply confused.
0[anonymous]12y
Has there been a glut of those on LessWrong?
1TimS12y
This. It's obviously very possible that this was a troll, but that's not my read. Edit: There were one or two others talking a lot without contributing much that seemed to be the impetus for this discussion post. Wei Dai's post seems to be a reaction to that post.

LW seems to be slowly becoming self-obsessed.

It waxes and wanes. Try looking at all articles labeled "meta"; there were 10(!) in April of 2009 that fit your description of meta-debates (arguing about the karma system, the proper use of the wiki, the first survey, and an Eliezer post about getting less meta).

Granted, that was near the beginning of Less Wrong... but then there was another burst with 5 such articles in April 2010 as well. (I don't know what it is about springtime...) Starting the Discussion area in September 2010 seems to have siphoned most of it off of Main; there have been 3-5 meta-ish posts per month since then (except for April 2011, in which there were 9... seriously, what the hell is going on here?)

5JenniferRM12y
Maybe April Fools day gets people's juices going?
9thomblake12y
I don't see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it's what we're best at, and that's always been so. Whut? Links or it didn't happen.

LW seems to be slowly becoming self-obsessed.

I don't see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it's what we're best at, and that's always been so.

Yes, but the real question is why we love going meta. What is it about going meta that makes it worthwhile to us? Some have postulated that people here are actually addicted to going meta because it is easier to go meta than to actually do stuff, and yet despite the lack of real effort, you can tell yourself that going meta adds significant value because it helps change some insight or process once but seems to deliver recurring payoffs every time the insight or process is used again in the future...

...but I have a sneaking suspicion that this theory was just a pat answer that was offered as a status move, because going meta on going meta puts one in a position of objective examination of mere object level meta-ness. To understand something well helps one control the thing understood, and the understanding may have required power over the thing to learn the lessons in the first place. Clearly, ther... (read more)

9Will_Newsome12y
Related question: If the concept of meta is drawn from a distribution, or is an instance of a higher-level abstraction, what concept is best characterized by that distribution itself / that higher-level abstraction itself? If we seek whence cometh "seek whence", is the answer just "seek whence"? (Related: Schmidhuber's discussion about how Goedel machines collapse all the levels of meta-optimization into a single level. (Related: Eliezer's Loebian critique of Goedel machines.))
6JenniferRM12y
I laughed this morning when I read this, and thought "Yay! Theism!" which sort of demands being shortened to yaytheism... which sounds so much like atheism that the handful of examples I could find mostly occur in the context of atheism. It would be funny to use the word "yaytheism" for what could be tabooed as "anthropomorphizing meta-aware computational idealism", because it frequently seems that humor is associated with the relevant thoughts :-) But going anthropomorphic seems to me like playing with fire. Specifically: I suspect it helps with some emotional reactions and pedagogical limitations, but it seems able to cause non-productive emotional reactions and tenacious confusions as a side effect. For example, I think most people are better off thinking about "natural selection" (mechanistic) over either "Azathoth, the blind idiot god" (anthropomorphic with negative valence) or "Gaia" (anthropomorphic with positive valence). Edited To Add: You can loop this back to the question about contrarians, if you notice how much friction occurs around the tone of discussion of mind-shaped-stuff. You need to talk about mind-shaped-things when talking about cogsci/AI/singularity topics, but it's a "mindfield" of lurking faux pas and tribal triggers.
4Will_Newsome12y
The following was hastily written, apologies for errors. (I would go farther, and suggest not even thinking about "natural selection" in the abstract, but specific ecological contingencies and selection pressures and especially the sorts of "pattern attractors" from complex systems. If I think about "evolution" I get this idea of a mysterious propelling force rather than about how the optimization pressure comes from the actual environment. Alternatively, Vassar has previously emphasized thinking of evolution as mere statistical tendency, not an optimizer as such;—or something like that.) I think one thing to keep in mind is that there is a reverse case of the anthropomorphic error, which is the pantheistic/Gnostic error, and that Catholic theologians were often striving hard to carefully distinguish their conception of God from mystical or superstitious conceptions, or conceptions that assigned God no direct role in the physical universe. But yeah, at some point this emphasis seems to have hurt the Church, 'cuz I see a lot of atheists thinking that Christians think that God is basically Zeus, i.e. a sky father that is sometimes a slave to human passions, rather than a Being that takes game theoretic actions which are causally isomorphic to the outputs of certain emotions to the extent that those emotions were evolutionarily selected for (i.e. given to men by God) for rational game theoretic reasons. The Church traditionally was good at toeing this line and appealing to people of very different intelligences, having a more anthropomorphic God for the commoners and a more philosophical God for the monks and priests, but I guess somewhere along the way this balance was lost. I'm tempted to blame the Devil working on the side of the Reformation and the Enlightenment but I suppose realistically some blame must fall on the temporal Church. Alternatively, maybe you do accept Neoplatonist or Catharian thinking where we have infinitely meta-aware computational agents as abst
0[anonymous]12y
Damn. You just got metametameta.
8orthonormal12y
I thought of this Mitchell Porter post on MWI and this puzzle post by Thomas. As it happens, I downvoted both (though after a while, I dropped the downvote from the latter) and would defend those downvotes, but I can see how prase gets the impression that we only upvote articles on a narrow subset of topics.
2thomblake12y
Yeah, both of those are low-quality.
5prase12y
As for physics, I was thinking more about this whose negative karma I have already commented on. In the meantime I have forgotten that the post managed to return to zero afterwards. "Low-quality" is too general a justification to recognise the detailed reasons for downvotes. Among the more concrete criticisms I recall many "this is off-topic, hence my voting down" reactions. My memories may be subject to bias, of course, and I don't want to spend time compiling more reliable statistics. What I feel more certain about, however, is that there are many people who wish to keep all debates relevant to rationality, which effectively denotes an accidental set of topics, roughly {AI, charity donations, meta-ethics, evolutionary psychology, self-improvement, cognitive biases, Bayesian probability}. No doubt those topics are interesting, even for me. But not so much as to keep me engaged after three (or however many) years of LW's existence. And since I disagree with many standard LW memes, I suppose there may be other potential "contrarians" (perhaps more willing to voice their disagreements than I am) slowly losing interest for reasons similar to mine.
0[anonymous]12y
Yes, it's sitting at +1 here and sitting at +2 at physics stackexchange. This supports the opposite of your view, suggesting that physics questions are almost as on-topic here as they are at physics stackexchange -- which is surely too on-topic.
2wedrifid12y
Wow. The first one is only at -2? That's troubling. Ahh, nevermind.
0Viliam_Bur12y
Do we love going meta? Yes, we do. Are we good at it? Sometimes yes, sometimes no; it also depends on the individual. But going meta is good for signalling intelligence, so we do it even when it's just a waste of time. Has it always been so? Yes; the impracticality and procrastination of many intelligent people are widely known.
0h-H12y
The akrasia you refer to is actually a feature, not a bug. Just picture the opposite: intelligent people rushing to conclusions and caring more about getting stuff done than about resisting the urge to go with first answers and actually thinking. My point is, we decry procrastination so much, but the fact is it is good that we procrastinate; if we didn't have this tendency we would be doers, not thinkers. Not that I'm disparaging either, but you can't rush math, or more generally deep, insightful thought; that way lies politics and insanity. In a nutshell, perhaps we care more for thinking about things - or alternatively get a rush from the intellectual crack - so much that we don't really want to act, or at least don't want to act on incomplete knowledge, and hence the widespread procrastination, which, given the alternative, is a very Good thing.
0TheOtherDave12y
It seems to follow from this model that if we measure the tendency towards procrastination in two groups, one of which is selected for their demonstrable capability for math, or more generally for deep, insightful thought, and the other of which is not, we should find that the former group procrastinates more than the latter group. Yes?
2h-H12y
Yes & I'd modify that slightly to "the former group needs to more actively combat procrastination".
0TheOtherDave12y
Upvoted for not backing away from a concrete prediction. I would be very surprised by that result.
0h-H12y
Upvoted for good reasons for upvoting :) For data, we could run a LW poll as a start and see. And out of curiosity, why would you be surprised?
0TheOtherDave12y
Hm. You seem to have edited the comment after I responded to it, in such a way that makes me want to take back my response. How would we tell whether the former group needs to more actively combat procrastination? I would be surprised because it's significantly at odds with my experience of the relationship between procrastination and insight.
0h-H12y
I have a habit of editing a comment for a bit after replying; actually, I didn't see your response until after editing. I don't see how this changes your response in this instance, though? I added that caveat since the former group might have members who originally suffered more from procrastination as per the model, but eventually learned to deal with it; this might skew results if not taken into account.
0TheOtherDave12y
It changes my response because while I kind of understand how to operationalize "group A procrastinates more than group B" I don't quite understand how to operationalize "group A needs to more actively combat procrastination than group B." Since what I was approving of was precisely the concreteness of the prediction, swapping it out for something I understand less concretely left me less approving.
7John_Maxwell12y
This is a good point. Maybe future meta-discussions could be on talk pages for wiki articles, about specific changes to those articles, especially the about page and the FAQ? These actually represent how LW culture is being codified for new users, but unfortunately none of the recent debates seem to have resulted in substantial modification to them. It's too bad that automatic wiki editing privileges don't come with a certain level of karma; that would remove a trivial inconvenience and eliminate wiki spam.
2matt12y
Hmmm... you know that wouldn't be too hard to arrange. Keeping the passwords in sync after a change to one account would be much more work, but might be ignorable.
2John_Maxwell12y
Ideally it seems like you would get your wiki authentication cookie automatically after logging into Less Wrong, so you could log in once and use both. I don't know if that changes things regarding passwords.
3John_Maxwell12y
Do you have examples of this sort of stuff so I can go vote it up?
2prase12y
For example there are many posts tagged "physics", most of which hover around zero. A moderately interesting puzzle stands now at -7.

Having more contrarians would be bad for the signal-to-noise ratio on LW, which is already not as high as I'd like it to be. Can we obtain contrarian ideas more cheaply? For example, one could ask Carl Shulman for a list of promising counterarguments to X, rated by strength, and start digging from there. I'd be pretty interested to hear his responses for X=utilitarianism, the Singularity, FAI, or UDT.

I made a post on a personal blog on one of the more significant points against utilitarianism in my view. It's very rough, but I could cross-post it to Discussion if people wanted.

6cousin_it12y
I really like how you frame the choice between altruism and selfishness as a range of different "original positions" an agent may assume. Thanks a lot, and please do more of this kind of work!
6Vladimir_Nesov12y
To generalize, this suggests re-purposing existing LWers to the role of contrarians, rather than looking for new people.

Or designing a mechanism or environment that makes it easier for existing LW contrarians to express their ideas.

(My personal experience is that trying to defend a contrarian position on LW results in a lot of personal cheap shots, unnecessarily-aggressively-phrased counter-affirmations, or needless re-affirmations of the LW consensus. (E.g., I remember one LWer said he was trying to "tar and feather [me] with low-status associations". He was probably exaggerating, but still.) This stresses me out a lot and causes me to make errors in presentation and communication, and needlessly causes me to become adversarial. Now when discussing contrarian topics I start out adversarial in anticipation of personal cheap shots et cetera. Most of the onus is on me, but still, I think higher general standards or some sideways change in the epistemic environment could make constructive contrarianism a less stressful role for LWers to take up.)

-1siodine12y
Require X amount of karma to pay Y amount for an anonymous comment? Require X amount of karma to pay for Y amount of karma added to your post so that it's more likely to be seen, or to counteract downvotes?
3lukeprog12y
Yes, a list of Carl's best arguments against standard positions is going to be of vastly higher quality than anything we would be likely to get from the best contrarians we can find.

(FWIW Vassar, Carl, and Rayhawk (in ascending order of apparent neuroticism) are traditionally most associated with constructing steel men. (Or as I think Vassar put it, "steel men, adamantium men, magnetic monopolium men", respectively.))

0philh12y
If it's less signal but also less noise, it might be better overall. (And if we can't work out how to get more contrarians, this might be a useful suggestion anyway.) Sarcasm is hard to respond to, because I don't know what your actual position is other than "not-that".
5thomblake12y
I seriously doubt that was sarcasm.

Mm, on second reading I think you're right. "Vastly higher quality than anything we would be likely to get from the best contrarians we can find" comes across to me as having too many superlatives to be meant seriously. But "not-sarcastic" fits my model of lukeprog better.

(I was also influenced by it being at -1 when I replied. There's probably a lesson in contrarianism to be taken from that...)

3David Althaus12y
Keep in mind that we're talking about Carl Shulman. If you know the guy it's pretty obvious that Lukeprog was dead serious.

I disagree with quite a lot of the LW consensus, but I haven't really expressed my criticisms in the few comments I've made. I differ substantially from the Sequences' line on metaethics, reductionism, materialism, epistemology, and even the concept of truth. My views on these things are similar in many respects to those of Hilary Putnam and even Richard Rorty. Those of you familiar with the work of these gentlemen will know how far off the reservation this places me. For those of you who are not familiar with this stuff, I guess it wouldn't be a stretch to describe me as a postmodernist.

I initially avoided voicing my disagreements because I suspect that my collection of beliefs is not only regarded as false by this community, but also as a fairly reliable indicator of woolly thinking and a lack of technical ability. I didn't want to get branded right off the bat as someone not worth engaging with. The thought was that I should first establish some degree of credibility within the community by restricting myself to topics where the inferential distance between the average LWer and me is small. I think wannabe contrarians entering into any intellectual community should be encouraged to expe... (read more)

0wedrifid12y
He can still be found on the SingInst about us page. You do your name justice.
9Will_Newsome12y
(In case it's not obvious the description is not at all currently accurate. I am currently in the process of doing nothing. At some point I firmly decided that doing things is evil, so I try not to do things anymore, at least as a stopgap solution till I better understand the relevant motivational dynamics and moral philosophy. I still talk to people sometimes though, obviously, but to some extent I feel guilty about that too.)

Would it help you behave more morally by your lights if nobody replied to you?

0Will_Newsome12y
Good question. I don't think so.
5wedrifid12y
I still act socially as a Christian in much of my social life so in a certain (not epistemically literal) sense hearing this from 'another believer' strikes me as sacrilege. The Parable of the Talents has a clear point to make on this subject! You are defying His will and teachings.
1Will_Newsome12y
If only it were so easy to tell righteous exploration from liberal folly. But anyway, it's just a stopgap solution. Likely preparation for a sojourn in the desert, and after that, God knows.
-2wedrifid12y
40 days and 40 nights?
0Will_Newsome12y
I don't yet understand the (Kabbalistic?) significance of the number 40. Haven't looked into it. Maybe if I figured it out then I'd find 40 days, 40 nights uniquely appealing.
3wedrifid12y
Worked for Elijah, Moses and Jesus. (I'd recommend eating food though - or at least drinking Gatorade.)
0[anonymous]12y
Many languages, especially in antiquity, have colloquial ways of phrasing "forever" or "a long time" with a superficially-specific count. In Japanese, "ten thousand years" can be used to indicate an indefinitely long period; in Ancient Hebrew, "40 days and 40 nights" does that job.
-1Will_Newsome12y
But is there any known reason for picking 40 specifically? I wouldn't expect the Jews to choose their numbers arbitrarily.
4[anonymous]12y
Given the number of such numerically-precise-but-pragmatically-vague sayings in many languages, and the apparent failure of them to converge beyond shared cultural contact (Classical Arabic has the same use pattern for "40", as do many Middle Eastern languages from antiquity, though I'll admit that my linguistic knowledge doesn't do more than touch on this region superficially, other'n a few years of Modern Hebrew), I don't think "arbitrary" quite captures it -- they simply adopted a use pattern that was widespread in the time and place where they were.
0Hul-Gil12y
What do you think about Kabbalah? 40 is sometimes used, in the Torah, to indicate a general large quantity - according to Google. It also has associations with purification and/or wisdom, according to my interpretation of the various places it appears in the Bible as a whole. (There are a lot of them.)
4michaelsullivan12y
After a long hiatus from deep involvement in comment threads here -- I actually can't tell if this is serious, or a brilliant mockery of Eliezer's decisions around creating AGI [*]

There's one tactic that's worked well to get LW posts on neglected topics: having a competition for the best post on a subject. A $100 prize resulted in some excellent posts on efficient charity, and the Quantified Health Prize (substantially more money) led to some good analyses of the data on dietary supplementation.

What about having a contest for the best contrarian post on topic X? Personally, I'd chip in a few bucks for a good contrarian post on intelligence explosion, the mathematical universe, the expected value of x-rationality, and other topics.

(I had this idea after reading this comment, and now that I think of it I'm reminded of ciphergoth's survey of anti-cryonics writing as well.)

Stream of consciousness. Judge me that ye may be judged. If you judge it by first-level Less Wrong standards, it should be downvoted (vague unjustified assertions, thoughtlessly rude), but maybe the information is useful. I look first for the heavily downvoted posts and enjoy the responses to them best.

I found the discussion on dietary supplementation interesting, in your link and elsewhere. As I recall, the tendency was for the responses (not entrants, but people's comments around town) to be both crazy and stupid (with many exceptions, e.g., Yvain, Xacharaiah). I recall another thread on the topic where the correct comment ("careful!") was downvoted and its obvious explanation ("evolution works!") offered afterward was upvoted. Since I detected no secondary reasons for this, it was interesting in implying Less Wrongians did not see the obvious. Low certainties attached since I know I know nothing about this place. I'm deliberately being vague.

In general, Less Wrongians strike me as a group of people with impaired instrumental rationality who are working to overcome it. Give or take, most of you seem to be smarter than average but also less trustworthy,... (read more)

A whole lot of Less Wrong seems to be going for less detail, less knowledge, more use of frameworks of universal applicability and little precision. The sequences seem similar to me: Boring where I can judge meaning, meaningless where I can't. And always too long. I've read about four paragraphs of them in total. The quality of conversation here is high for a blog, of course, but low for a good academic setting. Some of the mild sneering at academics around here sounds ridiculous (an AI researcher believes in God). AI's a weak field. All round, papers don't quite capture any field and are often way way behind what people roughly feel.

This. A thousand times this. As a lawyer, LessWrong pattern matches with people outside a complicated field who are convinced that those in the field are idiots because observers think that "the field is not that complicated."

That said, "Boring where I can judge meaning, meaningless where I can't." is an unfair criticism. Lots of really excellent ideas seem boring if you had already internalized the core ideas.

Reminds me of part of a comment on Moldbug's blog, by Nick Szabo:

[legal reasoning]

It's a disciplined and competitive (dialectic, in the true original sense of that term) use of analogies, precedents, and emergent rules, far more sophisticated than normal use of analogy and metaphor. I learned it my first year of law school and it's a radically different kind of thinking I had never encountered before in school. The Bayesian bloggers seem to be completely oblivious to it, and to the tremendous value of tradition generally. That makes them, from my POV, culturally illiterate and incompetent to opine on law or politics. Yes, legal training also made me stuck up. :-)

If you can't afford law school, you can learn most of what you need to know from Legal Method and Writing by Charles R. Calleros and a first year law school common law casebook (Torts, Property, or Contracts).

The extremely short description of legal or scholastic reasoning is to think of a proposition or dispute as Schrodinger's Cat, both true and false at the same time, or each party at fault or not at the same time, or the appropriate dichotomy. Then gather all the moral or legal disputes that are similar to this one.

... (read more)
2TimS12y
This is a moderately reasonable model of litigation, but it isn't complete. For example, Thurgood Marshall litigated separate-but-equal in the law school context specifically because every judge has a gut feeling of how to compare law schools, which just isn't true about other educational institutions. In law school, I heard the apocryphal story that the lawyer for the State of Texas argued that the new segregated law school was just as good as UT Law School, and Justice Clark - a graduate of UT - passed a note to a colleague that read "Bullshit." That's clever lawyering and has nothing to do with arguing from precedent. Further, not all law is litigation. The legislature is empowered to make new laws that have no relationship to old laws. In short, there's a fair amount more to the practice of law than reasoning by analogy, even if reasoning by analogy is an important skill for a lawyer.
8Viliam_Bur12y
I like your style of writing. Though: too many ideas, difficult to rate and respond to. Karma always has a random component. The karma of one comment is not significant; the karma of 10 comments shows a trend. I once received negative karma for a comment showing an obvious error in the reasoning of others; but it only happened once in maybe a hundred comments, so I don't make a drama of it. But yeah, it might be painful if that happened to someone's first comment on LW. Instrumental rationality is a known problem of intelligent people. My worst experience was Mensa: huge signalling, almost nothing ever done; and if something is done, it's usually done by the same two or three people, who could just as well have done it on their own. Compared with that, people at LW are relatively high in instrumental rationality -- they have a working website, they write good articles, they do research, they organize meetups and seminars. But yes, we could do a lot better. Instead of going meta, people could focus and write about things they care about. Not doing this in a web discussion is probably a symptom of not doing it in real life. Yes, being convinced of one's own rationality can lead to overconfidence. I don't know a cure. Perhaps repeated exposure to the disagreement of other rational people will eventually move one to update. Another reason for people to focus on what they are good at: providing more evidence for their rationalist friends. Re: last three paragraphs -- the choice to stay or leave is yours. Don't participate in the discussions you consider worthless; write something about the real things you work on. (And perhaps I should do the same.) But this is not a new idea -- we have regular "what are you working on" threads here.
2twolier12y
Same dude here, despite the name. Hypothetical: Should a prof at, say, Harvard working on the genetics of longevity post and spend time here? Discussing his own work would be identifying and probably not very productive. Let's further say he's pre-tenure. Top places have a very different tenure success rate than even very good places, so it's an iffy point in his career. Does Less Wrong have anything to offer him? And doesn't he serve Less Wrong best by staying away and working? (or even "playing" elsewhere) My central criticism of this place may well be that some of you won't see there really is no question what the right answer is. Incidentally, I perfectly agree with your comment TimS, but the point is that I internalized those ideas independent of LessWrong. ViliamBur, you misunderstood my karma point. I was merely acknowledging that my comment's being upvoted and Dmitry's downvoted means I can't use it to indict the community at large (and instead was offering it as an illustration of my mindset). Luke: yup. But I did skim through the papers from the institute. Not very good. I suspect I can mostly infer the sequences from very basic background knowledge in game theory, philosophy, physics, neuroscience, psych, etc, and reading current comments threads. I don't see anything too fancy implied by the secondary sources (I enjoy reading the back-and-forth more). Uh, what else. I enjoy HPMOR. What I like about it, however, is bad about me: basically what Robin feared in his comment on OvercomingBias. I should (and will) go. It goes without saying that you wish me well. I just felt like saying hello because I like you. And if you can make it so I can talk to you profitably, I'd like that. Not your fault and I'm sorry to have said it, but I thought you should know.
1orthonormal12y
You should reply to different commenters individually, since then it will send them each notifications that you're replying. Few readers check all branches of the thread that they replied to.
0Viliam_Bur12y
He could discuss the less critical parts of his work. If there is a meetup near his home, he could go there and try to find someone to cooperate with. Or if he is an expert at genetics but less expert at math, he could ask someone to help him with statistics. Also, he could just spend his free time here, if he prefers the company of rational people and has trouble finding it outside of his work. That question is relevant for all of us, experts or not. Even for me there are many things I should be doing rather than procrastinating on LW. However, I know myself -- I spend a lot of time online, so given that, at least I can choose a site that gives me intelligent discussions. If you spend your time better, keep doing what works for you. Maybe visiting LW once a month and reading the articles in the "Main" part would be a reasonable compromise, if you want to participate. (I don't know if there is an RSS feed for "Main".)
4asr12y
Suppose you were a professional researcher looking for statistical help. Would you (A) go to a LessWrong meetup, (B), give a talk at the Statistics department of your hypothetical university, or (C) ask your colleagues which statisticians or statistically-literate graduate students they have collaborated with recently? I'm sure the LessWrong community believes in statistics, which is good. But I don't believe the average member of this crowd is any better at the humdrum practicalities of statistical hypothesis testing than your average working scientist. I would guess LessWrong skews younger and less expert. You will not have a hard time finding smart rational people on the Harvard campus! Or, for that matter, near any major university. I'm with twolier -- LessWrong is fun, but I don't see it being all that professionally valuable for people in most technical fields.
5Luke_A_Somers12y
??? Seriously?
2jsalvatier12y
I like this idea and am even willing to put money towards it, but some other similar experiments (of mine; maybe others would be better at this) didn't turn out so well (this one got no entries; spaced repetition turned out okay, but it only got one good submission). Let me know if you're interested in putting effort into this (it wouldn't be hard to convince me to also do so, but I probably need someone else to help).

One relevant dynamic is the following: if an idea is considered "absurd" to the mainstream, there will be very few people who take the idea seriously yet disagree with it. Social pressure forces polarization: if you're going to disagree with it, you might as well agree with all your normal friends that the idea is kooky.

Thus it's especially hard to find good contrarians for a forum that takes several "absurd" positions.

Upvote if you generally no longer post or discuss opinions that disagree with LW consensus.

Feel free to leave a comment on your experiences and reasons for this.

(If you would like to downvote this poll, please downvote the karma balance below instead, so that we can still get an accurate idea of the number of people who have this reaction.)

3pedanterrific12y
(consensus) And what do you mean "no longer"? Is the idea "upvote if your contrarianism has been downvoted out of you", or what?
0daenerys12y
silly typos. fixed, thanks!
3Multiheaded12y
I'm curious, do you? If you do, why?
2Larks12y
This poll is poorly designed; karma balances often get downvoted less than the vote options get upvoted, so this will tend to over-estimate how many people no longer dissent. For example, when I loaded this page, this comment was at 5 and the karma balance was at -3
6daenerys12y
To me, when a karma balance is downvoted less than poll options are upvoted, it means that people think running the poll deserves some karma. This does not overestimate the number of people who have reacted to voting patterns, since that number does not come from the karma balance. If someone (who has NOT reacted to voting patterns) wants to give karma for running the poll, they would upvote the karma balance, not the voting comment. Also, the purpose of the poll is to see whether a relatively high or relatively low number of people have reacted to the voting patterns this way. Exact numbers are not needed.
4Random83212y
I have a proposal for a new structure for poll options: The top-level post is just a statement of the idea, and voting on it has nothing to do with the poll. This can be omitted if the poll is an article. A reply to this post is a "positive karma balance" - it should get no downvotes, and its score should be equal to the number of participants in the poll. There are two replies to the "positive karma balance" post; you downvote one to select that option in the poll. This way voting either way in the poll has the same cost (one downvote), the enclosing post will have a high score (keeping it from being lost), and the only way to "corrupt" the poll results without leaving a trace [downvote the count post and upvote one of the option posts] simply cancels someone's vote without allowing you to make your own.
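To make the bookkeeping concrete, here is a minimal sketch of that structure (class and method names are mine, and it assumes each vote maps to exactly one point of score):

```python
# Sketch of the proposed poll structure: one "positive karma balance" post
# plus one post per option. Voting = upvote the balance, downvote an option.

class KarmaPoll:
    def __init__(self, options):
        self.balance = 0                         # score of the karma-balance post
        self.options = {o: 0 for o in options}   # score of each option post

    def vote(self, option):
        """Cast a vote: one upvote on the balance, one downvote on the option."""
        self.balance += 1
        self.options[option] -= 1

    def corrupt(self, option):
        """The traceless attack: downvote the balance, upvote an option.
        Its net effect is exactly that of undoing one vote() for that option."""
        self.balance -= 1
        self.options[option] += 1

    def results(self):
        participants = self.balance
        tallies = {o: -score for o, score in self.options.items()}
        return participants, tallies

poll = KarmaPoll(["agree", "disagree"])
poll.vote("agree"); poll.vote("agree"); poll.vote("disagree")
print(poll.results())    # (3, {'agree': 2, 'disagree': 1})
poll.corrupt("agree")    # indistinguishable from one cancelled "agree" vote
print(poll.results())    # (2, {'agree': 1, 'disagree': 1})
```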
7HonoreDB12y
Or just embed a poll.
0NancyLebovitz12y
Nitpick-- that should be the LW consensus, not LW census.
-25daenerys12y

If we have less contrarianism than is optimal, it seems like the root of the problem is that people often vote for agreement rather than for expected added value. I would start looking there for a solution.

Also, the site would be able to absorb more contrarians if their bad contributions didn't cause as much damage. It would help if we exercised better judgment in deciding when a criticism is worth engaging with and when we should just stop feeding the trolls.

Change the mouseovers on the thumbs-up/thumbs-down icons from "Vote up"/"Vote down" to "More like this"/"Less like this". I've suggested this before and it got upvotes, I suggest now it might be time to implement it.

Stupid alternative: Instead of up/down, have blue/green. Let chaos reign as people arbitrarily assign meaning.

Classic Will_Newsome. Greenvoted.

5David_Gerard12y
BLUE!! ... well, it said blue when I clicked on it ...

Predicted outcome: within a couple of weeks, blue/green will have understood-but-undocumented positive/negative associations. Votes will be noisier, though, thanks mostly to confused newcomers and the occasional contrarian pursuing an idiosyncratic interpretation. Complaints about downvotes, and color politics jokes, will both become more common.

p = 0.7, contingent on implementation, for the core claim; 0.5-0.6 range for the corollaries.

0.7 strikes me as low.

Proposed chaotic refinement: Blue/green, but switch them every 18 to 30 hours (randomly sampled, uniform distribution).

(ETA: Upon reflection days or weeks would be better, to increase chaos/noise ratio. Would also work better with prominent "top contributors for last 30 days" lists for both blue and green, and more adulation/condemnation based on those lists.)

0shokwave12y
Other refinements: each person is randomly and permanently assigned either blue/green, OR they see blue/green but it's actually green/blue behind the scenes. This makes any explicit discussion of blue/green more difficult. Or: each person actually has grue and bleen buttons. At some time t, they suddenly start voting for the other colours. An extended form of this looks similar to your ETA.
1Multiheaded12y
And you call yourself an anti-liberal traditionalist? :)
2Will_Newsome12y
Am I an anti-liberal traditionalist? Humans are so silly. I have an idea. If you want to hit the right-wingers with something out of left field, try Rigorous Intuition, especially those posts over on the right under the heading "The Military-Occult Complex, ritual abuse/mind control, and 'High Weirdness'". I guarantee a few WTFs.
2Multiheaded12y
Heh, thanks. Probably won't work on the local right-wing technocrats, however, as they are simply not interested in many such issues like the workings of the Bush regime or the military-industrial complex. I'm curious enough to take a look, though. Edit: heh, that blog quotes Dick's novels - already a good sign to me.
-1Eugine_Nier12y
I'm curious why you picked this conspiracy theorist in particular.
3Will_Newsome12y
Availability heuristic; I haven't read many conspiracy theorists. He struck me as more careful and more cogent than the few others I'd read; like, he bothers to explicitly bracket certain ideas as having a good chance of being wrong, and he emphasizes giving up on a thread if it doesn't seem to be fruitful. He's generally pragmatic. He also has a healthy skepticism about the motives and natures of claimed demonic/alien entities, not in the sense of categorically doubting that they're supernatural/alien/"weird", but in the sense of not assuming that just because they say they want to help humanity and so on that that is strong evidence of actual benevolence: "I find it a fascinating frustration that many of those convinced of a massive government cover-up fall over themselves to accept the words of non-human entities." — this post on Fatima. Being pseudo-Catholic and schizotypal I naturally worry about demons—in fact that's part of why I'm pseudo-Catholic and not, say, pseudo-Tibetan-Buddhist. So Jeff Wells scores a lot of points with me for his caution on that front. Do you have recommendations for other conspiracy theorists, or conspiracy theorist debunkers? 'Cuz honestly I think Jeff Wells makes a compelling, coherent case for High Weirdness, which is worth keeping in mind as a live hypothesis, though I don't think we'll have the collaborative argumentation tools necessary to rationally assess the hypothesis for at least another five years.
2Jayson_Virissimo12y
I visited Fatima in 2007 with my family. It was...spooky...and in a way that the Vatican was not (that is to say, not in the same way as any old, massive, historically-important thing is). On the other hand, my Portuguese isn't very good, so I may not have understood as much as I thought.
-1Eugine_Nier12y
I clicked around a little on his site. Most of his conspiracy theories appear to be political and he's clearly been mind-killed by politics. As for evaluating "conspiracy theories", I recommend you start by reading this blog post by Eric Raymond, also this comment by Konkvistador if you haven't already seen it.
3Will_Newsome12y
Sounds like you might not have read enough to see where his strengths and weaknesses are. Politics is his weakness and I mostly ignore that stuff, but I'm more interested in his paranormal stuff including the military-occult stuff, where he seems to have less of an ax to grind and sometimes presents a bunch of interesting source material without trying too hard to spin a story out of it. E.g. I like his report on Fatima, linked in my previous comment; what do you think of that one? (Though I suppose I should have told Multiheaded that Wells' political stuff is bad and that his High Weirdness stuff is way better. Oh well.) In my previous comment I for some reason conflated High Weirdness with conspiracy theory; in reality I suspect they're not that connected. I'm more interested in High Weirdness than conspiracy, so any critiques of High Weirdness would be useful. I'm really unimpressed with standard "skeptic" arguments. Re conspiracy theories, Konkvistador and Raymond make the obvious points, I suppose there might be nothing more insightful to be said about the matter at that level of generality.
5Multiheaded12y
Nah, don't worry. I understood from the start that politically that blog is something like the rants of a hippie Bircher. That is, with rather clouded judgment and some nonsense priors in the first place, but curious when it directs attention to odd facts that don't fit the mainstream narrative. [1] Like the village idiot whose ravings contain clues to plot secrets in some computer RPGs. (when I said "the Bush regime", I didn't mean all the standard left-of-center complaints about how he was evil, stupid and killed puppies - although I agree with the last two - but the genuinely irrational-looking stuff like the connections with fringe groups and the CIA's rumoured odd activities) P.S. Wow, that guy's T-shirts are quite awfully designed. P.P.S. And still it's clearly worth reading, at least in matters which are somewhat above mere conspiracies and politics: When it's Hanson talking about the glorious future of Ems, the self-styled "rationalists" - I'm not talking about the LW majority, but the thinking patterns characteristic of some of the Overcoming Bias old guard - smile and nod. When it's a somewhat disturbed and not overly logical guy warning sincerely about the looming Hell on Earth - factually, the same thing - they groan with annoyance at the pathetic Luddites and their mental disease known as "humanity". Obvious devil-worshipping "rationalist" cults like Objectivism are only the tip of the iceberg here; we're talking about some rather shocking spiritual and cultural erosion, handwaved as "non-neurotypicality" or "contrarianism" when it is at all acknowledged. (I'm not saying that there's something horribly wrong with non-neurotypicality or contrarianism per se, as they are, but there's nothing wrong with patriotism per se either, and you know who else was patriotic? [Godwin's law]) By God, Will, I feel like I understand your concerns so much better now! P.S. I know, I know, it's kinda hypocritical of me to criticize a community member as morally cor
-2Multiheaded12y
Also, damn, it's a bit of a jolt to encounter someone who thinks of the world's course in the same Gnostic terms that I often entertain. I too have been associating the spectre of anti-religious, anti-ideological, technocratic tyranny that's haunting us with the supposed iron "logic", runaway reductionism and blind hubris of the Archons, as relayed by the ancients and by latter-day SF visionaries like Dick. (All aboard! We're off for -10 rating in 3... 2... 1...)
-3Eugine_Nier12y
Given how deeply this comment is buried in an old thread I'd be surprised if 10 people even read it.
-1Multiheaded12y
Oh, don't worry, dude, you can simply make nine or so new accounts to make up for it. ;)
1faul_sname12y
Sort by greenest.

I think of it as "Pay more attention to this" / "Pay less attention to this." Communicating primarily to other readers rather than to posters.

5John_Maxwell12y
I think this would discourage me from writing contrary stuff. Right now if I get voted down, I explain it to myself as me having an unpopular but possibly correct opinion. Hearing that people want "less like this" seems harsh somehow.
7Larks12y
This is the pro-airbrushing argument; airbrushing in magazines decreases body neurosis because it gives girls plausible deniability for why they don't look like models. I say this not to pass judgement either way on your argument.
0NancyLebovitz12y
Does airbrushing actually work to decrease body neurosis? My impression is that it doesn't. However, mannequins seem to cause less damage, possibly because they're less realistic-looking.
1TimS12y
Isn't that the point? A stimulus that is insufficiently strong to change behavior is pointless to use for behavior modification.
4thomblake12y
Frankly I think we should reconsider the early suggestion that karma on comments should be between 0 and 1, starting at 0.5.
2David_Gerard12y
1 and 999. No doubt someone will write a script to render the number in decibels ...
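(That script would be nearly a one-liner: on a 1-999 scale, 10 * log10(karma) maps scores onto a tidy 0-30 dB range. The function below is just an illustrative guess at what such a script would do:)

```python
import math

def karma_db(score):
    """Render a karma score in the range 1..999 as decibels."""
    return 10 * math.log10(score)

for s in (1, 10, 100, 999):
    print(s, round(karma_db(s), 1))   # 1 -> 0.0, 10 -> 10.0, 100 -> 20.0, 999 -> 30.0
```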
4steven046112y
Hmm. Or "Reward"/"Punish"? "Incent"/"Disincent"? "Carrot"/"Stick"? "I like your comment, so I more like thissed it" doesn't roll off the tongue.
7Alicorn12y
I want to go around carroting things.
4[anonymous]12y
All I could think of was this. (deep link, ten seconds long). (Warning: Homestuck fandom, implausibly unsafe for work, unless your boss is into Homestuck.)
1Richard_Kennaway12y
Please, no. As far as I'm concerned, an upvote or downvote, by me or on my posts, is not a reward or a punishment. Not even slightly. So much the better. I am not interested in who has upvoted or downvoted me, and I never mention my own votes.
6David_Gerard12y
I think you're wrong there. Humans are exquisitely sensitive to status, anywhere they see anything that looks even slightly like it. Upvotes/downvotes are precisely rewards/punishments, whatever else they may be or whatever you may intend yours to be.
-3Richard_Kennaway12y
Other people can torture themselves with such phantoms or not, as they please.
2David_Gerard12y
"as they please" is, I think, wrong too. It's incredibly difficult to switch off awareness of status. Particularly with your score on the LessWrong video game right up there at the top-right in a little green oval, with your this-month score just below it.
-3Richard_Kennaway12y
I'm not talking about how easy or difficult it is.
0David_Gerard12y
"as they please" seems dismissive of how difficult it is. It's that basic to human nature, not just human thinking. Of course, you may be able to lessen how much you care about your score on the LessWrong game to the point where it doesn't affect you more than epsilon, but assuming you're a human I would be very surprised to find you literally didn't have even the faintest twinge.
-4Richard_Kennaway12y
"Difficult" too easily becomes an excuse for not doing the work. How "difficult" is it to get a university degree? How "difficult" is it to bike 100 miles? Sometimes "difficult" just means "I don't want to". So I see I'm currently at -3 for my two comments above, which I think may be the first time I have ever commented on the votes on my own posts. My reaction: so what? I am sufficiently self-assured (a virtue worth cultivating, and observing one's reaction to one's karma score is one small way of cultivating it) that I draw from it neither validation nor shame, and besides, a trifling few points here and there are nothing. Comments are a more substantial currency. The dogs bark. The caravan moves on. Also relevant.
3steven046112y
I agree that reward/punish doesn't quite capture the intended meaning. The other suggestions I edited in also have that problem. Even if we're not mentioning votes, there are various other reasons why we might want to talk about the process of voting. I kind of like "I like your comment, so I morepleased it".
0vi21maobk9vp12y
"Appreciated this" / "Pearl wasted on me" ?
0David_Gerard12y
"Bouquet"/"Brickbat".
0Multiheaded12y
This is a seriously fucking awesome suggestion! Do it!
-2A4FB53AC12y
You should call it black and white. Because that's what it is, black and white thinking. Just think about it: using nothing more than one bit of non-normalized information by compressing the opinion of people who use wildly variable judgement criteria, from variable populations (different people care and vote for different topics). Then you're going to tell me it "works nonetheless", that it self-corrects because several (how many do you really need to obtain such a self-correction effect?) people are aggregating their opinions and that people usually mean it to say "more / less of this please". But what's your evidence for it working? The quality of the discussion here? How much of that stems from the quality of the public, and the quality of the base material such as Eliezer's sequences? Do you realize that judgements like "more / less of this" may well optimize less than you think for content, insight, or epistemic hygiene, and more than they should for stuff that just amuses and pleases people? Jokes, famous quotes, group-think, ego grooming, etc. People optimizing for "more like this" eventually downgrade content into lolcats and porn. It's crude wireheading. I'm not saying this community isn't somewhat above going that deep, but we're still human beings and therefore still susceptible to it.
7NancyLebovitz12y
I've noticed that humor gets a lot of upvotes compared to good but non-funny comments. However, humor hasn't taken over, probably because being funny can take some thought. I don't think karma conveys a lot of information at this point, though heavily upvoted articles tend to be good, and I've given up on reading down-voted articles, with a possible exception of those that get a significant number of comments.
3David_Gerard12y
More so than "vote up"? You've made a statement here that looks like it should be supported by evidence. What sites do you know of this happening from going from "vote up" to "more of this"?
-1A4FB53AC12y
Not more so than "vote up". In this case I don't think the two are significantly different: neither conveys a lot of information, both are very noisy, and a lot of people already seem to mean "more like this" when they "vote up" anyway.
6khafra12y
I don't think it was clear from the context that you were arguing against the practice of community moderation in general. I also don't think you supported your case anywhere near well enough to justify your verbal vehemence. Was this a test/demonstration of Wei Dai's point about intolerance of overconfident newcomers with different ideas?
2A4FB53AC12y
Actually, not against. I was thinking that the current moderation techniques on Less Wrong are inadequate or insufficient. I don't think the reddit karma system's been optimized much; we just imported it. I'm sure we can adapt it and do better. At least part of my point should have been that moderation should provide richer information, for instance by allowing for graded scores on a scale from -10 to 10, showing the average score rather than the sum of all votes, and giving some clue as to how controversial a post is. That'd be no silver bullet, but it'd at least be more informative, I think. And yes, I was also arguing this idea thinking it would fit nicely in this post. I guess I was wrong, since it seems it wasn't clear at all what I was arguing for, and being tactless wasn't a good idea either, contrarian-intolerance context or not. Regardless, arguing it in detail in the comments, off-topic in this post, wasn't the way to do it either.
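(A minimal sketch of what such richer scoring might look like, assuming individual graded votes on a -10..10 scale were stored rather than a single running sum; the function name and return format are illustrative, not an actual Less Wrong feature.)

```python
from statistics import mean, pstdev

def score_summary(votes):
    """Summarize graded votes (each in -10..10).

    Returns the average score and a simple 'controversy' measure:
    the population standard deviation, which is high when voters
    split between strong positives and strong negatives.
    """
    if not votes:
        return {"average": 0.0, "controversy": 0.0, "n": 0}
    return {"average": mean(votes), "controversy": pstdev(votes), "n": len(votes)}

# A comment everyone mildly likes vs. one that splits the room:
print(score_summary([2, 3, 1, 2]))     # high agreement, low controversy
print(score_summary([9, -8, 10, -9]))  # near-zero average, high controversy
```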
4NancyLebovitz12y
Karma graphs would give a lot of information -- whether a person's average karma is trending up or down, and whether their average is the result of many similar votes or of large +/- swings.
2Bugmaster12y
Don't you technically need at least two bits? There are three states: "downvoted", "upvoted", and "not voted at all".
2wedrifid12y
One and a half if you can find a suitable compression algorithm. I wouldn't rule that out as a possibility but it may be counter-intuitive.
1A4FB53AC12y
True, except you don't know how many people didn't vote (we don't keep track of that: a comment at 0 could as well have been read, and left unvoted, by 0, 1, 10, or a hundred people, and 0 is the default state anyway). We similarly can't tell whether a comment is controversial, that is, how many upvotes and downvotes went into the aggregated score.
2Bugmaster12y
The system does keep track of how everyone voted, though; it needs to do that in order to render the thumbs up/down buttons as green or gray. wedrifid is right though; using suitable compression, you might be able to get away with less than two bits (in aggregate).
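(For what it's worth, the information-theoretic bound behind "one and a half bits": a sketch assuming all three vote states are equally likely; real vote distributions are skewed, so an ideal compressor would need even less per vote.)

```python
import math

# Entropy of one vote with three equally likely states:
# "upvoted", "downvoted", "not voted".
bits_per_vote = math.log2(3)
print(bits_per_vote)  # ~1.585, i.e. "one and a half bits" with suitable compression
```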
0John_Maxwell12y
Edited wiki.
0thomblake12y
Useful edit.

Others already noted that we need contrary opinions more than contrarian people per se. Let me make another distinction. Is the goal a community with a diverse set of opinions, or more people who are vocal and articulate about some minority opinion? Maybe the latter goal is worth working on, but I suspect the former has already been reached. Let me go with myself as an example. I don't think anybody ever saw any of my comments as contrarian, and I am sure nobody associates my nick with contrarianism. The thing is: I would bet against Many Worlds. I am not a consequentialist. I am not really interested in cryonics. I think the flavor of decision theory practiced here is just cool math without foreseeable applications. I give very low probability to FOOM. I think FAI as a goal is unfeasible, for more than one reason.

I am not vocal at all about these positions, and you will very rarely see me engage in loud debates. But I state my position when I feel like it, and I was never punished for that. (I don't have any negatively voted comment out of a few hundred.) I think we would see a similar pattern when checking the positions of other individual "non-contrarian" commenters.

Me too:

I would bet against Many Worlds. I am not a consequentialist. I am not really interested in cryonics. I think the flavor of decision theory practiced here is just cool math without foreseeable applications. I give very low probability to FOOM. I think FAI as a goal is unfeasible, for more than one reason.

I used to be very active on Less Wrong, posting one or two comments every day, and a large fraction of my comments (especially at first) expressed disagreement with the consensus. I very much enjoyed the training in arguing more effectively (I wanted to learn to be more comfortable with confrontation) and I even more enjoyed assimilating the new ideas and perspectives of Less Wrong that I came to agree with.

But after a long while (about two years), I got really, really bored. I visit from time to time just to confirm that, yes, indeed, there is nothing of interest for me here. Well, I'm sure that's no big deal: people have different interests and they are free to come and go.

This is the first post that has interested me in a while, because it gives me a reason to analyze why I find Less Wrong so boring. I would consider myself the type of "reasonable contrarian... (read more)

6khafra12y
Over a year ago, Michael Vassar spoke about writing a rationalist's guide to politics. Seems like the sort of thing Steve Rayhawk would also be good at. Perhaps we could all get together and bribe somebody who could do it well to do it.
1[anonymous]12y
You have my sword.
0byrnema12y
I like that idea. I expect that this candidate would think very differently from me (perhaps the inferential distance would make communication difficult?) and for some reason be especially detached from social thought patterns. I think I'm somewhat detached, but can't make heads or tails of the patterns. Thus, apart from the possible difficulty in communication, I would trust my judgement of whether they were resolving the questions and would be happy with an individual attempt. ... An example of the type of candidate comes to mind, the Dûnyain Kellhus, but unfortunately he is fictional.
4wedrifid12y
One or two comments every day is very active? Oops.
5thomblake12y
You should make some discussion posts about your reasons for disagreeing with the perceived consensus on each of those issues. If they are articulate, specific, and use the techniques of epistemic rationality, they should be well-received. (If you have good reasons for disagreeing with the techniques of epistemic rationality, then that's an even better post.)
1vi21maobk9vp12y
Having seen the replies to well-written comments expressing some opinions, I may find it unlikely that I would get new information from replies to a discussion post. And I may have some hard-to-share reasons and personal red flags, so I do not know whether I would do anyone any good. So, why bother? Maybe the original poster wouldn't agree with this approach, but his behaviour is consistent with it.
3vi21maobk9vp12y
A perfect example of the problem, I guess. Many pro-LW-mainstream arguments are weak if you have significantly different priors. People with a minority view quickly learn the difference in priors and learn to express their views less often and defend them less. I also consider FOOM-as-described-on-LW quite improbable, and the writings of Eliezer on the topic simply raise a few red flags; I see that it is a popular position here, but most people don't find it worth the effort to fight the mainstream. There are still many topics on LW where no relevant values or priors are part of the LW majority's collective identity, and I get some entertainment and information from reading these discussions and participating in them. There are also topics close to things that are accessible to science, with all its rigidity (but also stability) compared to Bayesian inference. These are very informative too.

I would prefer an increase in 'question' (problem) posts, as opposed to 'statement' (solution) posts, contrarian or no.

Most of the machine intelligence folk don't seem to be on "your" side. I think they see you as potential competitors who don't share their values.

I tend to be more sympathetic to their position than yours. In particular I don't seem to share your values, and don't much like your PR - or your "end of the world" propaganda. I think that developing in secret is a pretty dubious plan - and that the precautionary principle sucks.

Probably the best thing about you is that you have Eliezer on your side - and he's a smart cookie. However, that aspect also appears to have its downsides.

It took me much longer than it should have to mentally move you from the "troll" category to the "contrarian" one. That's my fault, but it makes for an interesting case study:

I quickly got irritated that you made the same criticisms again and again, without acknowledging the points people had argued against you each time. To a reader who disagrees with you, that style looks like the work of a troll or crank; to a reader who agrees with you, it's the best that you can do when arguing against someone more eloquent, with a bigger platform, who's gone wrong at some key step.

It should be noted that I don't instinctively think any more highly of contrarians who constantly change their line of attack; it seems to be a "damned if you do, damned if you don't" tribal response.

The way I changed my mind was that you made an incisive comment about something that wasn't part of your big disagreement with the Less Wrong community, and I was forced to update. For any would-be respected contrarians out there, this might be a good tactic to circumvent our natural impulse towards closing ranks.

7Will_Newsome12y
I still find it tricky to distinguish whether timtyler realizes what he's saying is going to be misinterpreted but just doesn't care (e.g. doesn't want to cave in to the general resource-intensive norm of rephrasing things so as not to set off politics detectors), or whether he doesn't realize what he's saying is going to be misinterpreted. E.g. he makes a lot of descriptive claims that look suspiciously like political claims, and thus gets downvoted even when, upon being queried, he says they were intended purely as descriptive claims. I've started to think he generally just doesn't notice when he's making claims that could easily be interpreted as unnecessarily political.
5timtyler12y
Politics? This might, perhaps, be to do with the whole plan of unilaterally taking over the world? If so, that is a plan with a few political implications, and maybe it's hard to discuss it while avoiding seeming political.
8Will_Newsome12y
Yes, and because the Eliezerian doom/world-takeover position is somewhat marginalized by the mainstream, people around here are quick to assume that stating simple facts or predictions about it, unless the facts are implicitly in favor of the marginalized position, is instead implicitly a vote in favor of further marginalization, and thus readers react politically even to simple observations or predictions. E.g., your anti-doom predictions are taken as a political move with the intent of further marginalizing the fund-us-to-help-fight-doom political position, even in the absence of explicit evidence that that's your intent, and so people downvote you. That's my model anyway.
7timtyler12y
Of course, from my point of view, the "doom exaggeration" looks like a crude funding move based on exploiting people by using superstimuli - or, at best, a source of low-relevance noise from a bunch of self-selected doom enthusiasts who have clubbed together. You do have a valid point about my intentions. I derive some value from the existence of the SI, but the overall effect seems to be negative. I'm not on "your side". I think "your side" currently sucks - and I don't see much sign of reform. I plan to join another group.
-2Will_Newsome12y
Me too. Probably the Catholics.
8khafra12y
Is there a Dominican community blog I should watch? Also, would you surreptitiously palm some small dry ice granules right before you dip your fingers in the water during confirmation? I've always wanted to see that.
1Will_Newsome12y
I know basically nothing about modern Catholics, actually, which is a big reason why I haven't yet converted. E.g. I have serious doubts about the goodness of the Second Vatican Council. If the Devil has seriously tainted the temporal Church then I want no part in it. That would be really cool. But I think God would be displeased. ...I'm not sure about that, I'll ask Him. (FWIW I rather doubt He'll give an unambiguous answer.)
3drethelin12y
If you had to specify a historical year in which Catholicism seems most correct to you, which would it be?
4Will_Newsome12y
I think it depends somewhat on a subquestion I'm confused about: how much culpability should we assign the Church as an institution for the Reformation? On the one hand they were getting pretty corrupt; on the other hand, that's like blaming someone who lived a vigorous, moral life, but who is now dying of cancer, for harboring cancer. Should we blame the man for not having already discovered the cure for cancer? Anyway, my intuition says the answer is about 1200 or 1300 A.D., but I really don't know. How close before the Reformation depends on how much culpability the Church bears for it. Jayson_Virissimo or Vladimir_M would have better answers.
2Jayson_Virissimo12y
Sorry; my knowledge of the Middle Ages (and the Early Modern Period) is very low-level (with depth on very narrow topics like medieval science and logic, but little outside of that, including politics and religion). Making an accurate judgment as to the (average?) truth-value of the many (importance-weighted?) propositions affirmed by (the majority of?) Catholic churchmen is way too high-level for my current understanding (although I hope to rectify this in the near future). Also, although many of my comments can reasonably be interpreted as being "pro-Catholic", this is mostly by accident. It would be more accurate to say that I am defending the medievals (many of whom were Catholics) from libel (a libel of which I have been guilty in the past, and for which I am attempting to do penance).
1Richard_Kennaway12y
How do you go about asking God, and how do you experience His answers?
-1NancyLebovitz12y
Why do you think the Devil might have tainted the temporal Church through the Second Vatican Council?
5Will_Newsome12y
So this is getting into really crazy conspiracy theories, but I notice Vatican II came soon after the Church's failure to release the Third Secret of Fatima, which, given the way Church authorities reacted to it, IMO seems to indicate that it did indeed predict something like ongoing or imminent Satanic infiltration, or something similarly potentially disruptive to the temporal Church. FWIW I'm pretty sure this conspiracy theory only sounds even halfway plausible if you already accept as legitimate the various prophecies and miracles of Fatima. ETA: Not sure what to make of the fact that if I was in a Dan Brown novel this is definitely a hypothesis I should keep to myself. I fear I'm not being very genre savvy.
[-][anonymous]12y100

I know basically nothing about modern Catholics, actually, which is a big reason why I haven't yet converted. E.g. I have serious doubts about the goodness of the Second Vatican Council. If the Devil has seriously tainted the temporal Church then I want no part in it.

Considering this among other things, I want to see the contrarian awesomeness that would be you writing a series of posts on the Orthosphere explaining your positions and theories regarding the Church and global history.

Regardless of whether this turned out to be an epic troll or the birth of a new cult, it would be extremely entertaining.

4Eugine_Nier12y
Given how correlated his novels tend to be with reality, I'd decrease my belief in the hypothesis.
0Will_Newsome12y
Upon reflection I remembered reading that there was serious cause for concern years before Vatican II. (N.B.: Linked blog seems to be generally epistemically careful but is big on conspiracy theories.)
-2NancyLebovitz12y
There is no such thing as "modern Catholics". There are a number of subgroups, but I don't know enough to be usefully more specific.
3timtyler12y
That doesn't sound great! Was I right? If you think there's a case where I should have updated - but didn't - perhaps it can be revisited? Of course, I don't mean to put pressure on you to trawl through my comments - but it would be nice for me to know if you have any specific cases in mind.
5orthonormal12y
I couldn't find them in a quick search, but what frustrated me was a cluster of arguments that you've stated a lot but never written up at length. Let me summarize roughly: all new technological developments are just continuations of evolution; there are no relevant differences between the evolution of genes, memes, corporations, etc.; and therefore the Singularity couldn't be an existential crisis, just a faster continuation of evolution. (Apologies if I've mangled it.) It seemed to me that every time a relevant topic was mentioned, back in the days of the Sequences, you merely stated one of these opinions rather than argued for it. But again, it's difficult for me to recognize good arguments when I disagree with their conclusions.
2timtyler12y
Hmm. Thanks. I did write a whole book about that one - I think. Your objection also makes me think of this material:
* http://alife.co.uk/essays/a_new_kind_of_evolution/
* http://alife.co.uk/essays/a_new_kind_of_evolution/textbooks/
* http://alife.co.uk/essays/a_new_kind_of_evolution/quotes/
Even with regular evolution there can still be existence "failures" - for particular species. Also, I do think one of these is coming: http://alife.co.uk/essays/memetic_takeover/ ...leading to this: http://alife.co.uk/essays/engineered_future/ - apparently a future where humans as we know them play a pretty insignificant role. I do think that the trend towards increased destructive power needs to be considered in the light of the simultaneous trend towards greater levels of cooperation, moral behaviour, and peacefulness.
4orthonormal12y
Ah— you have written it up at great length, just not in Less Wrong posts. I think you claim too strong a predictive power for the patterns you see, but that's a discussion for a different thread. (One particular objection: the fact that evolution has gotten us here contains a fair bit of anthropic bias. We don't know exactly how narrow are the bottlenecks we've survived already.)
2JoshuaZ12y
We can estimate this for a lot of the major bottlenecks. For example, we can look at how likely other intelligent species are to survive and in what contexts. We have a fair bit of data for that. We also now have detailed genetic data so we can look at historical genetic bottlenecks in the technical sense for humans and for other species.
2siodine12y
http://en.wikipedia.org/wiki/Population_bottleneck#Humans
0timtyler12y
Well, I don't want to appear to endorse the thesis that you associated me with - but it appears that while we don't know much about the past exactly, we do have some idea about past risks to our own existence. We can look at the distribution of smaller risks among our ancestors, and gather data from a range of other species. What Joshua Zelinsky said about genetic data is also a guide to recent bottleneck narrowness. Occam's razor also weighs against some anthropic scenarios that imply a high risk to our existence. The idea that we have luckily escaped 1000 asteroid strikes by chance has to compete with the explanation that these asteroids were never out there in the first place. The higher the supposed risk, the bigger the number of "lucky misses" that are needed - and the lower the chances are of that being the correct explanation. Not that the past is necessarily a good guide - but rather we can account for anthropic effects quite well.
-4Will_Newsome12y
User:timtyler himself has brought up the dinosaurs' semi-extinction, for example, which was a local decrease in "moral progress" even if it might have been globally necessary or whatever.
2siodine12y
What's the current state of memetics in science (universities, journals, and so on)? I thought it turned out to be a dead end.
3timtyler12y
Susan Blackmore recently described the current state of memetics as a science as being "pathetic". A few pages on the general topic:
* References: http://memetics.timtyler.org/references/
* Books: http://memetics.timtyler.org/books/
* Timeline: http://memetics.timtyler.org/timeline/
* Video: Tim Tyler: Why is there no science of memetics?
What we do have is a lot of modern work on "cultural evolution". It's not quite the same - but it's close, and it has many of the basics down. Statistically, memetics may not be doing too well - but memes are going crazy - through the roof. It bodes well for the subject, I think.
4siodine12y
Nice, I was impressed by the video and your page on the criticisms of memetics. But I think you'd come across better to more prejudiced people (i.e., most everyone) if you made some stylistic changes; would you care to see some criticisms?
1timtyler12y
Any feedback you care to offer would be more than welcome.

Perhaps we have this backwards?

If there is something intrinsically valuable about controversy (and I'm not really sure that there is, but I'm willing to accept the premise for the sake of discussion), and we're not getting the optimal level of controversy on the topics we normally discuss (again, not sure I agree, but stipulated), then perhaps what we should be doing is not looking for "more and better contrarians" who will disagree with us on the stuff we have consensus on, but rather starting to discuss more difficult topics where there is less consensus.

One problem is, of course, that some of us are already worried that LW is too weird-sounding and not sufficiently palatable to the mainstream, and would probably be made uncomfortable if we explored more controversial stuff... it would feel too much like going to school in a clown suit. And moving from areas of strength to areas of weakness is always a little scary, so some of us will resist the transition simply for that reason. And there are many more reasons besides.

Still, if you can make a case for the value of controversy, you might find enough of us convinced by that case to make that transition.

Here's a case for the value of controversy.

  • LessWrong orthodoxy includes a large number of propositions (over a hundred posts in just core sequences, at least one thesis per post)
  • The deductions that lead to each claim are largely independent (if post B were an obvious corollary of post A, it would have saved the writer's and readers' time not to write it)
  • Reasoning is error-prone, especially when not formalized (this is a point made in the sequences; if it's wrong then q.e.d.)
  • Even if each deduction is overwhelmingly likely (let's say 99%) to be correct, it would be likely (63% in this case) that at least one out of a hundred would be incorrect
  • Because these are deductive chains of reasoning (they're "the sequences", not just "the set"), one false deduction can invalidate any number of conclusions which follow from it. The Principle of Explosion has been defeating brilliant people for millennia.

In other words, even if you believe that each item of LessWrong consensus is almost certain to be correct, you should still be doubtful that every item of LessWrong consensus is likely to be correct. And if there are significant errors, then how else will they be found and publicized other than via a controversial discussion?
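(The arithmetic behind the 63% figure above, assuming one hundred independent deductions, each 99% likely to be correct:)

```latex
P(\text{at least one error}) = 1 - 0.99^{100} \approx 1 - 0.366 = 0.634
```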

9TheOtherDave12y
I agree that there are errors in the "LW consensus." I agree that a cost-effective mechanism for identifying those errors would be a valuable thing. By your estimation, how many controversial discussions have occurred on LW in the last year? How many of them have contributed to identifying any of those errors?
0roystgnr12y
Those are both good questions (as is the implicit point about cost-effectiveness or lack thereof); I'm afraid I'm not a heavy enough reader here to quickly give accurate answers.
0TheOtherDave12y
I'm not looking to you for accurate answers, I'm trying to understand the model you're operating on. If you tell me you think there have been a few controversial (in the sense you describe above) discussions and you think they've contributed to identifying errors, then it makes sense to me that you think having more such discussions is valuable. I may disagree, but it's clear to me what we're disagreeing about. If you tell me you don't think we've had any such discussions, I can sort of understand you believing that they would be valuable if we had them, but I would also conclude I don't quite know what sorts of discussions you're talking about. If you tell me you think we've had a few such discussions but they haven't contributed anything, then I would be very confused and want to revisit my understanding of why you believe what you believe. Etc.
4[anonymous]12y
Controversial doesn't necessarily mean weird-sounding. For example, we could talk more about medicine, an area with a great deal of disagreement, without seeming like clown-suit wearing crazies. Mainstream topics should be more than enough to fill the controversy quota.
2TheOtherDave12y
(nods) Fair point.
0David_Gerard12y
This wouldn't be an issue except it's entirely unclear to me that LessWrong is making much in the way of progress of whatever sort. There's the meetup groups, which sometimes look good and sometimes sputter. But perhaps I'm wrong and there's a list of things that are reasonable evidence of progress of whatever sort.
0Will_Newsome12y
See Wei Dai's comment here—he doesn't value controversy qua controversy.
0TheOtherDave12y
Mm. Fair enough. As I've said elsewhere, I'm not convinced that the goal of having correct beliefs on the topics addressed in the Sequences will be cost-effectively approached by introducing new contrarians to LW. It would likely be more cost-effective to identify some thinkers we collectively esteem and hire them to perform a "peer review" on those topics. That said, I'm not sure I see what the point of that would be either, since it's not like EY is going to edit the Sequences regardless of what the reviewers say. It might be even more cost-effective to hire reviewers for his book before he publishes it.

Idea- Using Contrary Opinions as a Group Rationality Exercise

Sometimes when I'm discussing issues one-on-one with someone of a different opinion, I will find myself treating arguments as soldiers (I am improving on catching myself in this, I think.). I can also have difficulties verbalizing what is wrong with an argument when put on the spot.

Maybe we can use "Devil's Advocating" posts as a group exercise in rationality. Someone can read or summarize a specific opposing viewpoint that they do not necessarily agree with (maybe subjectivism, or Kuhn's scientific revolutions). They could hopefully even get completely new material, in order to provide practice in a field we haven't discussed.

They will present the strongest summary they can in a post, writing as if they fully supported the idea. The tag [Devil's Advocating] can be used to show that this is what they are doing.

One comment thread can be devoted to finding arguments that the viewpoint covers strongly. (i.e. maybe subjectivism handles a specific question a little better than most other philosophies, or maybe Kuhn's revolutions provide a better explanation of the different types of science that scientists engage in... (read more)

6thescoundrel12y
This reminds me of days in CX debate, where the topic was set in advance and you were assigned to oppose or affirm each round. Learning to find persuasive arguments for ideas you don't actually support is not an intuitive skill, but it is certainly one that can be learned with practice. I, for one, would greatly enjoy CX-style debate over issues in the Less Wrong community.
[-][anonymous]12y90

I would love to be better at contrarianism, but I don't know where to begin.

I got where I am today mostly through trial and error.

The General Contrarian Heuristic:

  • Assume these and such people who claim to be right actually are at-least-somewhat-straightforwardly right, and they have good evidence or arguments that you're just not aware of. (There are many plausible reasons for your ignorance; e.g. for the longest time I thought Christianity and ufology were just obviously stupid, because I'd only read atheist/skeptic/scientismist diatribes. What Evidence Filtered Evidence?) What is the most plausible evidence or argument that can be found while searching in good faith? This often splits in two directions:

    • The Vassarian steel method: E.g., you hear lots of stuff about fairies, so you go digging around and find Charles Bonnet syndrome. This might be akin to constructing steel men, but beware!, for it is often a path to sophistry & syncretism. You know how in Dan Brown novels he keeps constructing these shallow connections between spirituality and science in order to show that they're not actually at odds? Don't be Dan Brown.
    • The Newsomelike schizophrenic method: You find Charles Bonnet syndrome but decide that even that isn't enough—you postulate that daimons are taking advantage of any plausible exc
... (read more)

May we not forget interpretations consistent with the evidence, even at the cost of overweighting them.

Upvoted. The easiest way to get the wrong answer is to never have considered the right answer.

I've always thought that imagination belonged on the list of rationalist virtues.

8NancyLebovitz12y
I like that a lot.
7Will_Newsome12y
"What do you think are the rationalist virtues?" might be an interesting discussion post.

For comparison, the General Chess Heuristic: Think about a move you could make, think about the moves your opponent could make in reply, think about what moves you could make if they replied with any of those candidate moves, &c.; evaluate all possible resultant positions, subject to search heuristics and time constraints.

What's interesting is that novice chess players reliably forget to even consider what moves their opponent could make; their thought process barely includes the opponent's possible thought process as a fundamental subroutine. I think novice rationalists make the same error (where "opponent" is "person or group of people who disagree with me"), and unfortunately, unlike in chess, they don't often get any feedback alerting them to their mistake.
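(For concreteness, the General Chess Heuristic described above is essentially depth-limited game-tree search. A minimal sketch, using a toy take-1-or-2 Nim game rather than chess so it stays self-contained; the game and all names here are illustrative only.)

```python
# Toy model of the heuristic: consider my moves, then the opponent's
# replies, recursively, and evaluate the end positions. The game here
# is Nim-like: take 1 or 2 stones; whoever takes the last stone wins.

def moves(pile):
    # Legal successor positions: remove 1 or 2 stones, if available.
    return [pile - take for take in (1, 2) if pile - take >= 0]

def minimax(pile, maximizing):
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(nxt, not maximizing) for nxt in moves(pile)]
    return max(scores) if maximizing else min(scores)

print(minimax(4, True))  # 1: a pile of 4 is a win for the player to move
print(minimax(3, True))  # -1: a pile of 3 is a loss with best play
```

The novice's error described above corresponds to only ever taking the `maximizing` branch and never recursing into the opponent's replies.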

(Interestingly, Roko once almost defeated me in chess despite having significantly less experience than me, because he just thought really hard and reliably calculated a ton of lines. I'd never seen anyone do that successfully, and was very impressed. I would've lost except he made a silly blunder in the endgame. He who has ears to hear, let him hear.)

Any extreme minority position would take a long time to win converts. People are generally wrong because they have bad concepts, not because they have clear concepts but mistakenly think that 2+2=5.

It takes a while to penetrate poor concepts, and the people with poor concepts have to be willing to put in the effort to justify their argument, and not just take it as a given that it's up to someone else to refute their nonsense, because you can't refute gibberish. Most people here are intellectually confident. Add to that the consensus of the group, and who is ... (read more)

2billswift12y
I think this is the best comment, at least the one that best captures my own views, on this thread. Another way of looking at the problem expressed in buybuydandavis's first two paragraphs is that most people are so busy signalling, rather than thinking, that their concepts are usually "not even wrong".

Maybe we could have a "contrarian of the month" award? This could also encourage normally agreeable Less Wrong users to argue against consensus positions in hopes of winning the award.

Maybe we could have a "contrarian of the month" award?

Can we please not do this? I already feel a pre-emptive contrarian outrage against whatever consensus is arrived at when awarding this "official contrarian" award. Then I start thinking of court jesters. This is a way to get people to think in the predetermined 'outside the box' box and change their 'mainstream' uniform to the 'rebel' uniform. That's not the way to get useful contrarians.

This could also encourage normally agreeable Less Wrong users to argue against consensus positions in hopes of winning the award.

You're advocating this as a good thing?

2John_Maxwell12y
Are you suggesting folks can't be trusted to reliably identify genuinely high-quality opinions that disagree with theirs? What can we learn from this thread? http://lesswrong.com/lw/2sl/the_irrationality_game/ The OP talks about folks who "like to find fault in every idea they see". Assuming this is valuable, there are two ways to have this kind of person: be this kind of person naturally, or unnaturally in order to win an award. Keep in mind that the award's specifications can be changed, for example, "best civil disagreement with LW majority" or "changed the most minds among LW users".
-2Will_Newsome12y
(Anybody is welcome to copy/paste/edit that post and run it again, probably in Main because the less casual nature of Main discourages accidental failure to read the rules. Also, I noticed that a lot of the rules weren't really necessary because people did reliably play in the spirit of the game; most of the rules are along the lines of 'don't cheat'. So if you re-run it you might want to remove a lot of the text. FWIW I'd upvote it and probably make a lot of comments.)

I would change the rules to go something like this: Write a one sentence summary of your conclusion first, in as shocking terms as possible. Get people to vote up or down based on whether they agree with the initial one sentence summary. Then you justify the one sentence summary in subsequent paragraphs, which might cause folks to change their mind. That way we could get novel but possibly true beliefs in addition to irrational beliefs at the top.

Or rethink the game entirely along these lines so it is the "More Plausible Than I Initially Thought Game", so we don't get things like UFOs at the top. Participants upvote those comments that cause the maximum change to their beliefs, especially by making something surprising seem at least vaguely plausible. I dislike the current game rules somewhat because it seems like a signaling fest.

0Will_Newsome12y
FWIW I'm really glad that UFOs were at the top. The resultant discussion and links to articles about Fatima contributed to me doing a lot of serious thinking and ultimately changing my mind, and now I believe in "hyperdimensional"/demonic/high-weirdness explanations for UFOs. Your variation on the game still sounds better, though, 'cuz it focuses on marginals which are clearly more important here.
0[anonymous]12y
I was going to post a joke about receiving -100 reputation in less than 24 hours, but it was too sad to be funny.

Awarded to a nonconformist in black or a nonconformist in a clown suit? The latter is likely to get the tone argument (where someone's stated reason for rejecting a statement is its tone rather than its content).

Suggestion: whenever you're tempted to respond with a tone argument ("stop being so rude/dismissive/such a flaming arsehole/etc"), try really hard to respond to the substance as if the tone is lovely. The effort will net you upvotes ;-)

Seconding your suggestion because it's worked well for me every time I found the strength to use it. Also, when you feel really aggravated at your opponent's tone, fogging is a useful and civil-sounding technique.

7thomblake12y
That took forever for me to figure out. Wikipedia:Fogging.
5thomblake12y
Hmm... I just realized my standard for "taking forever" to find a piece of information is about 30 seconds. I love the future.
7David_Gerard12y
For a good example, note how wonderful Wei Dai's tone consistently is, even when responding to comments where "go away you idiot" would be a quite reasonable reaction.
-7wedrifid12y
5Wei Dai12y
Worked well in what sense? David talked about netting upvotes, but surely that's not a main consideration for you at this point. I'm hoping that being nice and responding just to substance might make the other person less belligerent and a better contributor to the community. I tried this on Dmytry and it didn't work, but I wonder if it has worked in the past on others. Do you or anyone else have any anecdotes in this regard?
8cousin_it12y
Hmm, you're right, I just checked and it has never worked on rude people for me either. I must've been thinking about my exchanges with some people who were confident and confused about an issue, but not rude. Sorry.
5David_Gerard12y
It nets upvotes because it produces a useful response post for the onlookers, who have the votes. This is why it's work, because it involves turning an annoying post into something of value.
4wedrifid12y
Avoiding flame wars. Leaving the 'contrarian' at least with the sense that some of their ideas have been heard and validated. Reducing the extent to which you yourself get caught up in negative spirals. All without enabling them or encouraging more undesired behavior.
2Wei Dai12y
Both you and David_Gerard seem to have taken my question as asking about the general benefits of "ignoring tone", when I was trying to figure out what cousin_it meant by "worked well", specifically whether he had succeeded in making a rude commenter less belligerent and a better contributor to the community, and also explaining why I wasn't sure what he meant. Did you really misinterpret my question, or did you just use it as an opportunity to go off on a tangent and write something of general interest? (I'm trying to figure out if I need to be more careful about how to express myself.)
-1David_Gerard12y
I would be interested to know what "worked well" meant more specifically as well (more specifically than "I felt personally satisfied with the conversation").
-4wedrifid12y
I don't seem to have done that at all. Not only was I replying to what 'worked well' meant - in general and from what I have observed of specific recent applications here - I was discussing the use of fogging, not merely tone-ignorance.
2Will_Newsome12y
(I remember being sort of rude or at least mildly-aggressively-uncharitable to you about a year ago and you responded saying we could clear up any misunderstandings via chat. I subsequently issued some mea culpas and was probably more charitable towards you from then on. Not sure if that counts, IIRC I was only being mildly rude.)
0John_Maxwell12y
Whatever kind of contrarian Less Wrong thinks is valuable. It's not completely specified. I'm not sure I see how tone comes in.
4David_Gerard12y
I'm thinking of the responses to critics of late. Even the arseholes are slightly worth listening to, but tone arguments are a way of not listening, and this may miss something important even if it's often all the response it deserves. No-one's obligated not to use it, but it's a good exercise to be able not to, particularly for the benefit of onlookers.
2TheOtherDave12y
Of course, listening doesn't leave a record, so it's hard to tell how many people are listening. It's the relative handful of people who reply who define the perceived tone of the site's response. Or are you suggesting that responding to the substance is a better strategy than simply listening?
0David_Gerard12y
Hmmm. Driving readers away in such a way that they don't even respond strikes me as bad. But in working out what to do about this, I'm left with asking my other-people-simulator, which I strongly suspect will just hand me back the results of typical mind fallacy.
8ahartell12y
I'm not sure how much I like this idea (or the version I'm about to propose) but I think it would be better to treat it as a "Contrarian Quotes of the Month" type thing, kind of like the Rationality Quotes thread but using contrarian lesswrong comments.
-1Alicorn12y
Would this award have content?
2John_Maxwell12y
Sorry, I'm not sure what you mean. I'm thinking it would go something like this: users would be encouraged to track examples of contrarian contributions. At the end of the month, there would be a nomination process (with pointers to examples of contrary statements) and then voting on who was the best contrarian. (Whoever maximizes quality of statements × degree of disagreement with other users at the time they wrote the statements. Number of contrary statements made could also be a multiplier, although that might be a bad idea if we want to avoid flooding LW with disagreeable contributions. Come to think of it, "contrarian contribution of the month" might be a better award.)

Allowing users to nominate themselves seems like a generally good idea, in case we are subconsciously avoiding our beliefs' real weak points, and to fight availability bias (individual users are likely to forget good contrary comments they made early on in the month). There's probably no reason not to keep the registry open for nominations all month long.

If you're asking whether there will be an award, maybe we could give them karma somehow? Personally, I suspect just winning the title will be a significant motivator. An interesting variation would be to encourage established users to create alternate accounts to be contrary with, and only step out from behind the alternate account if they won the award.

One problem is quantifying the degree of disagreement. For instance, in one sense this recent discussion post of mine is very much in line with stereotyped opinions of what Less Wrong thinks, but in another sense it got a substantial number of votes down (it was negative for a good while after I created it), and the top-rated comment on it, voted much higher than the post itself, expresses disagreement. So was I being contrarian or not? http://lesswrong.com/lw/bfy/you_only_live_once_a_reframing_of_working_towards/ Another idea is for contrary posts to specifically state that they are nomin

I don't like contrarians, but I think honest and fundamental dissent is vital.

A recent development in applied psychology is that small incentives can have large consequences. I think the upvote/downvote ratio is underestimated in importance. The ratio currently is obviously greater than 1; I don't know how much greater. (Who does?) This creates an asymmetry in which below zero, each downvote has disproportionate stigmatizing power, creating an atmosphere of apprehension among dissenters. The complexion of postings might change if downvoting and upvoting r... (read more)

We need a handy way of saying "Yes I understand the standard arguments for P but I still think it's worth your while considering this argument for ¬P rather than just telling me the standard arguments for P."

Unfortunately it may be that the only credible signal of this is to first outline the standard arguments for P.

9Will_Newsome12y
Agreed. In my experience this problem of standard-argument-affirming shows up a lot during debates about uFAI risks. If I try to suggest some non-obvious argument against the Eliezerian position then I tend to mostly get re-assertions or re-phrasings of the standard Eliezerian arguments, which is distracting and a tad insulting. It seems some people identify me as a mainstream-view-loving enemy who is trying to unfairly marginalize the Eliezerian position, and thus don't bother to carefully check if my argument might be reasonable on its own terms. In the last few months I've been averaging like 5 to 10 karma on my anti-Eliezerian AI risk arguments, and I think that's because I've expressed them more clearly and redundantly. But they're the same arguments that were getting downvoted to -5 or so back a year or two ago when I wasn't taking special care not to trigger local immune responses. (Weirdly, even saying that I'd spent a year or so with the Visiting Fellows talking to a lot of SingInst people who didn't think I was clearly stupid or insane didn't dissuade people from thinking I was clearly mistaken about basic SingInst arguments. I still don't really understand that... maybe I was interpreted as making an unjustified claim to authority that shouldn't be taken as evidence, or something.)
4Rain12y
The majority of your comments which I've downvoted have been for use of improper vocabulary. That is, you repurpose words in unconventional ways which result in extremely difficult, if not impossible, translation to something I can understand. Lately, you seem to have been taking more care to use words with their dictionary definitions.
-1Eugine_Nier12y
Part of it may be that people know you and know you're not an idiot.

I think the kind of people you're looking for are rare in general, so it shouldn't be a surprise that they are rare on LW.

That said, there's room for improvement. The karma system only allows for one kind of vote. It could be more like Slashdot and allow for tagging of the vote, or better yet allow for up/down voting in several different categories. If a comment is IMO well worded, clear, logical, and dead wrong, then it's probably worth reading, but not worth believing. Right now all I can do is vote it up or down. I'd like to be able to vote for clar... (read more)

1TheOtherDave12y
Conversely, we could establish the convention of downvoting stuff we consider valueless and upvoting stuff we consider valuable, and leave right and wrong out of it except insofar as voters value right things and antivalue wrong things. If we did that, we'd understand that highly upvoted comments were considered valuable, but not necessarily agreed with. Oh, wait. Sure, we could also create a mechanism whereby people could indicate whether they agreed with it (also whether they thought it was well-worded, clear, logical, funny, properly spelled, whether it rhymed, and various other attributes), but before doing that it's worth asking what the benefit of that would be. I understand wanting to facilitate finding valuable comments and hiding valueless ones, but for the other stuff I'd like to see the benefits articulated, not just labelled "better".
1anotherblackhat12y
The idea is to make it possible to say (by voting) "even though I think you're wrong, I'd like to hear more". The problem IMO with the current system is that the people who vote "I think that's wrong" drown out the people who vote "I think that's interesting". It may be that isn't supposed to happen, but that seems to be what does happen. Would a "rhymes" button make sense? Sure - if you wanted to encourage rhyming posts. The GP wants to encourage contrarians and skeptics, so "like/dislike" and "agree/disagree" seemed appropriate. I haven't seen many of them on LW, but on other boards I really wish there was a "WTF? didn't understand your post" button, as I would press that one quite a bit. What buttons are best is a subject unto itself, but probably not worth discussing unless the basic concept is possible and worthwhile.
3TheOtherDave12y
Conversely, the impetus to make the basic concept possible might increase if someone made a compelling case for what value it would provide. Incidentally, I'm not suggesting that people should upvote/downvote based on "interesting" rather than "true". I'm suggesting people should upvote/downvote based on "want more like this." That means if I see a true comment, and I want to see more true comments, I upvote it because it's true. If I see a well-written comment, and I want to see more well-written comments, I upvote it because it's well-written. If I see a rhyming comment, and I want to see more rhyming comments, I upvote it because it rhymes. Etc. Being able to tag a vote to indicate what attribute(s) I wanted more or less of would admittedly be clearer in ambiguous cases... I do sometimes find myself staring at a downvote wondering what the reason for it was. That said, I'm not sure it would actually add much value.
2JackV12y
I think this is directly relevant to the idea of embracing contrarian comments. The idea of having extra categories of voting is problematic, because it's always easy to suggest, but only worthwhile if people will often want to distinguish them, and distinguishing them will be useful. So I think normally it's a well-meaning but doomed suggestion, and better to stick to just one. However, whether or not it would be a good idea to actually implement, I think separating "interested" and "agree" is a good way of expressing what happens to contrarian comments. I don't have first-hand experience, but based on what I usually see happening at message boards, I suspect a common case is something like:
1. Someone posts a contrarian comment. Because they are not already a community stalwart, they also compose the comment in a way which is low-status within the community (eg. bits of bad reasoning, waffle, embedded in other assumptions which disagree with the community).
2. Thus, people choose between "there's something interesting here" and "In general, this comment doesn't support the norms we want this community to represent." The latter usually wins except when the commenter happens to be popular or very articulate.
The interesting/agree distinction would be relevant in cases like this, for instance:
* I'm pretty sure this is wrong, but I can't explain why; I'd like to see someone else tackle it and agree/disagree.
* I think this comment is mostly sub-par, but the core idea is really, really interesting.
* I might click "upvote" for a comment I thought was funny, but want a greater level of agreement for a comment I specifically wanted to endorse.
There's a possibly similar distinction between stackoverflow and stackoverflow meta, because negative votes affect user rank on overflow but not meta. On stack overflow, voting generally refers to perceived quality. On meta, it normally means agreement. I'm not sure I'd advocate this as a good idea, but it seemed an int
1NancyLebovitz12y
"Even though I think you're wrong, I'd like to hear more" strikes me as better expressed as a comment rather than a vote. That way, you can explain what you want to hear more about.
0vi21maobk9vp12y
Vote + comment is even better: you can sort by votes. There are topics here on LW where I would prefer to read only threads with high "wrong but interesting" scores.
0anotherblackhat12y
I'd much rather get a reply than a vote. But presumably there's a reason for the current system rather than the arguably simpler method of not having up/down buttons.

Some advice for wannabe contrarians and trolls, here. (Muflax seems to be in the middle of re-designing his blog so the link might not be 100% stable.)

I think we can see now how the situation evolved: SI ignored what 'contrarians' (the mainstream) said, the views they formed after reading SI's arguments, etc.

SI then went to talk to GiveWell, and the presentation resulted in Holden forming the same view - if you strip his statement down to its bare bones, he says that he thinks giving money to SI results in either no change or an increase in risk, as the approach SI advocates is more dangerous than the current direction, and the rationale given has already been available (but has been ignored).

Ultimately, it may b... (read more)

1Viliam_Bur12y
Is Holden's view really the same as the mainstream view, or is it just a surface similarity? For example, a typical outsider would doubt SIAI's abilities because a typical outsider thinks intelligent machines belong to sci-fi, not real life; Holden worries about a lack of credentials. Among those who think intelligent machines are possible, a typical person thinks it will be OK, because obviously the machines will do only what we tell them to do; Holden worries that a (supposedly) Friendly AI is more risky than a "Tool AI". Etc.
0private_messaging12y
Mainstream meaning the people with credentials that Holden was referring to (whose views are somewhat echoed by everyone else). The kind of folk that will not be swayed by some sort of mental confusion between the common discourse "the function of the AI is to make paperclips" and the technical discourse where a utility function is a mathematical function that is part of the specific design of a specific AI architecture. The same kind of folk, if they came across the Russian-mathematician name-dropping that's going on here, and after they politely exhausted the possibility that they had misunderstood, would be convinced that this is some complete pile of manure arising from an utterly incompetent person reporting his awesome misunderstandings of advanced mathematics he read off a popularization book. Second-order bad science popularization. I don't even care about AI any more. It boggles my mind that there's an entire community of people who just go around having such gross lack of understanding of the things they are talking about. edit: This stuff is only tolerated because it sort of promotes interest in mathematics. To be fair, even very gross misunderstanding of mathematics may serve a good function if a person passionately talks of the importance of the mathematics he misunderstood. But once you start seriously pushing nonsense forward - you're out. This whole thing reminds me of experience with an entirely opposite but equally dumb point: some guy with good verbal skills read Gödel, Escher, Bach, thought he understood Gödel's incompleteness theorem, and imagined that understanding of Gödel's incompleteness theorem implied that humans are capable of hypercomputation (beyond a Turing machine). It's literally impossible to talk sense into such cases. They don't understand the basics but they jump ahead to the highly advanced topics, which they understand metaphorically. Not having properly studied mathematics, they do not understand how great is the care required for not screwing up (especiall

Haven't read it yet, but you can start by not calling anyone who disagrees with the established view a contrarian. It implies anyone who disagrees is doing so to play out a role rather than out of actual disagreement.

edit: so it seems that people who are playing out a role are exactly what you want more of. I assumed you were using "how can we get more contrarians" as codespeak for "how can we get more disagreement". If you just want more actual "contrarians", well, I'm not sure "contrarians" is a real category. In any case it's not ... (read more)

-1Eugine_Nier12y
I don't think that's the standard definition of contrarian.

I don't see a problem with driving "contrarians" away. That is what we should be doing.

To be a "contrarian" is to have written a bottom line already: disagree with everything everyone else agrees with.

To be a "contrarian" among smart people is to adopt reversed intelligence as a method of intelligence.

To be a "contrarian" among stupid people is, like American football, something that you have to be smart enough to do but stupid enough to think worth doing.

To be a "contrarian" is to limit oneself to writing ... (read more)

Yes, being a "contrarian" is irrational for the individual, but may be good for the group. I wouldn't try to turn someone into a "contrarian" for my own benefits, but I don't feel qualms about making better use of people who already are.

-14Will_Newsome12y
5[anonymous]12y
I think there's a difference between "contrarian about X" and "contrarian". The former has (hopefully) looked at the evidence around X and come to a position on X that differs from the mainstream. The latter values being different over being right. I think the first sort can be valuable, and shouldn't be driven away.
0Richard_Kennaway12y
Wei Dai's first sentence only talks about the second sort, and I wouldn't call someone who has come to a position on X that differs from the mainstream a "contrarian about X". If they call themselves that, then instead of simply being able to present their arguments, they have tied their identity to being in opposition, and the whole downward spiral I described comes into play.
-1chaosmosis12y
There's no problem with identifying with arguments and wanting to defend certain positions if you are open to arguments and evidence against your position. It's actually convenient to do so for the purposes of discussion and advocacy. Most people here are probably "transhumanists", which connects their beliefs to their identity, but that doesn't mean they wouldn't change their minds or alter their beliefs if they saw evidence against transhumanism. Describing specific traits that apply to you and your positions shouldn't make you reluctant to change your positions, and identifying with specific advocacy groups is probably inevitable anyway.

I don't think you're really addressing what Wei Dai's original post is actually discussing. It should be apparent that Wei Dai isn't advocating having more closeminded commenters, but is advocating a more diverse set of viewpoints and advocacies. You're dismissing the overall point being made based on an interpretation of "contrarian" that doesn't make sense when viewed in the context of the advocacy statement within the original post. Even if you're right about what "contrarian" means, please mentally replace every instance of "contrarian" with "person advocating something unpopular", and that will make this discussion much more productive.

I agree that tying one's identity to opposition specifically is bad, though. That's political paralysis as a consequence of misguided cynicism. If you reject every position then you can advocate nothing. That's not just ineffective, it's a horrible way to live. Affirmation is good.
-2wedrifid12y
As far as I know Wei Dai is male.
1chaosmosis12y
I realized while writing the post that I didn't know his gender and edited as fast as I could, but you people still caught the mistake before I fixed it; I'm embarrassed. At least it's better to use "she" than "he" as my default assumption (it balances against gendered language favoring men, etc.). Although on second thought, it probably indicates that I associate civility with women, which is stupid and unfair, and can't be intentionally controlled by me anyway, so it's not really worth lamenting. But sorry, Wei Dai, although it was just an accident and I doubt you'll care much.
-3wedrifid12y
It makes a difference that there are some Wei Dais who are female. I probably wouldn't default to associating anti-consensus advocacy with being female, though; that goes against a notorious (and, as far as I know, reasonably well-founded) stereotype.
0chaosmosis12y
I was thinking and perceiving in terms of tone rather than in terms of advocacy. Someone else mentioned somewhere that Wei Dai is, essentially, very good at disagreeing politely.
1Alicorn12y
I've met him in person, and this is the case.
3Eugine_Nier12y
I sometimes argue in favor of positions I don't really believe (i.e., assign p < .5 to) if I think the probability is higher than the general consensus, and I suspect at least Will Newsome frequently does the same.
6Will_Newsome12y
Yes, but it's often a hassle. You risk being accused of trolling, overconfidence, &c., and it's difficult to claim that such accusations don't have some tinge of truth. I suspect it's not overall a very good habit and that I bring it to LessWrong mostly because it happens to work well in my personal rationality practice. On LessWrong it's probably better to put in a little extra work to find a way to go meta—don't support a side, but show clear not-introspectively-obvious reasons why someone could hold a belief that was to them introspectively obvious and thus difficult to explain. I generally like the anti-democracy LW commenters because they seem to have practiced this skill.
1timtyler12y
Contrarians get to pick and choose their battle grounds. All they have to do to be right is to seek out places where a lot of people are wrong.
0Viliam_Bur12y
This comment should have 99 upvotes and should be moved to "Main" as a separate article. Then we should link to it whenever the same topic appears again.

Reversing group-think is like reversing stupidity, or like underconfidence at the group level. It can be done. It can be interesting. But I prefer reading rational people's best estimates of reality. And I prefer disagreement based on genuine experience and belief, not on someone feeling a duty to artificially maintain diversity.

If you disagree with whatever, for example the many-worlds interpretation, say it. Say "I disagree because of X and Y". Or say "I disagree because it feels wrong, and because many people disagree, including some experts in the field (which is good Bayesian evidence)". That's all OK. But don't say or imply things like "we should attract more people who disagree with the many-worlds interpretation, to keep our discussion balanced". That is manipulating evidence.

If anything, we should discuss a wider range of topics. Then naturally we will attract people who agree on N-1 topics and disagree on 1 topic; and they will say it, and we will know they mean it.
0[anonymous]12y
Hm... I think you're lying to be contrary. E.g.: I think you think Robin Hanson and Eliezer Yudkowsky have useful things to say, and both have styled themselves contrarians. Your points are clearly dumb clichés; I think you did that purposefully, but the way in which you did it is self-contradictory, so your meta-level point would also be invalid. So maybe you're calling attention to the meta-level problem of determining what a "contrarian" is?

This could be rephrased more positively :D

If someone has something they may well be right about, and you don't learn it, that's a problem. Or if they make an argument that you know is wrong from parallel lines of evidence but can't say why it's wrong, that's a slightly smaller problem. And it's a problem with you, not with them. This is a general principle of disagreement. This post is the charge that we are bad at learning from people.

Hmm. Or maybe that's not right. We could be learning from them (on average), but still driving them away because what seemed like constructive argument from one side didn't seem that way from the other. In which case, that's fine and you shouldn't listen to this comment :P

2billswift12y
Or still driving them away because the comment stream petered out before people got around to expressing their changed viewpoints, and the contrarian left because he never realized he was having an impact. The post-and-comment format isn't really very good for a serious back-and-forth discussion, especially when posts stay on the front page so briefly. (Note that this is another good reason for getting meet-up announcements OFF of the discussion page.)

It's so difficult to find someone who will communicate on our level and yet disagrees on object-level things.

Probably the best way to get more contrarians, is for folks from Less Wrong to learn from people outside the community, change their own beliefs because of it, and come back to share their wisdom with the masses.

Okay, that sounded better in my head too.

It's so difficult to find someone who will communicate on our level and yet disagrees on object-level things.

Is this because people smart enough to communicate on our level largely agree with a lot of what is generally agreed on here, for the same reason that most people agree that 2+2=4?

Or is it because LessWrong is, for reasons unconnected with rationality, largely drawn from a certain very narrow demographic range, who grab onto this constellation of ideas like an enzyme to its substrate, and "communicating on our level" just means being that sort of person?

2thomblake12y
Probably both, mostly the latter. Note that "being that sort of person" refers to the demographic range, not necessarily to agreeing with those ideas.
0vi21maobk9vp12y
It is not just about demographics. You are supposed to be familiar with many standard arguments; but many of them make no sense if you have different priors, because they have too little evidence on their side (the AI researcher interview series seems to illustrate well that some kinds of experience can give you evidence against a few key points). If you find Hanson's arguments about the core of the FOOM concept stronger than Eliezer's, you will have less incentive to familiarize yourself with everything you would need in order to communicate on what you called "our level", because it makes no sense without that key point. So object-level disagreement at the very beginning leads to unfamiliarity with the required material. Nothing too strange here.

To what degree should the lack of good contrarians be taken as evidence that LW "consensus" (scare quotes because the like-mindedness of this community is overestimated [1]) is true?

People are always talking about how the Less Wrong arguments look good from the inside but not from the outside, so this question is important: it is an outside-view consideration that, unlike most others, comes out in favor of the Less Wrong mentality, which is usually justified only from inside the arguments.

8AlanCrowe12y
Asymmetrical motivation is the problem. If you disagree with a mainstream position, arguing against it feels worthwhile. If you agree with a fringe position, arguing in favour of it feels worthwhile. But if you disagree with a fringe position, why bother?

Where the LW "consensus" agrees with the mainstream, the lack of good contrarians (who would feel their time well spent) is evidence of a sort that the LW "consensus" is true. But such weak evidence is hardly needed. Where the LW "consensus" is itself the fringe position, we expect that good contrarians would have better things to do than try to set us straight.

Thus the lack of good contrarians is both what we expect when a fringe LW "consensus" position is true (which makes it hard to dispute) and when it is false (why bother?). Consequently, the lack of good contrarians tells us nothing at all in exactly the case when we look to it for clues.
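Spelled out in Bayesian terms (a sketch of the argument above; the notation is mine, not AlanCrowe's): let C stand for "the fringe LW position is correct" and E for "no good contrarians turn up". The claim is that good contrarians stay away whether the fringe position is true (hard to dispute) or false (not worth their time), so both likelihoods are high and roughly equal:

\[
\frac{P(C \mid E)}{P(\neg C \mid E)}
= \underbrace{\frac{P(E \mid C)}{P(E \mid \neg C)}}_{\approx\, 1} \cdot \frac{P(C)}{P(\neg C)}
\approx \frac{P(C)}{P(\neg C)} .
\]

A likelihood ratio near 1 is exactly the "tells us nothing at all" conclusion: observing E leaves the prior odds on C essentially unchanged.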
0[anonymous]12y
Good point.

I completely disagree. The optimal number of contrarians is 0.

6TimS12y
What is the optimal number of people who are intelligent but, on reflection, don't agree with the LessWrong consensus?
5Incorrect12y
Give me your answer to that question before I answer.
5TimS12y
I'd guess that somewhere between a quarter and a third of the current active LessWrong community should be willing to intelligently disagree with consensus - if our goal is to improve our theories of how society does and should work.
2Incorrect12y
I completely disagree.
2TimS12y
Is there an answer (other than zero) that you wouldn't completely disagree with? If not, why did you ask me for my number first? FWIW, I don't think "willingness to intelligently disagree with consensus" = contrarian. Disagreeing simply for the sake of disagreeing is pointless.
-4Incorrect12y
I would disagree with you if you said zero too.
6Dorikka12y
If this chain of posts is a joke, I don't think I get it. If it's not, I am mildly amused.
6[anonymous]12y
I think it's a meta-joke. Incorrect is a hyper-contrarian arguing about how many contrarians there should be :)

Not only that, but he's doing it in an uninformative and confrontational manner, which poses the problem of how to respond so as to generate better contrarianism.

2Eugine_Nier12y
TimS is encouraging people to be more contrarian, so Incorrect is disagreeing with him.
2TimS12y
contrarian != willing to intelligently disagree with consensus
2TimS12y
I'm not joking, but it's pretty clear Incorrect is. I'm not amused, but the joke is basically at my expense, so that's not very good evidence of whether Incorrect was actually amusing.
0thomblake12y
Speaking as one who often upvotes bad jokes... No.
0thomblake12y
For clarification, I only upvote good bad jokes.
0taelor12y
Is this the right room for an argument? Edit: I seem to have failed my spot check and missed that someone else in the thread had already linked to the same video.
4orthonormal12y
It's unlikely that the "LW mainstream position" is currently right about all of its weird beliefs, though I wouldn't be surprised if we're right to take each of the ideas more seriously than the normal mainstream does. EDIT: never mind, I didn't catch that you were doing this.

Tangentially related: I was in the HPMOR thread and noticed that there's a strong tendency to reward good answers but only a weak tendency to reward good questions. The questions are actually more important than the answers, because they're a prerequisite to the answers, but they don't seem to be treated as such. They have roughly half as much reputation as the popular answers do, which seems unfair.

I would guess that this extends to the rest of the site as well, as it's a fairly common thing that humans do. Things would probably be better here if we ...

2Nornagest12y
Disagree. Insightful-sounding questions are much much easier to come up with than genuinely insightful answers, so despite the fact that the former is a prerequisite to the latter, rewarding them equally would provide perverse incentives. At least, that's true if our goal is to maximize the number of insightful results we generate -- which seems like a pretty reasonable assumption to me.
0chaosmosis12y
You cheated. You're comparing "insightful-sounding questions" to "genuinely insightful answers". Of course the genuine answers are going to come out ahead. That's completely unfair to the suggestion. But, assuming that people on LessWrong actually have the ability to distinguish between insightful-sounding questions and genuinely insightful questions (which seems just as easy as distinguishing between insightful-sounding answers and genuinely insightful answers, btw) the proposal makes sense. Your comment does not contain an argument. It contains a blatantly flawed framing of the proposal I put forward and a catchphrase, "perverse incentives", and you don't explain the thought that goes into that catchphrase. You never articulate what the actual impact of these perverse incentives would look like, or how these perverse incentives would arise. Do you anticipate that if more people upvoted questions we would end up with fewer good results? I do not see how such an outcome would occur. I see zero reason to believe the "perverse incentives" you reference would originate. There's a huge tendency within academia to ignore anything with partial solutions or doubts or blank spaces, and to undervalue questioning. Questions are inherently low status because they explicitly reveal a large gap of knowledge that cannot easily be overcome by the asker, and also have an element of submission to the "more intelligent" person who will answer the question. My suggestion is designed to counterbalance that. The best way to maximize the number of insightful thoughts and results you have is to ask insightful questions, that seems like a very reasonable assumption to me. Moreover, putting forth the question which took place at an earlier point in the thought process allows others to more easily understand whatever conclusions you may or may not reach. It also allows people to take that question along different avenues of thought to reach useful conclusions that you would not have even
2Nornagest12y
"Perverse incentives" isn't a LW catchphrase. It's a term from economics, used to describe situations where external changes in the incentive structure around some good you want to maximize actually end up maximizing something else at its expense. This often happens when the thing you wanted to maximize is hard to quantify or has a lot of prerequisites, making it easier to encourage things by proxy -- which sometimes works, but can also distort markets. Goodhart's law is a special case. I'd assumed this was a ubiquitous enough concept that I wouldn't have to explain it; my mistake. In this case, we've got an incentive (karma) and a goal to maximize (insightful results, which require both a question and a promising answer to it). In my experience, which you evidently disagree with, judging the fruitfulness of questions (other than the trivial or obviously flawed) is difficult without putting effort into analyzing them: effort which is unproductive if expended on a dead-end question. Also in my experience, questions are cheap if you're already closely familiar with the source material, which most of the people posting in the MoR threads probably are. If I'm right about both of these points, valuing insightful-sounding questions on par with insightful-sounding answers creates a karma disincentive to spend time in analysis of open questions (you could spend the same time writing up new questions and be rewarded more), and a proportionally lower number of results. There are a number of ways this could fail in practice: the question or answer space might be saturated, or people's inclinations in this area might be insensitive to karma (in which cases no amount of incentives either way would help). One of the premises could be wrong. But as marginal reasoning, it's sound.
0chaosmosis12y
This is all reasoning that should have been made explicit in your comment. Your objection has good thoughts going into it, but I had no way of knowing that from your previous comment. I knew that "perverse incentives" was an economic term, but thought you were just referencing it without reason, because you made no attempt to describe why the perverse incentives would arise or why LessWrong commenters would have a difficult time distinguishing intelligent questions from dumb ones. I thought you were treating the economic term like phlogiston. If your above thought process had been described in your comment, it would have made much more sense.

Isn't this the same with answers? I don't see why it wouldn't be.

Also, this only makes sense if people are rational agents. Given that you've already conceded that we irrationally undervalue good questions and questioners, doesn't it make more sense that actively trying to be kinder to questioners would return the question/answer market to its objective equilibrium, thus maximizing utility?

I note the irony of asking questions here, but I couldn't manage to express my thoughts differently.
0Nornagest12y
If you come up with a good (or even convincing) answer, you've already front-loaded a lot of the analysis that people need to verify it. All you need to do is write it down -- which is enough work that a lot of people don't, but less than doing the analysis in the first place.

It helps, but not as much. Patching holes takes more original thought than finding them.

It makes sense if people respond to karma incentives. If they don't, there's no point in trying to change karma allocation norms. The magnitude of the incentive does change depending on how people view the pursuits involved, but the direction doesn't.

I didn't say this.
0vi21maobk9vp12y
Actually, changing karma allocation norms could change the visibility of unanswered questions that are judged interesting. That can be an end in itself, or an indirect karma-related incentive.

I've noticed there have been a dozen or more threads and suggestions like this one; has anything ever come of them? These suggestions are starting to look like simple opportunities for circle-jerking. Who would even decide on and implement these things? Yudkowsky?

2MixedNuts12y
Matt.
1siodine12y
So does Matt ever implement these suggestions?
6MixedNuts12y
* Requesting suggestions for the front page
* Reversion of favicon change
* Allowing PDF uploads
* Deletion of childless comments
* Polls, to be implemented
* Display number of comments
* Site redesign after a call for suggestions, including:
  * Meetup system
  * Comment retraction
  * Karma bubble size
  * Recent karma change
  * Removing reporting to moderators
  * Disallowing voting on out-of-context comments

I may be missing some.

Somebody who is right does not need a contrarian that badly. Someone who is wrong needs one. But everybody thinks that his own contrarian is not a particularly good one.

0vi21maobk9vp12y
Are there "people/communitites who are right"? There are usually ones who are right about some things, wrong about other things. Motivation to find contrarians can stem from two directions: to be less wrong even if a falsehood is temporarily considered proven; and to get a wider set of ideas when brainstorming. Note that when brainstorming you benefit from completely unfeasible but relevant ideas. Wild ideas give new points of view and increase the range of feasible ideas you can think of.