Mandatory Secret Identities

by Eliezer Yudkowsky · 3 min read · 8th Apr 2009 · 186 comments

40

Group Rationality
Personal Blog

Previously in series: Whining-Based Communities

"But there is a reason why many of my students have achieved great things; and by that I do not mean high rank in the Bayesian Conspiracy.  I expected much of them, and they came to expect much of themselves." —Jeffreyssai

Among the failure modes of martial arts dojos, I suspect, is that a sufficiently dedicated martial arts student will dream of...

...becoming a teacher and having their own martial arts dojo someday.

To see what's wrong with this, imagine going to a class on literary criticism, falling in love with it, and dreaming of someday becoming a famous literary critic just like your professor, but never actually writing anything.  Writers tend to look down on literary critics' understanding of the art form itself, for just this reason.  (Orson Scott Card uses the analogy of a wine critic who listens to a wine-taster saying "This wine has a great bouquet", and goes off to tell their students "You've got to make sure your wine has a great bouquet".  When the student asks, "How?  Does it have anything to do with grapes?" the critic replies disdainfully, "That's for grape-growers!  I teach wine.")

Similarly, I propose, no student of rationality should study with the purpose of becoming a rationality instructor in turn.  You do that on Sundays, or full-time after you retire.

And to place a go stone blocking this failure mode, I propose a requirement that all rationality instructors must have secret identities.  They must have a life outside the Bayesian Conspiracy, which would be worthy of respect even if they were not rationality instructors.  And to enforce this, I suggest the rule:

  Rationality_Respect1(Instructor) = min(Rationality_Respect0(Instructor), Non_Rationality_Respect0(Instructor))

That is, you can't respect someone as a rationality instructor more than you would respect them if they were not a rationality instructor.
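As pseudo-code, the rule is just a min(). Here is a minimal sketch in Python; the function name and the 0-to-10 scale are mine, purely for illustration:

```python
def rationality_respect(as_instructor, outside_the_dojo):
    """Respect for someone as a rationality instructor is capped by the
    respect they'd command if they weren't one (arbitrary 0-10 scale)."""
    return min(as_instructor, outside_the_dojo)

# Laplace: master rationalist *and* great mathematician/astronomer.
print(rationality_respect(9, 9))  # -> 9

# A dazzling lecturer with no life outside the Conspiracy is capped
# by the weaker of the two scores.
print(rationality_respect(9, 2))  # -> 2
```

Note that the cap only ever pulls the score down; it never lets a great secret identity raise respect for someone's instruction above what the instruction itself earns.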

Some notes:

• This doesn't set Rationality_Respect1 equal to Non_Rationality_Respect0.  It establishes an upper bound.  This doesn't mean you can find random awesome people and expect them to be able to teach you.  Explicit, abstract, cross-domain understanding of rationality and the ability to teach it to others is, unfortunately, an additional discipline on top of domain-specific life success.  Newton was a Christian etcetera.  I'd rather hear what Laplace had to say about rationality—Laplace wasn't as famous as Newton, but Laplace was a great mathematician, physicist, and astronomer in his own right, and he was the one who said "I have no need of that hypothesis" (when Napoleon asked why Laplace's works on celestial mechanics did not mention God).  So I would respect Laplace as a rationality instructor well above Newton, by the min() function given above.

• We should be generous about what counts as a secret identity outside the Bayesian Conspiracy.  If it's something that outsiders do in fact see as impressive, then it's "outside" regardless of how much Bayesian content is in the job.  An experimental psychologist who writes good papers on heuristics and biases, a successful trader who uses Bayesian algorithms, a well-selling author of a general-audiences popular book on atheism—all of these have worthy secret identities.  None of this contradicts the spirit of being good at something besides rationality—no, not even the last, because writing books that sell is a further difficult skill!  At the same time, you don't want to be too lax and start respecting the instructor's ability to put up probability-theory equations on the blackboard—it has to be visibly outside the walls of the dojo and nothing that could be systematized within the Conspiracy as a token requirement.

• Apart from this, I shall not try to specify what exactly is worthy of respect.  A creative mind may have good reason to depart from any criterion I care to describe.  I'll just stick with the idea that "Nice rationality instructor" should be bounded above by "Nice secret identity".

But if the Bayesian Conspiracy is ever to populate itself with instructors, this criterion should not be too strict.  A simple test to see whether you live inside an elite bubble is to ask yourself whether the percentage of PhD-bearers in your apparent world exceeds the 0.25% rate at which they are found in the general population.  Being a math professor at a small university who has published a few original proofs, or a successful day trader who retired after five years to become an organic farmer, or a serial entrepreneur who lived through three failed startups before going back to a more ordinary job as a senior programmer—that's nothing to sneeze at.  The vast majority of people go through their whole lives without being that interesting.  Any of these three would have some tales to tell of real-world use, on Sundays at the small rationality dojo where they were instructors.

What I'm trying to say here is: don't demand that everyone be Robin Hanson in their secret identity; that is setting the bar too high.  Selective reporting makes it seem that fantastically high-achieving people are far more common than they really are.  So if you ask for your rationality instructor to be as interesting as the sort of people you read about in the newspapers—and a master rationalist on top of that—and a good teacher on top of that—then you're going to have to join one of three famous dojos in New York, or something.  But you don't want to be too lax, either, and start respecting things that others wouldn't respect if they weren't specially looking for reasons to praise the instructor.  "Having a good secret identity" should require way more effort than anything that could become a token requirement.

Now I put to you:  If the instructors all have real-world anecdotes to tell of using their knowledge, and all of the students know that the desirable career path can't just be to become a rationality instructor, doesn't that sound healthier?

 

Part of the sequence The Craft and the Community

Next post: "Beware of Other-Optimizing"

Previous post: "Whining-Based Communities"


186 comments

What does this post even mean? I don't have access to my own respect function, and I don't know if I'd mess with it this way even if I did.

If you were to say tomorrow "I've been lying about the whole AI programmer thing; I actually live in my parents' basement and have never done anything worthwhile in any non-rationality field in my entire life," then would I have to revise my opinion that you're a very good rationality teacher? Would I have to deny having learned really valuable things from you?

Or would I have to say, "Well, this guy named Eliezer taught me everything I know, he's completely opened my mind to new domains of knowledge, and you should totally read everything he's written - but he's not all that great and I don't have any respect for him and you shouldn't either" when referring people to your writing?

Or to put it another way...let's say there are two rationality instructors in my city. One, John, is a world famous physicist, businessman, and writer. The other, Mary, has no particular accomplishments outside her rationality instruction work. However, Mary's students have been observed to do much better at their careers than John's, and every time ... (read more)

If you were to say tomorrow "I've been lying about the whole AI programmer thing; I actually live in my parents' basement and have never done anything worthwhile in any non-rationality field in my entire life," then would I have to revise my opinion that you're a very good rationality teacher? Would I have to deny having learned really valuable things from you?

But the fact that reality doesn't disentangle this way is, in a sense, the whole point - it's not a coincidence that things are the way they are.

If we get far enough to have external real-world standards like those you're describing, then yes we can toss the "secret identity" thing out the window, so long as we don't have the problem of most good students wanting only to become rationality instructors themselves as opposed to going into other careers (but a teacher who raised their students this way would suffer on the 'accomplished students' metric, etc.). But on the other hand I still suspect that the instructors with secret identities would be revealed to do better.

I've never seen anything from Eliezer that proves that he's done anything at all of value except be a rationality teacher. I know of two general criteria by which to judge someone's output in a field that I am not a part of:

1) Academic prestige (degrees, publications, etc.) and 2) Economic output (making things that people will pay money for).

Eliezer's institution doesn't sell anything, so he's a loss on part 2. He doesn't have a Ph.D or any academic papers I can find, so he's a loss on part 1, as well. Can SIAI demonstrate that it's done anything except beg for money, put up a nice-looking website, organize some symposiums, and write some very good essays?

To be honest, I'd say that his output matches the job description of "philosopher" more than "engineer" or "scientist". Not that there's anything wrong with that. Many works that fall broadly under the rubric of philosophy have been tremendously influential. For example, Adam Smith was a philosopher.

Eliezer seems to have talents both for seeing through confusion (and its cousin, bullshit) and for being able to explain complicated things in ways that people can understand. In other words, he'd be an amazing university professor. I just haven't seen him prove that he can do anything else.

Yes - in fact, the only thing that leads me to suspect that EY and SIAI are doing anything worth doing is the quality of EY's writings on rationality.

2badger12yEY has a lengthy article in this volume [http://www.amazon.com/Artificial-General-Intelligence-Cognitive-Technologies/dp/354023733X/ref=sr_1_1?ie=UTF8&s=books&qid=1239233739&sr=8-1] if that counts as academic. As has been said, being a theoretician seems distinct enough from teaching that it should count as a day job. I still view Eliezer as more of a teacher than a theoretician, but I don't think Eliezer is saying teachers don't have to be completely divorced from their subject in their day job to avoid affective death spirals [http://lesswrong.wikia.com/wiki/Affective_death_spiral].
4Scott Alexander12yRight. Our difference of opinion here is clearly nontrivial. I'll put it on the list of things to write posts about.
2arthurlewis12yAre you saying that teachers who don't externally practice the thing they're teaching won't make good teachers? Or that they're not worthy of respect at all? If the former, I agree with Yvain and others that we have better metrics for determining teacher quality. If the latter, I'm not sure why this would be the case. The comparison to literary critics doesn't answer that question; it just accesses our assumed cached thoughts about literary critics. What's the problem with people wanting to be literary critics? The post proposes a required formula for respect, but it never explains what quantity that formula intends to maximize. What's the goal here?

Is the point about respect for instructors supposed to generalize to instructors of disciplines other than rationality?

If so, what do you make of Nadia Boulanger? Her accomplishments as a musician (or otherwise) are unimpressive relative to those of her students and peers, and yet she is regarded as one of the greatest music teachers ever, and is accorded correspondingly deep respect by music historians, composers, etc. Are they all wrong to respect her so much, or does it not apply to music or this case?

It seems to me that a better formula for determining respect would somehow reflect the respect given to her students which they say is significantly due to her influence as a teacher. For example, if Aaron Copland singles her out as an amazing teacher who profoundly affected his musical life & education, then she deserves some of the respect given to him. And likewise for her many other students who went on to do great things.

There seems to be an implicit underlying belief in this post that teaching is not (or should not be) an end in and of itself, or at least not a worthy one. I think Boulanger and teachers of her caliber show that that's just not the case.

6Eliezer Yudkowsky12yI was thinking about that - a clause for respecting teachers with great students, should they have them. It still gives people the right incentives.

You've got things the wrong way round. It is the quality of the teacher's students that tells us whether we wish to study under her. The teacher's own achievements are a proxy, which we resort to because we need to decide now; we cannot wait to see the longer-term effects on last year's students.

Another proxy is the success of the teacher in getting her students through examinations. This is a proxy because we don't really want the certificate; we want the achievement that we think it heralds. We can assess the strength of this proxy by checking whether success in the examinations really does herald success in real life.

I agree with the conclusion of the original post but find the argument for it defective. The key omission is that we don't have a tradition of rationality dojos, so we do not yet have access to records of whose pupils went on to greatness. Nor do we have records that would validate an examination system.

Notice that the problems of timing are inherent. The first pupils, who went on to real-world success, prove their teacher's skill in an obvious way, but how did they choose their teacher? Presumably they took a risk, relying on a proxy that was available in time for the forced choice they faced.

0Annoyance12yYes, precisely. The issue isn't how we can become a better teacher, or find one to study under. The first question, that MUST be asked before all others, is: what does it mean to be a good teacher, and how can we define the relevant differences between teachers? Once that question has an answer, we can begin searching for ways to make ourselves better match that defined meaning, or for signal traits in others that indicate they're likely to match that definition well. Concepts like "has students that will accomplish great things" aren't useful for a variety of reasons. And once someone has developed a reputation for being a great teacher, they're likely to attract students with a lot of potential (assuming there are working metrics for potential that are actually consulted, as opposed to rich people simply buying a place for their talentless children). The reputation alone would result in the teacher's students doing better than most. Evaluating the teacher requires that we have some way of determining, or at least guessing, what a student's performance would have been without the teaching.
0gucciCharles4yIsn't teaching itself a skill? So what that she was a bad musician, she was obviously a first rate teacher (independent of the subject that she taught).
0anonym12yAs long as we determine how much of their students' success is attributable to the teacher, it seems reasonable. It seems we could make those sorts of judgments by:

* comparing the success of the students of a teacher with the success of students of other teachers having equally talented students (e.g., compare Boulanger's students' success with that of students of contemporaneous Fontainebleau teachers); or
* when successful people have typically studied with many different teachers, asking them how much of their success they attribute to the influence of their various teachers.
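The first attribution idea above can be put in toy-model form. This is only a sketch; the function, the baseline comparison, and the numbers are my own assumptions, not anything from the thread:

```python
def teacher_credit(student_outcomes, baseline_outcomes):
    """Credit a teacher with the average excess success of her students
    over comparable students of other teachers (arbitrary scale)."""
    avg = sum(student_outcomes) / len(student_outcomes)
    baseline = sum(baseline_outcomes) / len(baseline_outcomes)
    return max(0.0, avg - baseline)

# Boulanger-like case: unremarkable personal accomplishments, but her
# students systematically outperform comparable students elsewhere.
print(teacher_credit([9, 8, 9], [5, 6, 4]))  # -> roughly 3.67
```

The hard part, as the comment notes, is choosing a fair baseline of "equally talented students"; the model just assumes one is available.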
5MBlume12yI do find cases like this surprising, though. What was it that she was able to teach to her students that she could not put to use herself?
2gucciCharles4yShe gives a pattern of feedback that makes the students practice well? In the sense that she gives positive feedback she functions more as a motivator than as a teacher. Her skill is teaching; it's only happenstance that she teaches music; had she taught shoe polishing or finger painting she would have produced the best shoe polishers and the most skilled finger painters. Perhaps she doesn't have many complex skills but has strong fundamentals (think Tim Duncan of the NBA Spurs). She might make her students practice the fundamentals, which will allow them to do more complex work as they get older. Finally, she might have knowledge more advanced than her skill. She might not have the hand-eye coordination or the processing speed to play sophisticated music, but she might know how it's done. Imagine a 5-foot-tall Jewish guy who loves basketball. He's not gonna make the NBA. It's simply not gonna happen. However, he might understand the game better than many NBA players. Likewise he might be the best basketball coach in the world even though his athleticism (and hence his basketball playing skills) is less than that of NBA players. Likewise the teacher might have had a strong theoretical understanding but not have had the ability to put her theoretical knowledge into practice.
1pre12yThe first thing that comes to mind is maybe she's able to teach students how to practice more in their youth than she did. That'd work at least.
0AlanCrowe12yI was thinking of Nadia Boulanger, with Astor Piazzolla [http://en.wikipedia.org/wiki/%C3%81stor_Piazzolla] as the distinguished pupil. Piazzolla was trying to be a classical composer, but Boulanger said his classical music was lifeless; it was his tangos that had fire. Perhaps the multi-talented young pupil faces perilous choices about where to focus his energies. Whether the older great teacher can warn effectively against the common errors probably depends on having a breadth of experience, perhaps having put in many years as a mediocre teacher, following up pupils and noticing how things worked out. The teacher's own youthful errors might be uninformative even if severe.
0anonym12yYeah, definitely surprising, but genius in any form is surprising. There is an essay here [http://www.musicweb-international.com/classRev/2008/June08/Boulanger_Berkeley.htm] by a former student that gives a sense of how she taught. And Philip Glass describes her as the decisive influence on him in this article [http://www.theatlantic.com/doc/200107/schiff], which also talks about her teaching a little. Here's an interesting passage from Nadia Boulanger: A Life in Music [http://www.amazon.com/Nadia-Boulanger-Music-Leonie-Rosenstiel/dp/0393317137/]: It was Nadia's manner rather than her materials that was unique. Her intensity, her emotional involvement with her students, her broad knowledge of music in general, and her ability to project her own passionate enthusiasm for each detail as well as its over-all form, were the qualities that made her extraordinary. Her electric personality brought a distinctiveness to everything that Nadia did. In this lies what one reviewer called "the difference between good teaching and great teaching," for in the latter "the student feels that the teaching enacts an extraordinarily intimate and demanding relation between the teacher and his subject, a relation such that the teacher's sense of his subject is indistinguishable from his sense of life."

No.

You're wasting huge amounts of optimization power, here, in two different ways. Firstly, you're saying that no one should focus his efforts on becoming a good rationality instructor, that any work he does on that is entirely meaningless unless he is at least as good at something else. Secondly, you're saying that no one should focus his efforts on instructing people in rationality, that they should spend most of their time on whatever other thing it is that makes them impressive. If you have someone who is naturally better at instructing people in rationality than in anything else, you are wasting most of the surplus you could have gained from him in these two ways.

I'm sympathetic to your concern, but surely there must be a way we can avoid throwing out the baby with the bathwater?

3Eliezer Yudkowsky12yWell... go ahead and suggest a way to avoid throwing out the baby with the bathwater? I mean, we're talking about some pretty scary bathwater here.

Personally I suspect that the bathwater only really gets dirty when you are teaching something that is essentially useless in modern society, like martial arts or literary criticism. Most people who study, say, engineering don't do so in the hopes of becoming teachers of engineering.

Now you might say that this is because teachers of engineering are expected to also do research, but firstly that doesn't explain the disparity between fields, and secondly, I don't think that the example of tertiary education is one to aspire to in this way. I seem to recall you are an autodidact, so you may not have the same trained gut reaction I do, but I have seen too many people who were good researchers but lacked the skill of teaching teach horribly, and I remember too well one heartbreaking example of an excellent teacher denied tenure because the administrators felt his research was not up to snuff, to want to optimize rationality teachers on any basis other than their ability to teach rationality.

2Eliezer Yudkowsky12yFair point. But this depends on things starting out healthy so that they stay healthy.
0alexflint9yMartial arts seem to get an unreasonably bad rep on LW. It's at least as useful as painting or writing fiction, and I consider those to be fine personal development endeavours.

While I think martial arts are pretty useful by hobby standards (although their usefulness is broad enough that they might not be optimal for specialists in several fields), several historical and cultural factors in their practice have combined to create an unusually fertile environment for certain kinds of irrationality.

First, they're hard to verify: what works in point sparring might not work in full-contact sparring, and neither one builds quite the same skillset that's useful for, say, security work, or for street-level self-defense, or for warfare. It's difficult to model most of the final applications, both because they entail an unacceptably high risk of serious injury in training and because they involve psychological factors that don't generally kick in on the mat.

Second, they're all facets of a field that's too broad to master in its entirety in a human lifetime. A serious amateur student can, over several years, develop a good working knowledge of grappling, or of aikido-style body dynamics, or empty-hand striking, or one or two weapons. The same student cannot build all of the above up to an acceptable level of competence: even becoming sort of okay at the entire sp... (read more)

1alexflint9yThanks for a thoughtful reply! You could say much the same about painting/dancing/cooking/writing: There are many different sub-arts; it's hard to master all of them; practitioners can become unduly wedded to a single style; there are examples of styles that have "gone bonkers"; there are many factors in place that hurt the rationality of practitioners. These are all valid concerns, but I don't think they're particularly problematic within martial arts in comparison to other hobbies.
1Luke_A_Somers8yYou could say point 2 about those, but points 1 and 3 stand. If you are half-way decent at painting/dancing/cooking/writing and think you're pretty good, you are unlikely to get your face stove in the first time you try it seriously. This leads to your getting feedback and improving. You can watch serious, nothing-held-back demonstrations as public performances (or take them home to study, in the case of writing) for a nominal fee.
3Desrtopa9yReally? I've always thought the opposite; that there's a common sense on this site that martial arts are a discipline worthy of taking seriously and investing far more attention in than I would have thought they merited with respect to their applications to rationality. I may be very interested in martial arts, but in most of my social outlets I don't have nearly as much of a sense of it being a shared interest.
0elharo8yPainting and writing fiction produce items that can then be enjoyed by many other people who are neither writers nor painters. Martial arts produces almost nothing, aside from an occasional sports event.
0MichaelHoward12y[mistake] How about RatRespect1 = min(RatRespect0, sqrt(RatRespect0^2 + NonRatRespect0^2))? [edit] Confound you, Pythagoras! What I meant to say was... RatRespect1 = min(RatRespect0, sqrt(RatRespect0 x NonRatRespect0)) There's no sudden ceiling, but you still get wiped for neglecting the real world.
5AlexU12yDo you people actually think in terms of equations like this? Once you begin throwing in exponents, I think the metaphorical/illustrative value of expressing things in math drops off quickly.
0MichaelHoward12yNot very well in my case, it seems, my apologies. Exponents now thrown out again.
2AlexU12yThat wasn't meant as a criticism of you specifically. I've just noticed that people on this site like to use equations to describe thought processes, some of which might be better communicated using everyday language. I'd argue Eliezer's post is an even worse example -- why not just say "the lesser of the two quantities" or something?
8Nebu12yI personally find "min(A,B)" clearer than "the lesser of A and B", but I'm on the autistic spectrum.
8SoullessAutomaton12yTo be fair, for people who are used to thinking in math, pseudo-mathematical notation is as readable as English, with advantages of brevity and precision. "People used to thinking in math" currently describes a large portion of users on this site. Use of gratuitous mathematical notation is likely to help keep it that way.
6AlexU12y"Use of gratuitous mathematical notation is likely to help keep it that way." Is that desirable? (Not saying you're implying it is.) The community could probably benefit from some smart humanities types.
3SoullessAutomaton12yI was actually trying to imply that it isn't desirable, so yes, I agree fully.
0SystemsGuy6yFirst post, so I'll be brief on my opinion. I would say "it depends". To communicate between people and even to clarify one's own thoughts, a formal language, with an appropriate lexicon and symbols, is a key facilitator. As for desirability of audience, the About page says "Less Wrong is an online community for discussion of rationality", with nothing about exclusivity. I would suggest that if a topic is of the sort that newbies and lay people would read, then English is better; if more for the theorists, then math is fine.
5elityre1yOh. I thought that the use of min() here was immediately readable and transparent to me. The meaning of "the lesser of the two quantities" is less obvious, and the phrase is longer to say.
1MBlume12yhmmm...it's a little awkward reading the math without TeX, but I think assuming all variables real, that simplifies to RatRespect1=RatRespect0
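For what it's worth, the corrected geometric-mean variant proposed upthread behaves as advertised when compared against the post's plain min() rule. A quick sketch; the scale is arbitrary and the function names are mine:

```python
import math

def min_rule(rat, non_rat):
    # The post's original rule: hard cap at the weaker score.
    return min(rat, non_rat)

def geo_rule(rat, non_rat):
    # Cap by the geometric mean instead: no sudden ceiling, but a zero
    # outside the dojo still wipes the score out.
    return min(rat, math.sqrt(rat * non_rat))

# Strong instructor, modest secret identity: the geometric mean
# softens the penalty relative to the hard min()...
print(min_rule(9, 4))  # -> 4
print(geo_rule(9, 4))  # -> 6.0

# ...but neglecting the real world entirely still zeroes you out.
print(min_rule(9, 0))  # -> 0
print(geo_rule(9, 0))  # -> 0.0
```

(By contrast, the [mistake] Pythagorean version, sqrt of the sum of squares, is always at least RatRespect0, so the min() returns RatRespect0 unchanged, which is what MBlume's simplification points out.)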
0outlawpoet12yI agree with this comment vociferously. The upper bound isn't a terrible idea, but it would, for example, knock E.T. Jaynes out of the running as a desirable rationality instructor, as the only unrelated competent activity I can find for him is the Jaynes-Cummings model of atomic evolution, which I have absolutely zero knowledge of.

knock E.T. Jaynes out of the running

Dude, what on Earth are you talking about? E. T. Jaynes was a Big Damn Polymath. I seem to also recall that in his later years he was well-paid for teaching oil companies how to predict where to drill, though that's not mentioned in the biography (and wouldn't rank as one of his most significant accomplishments anyway).

0outlawpoet12yNot something I was aware of, but good to know. I wasn't aware of anything from before his career as an academic, 1982-onward. His wikipedia article doesn't mention anything but the atom thing. But he certainly set out to be a Professor of rationality-topics.
6saturn12yRegardless of the merits of E. T. Jaynes, we should place the activity of a rationality instructor in a separate mental bucket from that of a rationality theoretician. I would say that making a significant original intellectual advance counts as a real accomplishment.

How much of what you're trying to do could be accomplished by largely tabooing the term "rationality" in rationality dojos, and having the community be really really attached to that tabooing? So that the dojos are for "finding ways of thinking that actually bring accurate beliefs" and "finding ways of thinking that actually help people reach their goals", with mostly no mention of a term like "rationality" that's easy to reify? If we talked like that, actual and prospective students and teachers might naturally look outward, to the evidence that various thinking processes were or weren't helping. Such evidence would be found partly in terms of the actual "day job" accomplishments (or lack of accomplishments) of the teacher, and also in terms of "day job" accomplishments of the students after vs. before joining the group, and also in terms of any measures that a group of active, experimentally minded rationality students could think up of whether they were actually becoming better at forming accurate beliefs.

2Vladimir_Nesov12yYou can taboo a word, or even a concept, but you can't taboo a meaningful regularity, pretend that it's not there. The problem with belief-in-belief-in-rationality is the same as with other lost concepts, one of the essential lessons to learn, not something to shoo away. If you can't attain even this, what aspirations?
3AnnaSalamon12yI'm not proposing we pretend there's no regularity to "types of thinking that help us form accurate beliefs, across domains". Not at all. I'm proposing we stay attentive to the evidence as to what those types of thinking actually are and aren't, by spelling out our full goal as much as possible. If we use the term "rationality" as a shorthand instead of spelling out that we're after "types of thinking that actually help us form accurate beliefs", it's easy for the term "rationality" to become un-glued from the goal [http://yudkowsky.net/rational/virtues]. So that "rationality" gets glued to "that thing we do in the rationality dojo" or to "whatever the Great Teacher said" or to "anything that sets me apart from others and lets me feel superior, like using long sentences and being socially awkward", instead of being a term for those meaningful regularities we're actually trying to study (the meaningful regularities in thinking methods that actually work). Well, yes, I agree that a rationality dojo should talk about lost purposes, about the trouble with belief in belief in general, and about what exactly goes wrong when people speak overmuch of "rationality" instead of keeping their eyes on the prize. Is this supposed to be in tension with the suggestion that we, as a community, build a strong norm against talking overmuch of "rationality" and for, instead, speaking of "kinds of thinking that help us form accurate beliefs / achieve our goals"? I'm imagining that it's precisely by having a really clear view of the standard "lost purposes" failure modes, and of their application to "rationality" learning, that we can maintain such a norm.
1Vladimir_Nesov12yBut for some [http://www.overcomingbias.com/2007/08/hindsight-bias.html] reason [http://www.overcomingbias.com/2007/09/the-bottom-line.html] we are talking about a specific failure mode, one that is not necessarily the single best case to demonstrate the general principles, and one that by itself is clearly insufficient. Investing disproportionately in this single case must have additional purposes. I can see two goals:

  * Safeguarding the movement in its early stages, where it's easy to start in a wrong direction
  * Acting as a safety vent, compensating for the difficulty of certifying [/lw/2s/3_levels_of_rationality_verification/] the sanity of the movement.
  1. In the boost phase of a newly launched idea, it's actually a really good idea to train teachers. That gives you exponential growth.

  2. It's a fail if the discipline gets into a death spiral about teaching teachers to teach teachers, iff the recursion lacks a termination condition. (Suitable conditions left as an exercise.)

  3. Even in the cruise phase, an idea needs a teacher replacement rate of >= 1.

  4. In cruise phase, it's a fail if every student wants to teach. But I don't see how it's a fail if some students want to teach and proceed to do so. Nor do I see how it's a fail if they end up being most of the teachers.

  5. The intersection of two very rare categories of people is nobody.

  6. Aren't you the same guy who, just a few days ago, pointed out how much better a trained professional is at his job than some volunteer? Teaching is a nontrivial skill.
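The growth dynamics in points 1-3 can be made concrete with a toy model (my own illustration, not from the thread; the function name `teacher_population` and the parameters `students_per_teacher` and `teach_rate` are hypothetical): each teacher trains some number of students per generation, and a fraction of those students go on to teach. Their product is the "teacher replacement rate"; growth is exponential above 1 and the discipline dies out below 1.

```python
def teacher_population(generations, students_per_teacher=10, teach_rate=0.2, teachers=1):
    """Toy model: return teacher counts per generation.

    Replacement rate = students_per_teacher * teach_rate.
    > 1 gives exponential "boost phase" growth; < 1 means die-out.
    """
    counts = [teachers]
    for _ in range(generations):
        # Each teacher's students who themselves become teachers
        teachers = teachers * students_per_teacher * teach_rate
        counts.append(teachers)
    return counts

# Replacement rate 2.0 (> 1): exponential growth.
print(teacher_population(4, students_per_teacher=10, teach_rate=0.2))
# Replacement rate 0.5 (< 1): the discipline shrinks each generation.
print(teacher_population(4, students_per_teacher=10, teach_rate=0.05))
```

This is only a sketch of the point being argued, of course; point 4's question is about *which* students teach, not how many.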

3Eliezer Yudkowsky12yMost memes that grow exponentially do not manage to stay sane. What's best for the exponential growth of a meme (as though it were a bacterium with no identity other than itself) may not be best for the cultivation of a cause. Exponential growth is good, I agree, but the fastest possible exponential growth... seems more doubtful. Exponential growth around friends giving friends copies of a fixed book seems safer and shallower; the book won't change as it's passed around. The building-up of rationality dojos is a longer, slower endeavor which should be more carefully gotten right. Plus you can look for students who've already done something interesting with their lives and train them to be the teachers.
0JulianMorrison12yHmm. I'll concede a measured rate of startup, although I can't offhand think of any meme that got deranged by fast growth (and was sane to begin with). Perhaps, adopt the martial art idea of not giving out too many certificates-to-teach, and using lineage to check accreditation?

Has this requirement been successfully implemented for CFAR instructors?

1Nisan8yIt's not a formal requirement, but I'm personally impressed by the prior accomplishments of some of the CFAR staff and some of the careers of CFAR consultants. And there is at least one person who somehow has a non-CFAR career while also working full-time for CFAR. EDIT: Read about them here. [http://appliedrationality.org/about/]

Well, if we develop rationality tests, then you should rely on the teachers who help their students do better on those tests. And if you can't develop tests, then I don't see why you'd think you had evidence that any particular person was good at teaching rationality. Relying on their ability to do something useful as a predictor of their ability to teach rationality seems nearly as bad as relying on their publication record, or their IQ, or wealth, etc. I say focus on developing tests.

(Blinks.)

I wonder if this idea comes as a shock because everyone was planning on becoming rationality instructors, i.e., I should have warned everyone about this much earlier?

Is it offputting on some other level?

But I must also consider that it might really be that stupid. Damn, now I wish I knew the actual number of upvotes and downvotes!

5Scott Alexander12yI don't think too many people are actually considering "rationality instructor" as a career path at this point - which reminds me - what exactly are your plans for this rationality dojo thing anyway? Is it just something you like to talk about, or something you plan to one day set up? Are you hoping people from Less Wrong will start the first ones, or that people from Less Wrong will be students in ones set up in some other way?
3Paul Crowley12yWhen you (or anyone) says "rationality dojo", how literally is it meant? Is it specifically a physical meeting, rather than a web community? More literally, is this meant as a meeting of equals or an instructor with pupils? How much of the formalism of the dojo would you import? How would you change the relationship of sensei and pupil? I'm not so sure about wearing robes, and I draw the line at getting thwacked on the head with a stick. I am keen to increase the number of rational people, but there are a great many means by which a thing passes from mind to mind, and I'm not sure a dojo would be the first model I'd reach for - have I missed a post where this model is set out in more detail?
3Eliezer Yudkowsky12yI don't think I can afford to divert my attention into setting one up, but I've heard others already discussing it, so it's worth placing some Go stones around it.
2Scott Alexander12yReally? If it's not too private, who's been discussing it?

I don't know if I'm part of who Eliezer heard, but I'm planning on trying to start a rationality training group on Saturdays in the SF bay area, for middle and high school students with exceptional mathematical ability. I want to create a community that thinks about thinking, considers which kinds of thinking work for particular tasks (e.g., scientific progress; making friends), and learns to think in ways that work. The reason I'm focusing on kids with exceptional mathematical ability is that I'm hoping some of them will go on to do the kind of careful science humanity needs, with the rationality to actually see what actually helps. The aim is not so much to teach rationality knowledge, since AFAICT the "art of human rationality" is mostly a network of plausible guesswork at this point, but to get people aiming, experimenting, measuring, and practicing in a community, sharing results, trying to figure out what works and actually trying the best ideas (real practice; community resistance to akrasia). With some mundane math teaching mixed in.

As to "day job" credentials, I've had unusual success teaching mathematical thinking (does this count as "day job"? at least math teaching success is measurable by, say, the students' performance on calculus exams), bachelor's degrees in math and "great books", and two or three years' experience doing scientific research in various contexts. I don't know if this would put me above or below Eliezer's suggested bar to a stranger.

2TheOtherDave10yHow has this project been going?
2Eliezer Yudkowsky12yYou're focusing on easy-to-verify credentials of the sort you'd list on a resume to be hired by some skeptical HR person. You have a secret identity.
2AnnaSalamon12yMy secret identity just says that some combination of you and Michael Vassar thought I was worth taking a chance on. I was trying to do some analog of cross-validation, where we ask whether someone who was basically following your procedure but who didn't know me or have particular faith in your or Michael Vassar's judgment, would think it okay for me to try teaching. I was figuring that your focus on day job impressiveness was an attempt to get away from handed-down lineages of legitimacy / "true to the Real Teachings"ness, which Objectivism or martial arts traditions or religions sometimes degenerate into.
3Eliezer Yudkowsky12yMore of an attempt to make sure that people write instead of just doing literary criticism.
1AnnaSalamon12yGot it. Sorry; I think I rounded you to the nearest cliche, maybe because of the emotional reaction you suggested some of us might be having.
2AnnaSalamon12yFWIW, part of my own emotional reaction to it did come from that, though I noticed and have my reaction tagged as an emotionally contaminated thing to be wary of.
2SoullessAutomaton12ySpeaking only for myself--I am here, consciously and explicitly, to learn rationality for its own benefits. I have no overwhelming interest in teaching others and, all else equal, have other things I would prefer to be doing with my life. I didn't vote either way on the post because I am ambivalent to it. It felt underdeveloped compared to your usual material, and to some extent seems like you're getting ahead of yourself on this "teaching rationality" thing--the current understanding of applied rationality in this community here doesn't seem to justify raising the concern yet. Perhaps the idea would have been better presented in the context of one of your parables/short stories/&c.?
8gwern12y
2jimrandomh12yHuh? There is no way that knowledge of astronomy could possibly have told him about the olive crop. It seems more likely that his useful knowledge was of economics and business, but that he made up a story about astronomy to impress his peers.

"no way...could possibly..." (emphasis added)

This is a good example of what I meant over in the evolutionary psychology thread; coming up with evolutionary psychology explanations is good practice for avoiding succumbing to 'arguments from incredulity', as I like to call this sort of comment.

"Oh, I couldn't think of how astronomy could possibly be useful in weather or crop forecasting, so I'll just assume the stories about Thales are a lie."

I'll leave this here for you.

" Forecasting Andean rainfall and crop yield from the influence of El Niño on Pleiades visibility", Nature 403, 68-71 (6 January 2000):

"Farmers in drought-prone regions of Andean South America have historically made observations of changes in the apparent brightness of stars in the Pleiades around the time of the southern winter solstice in order to forecast interannual variations in summer rainfall and in autumn harvests. They moderate the effect of reduced rainfall by adjusting the planting dates of potatoes, their most important crop1. Here we use data on cloud cover and water vapour from satellite imagery, agronomic data from the Andean altiplano and an index of El Niño va

... (read more)
1MBlume12yI had hoped to become a rationality instructor of some stripe, but with an apprentice period as an experimental physicist, in order to give concreteness to my teaching. So, no particular degree of shock here.
0Paul Crowley12yI read it as an injunction to focus on fixing my own rationality and making best use of it, and not to think about how to help other people be more rational. That runs entirely contrary to my own hopes for making the world a better place. If all you mean is "spread rationality, but keep the day job" then absolutely, I'm keeping the day job, it pays better. The idea of a rationality pressure group has crossed my mind, but if I were to work for such a thing it would not be in the role of instructor, and I could probably do more for such an organisation by keeping the day job and giving it money in any case.
0PhilGoetz12yIt's an idea that is common among writers (with respect to writing instructors). Not the secret identity part, though. Eliezer's idea is a bit different, because success in any area of life should indicate rationality. I don't understand the secret identity part. If one identity is secret, how are students supposed to know whether to respect the instructor for accomplishments under his/her non-instructor identity? (If you're a rationality instructor or practitioner, having a secret identity is probably a good idea anyway, so you're not the first against the wall when the religious-Luddite anti-transhuman pogrom begins.)
0MBlume12yHe's joking about the secret part -- think "day job"
0[anonymous]12yThe idea never occurred to me -- not when I was sincerely involved in martial arts, and not since becoming sincerely dedicated to rationality. I'd be quite surprised if it has occurred to more than a few people here.
-1[anonymous]12yPerhaps few readers are thinking about becoming rationality instructors, so they feel it doesn't apply to them. That would likely diminish their estimation of its importance.

What work is the word "secret" doing in this post? It seems to me that you're talking about public identities, ones visible to outsiders, ones that potential students (not yet enrolled in the Conspiracy) can look at to evaluate would-be instructors. Are you using the phrase "secret identities" merely because it sounds cool?

Ditto with "conspiracy." I'd argue that giving LW the language and trappings of a 12-year-old boys' club is ultimately detrimental to its mission, but it looks like I'm in the minority.

3gjm12yThe business about the Bayesian Conspiracy is, I think, more an in-joke than anything else. Eliezer's written various bits of fiction set in a future world featuring an actual "Bayesian Conspiracy", and he's on record as saying that there's something to be said for turning things like science and rationality into quasi-mystery-religions (though I expect he'd hate that way of putting it) -- but he's not suggesting that we actually should, nor trying to do so. Dunno whether such things help or hinder the mission of LW. I think it would be difficult to tell.
3AlexU12yIt just seems at odds with the scientific ethos of cutting out the bullshit whenever possible. Instead, Eliezer seems bent on injecting bullshit back into the mix, which I'd argue comes at the expense of clarity, precision, and credibility. However, I do realize it's a calculated decision intended to give normally dry ideas more memetic potential, and I'm not in a position to say the trade-off definitely isn't worth it.
3JulianMorrison12yDeliberately so. The original OB posts started with it as a thought experiment, "what if we kept science secret, so people would appreciate its Awesome Mysteries?"
1Paul Crowley12yDespite that, I think that whole style is a tremendous mistake. It's an interesting thought experiment, but we should be clear that it runs completely counter to the things that actually bring about accurate results.
2Eliezer Yudkowsky12yIronic, rather. I considered "Mandatory Alternate Lives" but "alternate life" simply doesn't have the phrase-recognition impact of "secret identity". There is no phrase that means exactly what I want; so I use "secret identity" in an obviously inappropriate way.

I think the phrase you want is "day job".

1JulianMorrison12yIf anything it's the teaching that ought to be under a nom de plume. I've heard more than once the complaint about universities that they care more about hiring an impressive name than about whether that person can teach.

How much of Objectivism's failure was due to its teachers not having developed sufficient awesomeness elsewhere, and how much was due to the fact that it, say, tried to claim that it had the One True Method of Thought, instead of fostering an environment where all teachings were conjectural, teachers were facilitators of investigation instead of handers-down of The Answer, and everyone together tried to figure out what worked?

I mean, to what extent can we avoid similar failure modes by fostering a culture that doesn't reify anyone's teachings, but that instead tries to foster a culture of experimenting, thinking up new strategies, pooling data, and asking how we can tell what does and doesn't work?

Some thoughts from my experience in a martial arts dojo:

  1. We avoid lots of failure modes by making sure (as far as reasonably possible) that people are there to train first and everything else second. One consequence of this is that we don't attach a whole lot of our progress to any particular instructor; we're blessed with a number of people who are really good at aikido, and we learn from all of them, and from each other.

  2. On setting the bar too high for instructors: Most martial arts rely on a hierarchy of instructors, where the average dojo head is a

... (read more)

Among the failure modes of martial arts dojos, I suspect, is that a sufficiently dedicated martial arts student, will dream of...

...becoming a teacher and having their own martial arts dojo someday.

I do not think this analogy fits. Martial arts is a self-contained bubble. What else is there to do but teach? To use a variation on the analogy, if someone being trained in the United States Marine Corps were given the question of what a truly dedicated student of the USMC were to become, they would probably answer along the lines of someone who kills th... (read more)

I've been expecting a deliberately daft post from Eliezer Yudkowsky and/or Robin Hanson to see whether we vote them up just based upon status.

I think this is it.

6Cyan12yEliezer has been very clear on OB that he doesn't write things with the intention of covertly testing or manipulating his audience. (Of course, anyone who did test or manipulate his audience might say the same thing...)
1Roko12yAnd, of course, they wouldn't even admit to it afterwards, in all likelihood!
1Paul Crowley12yIf I were going to do this, I would write something that flattered my audience - this does the very opposite. Besides which, we know that EY is voted down based on status - there was a discussion of it in the March open thread.

Among the failure modes of martial arts dojos, I suspect, is that a sufficiently dedicated martial arts student, will dream of... becoming a teacher and having their own martial arts dojo someday.

I think that academia is also subject to this mode of failure. As an exercise, try to think of great literary figures who were also professors of literature at major universities. Off the top of my head, I can think of exactly one: Vladimir Nabokov, and he was notably contemptuous of his colleagues. Can anyone else think of any more?

Unsurprisingly, Paul Graham h... (read more)

Obviously success in other realms is Bayesian evidence that someone would make a better rationality instructor. But as many others have argued, in this post Eliezer exaggerates the importance of this type of evidence.

I have a question: why are you panicking about this now? It's not like we have a huge problem yet with too many teachers, or too many freshly founded schools.

5Eliezer Yudkowsky12ySo that, having written up my thoughts on the subject, I can vanish into an appropriately dark basement for 5 years and not find armies of deranged Objectivists when I peek out? I'm trying to write up now everything that needs to be written up, which includes a contingency in case The Book takes off (should it be written and sold).
3JulianMorrison12yThe traditional fix is to anoint some disciples to teach in your stead.

Yes, and my impression has been that anointed disciples are generally the instigators of things going subtly wrong in self-reinforcing ways. People with big, novel ideas are not necessarily good judges of character.

0[anonymous]12yEspecially if the interview lasts 5 minutes [http://en.wikipedia.org/wiki/Conversion_of_Paul] and the jackass winds up writing half your scripture. (no, I don't hold a grudge against Paul, what are you talking about?)

Makes sense, though I will quibble with your opening line. What you say about martial arts dojos was probably true up until about twenty years ago, but today I suspect a sufficiently dedicated martial arts student is in fact dreaming of becoming a champion MMA fighter.

And you know, now that I think about it, even twenty years ago, I'm not sure anyone was dreaming of becoming a dojo owner. That was just what they could practically achieve. But they were dreaming of becoming a Dark Lord:

"Surely you've wanted to hurt people," said Professor Quirrell. "You wanted to hurt those bullies today. Being a Dark Lord means that people you want to hurt get hurt."

I guess the failure mode that you're concerned with is a slow dilution because errors creep in with each successive generation and there's no external correction.

I think that the way we currently prevent this in our scientific efforts is to have both a research and a teaching community. The research community is structured to maximise the chances of weeding out incorrect ideas. This community then trains the teachers.

The benefits of this are that you get the people who are best at communicating doing the teaching and the people who are the best at research... (read more)

1Eliezer Yudkowsky12yHm. Arguably I should only be worried about fast dilution rather than slow dilution. But I'm also worried that the community grows slower if it's inward-looking, and hope for faster growth if it's involved with the outside world. Entirely possible. But I'm not sure I have so much faith in the system you describe, either. The most powerful textbooks and papers from which I get my oomph are usually not by people who are solely teachers - though I haven't been on the lookout for exceptions, and I should be.
2Paul Crowley12yEr, I thought the difference between religious and scientific teachings was that scientific teachings didn't have to worry about dilution? It seems like you put a high probability on this community disappearing into a death spiral of some sort without you - I would have thought we should worry more that we're already in one which we haven't picked up on.
0Eliezer Yudkowsky12yMore of a difference between things that are hard vs. easy to teach and measure. Businesses have the same problem with a great CEO trying to hire great employees, dilution of corporate culture, etc. - they have highly quantifiable output at the end of the day, but in the middle of the day and the middle steps of the process, it's not as easy to measure. I anticipate a beginning period extending for at least several years when we don't have good metrics because we're still trying to develop them.
1marc12yI think that you can legitimately worry about both for good reasons. Fast growth is something to strive for, but I think it will require that our best communicators are out there. Are you concerned that rationality teachers without secret lives won't be inspiring enough to convert people, or that they'll get things wrong and head into death spirals? From a personal perspective I don't have that much interest in being a rationality teacher. I want to use rationality as a tool to make the greatest success of my life. But I also find it fascinating and, in an ideal world, would stay in touch with a 'rational community' as both a guard against veering off into a solo death spiral and as a subject of intellectual interest. I'm sure that there must be other people like me who are more accomplished and could give inspiring lectures on how rationality helped them in their chosen profession. That would go some way to covering the inspiration angle. As an aside, I appreciate why you care about this; I'm always a bit suspicious of self-help gurus whose only measurable success is in the self-help theory they promote. I wonder whether I'm selecting for people who effectively sell advice rather than effectively use advice.

The mini-intro to this post on the craft and community sequence page says that it was not well received. But the requirements that this write-up recommends really act as a beautiful safeguard against becoming pedantic. If I hadn't read this page quite early (before I got past the 25% mark on the sequences), I doubt I would have stopped myself from falling into a happy death spiral (I honestly still really struggle with that one all the time).

It's really hard for me even now to "not speak over much of the way" (though, I mostly think it to myself, ... (read more)

1AshwinV6yUpdate: I'm over it now. :D

If you have "something to protect", if your desire to be rational is driven by something outside of itself, what is the point of having a secret identity? If each student has that something, each student has a reason to learn to be rational -- outside of having their own rationality dojo someday -- and we manage to dodge that particular failure mode. Is having a secret identity a particular way we could guarantee that each rationality instructor has "something to protect"?

3DaFranker8yFailure mode: My "something to protect" is to spread rationality throughout the world and to raise the sanity waterline, which is best achieved by having my own rationality dojo. Beware the meta.
0Insert_Idionym_Here8yI agree. I think that failure mode might then be better avoided by restricting possible "somethings", as opposed to adding another requirement on to one's reasons for wanting to be rational.
2DaFranker8yYes, but that's an exercise implicitly left to the reader. Formulating it this way is somewhat intuitively easier to understand, and if you've read the other sequences this should be simple enough to reduce to something that pretty much fits (restriction of "things to protect") in beliefspace. Essentially, this article, the way I understand it, mostly points at an "empirical cluster in conceptspace" of possible failure modes, and proposes possible solutions to some of them, so that the reader can deduce and infer the empirical cluster of solutions to those failure modes. The general rule could be put as "Make rationality your best means, but never let it become an end in any way." - though I suspect that I'm making a generalization that's a bit too simplistic here. I've been reading the sequences in jumbled order, and I'm particularly bad at reduction, which is one of the Sequences I haven't finished reading yet.
2Nick_Tarleton8yIt's very easy to believe that you're being driven by something outside yourself, while primarily being driven by self-image. It's also very easy to incorrectly believe this about someone else.
0komponisto8ySometimes I wonder if the only people who aren't driven primarily by self-image/status-seeking are sociopaths (the closest human analogue of UFAI).
5wedrifid8yMy understanding of sociopaths makes this seem like approximately the opposite of true. It is the drives other than seeking self-image and status that are under-functioning in sociopaths.
3komponisto8yWhat then do you call someone like the Joker from Batman -- someone who cares not at all how they fit into or are perceived by human society, except as instrumental to gaining whatever (non-human-relationship-based) thrill or fix they are after?
7nshepperd8yFictional?
-2wedrifid8yBeat me to the exact one word reply I was about to make!
3komponisto8yThe reply is a non-sequitur, because even if one accepted the implied unlikely proposition that no such persons exist or ever have existed, the terminological question would remain.
9geniuslevel208yI don't think so: psychiatry has no need for terms that fail to refer. (On the other hand, psychiatry might have a term for something that doesn't exist--because it once was thought to have existed.)
0komponisto8yAt the risk of stating the obvious: I did not intend to restrict the terminological question to psychiatry specifically. But in any event: you could say the same thing about zoology. And yet we still have the word unicorn.
3[anonymous]8yUnicorns were indeed once thought to have actually existed.
1wedrifid8yYour understanding of the "non-sequitur" fallacy is evidently flawed. You asked a question. The answer you got is a literally correct answer that follows from the question. It isn't a non-sequitur; it's the most appropriate answer to a question that constitutes a rhetorical demand that the reader must generalize from fictional evidence [http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/]. But you want another answer as well? Let's try: This question does not make sense. The Joker isn't someone who doesn't care how they are perceived. He is obsessed with his perception to the extent that he, well, dresses up as the freaking Joker, and all of his schemes prioritize displaying the desired image over pragmatic achievement of whatever end he is seeking. No, he cares a hell of a lot about status and perception and chooses to seek infamy rather than adoration. Thrill-seeking fix? That's a symptom of psychiatric problems for sure, but not particularly sociopathy. Some labels that could be applied to The Joker: Bipolar, Schizophrenic, Antisocial Personality Disorder. Sociopath doesn't really capture him but could be added as an adjunct to one (probably two) of those.
7[anonymous]8yCharitable interpretation of komponisto's comment [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j3j]: ‘If a human didn't care about social status except instrumentally, what would be the psychiatric classification for them?’ (Charitable interpretation of nshepperd's comment [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j3n]: ‘Outside of fiction, such people are so vanishingly rare that it'd be pointless to introduce a word for them.’)
-1wedrifid8yI'm afraid the first interpretation is incompatible with this comment (because the Joker reference conveys significant information). Actually, this does qualify as a charitable interpretation of something kompo made elsewhere (grand-niece comment or something). This distinction matters primarily inasmuch as it means you have given a highly uncharitable interpretation of nshepperd's comment. By simple substitution it would mean you interpret him as saying: Rather than being clearly correct, nshepperd becomes probably incorrect. Many (or most) people with autism could fit that description for a start.
1komponisto8yIt was not intended to do so; army1987's paraphrase is correct. The thought in my original comment would have been better expressed as: "Sometimes I wonder if the only people who aren't motivated by status are antisocial."
-4wedrifid8yThis intent does not make the paraphrase correct, even within the scope of 'charitable'. More to the point, it does not prevent the paraphrase of nshepperd's comment from being uncharitable. Army1987 put words in nshepperd's mouth that are probably wrong rather than the obviously correct statement he actually made. He described this process as 'charitable'. It is the reverse.
3komponisto8yI was talking about my comment only; I make no claim that army1987's paraphrase of nshepperd's comment is likewise accurate.
1Vladimir_Nesov8y(I'm not sure if I'm mistaken about the following interpretation and you instead mean that this particular intent doesn't make the paraphrase (of komponisto's comment) correct; in that case I'm not following what you are saying at all.) I expect the intended meaning of "correct" was correspondence with intended meaning. In this sense, the intent is relevant, and it seems that the paraphrase does correspond to the intended meaning as described by komponisto in grandparent. The grandparent is talking only about army1987's paraphrase of komponisto's comment, not about the paraphrase of nsheppard's comment (which I agree is better described as "uncharitable"), so I'm not seeing the relevance of this statement in a reply to grandparent. (Disagree with some connotations of "obviously correct" in the quote, as the case is not that clear overall, even as it is pretty clear in one sense.)
1[anonymous]8yThe statement he actually made --taken literally and ignoring the poor example komponisto had chosen, as the “someone like” makes clear that it was intended to be just an example-- is that the word he would use for “someone who cares not at all how they fit into or are perceived by human society, except as instrumental to gaining whatever (non-human-relationship-based) thrill or fix they are after” is “fictional”. How is that “obviously correct”?
5common_law8yThere was no demand to "generalize" from fictional evidence, except to recognize the theoretical possibility of a sociopathic character who is indifferent to status concerns. The intended question is whether such characters can exist and if so what's their diagnosis. Your response "fictional" would be reasonable if you went on to say, "that's a fiction; such a pathology doesn't exist in the real world." Or at least, "It's atypical" or "it's rare"; "sociopaths usually go for status." Or, to go with your revised approach, "psychopaths go for status as they perceive it, but it doesn't necessarily conform to what other people consider status." (This approach risks depriving "status" of any meaning beyond "narcissistic gratification.") The answer, anyway, is that psychopaths have an exaggerated need to feel superior. When they fail at traditional status seeking, they shift their criteria away from what other people think. They have a sense of grandiosity, but this can have little to do with ordinary social status. Psychopaths are apt to be at both ends of the distribution with regard to seeking the ordinary markers of status. Objectionable personal psychological interpretation removed at 2:38 p.m.
2wedrifid8yThank you.
1wedrifid8yThat's an untenable interpretation of the written words and plain rude. (Claiming to have) mind read negative beliefs and motives in others and then declaring them publicly tends to be frowned upon. Certainly it is frowned upon by me.
0JoshuaZ8yThe simplest minimally charitable interpretation of the remark seems to be saying that in a slightly snarky fashion.
2common_law8yIn my humble opinion, snarkiness is a form of rudeness, and we should dispense with it here. Moreover, since we have a politeness norm, it isn't so clear that the interpretation you offer is charitable!
0JoshuaZ8yHis behavior is not consistent with what is generally described as sociopathy. Again, Ronson's book may help here.
1komponisto8ySo again, what would be the term for the (apparently distinct) phenomenon that I mean to refer to? Is this covered in Ronson's book as well (presumably for purposes of contrast)?
3JoshuaZ8yI'm not sure that your phenomenon exists to any substantial extent in the real world. Also, keep in mind that categorizing mental illness is in general difficult. It isn't that uncommon to have issues where one psychologist will diagnose someone as schizophrenic, while another will say the same person is bipolar, etc., even as everyone agrees there's something deeply wrong with them. So even if the people in your like-the-Joker category exist in some form, it may be that there isn't any term for them.
-1wedrifid8yApparently distinct? What do you mean by that? "A coherent concept that can be described as part of a counterfactual reality?" Sure, it just isn't something that is instantiated in an actual human being. That's what medical science deals with and that's where the term 'sociopath' is used and defined. You're after "literary criticism". Or, given the subject matter, TVTropes [http://tvtropes.org/pmwiki/pmwiki.php/SelfDemonstrating/TheJoker?from=Main.TheJoker]. The best term among them is probably Chaotic Evil [http://tvtropes.org/pmwiki/pmwiki.php/Main/ChaoticEvil]. The Joker even gives it the tagline. Laughably Evil [http://tvtropes.org/pmwiki/pmwiki.php/Main/LaughablyEvil] also works. That trick with the pencil [http://www.youtube.com/watch?v=tUToULbEfAE] is one of Heath Ledger's best moments. If it does happen to be, that would be a remarkable coincidence. It would be similar in nature but less extreme than Ronson happening to make comparisons to Yudkowskian "Baby Eaters".
0komponisto8yI'm afraid in this comment and in your other [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j5q] you are allowing your debating skills to obscure any substantive discussion that my original comment might have prompted. And yes, I fully anticipate that your wit is sharp enough to offer a retort to the effect that the comment in question deserved no better response. Since I don't at this precise moment regard the topic as sufficiently interesting to justify the level of effort I am having to put into this conversation, I will simply note my disagreement and move on.
-4wedrifid8yHow dare you! You accused [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j5a] me of employing one of the most basic (and in my opinion the most dire) logical fallacies---when I most certainly didn't, either denotatively or connotatively. Of course I'm going to reply. It's personally offensive to me as well as false. As for substance, you were given plenty---even if you didn't like it. Even the second [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j5q] of the two comments you are trying to frame as merely clever and insubstantial tried to analyse the question from multiple angles, including challenging your description of the psychological traits of the fictional character in question and giving a best effort attempt to give you the diagnosis you were seeking: I'm not a psychiatrist and The Joker isn't real but if I was and he was those really are the kind of labels that myself and my colleagues are likely to apply, in various combinations. We wouldn't all agree---even with actual humans our diagnoses often differ and the Joker, being the creation of cartoon writers not remotely trying to be realistic, is harder to fit into a distinct category than most humans. JoshuaZ gave you substance too, including a reference to resources that explain what sociopathy is actually like. I'm reminded of the recent discussion of Eliezer's rumored fully general mind-hacks [http://lesswrong.com/lw/eoz/ey_politics_is_the_mind_killer_sighting_at/7j4v]. Even his proof [http://lesswrong.com/lw/rn/no_universally_compelling_arguments/] that such a thing is impossible can't prove anything except that that's what he wants people to think. Having that much wit would be rather handy! Sure, I think I'm clever but I don't think that is your problem here. 
I think the problem is that you were mistaken about an aspect of reality, clung to an untenable position instead of updating, aggressively defended generalization from fictionalized evidence [http://lesswrong.com/lw/k9
2komponisto8yI asked "what is the term for X?" and you (or, strictly, another commenter, whose comment you endorsed) replied "Fictional!". You know perfectly well that that was nothing but a wisecrack reply. To state the freaking obvious, the meaning of "fictional" is "not real" and is thus much, much, broader than what I was looking for. For one thing, the term includes heroes as well as villains! There are plenty, plenty of fictional characters who do not meet the description I provided (a description which was not even intended to be taken literally, but merely as a pointer to the closest empirical cluster -- as is the standard convention in ordinary human conversation, which this was intended as an instance of, because [newsflash!] the original comment was an offhand remark!) And no, I did not in that instance mean to accuse you of a fallacy. The "non sequitur fallacy" is only one of two commonly used senses of the term "non sequitur". The other is a remark which is inappropriate in the context. For example, if I say "The moon is made of green cheese", and you, instead of saying "What?! No it isn't", say instead, "I wonder whether my uncle Harry would like to buy a new car", that could be described as a "non sequitur" -- an utterance which isn't an appropriate way to follow the previous one. That is what I meant to accuse you of. Maybe it was an ill-considered accusation, maybe there is a better, more precise term for a wisecrack remark that superficially appears to answer the question but actually doesn't and is merely a rhetorical way to dismiss the question and cause the asker to lose status....but I didn't think of it in time -- I was too busy acting quickly to fend off what I expected would be an onslaught of upvotes for you (or, rather, your confederate), maybe even accompanied by downvotes for me. Anybody trying to be charitable would realize, would assume, that the fictional character was cited only for the sake of convenience. 
Now, evidently we have a substantive
0wedrifid8yTo be sure, I expressed disagreement regarding the inappropriateness too but the difference in interpretation regarding whether the 'fallacy' sense applies is interesting (well, slightly, anyhow). By my reading both senses apply. The first ("WTF? That's completely irrelevant.") is obviously there. While your question [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j3j] and nsheppard's reply [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j3n] constitute a simple question and answer pair, they also convey implied arguments. That is, a rhetorical question with an answer that invalidates the implied argument of that question. If the answer is non-sequitur ("Well, that was random") then the implied argument is, in fact, fallacious reasoning. Note that even if the question is interpreted to be nothing more than an expression of curiosity the answer still represents an argument. Something along the lines of "The Joker is fictional. Psychiatric diagnosis categories are created for real people. There doesn't need to be any psychiatric label that applies to a category represented by a fictional entity." That implied argument would certainly be fallacious if the answer was irrelevant. The above said I can certainly see why you could legitimately interpret the fallacy as not applying and I am naturally willing to retroactively change my claimed offense to the charge that I was saying things that make no sense in the context. ;) My original charitable interpretation was abandoned when "fictional" was challenged as non-sequitur and the Joker was maintained over a series of comments. The most significant benefit-of-the-doubt destroyer was actually a reply to this comment [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j5j] by JoshuaZ that doesn't seem to exist any more. 
For what it is worth if you had said "Lex Luthor" I would have agreed that he (approximately) represents real sociopaths and even agreed that such people are the closest thing that we
7Wei_Dai8yDo you think, in retrospect, it might have been better to give an answer like "I doubt that there are enough people in reality who fit your description for there to be an established term for the category." instead of "fictional"? It seems like that would have gotten your point across more clearly and helped avoid a lot of the subsequent side-track into whether "fictional" is a sensible answer or not.
0wedrifid8yAbsolutely not. Nsheppard's [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j3n] is perhaps the most salient comment in the entire thread, closely followed by genius's follow up [http://lesswrong.com/lw/9c/mandatory_secret_identities/7j6r]. This site would be a worse place if it was not made. I would of course not have expressed my agreement with nsheppard if I had predicted that it would receive a hostile response but would most certainly have defended nsheppard if the 'non-sequitur' accusations were then directly leveled at him instead of me. (Your answer is a good one too, and I would have liked to see that comment made in addition to the 'fictional' comment.) I note that nsheppard's "fictional" answer remains at +5 at the time of this comment and this is despite it being subjected to a tantrum which can usually be expected to significantly lower the rating. This indicates that my continued endorsement of his reply is actually in line with consensus. There are other things I would of course write differently in retrospect, and participants who I have learned to interact with differently (if at all) in the future---but the 'fictional' comment is most definitely not the place at which I would intervene to counterfactually change the past if I could. If you'll pardon me while I reciprocate with a similar question, why did you think it was a good idea to ask me the quoted question? By my estimation even casually following my comments for a month would be enough to predict with significant confidence that that kind of reply to a rhetorical question is something that I would reflectively endorse myself making or upvote from others. Most people could probably predict that even just having read the context in this thread. Of course I am going to disagree. 
The aforementioned entirely predictable disagreement doesn't mean that you can't assert your position but it does mean that if you ask a direct question then my possible responses are ignore [http://lessw
5Wei_Dai8yIt's the latter. In fact even after reading your comment I still don't understand why you think "fictional" is a good reply in addition to my suggestion. You said "This site would be a worse place if it was not made", but I don't understand why this is true. Can you explain more? I guess this explains why you didn't explain more why you still endorse "fictional". Let me clarify: my preference is that the original discussion hadn't gotten side-tracked, but once we're already side-tracked, I don't think a shorter side-track is necessarily better than a longer one, if for example the longer one is more likely to resolve the disagreement in a way that would prevent future side-tracks like it. I was hoping that either 1) once you considered my alternative answer and my reasons for why it's better, you would agree with me that it would have been a good idea to use that instead of "fictional", in which case we would be able to communicate better in the future and avoid similar side-tracks, or 2) you would disagree and explain why, in a way that makes me realize I've been having some false beliefs or behaving suboptimally. I get the feeling from this that you don't like rhetorical questions, but I'm not sure if that's the case, or if it is, why. Do you prefer that I had phrased my comment like the following? (Or let me know if I should just wait for your post to explain this.)
0wedrifid8yI'm glad to hear this, I much prefer it to David's interpretation. Perhaps, but it would be unwise. I have done far more explaining than is optimal already and my model of observed social behavior in this context is not one that predicts reason to change minds. i.e. In a context where this kind of disingenuity [http://lesswrong.com/lw/9c/mandatory_secret_identities/7jfn] is above -3, supplying reasons would be an error similar in kind to bringing a knife to a gun fight. Note that this isn't to say you are too mind-killed to communicate with, rather it is to say that systematic voting and replying based on already entrenched political affiliations would overwhelm any signal regarding the actual subject matter, leaving you an inaccurate perception of how the subject matter is perceived in general. I don't mind them, they are appropriate from time to time. I am aware, however, that they are often given privileged status such that answering them directly in a way that doesn't support the implied argument is sometimes considered 'missing the point' rather than rejecting it. Rhetorical questions are a powerful dark arts technique and don't need additional support and encouragement when they fail. Absolutely. Or, rather, if you had believed as David did that the answer to the question was pretty damn obviously "No" then your original comment would be a far more personal act of aggression than this one would have been. But I don't think this is because it was a rhetorical question but rather because it would be a form that is more personal, presumptive, condescending and disingenuous. The only general problem with 'rhetorical questions' that would be pertinent is that they are often just as socially effective at supporting bullshit [http://en.wikipedia.org/wiki/On_Bullshit] as supporting coherent positions. (The 'bullshit' here refers to the counterfactually-known-to-be-false assumption that I would agree with you if I reflected. 
It does not apply if either you were since
4Wei_Dai8yI disagree. I think you probably have a bias in how you interpret voting patterns, and the situation is not as politicized as you think. However, I am more curious about what your reasons are than how others judge your reasons, so if you continue to worry about giving me an inaccurate perception of how the subject matter is perceived in general, please send me a PM with your reasons. It seems to me that rhetorical questions are more of a dark arts technique when you're making a speech and can use them to lead your audience to a desired conclusion. In a debate or discussion on the other hand, it seems easy to counter a rhetorical question by laying out the implied argument and then pointing out whatever flaws might exist in it. I think I often use rhetorical questions for hedging [http://changingminds.org/techniques/questioning/rhetorical_questions.htm#hed]: which seems like a pretty reasonable use.
-3wedrifid8yI gave a specific example near the context of this quote and that comment is actually representative of the specific subset of rhetorical questioning that I hold in contempt. If you are right and I am incorrect about the merits of such comments then I would consider myself so fundamentally confused when reasoning about the quality of comments like those that anything I have to say about that topic really is almost worthless. There is a corollary there as well. Debates are roughly equivalent to (or a subset of) speeches when it comes to rhetoric use. Discussions are different. Note that if rhetorical questions of the kind David describes [http://lesswrong.com/lw/9c/mandatory_secret_identities/7jfu] (where the speaker believes the recipient almost certainly disagrees with the implied answer but wants to persuade the audience) are used in the context of "discussion", then the speaker is being disingenuous and it is really a debate or speech to the audience. Yes. You should note that most of the grandparent consisted of saying that rhetorical questions per se aren't something I oppose. (Note that I do believe it is unwise for me to continue this conversation. While I succumbed to the temptation to respond to textual stimulus with this comment you may consider me weakly-to-moderately precommitted to not responding further.)
1Wei_Dai8yYou may well be right about the merits of comments like that, but wrong about the situation being very political. Maybe people are refraining from voting comments like it down because they do not recognize their low merit, rather than because of political affiliations. On the other hand, if you are wrong about the quality of those comments, saying what you have to say is still not worthless because by doing so you may be convinced that you are wrong (e.g., if you explained your reasons fully then someone could perhaps point out a flaw in them that you missed before), which would be a benefit to yourself as well as to the LW community. So I don't think this is a good reason for stopping. What would be a good reason is if there's a good chance you'll actually collect and organize what you have to say into a post, in which case I'll be patient and look forward to it.
0wedrifid8yYes, I believe that they don't recognize the low merit. An expected utility calculation applies and my estimation is that I have erred on the side of too much explaining, not too little. Another good reason would be that I find arguing with you about what posts should be made to be both fruitless and unpleasant. I find that the difference in preferences, assumptions and beliefs constitutes an inferential distance that does not seem to be successfully crossed---I don't find I learn anything from your exhortations and don't expect to convince you of anything either. Note that I applied rudimentary tact and mentioned only the contextual reason because no matter how many caveats I include it is always going to come across as more personal and rude than I intend to be (where that intent would be the minimum possible given significant disagreement [http://www.overcomingbias.com/2008/09/disagreement-is.html]). Since this is something of a pattern, you should note that a tendency to make it difficult to end conversations with you gracefully makes it less practical to engage in such conversations in the first place. Let's assume that you are right and the reason expressed for withdrawing was a bad one---for emphasis, let's even assume that for some reason me ending a particular conversation is both epistemically and instrumentally irrational as well as immoral. Even in such a case, you choosing to push a frame where I should continue a conversation or should explain myself to you or others would still give incentive to avoid the conversation if my foresight allows, to avoid the awkwardness and anticipated social cost. What I am saying is that there is a tradeoff to making comments like the parent. It may achieve some goals that you could have (persuasion of someone regarding the wrongness of ending a particular conversation perhaps) but come with the cost of reducing the likelihood of future engagement. 
Whether that trade off is worth it depends on your preferences and what y
8Wei_Dai8yOk, I think I figured it out. It seems rather obvious in retrospect and I'm not sure what took me so long. You have a very different view of the current state of LW than I do. Whereas I see mostly reasonable efforts at truth seeking with only occasional forays into politics, you see a lot more social aggression and political fights. Whereas I think komponisto's comment was at worst making an honestly mistaken point or asking a badly phrased question, you interpret it as dark arts and/or social aggression, and think that the appropriate response is a counterattack/punishment, which is good for LW because it would deter such aggression/dark arts from him and others in the future. I guess that from your perspective, "fictional" serves as such a counterattack/punishment, whereas my suggested answer would only blunt his attack but not deliver a counter-punch. If my guess is correct, I'm quite alarmed. Your view of LW has the potential to become a self-fulfilling prophecy, because if you are wrong about the current state of LW, by treating others as enemies when they are just honestly mistaken or phrasing things badly, you're making them into enemies and politicizing discussions that weren't political to begin with. Furthermore you're a very prolific commenter and viewed as a role model by a significant number of other LWers who may adopt your assessment and imitate your behavior, thereby creating a downward spiral of LW culture. I would urge you to reconsider, but since you don't like my exhortations, I feel like I should at least indicate to others that there is significant disagreement about whether your assessment and behavior are normative.
1[anonymous]8yDid the fictional Joker matter have something to do with politics? Am I missing something? Or do you mean politics in the sense of "Activities concerned with the acquisition or exercise of authority or status"?
0TheOtherDave8yQuestion: is it your sense that wedrifid views LessWrong as unusually ridden with social aggression, or views komponisto's comment as demonstrating exceptional social aggression? Or merely that he views these things as containing social aggression, like most forums and exchanges?
0wedrifid8yAs an answer to the slightly different question of what Wedrifid sees himself seeing, it would be probably less than most forums and in general typical of human interactions. In fact, seeing a human community without any social aggression would just be creepy [http://en.wikipedia.org/wiki/Uncanny_valley] and probably poorly functioning unless the humans were changed in all sorts of ways to compensate.
0TheOtherDave8y(nods) FWIW, I'm entirely unsurprised by this. What I'm not quite sure of is whether Wei Dai shares our view of what you believe in this space. I'm left with a niggling suspicion that you and he are not using certain key terms equivalently.
0wedrifid8yThis is almost certainly the case, and one of the things that made conversation difficult.
-1wedrifid8yI disagree with Wei Dai on all points in the parent and find his misrepresentation of me abhorrent (even though he is quite likely to be sincere). I hope that Wei Dai's ability to persuade others of his particular mind-reading conclusion is limited. My most practical course of action---and the one I will choose to take---seems to be that of harm minimisation. I will not engage with---or, in particular, defend myself against---challenges by Wei Dai beyond a one sentence reply per thread if that happens to be necessary. I have been making this point from the start. That which Wei Dai chooses to most actively and strongly defend tends to be things that are bad for the site (see the aggressive encouragement of certain kinds of 'contrarians' in particular). I also acknowledged that Wei Dai's perspective would almost certainly be the reverse.
7Vladimir_Nesov8yI'm confused. I expect saying "your interpretation of my model of LW is wrong, I'm not seeing that much of political fighting on LW" would be sufficient for changing Wei's mind. As it is, your responses appear to be primarily about punishing the very voicing of (incorrect) guesses about your (and others') beliefs or motives, as opposed to clarifying those beliefs and motives. (The effect it has on me is for example that I've just added the "appear to be" disclaimer in the preceding sentence, and I'm somewhat afraid of talking to you about your beliefs or motives.) Why this tradeoff? I'd like the LW culture to be as much on the ask side [http://lesswrong.com/lw/375/ask_and_guess/] as possible, and punishing for voicing hypotheses (when they are wrong) seems to push towards the covert uninformed guessing.
4Vaniver8ySort of- punishing guessing also makes the "what are your goals here?" question more attractive relative to the "I think your goals are X. Am I right?" question. That said, I agree that discouraging voicing hypotheses should be done carefully, because I agree that LW culture should be closer to ask than guess.
0wedrifid8yThank you for adding the disclaimer. My motives in that comment were not primarily about punishing the public declaration of false, negative motives; they were instead about following the practical incentives I spent three whole paragraphs patiently explaining in the preceding comment [http://lesswrong.com/lw/9c/mandatory_secret_identities/7k0t]. It would have been worse to make an unqualified public declaration that my motives were that which they were not, in direct contradiction to my explicitly declared reasoning, than a qualified one. After all, "appear" is somewhat subjective such that the mind of the observer is able to perceive whatever it happens to perceive and your perceptions can constitute a true fact about the world regardless of whether they are accurate perceptions. I would of course prefer it if people refrained from making declarations about people's (negative) motives (for the purpose of shaming them) out of courtesy, rather than fear. Yet if you don't believe courtesy to apply and fear happens to reduce the occurrence, that is still a positive outcome. Note that I take little to no offense at you telling people that I am motivated to punish instances of the act "mind read negative motives in others then publicly declare them" because I would endorse that motive in myself and others if they happen to have it. The only reason the grandparent wasn't an instance of that (pro-social) kind of punishment was because there were higher priorities at the time. I recently made the observation: That is something I strongly endorse. It is a fairly general norm in the world at large (or, to be technical, there is a norm that such a thing is only to be done to enemies and is a defection against allies). I consider that to be a wise and practical norm. Thinking that it can be freely abandoned and that such actions wouldn't result in negative side effects strikes me as naive. 
I took it as a personal favor when the user I was replying to in the above removed the talk about
7Wei_Dai8yFrom my perspective, what I did was to hypothesize that you had the motive to do good but wrong beliefs. The beliefs I attributed to you in my guess were that komponisto's comment constituted social aggression and/or dark arts, and that therefore countering/punishing it would be good for LW. I do not understand in what sense I hypothesized "negative motives" in you or where I said or implied that you should be shamed (except in the sense that having systematically wrong beliefs might be considered shameful in a community that prides itself on its rationality, but I'm guessing that's not what you mean). You said you didn't punish me in this instance but that you would endorse doing so, and I bet that many of the people you did punish are in the same bewildered position of wondering what they did to deserve it, and have little idea how they're supposed to avoid such punishments, except by avoiding drawing your attention. The fact that:
* you do not have just one pet peeve but a number of them,
* your frequent refusals to explain your beliefs and motives when asked,
* your tendency to further punish people for more perceived wrongs while they are trying to understand what they did wrong or trying to explain why you may be mistaken about their wrongness, and
* your apparent akrasia regarding making posts that might explain how others could avoid being punished by you
---all of these do not help. And I note that since you like to defend people besides yourself against perceived wrongs, there is no reliable way to avoid drawing your attention except by not posting or commenting.
3wedrifid8yEDIT: This reply applies to a previous version of the parent. I'm not sure whether it applies to the current version since just a glance at the new bulleted list was too much. Yes, were I to have actually objected in this manner to your comment I clearly would have objected to the attribution of "false beliefs result in " based on untenable mind-reading and not "sinister motives". You will note that Vladimir referred to both. As it happens I was not executing punishment of either kind and so chose to discuss insinuation of false motives rather than insinuation of toxic beliefs because objecting to the former was the stance I had already taken recently and is the one most significantly objectionable. You will note that "punishment" here refers to nothing more than labeling a thing and saying it is undesirable. In recent context it refers to the following, in response to some rather... dramatic and inflammatory motives: I do endorse such a response. It is a straightforward and rather clearly explained assertion of boundaries. Yes, a technical analysis of the social implications shows that such boundary assertion and the labeling of behaviors as 'rude' entail a form of 'punishment'. This is an (arguably) nuanced and low level analysis of how social behaviors work and I note that by the same analysis your own comments tend to be heavily riddled with both punishments and threats. Since this is an area where you use words differently and tend to make objections in response to low level analysis, I will note explicitly that under more typical definitions of 'punishment'---definitions that would not describe your behavior as frequently having the social implication of punishment---I would also reject that word as applying to most of what I do. I assert that there is no instance where I have 'punished' people for accusing me of believing things or having motives that I do not have where I have not been abundantly clear about what I am objecting to. 
Because not only is this not something that c
4Vladimir_Nesov8yYou do explain things, but simultaneously you express judgment about the error, which distracts (and thereby detracts) from the explanation. It doesn't seem to be the case that the punishment consists only of the explanation. An explanation would be stating things like "I don't actually believe this", while statements like [http://lesswrong.com/lw/9c/mandatory_secret_identities/7kkb] "Nothing I have said suggests this. Indeed, this is explicitly incompatible with my words as I have written them and it is bizarre that it has come up." communicate your judgment about the error, which is additional information that is not particularly useful as part of the explanation of the error. Also, discussing the nature of the error would be even more helpful than stating what it is, for example in the same thread Wei still didn't understand his error after reading your comment, while Vaniver's follow-up [http://lesswrong.com/lw/9c/mandatory_secret_identities/7kkj] clarified it nicely: "his point is that if you misunderstand the dynamics of the system, then you can both have the best motives and the worst consequences" (with some flaws, like saying "best"/"worst", but this is beside the point). (I didn't refer to either, I was speaking more generally than this particular conversation. Note how this is an explanation of the way in which your guess happens to be wrong, which is distinct from saying things like "your claims to having mind-reading abilities are abhorrent" etc.)
wedrifid (0 points, 8y): Are significant. It does matter whether or not the actual words expressed are being ignored or overwhelmed by insinuations and 'hypotheses' that the speaker believes and would have others believe. It is not OK to say that people believe things when their words, right there in the context, say something completely different.

Yes, that is intended. The error is a social one for which it is legitimate to claim offense [http://lesswrong.com/lw/13s/the_nature_of_offense/]. That is, to judge that the thing should not be done and to suggest that observers also consider that said thing should not be done. Please see my earlier explanation of why outlawing the claiming of offense for this type of norm violation is considered detrimental (by me and, implicitly, by most civilised social groups).

The precise details of how best to claim offense can and should be optimised for best effect. I of course agree that there is much I could do to convey my intended point in the way most likely to get my most desired outcomes. Yet this remains an optimisation of how to most effectively convey "No, incompatible, offense".

So was I, with the statement this replies to. So no, it isn't.
Vladimir_Nesov (0 points, 8y): I understand that; my point is that this is the part of the punishment that explains something other than the object-level error in question, which is the distinction Wei was also trying to make. (I guess my position on offense is that one should deliberately avoid taking or expressing offense in all situations. There are other modes of social enforcement that don't have offense's mind-killing properties.) Okay.
wedrifid (0 points, 8y): That doesn't seem right, although perhaps you define "offence claiming" more narrowly than I do. I'm talking about anything upward of making the simple statement "this shouldn't be done". That is basically the least invasive sort of social intervention I can imagine, apart from downvoting and body-language indications, though even then my understanding is that that is where most communication along the lines of 'offense taking' actually happens.
Wei_Dai (1 point, 8y): I highly value LessWrong and can't think of any reasons why I would want to do it harm. My [http://lesswrong.com/lw/btc/how_can_we_get_more_and_better_lw_contrarians/] past [http://lesswrong.com/lw/13s/the_nature_of_offense/] attempts [http://lesswrong.com/lw/ehg/underacknowledged_value_differences/] to improve it seem to have met with wide approval (judging from the votes, which are generally much higher than on my non-community-related posts), which has caused me to update further in the direction of thinking that my efforts have been helpful rather than harmful.

I understand you don't want to continue this conversation any further, so I'll direct the question to others who may be watching. Does anyone else agree with Wedrifid's assessment, and if so, can you tell me why? If it seems too hard to convince me with object-level arguments, I would also welcome a psychological explanation of why I have this tendency to defend things that are bad for LW. I promise [http://lesswrong.com/lw/6pg/experiment_psychoanalyze_me/] to do my best not to be offended by any proposed explanations.
wedrifid (0 points, 8y): Nothing I have said suggests this. Indeed, this is explicitly incompatible with my words as I have written them, and it is bizarre that it has come up. Once again, to be even more clear: Wei Dai's sincerity and pro-social intent have never been questioned. Indeed, I riddled the entire preceding conversation, from my first reply onward, with constant disclaimers to that effect, to the extent that I would have considered any more to be outright spamming.
Wei_Dai (0 points, 8y): I'm saying that I can't think of any reasons, including subconscious reasons, why I might want to do it harm. It seems compatible with your words that I have no conscious reasons but do have subconscious ones.
Vaniver (3 points, 8y): I suspect his point is that if you misunderstand the dynamics of the system, then you can both have the best motives and the worst consequences.
wedrifid (1 point, 8y): Or, far more likely, having the best motives and getting slightly bad consequences. Having the worst consequences is like getting 0 on a multiple-choice test or systematically losing to an efficient market: potentially as hard as getting the best consequences, and a rather impressive achievement in itself.
Wei_Dai (0 points, 8y): OK, so does anyone agree that he is right (that I misunderstand the dynamics of the system), and if so, can you tell me why?
TheOtherDave (9 points, 8y): (sigh) OK, my two cents.

I honestly lost track of what you and wedrifid were arguing about way back when. It had something to do with whether "fictional" was a useful response to someone asking how to categorize characters like the Joker when it comes to the specifics of their psychological quirks, IIRC, although I may be mistaking the salient disagreement for some other earlier disagreement (or perhaps a later one).

Somewhere along the line I got the impression that you believe wedrifid's behavior drags down the general quality of discourse on the site (either on net, or relative to some level of positive contribution you think he would be capable of if he changed his behavior, I'm not sure which) by placing an undue emphasis on describing on-site social patterns in game-theoretical terms. I agree that wedrifid consistently does this, but I don't consider it a negative thing, personally. [EDIT: To clarify, I agree that wedrifid consistently describes on-site social patterns in game-theoretical terms; I don't agree with "undue emphasis".]

I do think he's more abrupt and sometimes rude (in conventional social terms) in his treatment of some folks on this site than I'd prefer, and that a little more consistent kindness would make me more comfortable. Then again, I think the same thing of a lot of people, most noticeably Eliezer; if the concern is that he's acting as some kind of poor role model in so doing, I think that ship sailed with or without wedrifid.

I'm less clear on what wedrifid's objection to your behavior is, exactly, or how he thinks it damages the site. I do think that Vaniver's characterization of his objection is more accurate than your earlier one was. [EDIT: Reading this comment [http://lesswrong.com/lw/9c/mandatory_secret_identities/7kko], it seems one of the things he objects to is you opposing his opposition to engaging with Dmitry. For my own part, I think engaging with Dmitry was a net negative for the site.]
Wei_Dai (4 points, 8y): The difference between Eliezer and wedrifid is that wedrifid endorses his behavior much more strongly and frequently. With Eliezer, one might think it's just a personality quirk, or an irrational behavioral tendency that's an unfortunate side effect of having high status, and hence not worthy of imitation.

I didn't mean to sound very confident (if I did) about my guess as to his objection. My first guess [http://lesswrong.com/lw/9c/mandatory_secret_identities/7k6t] was that he and I had a disagreement over how LW currently works, but then he said "I disagree with Wei Dai on all points in the parent", which made me update towards this alternative explanation, which he has also denied; so now I guess the reason is a disagreement over how LW works, but not the one that I specifically gave. (In case someone is wondering why I keep guessing instead of asking: I already asked, and wedrifid didn't want to answer, even privately.)

Thanks! What I'm most anxious to know at this point is whether I have some sort of misconception about the social dynamics on LW that causes me to consistently act in ways that are harmful to LW. Do you have any thoughts on that?
TheOtherDave (1 point, 8y): I certainly agree with you about "frequently". I have to think more about "strongly", but offhand I'm inclined to disagree. I would agree that wedrifid does it more explicitly, but that isn't the same thing at all. Haven't a clue; I'm not really sure what "harmful to LW" even means. Perhaps unpacking that phrase is a place to start. What do you think harms the site? What do you think benefits it?
[anonymous] (2 points, 8y): The difference needn't lie in your motives, conscious or unconscious. You might simply have bad theories about how groups develop. (A possibility: your tendency to understate the role of social signaling in what sometimes pretends to be an objective search for truth.) But your blindness to potential motives is also problematic, and not just because of the motives themselves, if they exist. For an example of a motive: you might have an anti-E.Y. motive because he hasn't taken your ideas on the Singularity as seriously as you think they deserve, giving much more attention to a hack job from GiveWell. Well, you wanted a possible example. There are always possible examples.
wedrifid (1 point, 8y): Let it be known that I, Wedrifid, at this time and at this electronic location, do declare that I do not believe that Wei Dai has conscious or unconscious motives to sabotage LessWrong. Indeed, the thought is so bizarre and improbable that it was never even considered as a possibility by my search algorithm until Wei brought it up.

It really seems much more likely to me that Wei really did think that chastising those who tried to prevent the feeding of Dmytry was going to help the website rather than damage it. I also believe that Wei Dai's declaring war on "Fictional" as a response to "What do you call the Joker?" is based on a true, sincere, and evidently heartfelt belief that the world would be a better place without "fictional" (or analogous answers) as a reply in similar contexts. Enemies are almost never innately evil [http://lesswrong.com/lw/i0/are_your_enemies_innately_evil/].

(Another probably necessary caveat: that word selection is merely a reference to a post that contains the relevant insight. Actual enemy status is not something to be granted so frivolously. Actively considering agents enemies rather than merely obstacles involves a potentially significant trade-off in optimization and resource allocation, and so is best reserved for things that really matter.)
TheOtherDave (0 points, 8y): It is not clear to me that the distinction between a discussion that takes place in public and a speech to an audience is as crisp as you seem to suggest here.
wedrifid (0 points, 8y): I did not intend to suggest any crisp distinction. Indeed, I was trying to weaken the 'crispness of distinction' from the preceding comment.
TheOtherDave (0 points, 8y): Then I completely misunderstood "Debates are roughly equivalent to (or a subset of) speeches when it comes to rhetoric use. Discussions are different." If your precommitment to not respond further doesn't extend to spinoff discussions like the one I'm implicitly starting here, then I encourage you to clarify my understanding if possible. But if it does, that's OK too.
wedrifid (0 points, 8y): Something like a spectrum, with some things being more clearly debate-like and some things being more clearly discussion-like. Also assume an "I'll concede that" before "discussions are different".
TheOtherDave (2 points, 8y): If a third-party observer's perspective helps: your preferences seemed sufficiently predictable to me that I'd tentatively understood Wei Dai's question as primarily a rhetorical one, intended to indirectly convey the suggestion that it would have been better to give such a response.
wedrifid (2 points, 8y): I was wary of making that suggestion because it would mean the whole "avoid a lot of the subsequent side-track into whether 'fictional' is a sensible answer or not" was more overtly insincere and hypocritical than I expect wei_dai to be. If I hadn't given Wei this benefit of the doubt, I would not have answered as straightforwardly as I did, and would instead have had to evaluate how best to mitigate the damage from unwelcome social aggression.
JoshuaZ (4 points, 8y): Sociopaths care a lot about status, and the most extreme sociopaths respond to attempts to reduce their status with violence. I strongly suggest Jon Ronson's "The Psychopath Test" for a highly informative and amusing introduction to psychopathy/sociopathy and its symptoms.