I moved and copied this discussion out of the latest MTG color-wheel post, since I would prefer the discussion on the post to stay at the object level.

Commentary by Conor Moreton:

[Meta/cultural note: as of this writing, the parent comment I made in reply to CoolShirtMcPants' elaboration on horoscopes is at -2, which I think is a bad sign re: LW culture in general. CSMcP was making a broad claim of the form "categorical psych tools are bad," which is both a) reasonable and b) in context the sort of claim Scott sighs about in his excellent post Yes We Have Noticed The Skulls. It was a knee-jerk, base-rate, cached objection to an entire category of Thing based on that category being generally bad/useless/misleading, when the post in question was about a specific instance, started out with a link to Fake Frameworks, made explicit bids to be treated fresh/in good faith, and was written as the 29th entry in a series of posts that have been generally agreed to contain non-zero value and rationalist virtue.

The comment above implicitly (and maybe clumsily) made the claim "I suspect you're only using your generally sensible prior, and I think it's better in this case to construct a posterior that combines your generally sensible prior in a rational way with awareness of the source and context."

Regardless of whether the posterior ends up being "yeah, still bullshit" or "maybe I'll give this more charity than I otherwise would have," the requested operation "instead of just commenting using your base rate, combine your base rate with your sense of whether a given person has demonstrated 'worth listening to' nature" is one that LessWrongers should absolutely engage in, on the regular.

i.e. I strongly believe that the thrust of the comment ("You're leaving out context that a LWer ought not leave out, and this is somewhat undermining the point you're trying to make") is correct, defensible, and prosocial given the community's goals. I think it was innocuous at worst, and definitely not the sort of thing that ought to be in negative territory. I'm taking it as an instance of a class, and making an impassioned defense of other comments in its class, and saying "Don't downvote these."

If LessWrongers in general are inclined to downvote one member making a request of this sort of another member ("please engage in the kind of double-check we know humans often need and which is not costly to perform"), this does not reflect well on the explicit goal of building a community where solid epistemic norms are incentivized. If LessWrongers in general have a bucket error that makes them e.g. treat the above comment as a status move or a clumsy attempt to argue from authority (rather than as a bid for people to use their System 2 to do Bayesian updating on whether or not a source has indeed demonstrated itself credible, and to what degree, or not), this does not reflect well on the question of whether our average member is Actually Doing The Thing.]

Edit: user Vaniver made an insightful reply to a symmetrical case above, which I appreciated and endorse.

Referenced other comments:

CoolShirtMcPants:

Grouping things into categories seems like a great way to form biases and have a limited scope of how to think about people. Do we really need to have another system that categorizes personalities?

This might seem like a harsh example, but horoscopes are what this system reminds me of.

The overall picture I see with this is that we are giving limited value to a word that could mean many things, and could be interpreted in many ways, when most things are more complex than this (especially people). Basically, it seems to me to be a vague way to express something. I personally am of the mindset that it's beneficial not to group people into categories.

gjm:

So, let's suppose astrologers have come up with 12 personality types, and then they claim to be able to tell which one fits you best by careful consideration of your date and time of birth. It seems to me that the big problem there is with the second half, not the first. (Even bigger problems if they claim to be able to use the information to predict what's going to happen to you, of course.) But the second half is exactly what isn't present in Conor's analysis here.

I don't know whether the advantages of having a small number of personality archetypes and pigeonholing people as combinations of those archetypes outweigh the disadvantages. Nor do I have any opinion worth listening to about whether the particular set of 5 archetypes described here is a particularly good one. But "it resembles one thing astrologers do" is not a good argument against it.

CoolShirtMcPants:

You did mention making useful predictions with your 5 color types, which was why I wasn't afraid to go with horoscopes as an example.

That paraphrase seems to have taken quite a leap from what I was trying to say. I can elaborate if you'd like. It's just that this post doesn't appear much different from other personality tests that group people into categories. Psychology used to take a similar approach of trying to categorize people, but as the field made more advances and learned how different everybody is from each other, it moved toward a more open approach to people, rather than a category to put them in and a bias to work from when determining what steps to take next.

Conor Moreton (referenced in comment at the top of this post):

The context that's being neglected in your comment is "Conor's clearly put a lot of thought and cycles into rationality, and staked a specific claim that this one's better/more useful in practice than all the other wrong-but-usefuls."

You're saying "it doesn't appear much different," which is a fine hypothesis to have, but it doesn't engage with whether or not my voucher provides useful Bayesian evidence.

Separate thread:

habryka (reply to comment referenced at the top of this post):

I downvoted your last three comments, but sadly don't currently have time to go into a longer explanation of why, since I expect it would take a while and I have a bunch of deadlines in the next few days. But I figured I would be transparent about it.

Happy to double crux about it at any point over the next few weeks, after things are a bit less busy.

Conor's reply to habryka:

I mean, there's not necessarily a need to double crux. LessWrong is what it is, and I'm unlikely to be able to shift or change its culture as a lone individual. But if it's the culture described at the end of my above comment, it's not where I belong and it's not where I'll stay.

I do note significant object-level disappointment and dismay that you downvoted the last one. That seems like strong negative evidence to me (since you as a founder do have powerful levers on the culture of this site you have created), whereas the other comments are more ambiguous.

Conor's second comment:

On second thought, maybe there is a need to double crux, because as I mull and think further, I find myself believing this is perhaps THE crucial point, and suspecting that LW lives or dies based on this thing as a first-order factor—that basically nothing else matters even half as much as this thing.

(To the point that if the long comment is at zero-or-negative after a few days, I will take that as conclusive proof that I should leave and not come back, because I misunderstood what LW was for and who was taking part.)

Reply by gjm:

I am not habryka and claim no particular insight into his attitudes or opinions, but I remark that I see two quite different types of reason why someone might downvote your earlier comment, and that it seems like the conclusions you should draw from them are quite different. (1) Disagreement with what it says: "Conor wants LWers to give one another more benefit-of-the-doubt than I think they should." (2) Disapproval of its methods. "Conor got downvoted, and responded with an indignant complaint about how this looks like a sign that LW as a whole is epistemically messed up; I don't think that's a healthy response to getting downvoted." It looks to me as if you're assuming #1, but as if #2 is actually at least as likely.

(In case it matters: I have not downvoted any of your comments in this thread or, so far as I can recall, anywhere else. I am not sure whether I agree with your criticism of CSMP's criticism; I'm sympathetic in principle but think that if you want to make "generic" criticisms of a specific proposal inappropriate, then you need to do more work explaining why you think the specific proposal addresses or invalidates those generic criticisms. And I do think your responses in this thread seem a bit excessive, drawing grand (albeit tentative) conclusions about the health of the LW community from very slender and ambiguous evidence and threatening to leave if one particular comment of yours doesn't meet with approval expressed by upvotes.)

... And now I see habryka has in fact responded, and that his reason was neither my #1 nor my #2 but (I claim) much nearer #2 than #1.

Comments:
Zvi:

This entire exchange reinforces my worry that Karma will get taken too seriously and become a rather large Goodhart's Law problem. We can't kill Karma, we need it or the site can't exist, but perhaps we should encourage not taking it too seriously slash trying to lessen its importance.

habryka:
Yeah, I was similarly worried. I would be open to testing to see what happens when we make karma invisible, or at least trivially inconvenient to access, but still have it function the same way. I.e. we could have a week or two in which I make that UI change, and then we check in and see how the experiment went.

Loren ipsum

Ben Pace:
If I understand you: your OP is saying "Try on this fake framework", the commenter said "This class of frameworks isn't on average worth using", and you said "But have you considered that my writing this post is evidence this particular one might be useful, and so tried it on for that reason?".

There's a certain rare sort of person who, when handed a fake framework like this, will set aside 30 minutes to 'try it on'. They'll think about some interpersonal problems they've had lately, like a feeling of unease with a co-worker or a fight with their spouse, and try to classify the relevant people. Then they'll randomly classify a bunch of their friends. And then, if it seemed useful, they'll add this framework to their list of tools for dealing with such situations. (This I think is what you refer to here.)

However, I think that most folks' experience is (implicitly) "Does something in this immediately feel resonant? Do I perhaps also notice that the person I'm close to who had suicidal ideation is a red?" and if not... they'll get to the end of the post and move on. Most folks don't have the (very practical) TAP I referred to in the prior paragraph.

[Added: From knowing you personally and your strong operationalisation skills (way more useful for me than taking Anna's class on bucket errors was simply reading your ~6 examples of people making them) - this is the sort of typical mind fallacy I expect you particularly might fall prey to.]

People are mostly going "Does the argument feel obviously true/useful?", and so people interpret you saying "But doesn't my bringing this up count as evidence you should try it" as "But doesn't my bringing this up count as evidence that it's true".

FWIW I downvoted your comment and wrote a several-paragraph response against social bayesianism (though I didn't post it because the thing I wrote felt unclear and thus counterproductive), but looking at what I wrote, I responded to the thing I understood you as saying, which was "Yes
Ben Pace:
I note that the nearby-universe Ben who does have that TAP finds your comment to be trivially true. Like, I totally think it's not worth the time trying the average thing in the relevant reference class, but I totally would try this one for half an hour if you brought it up the way you did (without you even needing to prompt me). (Current-universe Ben is quite busy, but can see himself doing this in 2 months.)
Conor Moreton:
Loren ipsum
Ben Pace:
Hmm... if you genuinely meant to say "Have you stopped to consider to what extent my opinion counts as evidence or not, including possibly deciding that it's neutral or anti-evidence?" then I just want to say "No", and I claim this is the correct thing to do. I genuinely think that social bayes/aumanning is a bad idea. To capture what I expect is a 4,000-word post in a catchy sentence: if I don't understand something, then just because Conor believes it's true doesn't cause me to understand it any better.

As I say, I do take your claim that MTG-colours are useful as sufficient evidence for me to try it (conditional on me having the sort of life where I have the time and mental habit to try rationality techniques I get recommended - I still haven't practiced the things Anna recommended to me at my CFAR workshop). I don't even need reminding of that, it's just true. If that's not what you meant, though, I do have a disagreement with you.

Added: I also do think that social aumanning is, in general, motivated by status, and is not helpful to truth-seeking (but that this is non-obvious and that many good rationalists do it). I do feel worried to say this because I feel you might decide that I have said the Worst Thing In The World (TM).
ialdabaoth:
Look, fuckers. Coming out against "social Bayesianism" is like a communist trying to ban money because everyone should just get what they need automatically. Except it's not 'LIKE' that, it IS that. Awarding arguments credit based on who says them *is a thing we do as humans*. You can drive it underground where you can't regulate it, or you can acknowledge it explicitly and try to craft it into something that fucking *works* in the direction you want it to (say, epistemic truth, if you're into that). But you can't just wish it away. I love y'all but sweet baby Jesus.
Raemon:
My impression is that there's a minimum inevitable amount of it, but that it's possible to have systems/situations that make it even stronger, and there's opportunity to think about that and alleviate it.

Facebook makes it really obvious if certain people like things. (It usually shows me if Eliezer 'liked' a thing, presumably because he has a high/dense network. This means I don't even have the opportunity to form an opinion on it before deciding with my social-brain what it means that Eliezer liked it.) You can curtail that by... just not making that information prominent. There are similar choices available on LW with regards to whether to show how much karma something has. (You could potentially hide people's usernames too, but that a) comes with weird complications and b) seems more like something that'd drive stuff underground rather than be helpful.)

I think Ben's argument was something like "Conor's original comment was explicitly saying 'you should value my opinion because of my expertise'", and that this is something that inflates social bayesianism beyond its default levels. I think Conor's argument (and I can imagine your argument, at least in some similar conditions) is that being able to evaluate expertise and incorporate expertise (and keep it distinct from halo effects) is in fact an important skill to cultivate, which comes with its own set of "good norms to cultivate." Which does seem true to me, although it's unclear to me whether this particular instance actually was a good exemplar of that.
Rob Bensinger:
Presumably this is also useful information for the rest of your brain, though, if Eliezer-likes are entangled with evidence about other things. FB seems to be doing this particular thing, in the particular case, approximately right: it doesn't usually overtly display who liked what until I go check; and in the cases where it does display that, it's generally because it's correctly sending me things Eliezer liked, and being transparent that that's what it's filtering on. Ideally FB would make it trivial for me to subscribe/unsubscribe from particular users' "likes", though, and fiddle with personalized settings re who can like what, when likes are viewable at all, etc.
Raemon:
So, my current belief is that the right way to do this is to *not* be blatant about how you're doing the filtering. Yes, Eliezer liking something is evidence (to me) that it's a better-than-average thing. But a better way, on LW, seems like:

a) Posts/comments are shown initially via filtering that takes in a lot of inputs (some combination of recentness, how much karma it has (which takes as an input who liked it), etc.). Therefore, I can trust that information coming to me is important enough to be worth my time. BUT, I can still form a first impression of it based on my own judgment (the 'it's worth your time' information has enough inputs that my brain isn't driven to try and derive anything from it).

b) Then I can read comments by people that give me further information, like "this person who is a trained economist liked it, this person whose judgment I generally trust disliked it," etc.

Facebook is an adversarial algorithm I *don't* trust to show me relevant things in the first place, and it shows me the "who liked a thing" first. I think there are a number of things going on, some good, some bad. But I have a suspicion that this has trained my "social bayesian system" to be weighted more heavily relative to my "think things through without social info" system.

For LessWrong, we have a number of options on what information to highlight and what incentives to output. We could choose to show upvote/downvote information publicly. We could choose to enable "quick response" or "FB React" style comments (which would make it easier to see if Eliezer liked a thing but didn't have time to leave an explicit written-out comment saying so). If we went that route, we could choose to make those React-style comments prominent, or always sort them to the bottom so you first have to wade through more information-dense comments. I can imagine it turning out to be best to have FB-React style comments or similar things, but my intuition is it's better for LW in general to force peopl
Conor Moreton:
Loren ipsum
gjm:
How hard is it to get one other human to do that? Not very hard, I think. Here, I'll do it: I don't think Conor was quite claiming that we should value his opinion because of his expertise, although he was saying something (a) readily mistaken for that and (b) not entirely unlike it.

But that's not the same question as "How hard is it to be sure that one other human will do that without being asked?". Lots of mistakes go uncorrected, here and everywhere else. Most of the time, people (even smart and observant people) don't notice mistakes. Most of the time, people (even honest and helpful people) who notice mistakes don't point them out.

In this case, it's not like you (initially) made it perfectly clear and explicit what exact claim you were making, and it seems to me as if you're expecting more mind-reading from your audience than it's reasonable to expect. Even in a community of smart truth-seeking people. Let's just recall what you actually said at first: This seems to me like exactly what you would have written if you had been making the claim that we should value your opinion because of your expertise. (Well, not exactly expertise, but something like it, and I don't think that distinction is the one you're trying to draw here.)

And, for the avoidance of doubt, I don't in fact think there's anything wrong with saying (something like) that we should value your opinion because of your expertise. I'll go further: what you actually, originally, wrote makes much more sense as "you should value my opinion because I've thought about this a lot and am worth listening to" than as "you should consider the fact that I've thought about this a lot and the other stuff I've written, and then decide whether that's evidence for or against my opinion", which is IIUC what you are now saying you meant.

I think it is very, very understandable and not at all a sign that we are living in some Orwellian world of history-revision that this discussion is not full of people who are
Conor Moreton:
Loren ipsum
gjm:
I have (honestly, I assure you) failed to see where you "did ask, more than once" for others to endorse your account of what you were saying. I just took another (admittedly cursory, because I need to be somewhere else in five minutes) look over the thread and still can't see it.

Let me say for the avoidance of doubt that I do not begrudge the time I took to write the above, or the fact of my having written it, and that I am not irritated, and that I neither had nor have any interest in lowering your status.

As for "genuinely meant" versus "actually said", I stand by what I wrote above: when I read what you actually originally wrote, I cannot see how it says what you now say it said. It rather conspicuously avoids making any very precise claim, so I won't say it definitely says what Ben says it does -- but his reading seems a more natural one than yours. I am very happy to accept that what you are now saying is what you always meant, and I am not for an instant suggesting that there's anything dishonest or insincere in what you're now saying, but after reading and re-reading those words I cannot see how it says what-you-say-it-said rather than what-Ben-said-it-said, and in particular I cannot see how it is reasonable to complain that Ben was "uncharitable and inaccurate".

(This is not any kind of recantation of my earlier "I don't think Conor was quite claiming ...", precisely because I do accept what you say about what you meant by those words. But when you're judging other people's reactions to those words, what those words actually say at face value is really important.)
Ben Pace:
I'm sort of confused by this comment. Conor's comment doesn't actually (to my eyes) say what you said it says. If Conor had wanted to say "You should try this because I said it's good" then there are a lot of comments he could've written that would be more explicit than this one. However, what Conor said was "it doesn't engage with whether or not my voucher provides useful Bayesian evidence" - not "your comment doesn't engage with the fact that my voucher provides useful Bayesian evidence". The explicit meaning is "You didn't use social evidence - have you considered doing so?" while being agnostic about the outcome of such a reasoning step.

In general there are a lot of ways someone could write a comment that explicitly states what the outcome of such reasoning should be, and the fact that Conor wrote it in one of the few ways that specifically doesn't say his voucher should be trusted is sort of a surprising fact, and thus evidence he didn't want to say his voucher should be trusted. I don't think Conor intended to say exactly what he said, and his motivation was not status-based.

Analogously, if a friend says "This coffee shop I went to is great, you should try it!" and you said "You've given me no argument about why this coffee shop should reliably produce better products than the dozens of other coffee shops in the area," they may say "You're right, I only wanted to let you know that I thought it was great, and if you think I've generally got good judgement about things like this you might find value in trying it," and that's generally fine.

Note: I keep being quoted as saying Conor "genuinely meant" something he didn't say. I didn't say that. (Wow, this is a fun game of he-said she-said.) I said "You said X, but I think you meant to say Y, but if you genuinely meant X, then I disagree." I'm not denying that he said X, and I think he said X.
Rob Bensinger:
Might have something to do with people coming to the same line with different priors? E.g., based on coming from different points on the ask-guess spectrum, or from different varieties of ask/guess. For a combination of reasons -- such as "it's rude to outright assert that you're an authority, so people regularly have to imply it and talk around it," and "it's just not that common for people to have zero interest/stake in a conversation, or to deliberately avoid pushing for their interest" -- it's not surprising that some people's priors are skewed toward other interpretations, such that you need to very heavy-handedly and explicitly clarify what you mean (possibly even explicitly disavowing the wrong interpretation) before you can shift those people away from their prior. Priors just feel like how the world is, though; it's not natural (and often not possible) to distinguish the "plain" or "surface" meaning of the text from your assumptions about what people would most often mean by that text.
gjm:
I think I took it as read that Conor was saying his voucher provides useful (and positive) evidence because that seemed the only way to make sense of his saying what he said in response to what he said it to. I mean, you can't tell whether CoolShirtMcPants had considered whether Conor's testimony was evidence from what he wrote; all you can tell is that apparently he didn't think it was. In any case, I'm now definitely confused about who has at what times thought Conor meant what by what. What I remain confident of is that Conor did not so clearly not say that we should take his endorsement as evidence as to make it unreasonable to say he did; and I think your comments above should give Conor good reason to reconsider his characterization of what you said before as "uncharitable" (given that strictly only people, not words, can be uncharitable, and that it's hard to see why someone uncharitably disposed would write what you did above). And I think there are too many levels of he-said-she-said going on here...
Conor Moreton:
Loren ipsum
weft:
Personal support: I do not think that comment should have been negative. I upvoted to counteract. I take you at your word that you meant what you say. I see the same problems you do with the R-community and their trendsetters/decision makers.

Status-raising/Compliments: I loved your articles. In fact, they turned out to be the only thing that was making LesserWrong interesting to me, as opposed to just a bunch of AI/Machine Learning stuff I totally don't care about (I am not the target audience of this site). If you leave, I probably will too, by which I mean going back to checking it once a month-ish to see if anything particularly interesting has been written. Without your articles there really isn't much here for me that I can't get by checking individual blogs, which it turns out I have to do anyways since not everything is crossposted to frontpage.

Advice/Uncompliment: The Green in me doesn't like that you aren't just letting this go. The problematic signs you see I think are real, but you aren't going to be able to change them by having neverending debates. Accept the community as-is, or move on (but tell me where you're going, so I can read you elsewhere).
habryka:
I did mention this to a few people in private who seemed to misunderstand you in this respect. I think, given the discussion that is currently available, it's forgivable that people who did not chat with you in private have this misunderstanding of the situation. I think your original sentence was easy to interpret that way. I apologize for not correcting people on this in public. We have a bunch of major feature launches upcoming, and I currently don't have the capacity to follow all the recent discussion closely and be a super productive participant.

I don't think that anyone outside of me, Ben Pace, and maybe Vaniver really had the context to correct people on this confidently. (I spent the last 10 minutes reading through your past comments, but didn't find any clarification that felt clear enough that someone without a large amount of context would have confidently come to a correct model of what you intended to say, so I don't think this is really a failing of any of the people who are passively reading the site.)
Raemon:
Note: I have not engaged directly with your points since posting a few days ago "this is what I currently understand your point to be. If this is your point, then I am pretty confused about what it is you think we're disagreeing about. I will not be able to usefully engage further until you clarify that." (I don't think you're obligated to have responded, but it is a brute fact about the world that Ray is not able to productively engage with this further until you've done so. We've chatted in private channels about discussing things elsewhere/elsewhen and that is still my preference) https://www.lesserwrong.com/posts/ZdMnP77yEE3wWPoXZ/continuing-the-discussion-thread-from-the-mtg-post/hgfgRFwuGsCpknCSr
Conor Moreton:
Loren ipsum
Unreal:
This seems like a VERY important point to Double Crux on! I'm excited to see it come up. Would love to read about a Double Crux on this point. (Perhaps you two could email back and forth and then compile the resulting text, with some minor edits, and then publish on LW2?) Personally, I agree with Ben Pace, and the fact that it 'might be able to be done right' is not a crux. But I could see changing my mind.
Conor Moreton:
Loren ipsum
Unreal:
I'm still into the idea of reading a transcript after-the-fact. Or at least a summary. Do you believe the situation above RE: the MTG Color Wheel is an example of a time "when you have to take action and can't figure it out yourself"?
Conor Moreton:
Loren ipsum
whpearson:
I think I rate "strength of confidence in a person" low when trying to decide whether to really engage with a model. Other factors like "tractability of a problem area to modelling" or "importance of problem area" are much more important. "Ease of engagement" is probably why I engaged with the mtg post as much as I did, but my low expectation of the problem area's tractability means I probably won't try it out for very long.
Chris_Leong:
Worth considering: http://lesswrong.com/lw/5j/your_price_for_joining/

I guess the question I have is why you consider this issue so important. Your MTG Color Wheel post is currently on 47 upvotes. The community has been very receptive to the vast majority of your ideas, as shown by engagement and upvotes. People have definitely noticed that there is a pattern of you posting quality content. I suppose the point I am making is that there are many people who would be jealous of the status and respect that you have accumulated in the community so quickly, despite the fact that it is well deserved. I don't mean to throw a "you should be feeling good" onto you; I just think that summarising how I (and I suspect other people) understand the situation makes it easier for you to respond.
Conor Moreton:
Loren ipsum

Making a claim like "I claim that a 'true' LWer, upon noticing that they were developing a model of me as being butthurt and complaining, would be surprised" seems like an unfair social move to me. It is generally considered rude to say "actually my model of you is totally compatible with you being butthurt and complaining" or even "I haven't kept track of you enough to have any sort of prior on this and so am going with my observations," so people who believe those things aren't going to comment.

It is also internally consistent that someone might downvote you and have questioned their knee-jerk reaction. My understanding is that a downvote just means "less of this on LW please," and "even though this person is not being whiny they're certainly not taking the steps I would reasonably expect to avoid being mistaken for whiny" is a good reason to downvote. It seems a bit excessive to demand argumentation from everyone who clicks a button.

Conor Moreton:
Loren ipsum

Loren ipsum

WHO SUMMONS THE GR*cough* *wheeze* goddamnit.

Yeah. The thing is, it's waaay less like "magic buttons" that you push to escape the paradigm, and waaay more like trying to defuse a bomb that's strapped to your soulmate's skull, on the back of an off-road vehicle that's driving too fast over rough terrain.

Which isn't to say that it can't be done.

Lemme give an example of a move that *might* work, sometimes:

====
"You're playing status games," says X.

"What? No, I'm not," says Y.

"Yes, you are. You just pulled a lowering-Z's-status move. It was pretty blunt, in fact."

"Wh—ah, oh. Oh. Right, I guess—yeah, I can see how that interpretation makes perfect sense if you're playing status games."

"I'm not talking about whether I'm playing status games. I'm saying you are."

"Uh. I'm not, or at least not in the way you're thinking. Like, I grant that if you put on your status glasses my actions only make sense in terms of trying to put Z down or whatever, but if you put on some other glasses, like your engaging in truthseeking discourse glasses, you'... (read more)

Chris_Leong:
That was broadly my point; the main reason why I didn't say it was because I recognise that some people have unusual preferences that make decisions make sense which would appear irrational from the standpoint of someone assuming normal preferences.

I've got my frustrations with the community too, for example, when I tried to convince people to take hypotheticals seriously. Or when it was clear that the community was in decline, but it was impossible to take action on it. That made me go away for a while and engage less with the community. But I decided to give it another go after doing a lot of debating and just learning a lot more in general, and I've found that I'm now getting better responses. I can now predict the most likely ways that my posts will be misunderstood and throw in the appropriate disclaimers.

There are still lots of ways in which we aren't rational, but that is why we often call ourselves aspiring rationalists and the site Less Wrong. I agree that we still have large flaws in an absolute sense, but I haven't been able to find another site where I can go to have a better discussion. Maybe it's different for you, maybe your time is best spent elsewhere, but your metric does not feel like a very accurate measure of the health of the site. Like, if I'm being really honest, I'm tempted to go and upvote the comment right now just to defuse the situation - but is that what you're trying to measure? The votes on comments are much less reliable than the votes on posts anyway, because many people read the post, browse a few comments, then consider themselves finished with the post and never come back.
Raemon:
Haven't finished reading this yet, but an important point. You wrote: "But note that for it to end up at -3 on net, that means that either a bunch of less-weighted users downvoted it, or at least three heavy hitters, or some combination." I don't think the math there is right (and this is a confusing thing about the site for now, not sure what to do with it) - assuming the comment started at 5, this is one and a half heavy hitters, or 3 random people, which feels pretty different to me. (3 karma power is really easy to get.) And the difference feels fairly significant.
Conor Moreton:
Loren ipsum
Raemon:
True/fair, but I think this is something people are going to intuit wrong a lot in situations that vary on "who was actually doing the downvoting", so I wanted to make sure to note it here.
Conor Moreton:
Loren ipsum
whales:
For what it's worth, I was another (the other?) person who downvoted the comment in question early (having upvoted the post, mostly for explaining an unfamiliar interesting thing clearly). Catching up on all this has been a little odd to me. I'm obviously not a culture lord, but also my vote wasn't about this question of "the bar", except (not that I would naturally frame it this way) perhaps insofar as I read CoolShirtMcPants as doing something similar to what you said you were doing - "here is my considered position on this, I encourage people to try it on and attend to specifically how it might come out as I imply" - and you as creating an impasse instead of recognizing that and trying to draw out more concrete arguments/scenarios/evidence. Or that even if CSMP wasn't intentionally doing that, a "bar" should ask that you treat the comment that way.

On one hand, sure, the situation wasn't quite symmetric. And it was an obvious, generic-seeming objection, surely already considered at least by the author and better expressed in other comments. But on the other hand, it can still be worth saying for the sake of readers or for starting a more substantive conversation; CSMP at least tried to dig a little deeper. And in this kind of blogging I don't usually see one person's (pseudonymously or otherwise) staking out some position as stronger evidence than another's doing so. Neither should really get you further than deciding it's worth thinking about for yourself. This case wasn't an exception.

(I waffled on saying anything at all here because your referendum, if there is one, appears to have grown beyond this, and all this stuff about status seems to me to be a poor framing. But reading votes is a tricky business, so I can at least provide more information.)
tristanm:
I understand that there may be costs to you for continued interaction with the site, and that your primary motivations may have shifted, but I will say that your continued presence may act as a buffer that slows down the formation of an orthodoxy, and therefore you may be providing value by remaining, even if the short-term payoff remains negative for a while.
Screwtape:
Hrm. I would like it if Conor stuck around, since I think the content produced in the last 30 days was enjoyable and helpful to me, but I also think paying costs to slow down the formation of an LW orthodoxy that doesn't align with his goals would be a bad investment of energy. If it was costless or very low cost or if preventing the orthodoxy/causing it to form in a way that aligned with his goals was possible, then it would probably be worth it. I am not in Conor's head, but if I was in their place I wouldn't be convinced to stick around as just a delaying tactic. A much more convincing reason might be to stick around, take notes of who does engage with me the way I wanted to engage with people, and then continue to post here while mostly just paying attention to those people.
Zvi:
I don't think this is how one avoids playing status games. It's not a simple 'ignore status games and get to work.' You don't get to do that. Ever. I know, it sucks, and yes, Brent Dill is laughing somewhere in the background.

You definitely don't get to do that while pointing out that someone should update their reactions to what you're saying based on the fact that you are the one making the statement. I realize that this might be a factually accurate statement that you would make even if no monkey brains were involved, but that doesn't matter. Even more than that, the defense of "I realize this looks like a status move but that is a coincidence that I wasn't paying attention to" is not a defense a community can allow, if it wants to actually avoid status moves. See Just Saying What You Mean Is Literally Impossible. This is not something people can just turn off.

The way you avoid playing status games is that you play the status game of 'keep everything in balance' rather than not thinking about such things at all and asserting that your monkey brain isn't steering your decisions at all. Yes, you really do need to think about who and what is being lowered or raised in status by everything you say, just like all the other implications of everything you say, and then you need to make sure you cancel those effects out. At least when they're big enough.

When having a discussion with someone I want to treat as an equal, who wants to treat me likewise, we both keep careful track of whether someone is gaining or losing status, and do subtle things to fix it if that gets too out of hand. Does that take up a non-trivial amount of our cognitive effort? It can, and yes that sucks, but not paying attention to status games is not a way to not play; it's a way to play without realizing what you're doing. Does it mean occasionally sitting there and taking it when your status is getting lowered, even if you don't think the move in question is 'fair', in order to maintain balance? Ye

My current state is of being very curious to learn why Conor believes that this is one of the most important variables on LW. It's something that to me feels like a dial (how much status-dialogue is assumed in the comments) that improves discourse but is not necessary for it, while I think Conor thinks it's necessary for us to be able to actually win. This is surprising, and Conor believes things for reasons, so I will ask him to share his information with me (time permitting, we're currently both somewhat busy and on opposite continents).

Conor Moreton:
Loren ipsum

I have a strong, and possibly scary claim to make.

Social reality is *important*. Moreso, it *has gears*.

No, that's not a strong enough phrasing.

Social reality has *physics*.

It is very hard for humans to understand them, since we exist at or near its metaphorical Planck scale. But, there are actual, discernible principles at work. This is why I use terms like "incentive slope" or "status gradient" - I'm trying to get people to see the socio-cultural order as a structure that can be manipulated. I'm trying to get people to see White with Blue's eyes.

You have goals. You have VERY ADMIRABLE GOALS. But even if I disagreed adamantly with your goals, they're your *goals*. They're your values. I can notice that I vehemently disagree with them, and declare war on you, or I can notice that I adamantly agree with them, and offer alliance. (I think you've noticed which side of that I wound up falling on.)

That said, you also have claims about what procedures and heuristics achieve your goals and maximize your values. Those CANNOT, themselves, be values. They are how your values interface with reality, and reality has a physics. It is actu... (read more)

Conor Moreton:
Loren ipsum

(One of two posts, this one attempting to just focus on saying things that I'm pretty confident I'd endorse on reflection)

I think this is a noteworthy moment of "Double Crux is really the thing we need here", because I think people are holding very different Cruxes as the thing that matters, and we either need to find the Common Crux or identify multiple Cruxes at the same time for anything good to happen.

Conor's Crux as I understand it - The LessWrong movement will fail if it does not expect people to invest effort to double-check their assumptions and check rationality in the moment.

(This seems totally, 100% true to me. I can't say how Zvi, Ben, or whoever else feels, but I'd be willing to bet they basically agree, and are not arguing with you because of disagreement on that.)

Zvi's Crux as I understand it - The manner in which people give each other feedback is going to get filtered through some kind of status game; the only question is which one, and how we implement it in a way that ends up in the service of truth. And that Conor's implementation is not currently doing a good enough job to win the game (either here or elsewhere).

Ben's Cru... (read more)

Raemon:
Upon reflection, I can't think of anything further I can say that isn't dependent on first having heard Conor make an argument that assumes the listener is 100% on board with the claim that we should expect people to do-rationality-in-the-moment, and that whatever disagreement is going on is going on despite that. (it may turn out other people don't share that assumption, just noting that I personally will not be able to contribute usefully until such a point)
Ben Pace:
(note for until I return: this is a virtuous comment and I'm really happy you wrote it. Also, this is no longer my crux at all, although I still think social aumanning is mostly not good epistemology)
Conor Moreton:
Loren ipsum
ESRogs:
I think this is frequently the tone of Zvi's writing. So, for what it's worth, he's not being any more lecture-y towards you than normal. ;-)
tristanm:
I think avoiding status games is sort of like trying to reach probabilities of zero or one: technically impossible, but you can get arbitrarily close, to the point where the weight that status shifts are assigned within everyone's decision-making becomes almost non-measurable.

I'm also not sure I would define "not playing the game" as making sure that everyone's relative status within a group stays the same. This is simply a different status game, just with different objectives. It seems to me that what you suggest doing would simply open up a Pandora's Box of undesirable epistemic issues.

Personally, I want the people who consistently produce good ideas and articulate them well to have high status. And if they are doing it better than me, then I want them to have higher status than myself. I want higher status for myself too, naturally, but I channel that desire into practicing and maintaining the characteristics that I believe aid the goals of the community. My goal is almost never to preserve egalitarian reputation at the expense of other goals, even among people I respect, since I fear that trying to elevate that goal to a high priority carries the risk of signal-boosting poor ideas and filtering out good ones.

Maybe that's not what you're actually suggesting needs to be done; maybe your definition doesn't include things like reputation, but does consider status in the sense of who gets to be socially dominant. I think my crux is that it's less important to make sure that "mutual respect" and "consider equal in status, to whatever extent status actually means anything" mean the same thing, and more important that the "market" of ideas generated by open discourse maintains a reasonable distribution of reputation.
habryka:
I basically agree with this.
Conor Moreton:
Loren ipsum
Ben Pace:
Your linking this here is quite against the spirit of that post's hashtag ;-)
habryka:
Like, if the LessWrong 2.0 meta section is not the correct place to discuss what belongs and doesn't belong on LessWrong, then I don't know what to do anymore.
Conor Moreton:
Loren ipsum
ozymandias:
This comment confuses me. (For clarity's sake, that is the bio of my tumblr.)
habryka:
I read it as just a humorous, basically contentless, statement that Conor was amused by your bio.
the gears to ascension:
I don't understand how to navigate tumblr; is there more than one post there?
habryka:
Ah, no, changed "discussion" to "commentary". Just that one thing. But it was created right after the discussion on the MTG thread, so I figured it was commenting on the same thing.
ozymandias:
That prompted my complaint but was not the sole thing I was complaining about; it's happened on multiple posts I thought were interesting. (Also, gah, getting my tumblr posts linked off-tumblr is weird.)
habryka:
Let me know if you don't want me to link your tumblr posts here, if anything else shows up. Trying to keep the spheres separate seems like a reasonable preference to me.

Also, someone seems to have deleted my comment on the old post and not copied it here.

habryka:
Hmm, I can't find any deleted comments by you. What did the comment start with?
Chris_Leong:
Nm, wasn't that important...

Upvote/downvote this comment if you wanted to downvote Connor's original long comment (mentioned at the top of this post)

gjm:

At present, the parent comment to this one says: "Upvote/downvote this comment if you wanted to downvote Connor's [sic] original long comment (mentioned at the top of this post)". It is ... not perfectly clear to me what this means. Upvote this if you wanted to downvote Conor's comment? Vote on this the same way as you wanted to on Conor's comment? Something else? I'm guessing that the latter might be the intention, but right now it kinda says the opposite.