Is there a concrete proposal or problem statement we can examine? It feels like the generalities have been pretty well discussed, and we're at the point where details and implementation are probably more useful as a predictor of impact than the high-level desiderata. Or give us a more concrete metric, so we can discuss the ranges we want to see and the Goodhart risks.
A few clarifying questions for the mods (or others tempted to declare any document "official"):
- Are the problem cases not being downvoted enough? Why is this orthogonal to voting?
- One possible answer (a): harm is done before the votes start to come in.
- Another possible answer (b): users are voting wrong, or on dimensions that don't enforce these norms.
Approximately (a) feels right to me. I dislike wasting people's time viewing and downvoting something if I can confidently predict the outcome. That's for posts. For comments, there's always a temptation to look at the downvoted ones to see what's happening. It's also a matter of volume. If 2-3% of posts get downvoted, that seems fine. If it's 20% or more, that's a lot of pollution and I'd rather have an earlier stage of filtering.
- Is this normative or positive? Are we saying that we want posts/comments to be this way and we will ... do something new to enforce that? Or just clarifying and describing what gets moderated already.
I'm not familiar with that distinction. But it's both in my thinking currently. Will have a post up soon explaining how I'm seeing it.
- Do you think the number of posts/comments is too high, too low, or just about right?
- Same for the distribution of posts/comments per user.
All else equal, more is better. I think more in terms of the signal-to-noise ratio and keeping that good.
- Do you think posting/commenting is a skill or a talent?
Both.
- Does the median (or p25) user have any avenues for learning how to make good posts/comments other than trying it and getting feedback?
Yes. First, reading other people's stuff should be instructive. Also, I think drafting things without posting them would help you improve, just by applying your own taste.
- Is there a post-quality-level that doesn't improve the site, but DOES improve the poster's skill/knowledge enough to be able to improve the site over time?
Yes, and sometimes we moderators will approve content for that reason.
- Is the problem mostly about Said/Duncan disagreements among long-time posters, or about newbies and noise? Or both (and if both, why do you think the same solution applies to both problems)?
Said/Duncan disagreements feel rarer to me and less of a problem in a direct way. However, that conflict concerned norm enforcement, so clarifying norms helps with the noise and also clarifies things in the context of accusations/claims arising from a conflict like that.
This post is an argument that the answer to its title question is "no", right? I agree.
Also from the Twelve Virtues:
> Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it. You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory.” But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
I think the last three months are a pretty definitive demonstration that talking about "norms" is toxic and we should almost never do it. I'm not interested, at all, in "norms." (The two posts I wrote about them were "defensive" in nature, arguing that one proposed norm was bad as stated, and expressing skepticism about the project of norms lists.)
I'm interested in probability theory, decision theory, psychology, math, and AI. Let's talk about those things, not "norms." If anyone dislikes a comment about probability theory, decision theory, psychology, math, or AI, you can just downvote it and move on with your day! I think that will make everyone much happier than any more time devoted to prosecuting or defending against claims of violations of supposed "norms"!
I also think it makes sense to have a pretty strong bias against talking about what the "norms" of a space are, instead of asking about what thing is true, or what thing is optimal under various game-theoretic considerations.
That said, there is definitely a real thing that the "norms" of a space are talking about. Different spaces share different assumptions. There is value in coordinating on shared meaning of words and shared meaning of gestures of social punishment and reward. It seems quite important to help people orient around how people in a space communicate.
When Ruby asked me for feedback on this stuff yesterday, the thing that I said was something like: "There is clearly an art of discourse that LessWrong as a collective should aim to get better at. I think a core part of what is important to communicate to new users are the lessons of the art of discourse that LessWrong (and the LessWrong moderators) have figured out so far. But the ultimate attitude towards the art of discourse should be one of looking out together at reality, not the existing userbase telling other people confidently what the true art of discourse is."
Of course, similarly to how I don't think it makes sense to relitigate whether the Christian god exists on this website, I also think there are certain aspects of the art of discourse that I would like to mostly assume as true, and put the onus on the individual to overcome a pretty high burden of proof before we go into discussions on that topic. Two things I feel pretty confident in here are:
And many others that don't seem super worth going into right now.
Separately from this, it is actually helpful for moderators to set concrete and specific rules about behavior, when possible. Moderators need to be able to concretely limit, incentivize and disincentivize behavior based on their current best model of the art of discourse (which sometimes will be wrong, and they will incentivize things wrongly, and that's the cost of doing business, though hopefully we can notice when they are wrong and they can correct their models over time).
Sometimes the only way to define a rule is to talk about fuzzy lines, and this will sometimes require talking about what kinds of intentions are commonly associated with bad outcomes, or other correlations that are signs of harm, without being able to point to the harm itself directly (in the same way that my best model of Magnus Carlsen is just to say that he is aiming to win a chess game, it should not be surprising that many social situations also only really have a short description in the space of intentions). It makes sense to be consistent with these rules, and they do form a canon that seems important to communicate to people, though where possible it should be made clear how they derive from the moderators' model of the art of discourse.
This does result in something kind of similar to a set of norms, though this framing on it at least feels more grounded, and tries to communicate more clearly that there is a ground truth here, and hopefully some way to have productive conversations about whether any given rule and incentive structure will help or hurt.
I'm afraid I don't have the time for a full writeup, but the Stack Exchange community went through a similar problem: should the site have a place to discuss the site? Jeff Atwood, cofounder, said [no](https://blog.codinghorror.com/meta-is-murder/) initially, but the community wanted a site-to-discuss-the-site so badly, they considered even a lowly phpBB instance. Atwood eventually [realized he was wrong](https://blog.codinghorror.com/listen-to-your-community-but-dont-let-them-tell-you-what-to-do/) and endorsed the concept of Meta StackExchange.
I agree with basically everything you said. My main worries for LW are insularity and new people having lower standards. (These worries are anticorrelated, and I'm not sure if this itself is a third problem or if it means they're actually only one problem.) I think it's reasonable to try to attract new people but then throw them at material that tries to inculcate high epistemic standards. Or maybe it's even reasonable to try to design the mechanisms and incentives of LW to get the results you want.
In my experience, a list of explicit norms isn't usually that helpful to users themselves. But it can be helpful if moderators need to take a lot of moderation actions and want to stay on the same page with each other and with the users. But hopefully we can avoid that being necessary for another order of magnitude?
I would prefer to call it guidelines, and to generally frame the thing not as "you must follow this, or you will get banned" but rather "we have a community of polite and productive discourse (though we are not perfect), and following these guidelines will probably help you fit in nicely".
We already have some informal norms. Making them explicit is potentially useful. Not just for us, but maybe for someone who would like to replicate the quality of "being much better than internet's average" on some website unrelated to rationality or AI.
On the other hand, sometimes the norm proposals get so complicated and abstract that I do not really believe I would be capable of following (or even remembering) them in everyday life. Like, maybe it's me being dumb or posting too late at night, but sometimes the debates get so meta that I do not even understand what both sides are saying, so it is scary to imagine that some of that gets codified as an official norm to follow.
As you say, we already have informal norms. And those norms determine what gets upvoted/downvoted, and also what moderators may take action on. To the extent those norms already exist and are being acted on, it seems pretty good to me to try to express them explicitly.
I think the challenge might be accurately communicating what enforcement of the norms looks like, so people aren't afraid of the wrong thing. I can see us not warning people enough (e.g. if we lied and said there's no possibility of ever being banned), or warning them too much, so they think we scrutinize every comment.
Seems hard, because I want to say "yes, if you fail at too many of these, we will give you a warning, and then a rate limit, and eventually ban you." That's a necessary part of maintaining a garden, but we also want people not to get too afraid.
Also, we currently plan to experiment with "automoderation" where, for example, users with negative karma get rate-limited, and it seems good to be able to automatically message them and say "very likely you're getting downvoted for doing something on <list> wrong".
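To make that concrete, here is a minimal sketch of what one such automoderation rule could look like. The karma threshold, rate limit, function names, and message text are all hypothetical placeholders for illustration, not the actual implementation:

```python
# Hypothetical sketch of an automoderation rule; not LessWrong's actual code.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class UserActivity:
    username: str
    recent_net_karma: int  # net karma on recent contributions (assumed metric)


# Placeholder values; real thresholds would be tuned by the mod team.
KARMA_THRESHOLD = 0
LIMITED_COMMENTS_PER_DAY = 1

AUTOMOD_MESSAGE = (
    "You're being rate-limited because your recent contributions have net "
    "negative karma. Very likely you're getting downvoted for doing "
    "something on <list> wrong."
)


def automoderate(user: UserActivity) -> Tuple[Optional[int], Optional[str]]:
    """Return (daily comment limit, explanatory message), or (None, None) if no action."""
    if user.recent_net_karma < KARMA_THRESHOLD:
        return LIMITED_COMMENTS_PER_DAY, AUTOMOD_MESSAGE
    return None, None
```

Run on each new contribution, something like this would attach the rate limit and send the canned explanation automatically, without a moderator having to write it each time.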
> norm proposals get so complicated and abstract
Yeah, that does seem like a good goal. Under my current thinking, what gets upheld by the moderators is our understanding of what good discourse looks like, and the list is trying to gesture at that. And then maybe it is challenging because my models of good discourse will have pieces that are pretty meta? I'm not sure; I'll see what comes up when I try to write more things out.
Upvoted, and I think I'd probably answer "yes" if you rewrote the title to be active (specifying WHO wrote the document, not just that it came into being), and used a different word than "norm", perhaps "site discussion preferences".
Use of the term "norm" is BOTH too jargony for new users AND too fuzzy for what you want to convey. Norms are socially enforced, without a visible authority structure, usually very unevenly and opaquely. Norms evolve within a group, rather than being written down and durable.
When these expectations are legible, and come from an authority, we call them "rules" or at least "guidelines". When they're explicitly not seriously enforced, "desiderata" or "preferences" are closer.
Declaring a "true underlying commitment" to cut the enemy with every move is not the same as actually doing it, and could be counterproductive, reducing pressure from particular moves to be effective.
If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.
How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception.
Noting that I don't think pursuing truth in general should be the main goal: some truths matter way, way more to me than other truths, and I think that prioritization often gets lost when people focus on "truth" as the end goal rather than e.g. "make the world better" or "AI goes well." I'd be happy with something like "figuring out what's true specifically about AI safety and related topics" as a totally fine instrumental goal to enshrine, but "figure out what's true in general about anything" seems likely to me to be wasteful, distracting, and in some cases counterproductive.
I think the more precise thing LW was founded for was less plainly "truth" and more "shaping your cognition so that you more reliably attain truth", and even if you specifically care about Truths About X, it makes more sense to study the general Art of Believing True Things rather than the Art of Believing True Things About X.
> the Way of the Void
This, TBH. Maybe also the Litany of Tarski points at the same thing. I feel like that's the wording that left the deepest impression on me, at least on the epistemic side. "Rationalists should win," I think, did it for me on the instrumental side, although I'm afraid that one is especially prone to misinterpretation as tribalism, rather than as the Void of decision theory as originally intended.
I would be very worried about the effects of enshrining norms in a list. Like, we have implicit norms anyway. It's not like we can choose not to have them, but trying to cement them might easily get them wrong and make it harder to evolve them as our collective knowledge improves. I can perhaps see the desire to protect our culture from the influx of new users in this way, but I think there are probably better approaches.
Like maybe we could call them "training wheels" or "beginner suggestions" instead of "norms".
I also like the idea of techniques of discourse engaged in by mutual consent. We don't always have to use the same mode. Examples are things like Crocker's Rules, Double Crux, Prediction Markets, Bets, Street Epistemology, and (I suppose) the traditional debate format. Maybe you can think of others. I think it would be more productive to explore and teach techniques like these rather than picking any one style as "normal". We'd use the most appropriate tool for the job at hand.
I think a different perspective is “should we reify the judgments of popular, high-status, and locally powerful LessWrong users about virtues and vices as site norms, and provide a user guide as to how to conform to that vision of virtue?”
My first ever comment on here was dumb and got downvoted. I self corrected, bumbled along, and eventually sort of found a way to fit in that suits me OK.
I think local social pressure helped me to do that, and if you don’t respect the extant community enough to think you have a lot to learn from them, then what are you doing hanging out with them?
But I think it’s, I don’t know, a sense of fun and excitement and usefulness and novelty that makes a user see the sense of rationalist virtue and want to fit in harmoniously in the ecosystem.
So any such site norms, I think, need to be about keeping it fun and interesting for old and new users both!
Sometimes it feels like all the virtue and site norm posts are a bit stodgy/threatening/frustrated-sounding, although to be fair some of that attitude is warranted.
But on the whole I guess I would hope any such site norms would be positive and lighthearted and inspiring.
Like, I would love to see things like a list of posts and comments and dialogs that people think exemplify the True Spirit of LessWrong, and why they like them. Concrete examples of LessWrong at its best would be interesting in their own right, and I think they do more to help set people in the right direction.
> So any such site norms, I think, need to be about keeping it fun and interesting for old and new users both!
> Sometimes it feels like all the virtue and site norm posts are a bit stodgy/threatening/frustrated-sounding, although to be fair some of that attitude is warranted.
+1. The thing I miss most about when EY stopped posting on this site is that his nonfiction writing is orders of magnitude more enjoyable than that of almost everyone else.
Nowadays I often dread the frequent walls of text in posts and comments, which are presumably written from their authors' commendable desire to express themselves as accurately as possible. I wish more of them also had a commensurate desire (and skill) to delight their readers.
On reflection, I think there is value in BOTH publishing official rules/guidelines that describe moderator actions and the rationale behind when they'll be used, AND unofficial "norms", which are predictions of what is likely to be upvoted and get good engagement, and what is likely to be downvoted and not generate useful discussion.
The key is that official things are NORMATIVE - they include a top-down demand and specify enforcement actions. It doesn't need to be (and IMO shouldn't be) specific at the algorithmic level - there's still space for human judgement, but those humans are identified as holding special roles within the site, not the general populace. Norms are NOT official, and any description of them is POSITIVE - it's a prediction of what unofficial crowd reactions will be.
Norms tend to be FAR more contextual and uneven than rules, even fairly loose rules. Norms are far less legible, as there's no authority to keep them consistent or understandable. Norms are generally less egalitarian than rules, as crowds tend to weight popularity and individual fame more highly than rules do.
Mostly, I think it's great to give newbies more advice, and it's great to give mods an easier job, but both of those jobs become much harder if you don't acknowledge that there are multiple different kinds of evaluation which are applied to posts and comments.
Very insightful, but something sets me on edge about truthseeking being compared, by central example, to trying to hurt other humans. My intuition is that this will leak unhealthy metaphor, but I also don't explicitly see how it would do so and therefore can't currently give more detail. (This may have something to do with my waking up with a headache.)
I suppose there are a lot more Void metaphors in the Tao Te Ching that we could borrow instead, although maybe not all of them are as apt. Yudkowsky likened rationality to a martial art in the Sequences. It's along the same theme as the rest of that. Martial arts are centered around fighting, which can involve hurting other humans, but more as a pragmatic means to an end rather than, say, torture.
I tried experimenting with framing my comments in terms of norms or norm design recently, and the only things I found relevant with any consistency are Scott's asymmetric weapons, and incentives that make them more applicable (though it's a tricky point; I made the mistake of over-applying it only 3 days ago).
I think most good discourse norms fail to be relevant to rationality in particular, so it's important to avoid any association between them and rationality.
> Similarly, a rationalist isn't just somebody who respects the Truth.
> All too many people respect the Truth.
> A rationalist is somebody who respects the processes of finding truth.
To start making lists, there need to already be enough things that could become norms and are clearly relevant to rationality, in its non-inflated sense. Norms are ornery beasts; they require great robustness in their targeting, such that there is little damage when they start trampling all over nuance, and you can't swap them out once they are in place.
Hello Ruby,
Massive Edit: I made this comment into a post, and I didn't want to keep the comment here in its entirety.
I still believe the essence of the comment is relevant, so I'm replacing my original comment with these two sentences from my post:
I'm writing this because I do not believe fixing peripheral things on LW is enough.
LW stands at a crossroads. Ahead lies clarification of essence, identity and focus.
Kindly,
Caerulea-Lawrence
It might be useful to encourage more comments along the lines of "I'm upvoting/downvoting this because of X" and have people vote agree/disagree to evolve our shared norms.
Explanations are useful, especially to new members. But sometimes I just don't know what to say... something rubs me the wrong way, but the inferential distance may be too large to explain why.
An example: here is a new user's first post. I feel like it is obviously bad, but... if you can briefly and clearly explain why, please go ahead and do it, because I can't.
In my mind, the entire proposal translates as "we should solve the coordination problem by coordinating to solve the problem", which is like "duh, if we were able to do that, we wouldn't have this problem in the first place". It feels internally incoherent, like saying "we should treat information with military-grade security" but also "people should be incentivized to provide this information by getting access to the information provided by others", which again is "this is not how military-grade security works", plus the obvious ways to hack such an incentive system, like "provide lots of mostly useless information". But when I write it this way, it feels like a strawman, and maybe it is, I am not sure.
I try to provide verbal feedback, but sometimes it is too difficult. And also, I do not really want to spend 30 minutes thinking about an optimal explanation for downvoting an article which already has negative karma. But also I am aware that the new user who makes the first post and gets downvoted without explanation is probably curious why. :(
EDIT:
Probably some meta-advice for new users: do not write articles that are too long; if your argument can be split into multiple steps, post them separately. Then you are more likely to get useful feedback.
Possible moderation policy: new users should have a length limit on their articles, with an explanation of why we want them to split complex arguments into multiple steps.
Though, reading that linked article again... actually that one sounds like one step, so this probably wouldn't help. Ok, I'm giving up.
My post wasn't about providing verbal feedback to the author. It was about writing comments that help create shared norms about what should be downvoted.
"I'm downvoting this comment because it argues against a strawman" is a way to promote the norm of voting down strawmen. Building a shared understanding about what sort of writing should be downvoted because it violates that norm is useful.
Wait. Are you thinking that this is necessary to ENCOURAGE more downvotes, or to EXPLAIN to newbies (or help them predict; same thing) why they get downvoted?
These are different outcomes, and likely need different solutions.
Neither. If you look at the Said/Duncan conflict, different established users have different beliefs about what the norms should be. Writing long posts about norms is one way to have a discussion that develops shared understanding about norms. Being explicit about why one casts votes in individual cases is another way to develop a shared understanding about norms.
Now I'm feeling bait-and-switched. The first benefit listed in the post is "It's a great onboarding tool for new users to help them understand the site's expectations and what sets it apart from other forums", and many of the comments talk about new users. That's a TOTALLY different issue than the Said/Duncan posting styles, which is going to take a nuanced and judgement-filled moderation/voting system, not a one-size-fits-all official guideline.
That's the first benefit listed, but the second is:
> It provided a recognized standard that both moderators and other users can point to and uphold, e.g. by pointing out instances where someone is failing to live up to one of the norms
Instead of having a fixed standard to point to, I think it's better to naturally evolve norms and do that by people being explicit about their views when they vote.
I agree it often feels hard to point out why things aren't good, even when they clearly aren't. My experience is that I've gotten better at this with practice, and the mod team has been collecting typical reasons and explanations for why things aren't good. I think we'll share these soon (and you'll see them applied), which might help other people articulate why things aren't good as well.
Wary of this line of thinking, but I'll concede that it's a lot easier to moderate when there's something written to point to for expected conduct. Seconding the other commenters that if it's official policy then it's more correctly dubbed guidelines rather than norms.
I'm struck by the lack of any principled center or Schelling point for balancing [ability to think and speak freely as the mood takes you] against any of the thousand and one often conflicting needs for what makes a space nice/useful/safe/productive/etcetera. It seems like anyone with moderating experience ends up with some idea of a workable place to draw those lines, but it rarely seems like two people end up with exactly the same idea, and articulating it is fraught. This would really benefit from some additional thought and better framing, and it is pretty central to what this forum is about (namely building effective communities around these ideas) rather than purely a moderation question.
To get this written and shared quickly, I haven't polished it much and the English/explanation is a little rough. Seemed like the right tradeoff though.
Recently, a few users have written their sense of norms for rationalist discourse, i.e. Basics of Rationalist Discourse and Elements of Rationalist Discourse. There've been a few calls to adopt something like these as site norms for LessWrong.
Doing so seems like it'd provide at least the following benefits:
My current feeling is creating some lists as an onboarding tool seems good, but doing anything like declaring a list of Site Norms is fraught.
The True Norm of LessWrong is that with each motion, you should aim towards truth. I think it's actually worth quoting the entire 12th virtue here (emphasis added).
If we were to declare site norms, I'd want to do it in a way that made it very clear to new users and everyone else that our true underlying commitment was to truth and good decisions, not to a particular list of good things to do that we'd written up.
I'd also want to have a process that caused the list to get reviewed periodically and updated as arguments and evidence came in. Though that might be challenging, and I'd worry about it getting stuck in place, because the norms people operate on are the ones they think other people agree with, and it's hard to get common knowledge after the first announcement.
Supposing, though, that it's clear the list is just a surface-level manifestation of the underlying goal, and that you also generate a really good list. I still think there are some further ways things go wrong:
I think if there's a list of Site Norms and we tell users that these are the criteria their contributions are judged on, we'll get some "Goodharting" rather than the true underlying motion towards truth. Maybe this is better than no concrete instruction? I wouldn't want to do it wrong.
Relatedly, if the Site Norms get invoked in moderation, I'd worry about people getting too fixated on them, starting to rules-lawyer, etc. One user accuses another of not following X, Y, Z norms, moderators have to weigh in and figure out if that's true, etc., and observers get roped into adjudication of "was that really a strawman?" or whatever.
And truth maximization probably doesn't look like norm-violation-minimization. Optimizing hard for a list of Site Norms will likely just get in the way of productive Babble and focusing on cutting the enemy. Sometimes people who say new, interesting stuff break the "rules" and say some dumb stuff too. In other words, users should be backchaining from whether discussion seems to be making progress on figuring stuff out, not on compliance. If new users show up and mods and other users start pointing to the list of norms, I think that's what new users (and older users) will start to conform to, and lose sight of what matters.
Another thought here is that while a list of written site norms would have the nice property of you can get clearer common knowledge and explicit buy-in for them, they have the disadvantage of being static and simplified/compressed relative to more organic norm enforcement.
Right now, there's a set of implicit norms enforced by the active LessWrong userbase. Each user has their own sense of what's good and bad, pointed at approximately the same values with a large degree of overlap (though not perfect), and when you post or comment, the people who view it will respond based on their sense of it. Individuals' personal sense of what's good (which hopefully is defined as approximately "conducive to truth") is probably a more complicated, nuanced function than a written list of norms would be. So when multiple members of the LessWrong population view your content and judge it by their own lights, it gets assessed by something more nuanced and dynamic (in that users (and the site as a whole) can develop their sense of what's good over time[2]).
If we get too anchored on a list of explicit site norms, the site's judgment gets channeled via this more compressed thing, and also via the LessWrong team's judgment in finalizing it, and their judgment yet again in enforcing it and deciding on interpretation. This would be good if I thought the LessWrong team's judgment were better than the broader population's in aggregate, but I currently don't think that's true, and I am wary of policing the site with much more reliance on our judgment than we currently do.
Currently we do take a lot of moderation action, but almost always on users who've been downvoted a bunch (thus indicating the judgment of the LessWrong population) or users who we're quite confident would get downvoted if we let them post. There's not zero of our judgment in there, but signals from other people are a big part of it.
Those are some arguments and considerations. I think it'd be good to have some kind of list if it's properly disclaimed: a "List of things commonly considered good for truthseeking discourse" that's more of an onboarding tool than something people get called out for violating. If we can pull that off. Not sure. My top goal here is to get feedback from others on my thinking about this.
Feedback appreciated.
It was useful for me for Duncan to call out "Maintain at least two hypotheses consistent with the available information", as I think historically I've failed to do that.
I suppose dynamic and changing is good if you think people's judgment gets better over time, and something static is better if you're worried about drift.