I would like to propose two other guidelines:
Be aware of asymmetric discourse situations.
A discourse is asymmetric if one side can't speak freely, because of taboos or other social pressures. If you find yourself arguing for X, ask yourself whether arguing for not-X is costly in some way. If so, don't take weak or absent counterarguments as substantial evidence in your favor. Often simply having a minority opinion makes it difficult to speak up, so defending a majority opinion is already some sign that you might be in an asymmetric discourse situation. The presence of such an asymmetry also means that the available evidence is biased in one direction, since the arguments of the other side are expressed less often.
Always treat hypotheses as having truth values, never as having moral values.
If someone makes [what you perceive as] an offensive hypothesis, remember that the most that can be wrong with that hypothesis is that it is false or disfavored by the evidence. Never is a hypothesis by itself morally wrong. Acts and intentions can be immoral; hypotheses are neither of those. If you strongly suspect that someone has some particular intention with stating a hypothesis, then be honest and say so explicitly.
The latter guideline was inspired by quotes from Ronny Fernandez and Arturo Macias. Fernandez:
No thought should be heretical. Making thoughts heretical is almost never worth it, and the temptation to do so is so strong, that I endorse the strict rule "no person or ideology should ever bid for making any kind of thought heretical".
So next time some public figure gets outed as a considerer of heretical thoughts, as will surely happen, know that I am already against all calls to punish them for it, even if I am not brave enough to publicly stand up for them at the time.
(He adds some minor caveats.)
Macias:
The separation between value and fact, between "will" and "representation" is one of the most essential epistemological facts. Reality is what it is, and our assessment of it does not alter it. Statements of fact have truth value, not moral value. No descriptive belief can ever be "good" or "bad." (...) no one can be morally judged for their sincere opinions about this part of reality. Or rather, of course one must morally judge and roundly condemn anyone who alters their descriptive beliefs about reality for political convenience. This is exactly what is called “motivated thought”.
Don't jump to conclusions—maintain at least two hypotheses consistent with the available information.
... or be ready and willing to generate a real alternative to your main hypothesis, if asked or if it seems like it would help another user.
Most of these seem straightforwardly correct to me. But I think that, of the 10 things in this list, this is the one I'd be most hesitant to present as a discourse norm, and the one I'd be most worried about doing damage if it were one. The problem with it is that it takes an epistemic norm and translates it into a discourse norm, in a way that accidentally sets up an assumption that the participants in a conversation are roughly matched in their knowledge of the subject. Whereas in my experience, it's fairly common to have conversations where one person has an enormous amount of unshared history with the question at hand. In the best-case scenario, where this is highly legible, the conversation might go something like:
A: I think [proposition P] because of [argument A]
B: [A] is wrong; I previously wrote a long thing about it. [Link]
In which case B isn't currently maintaining two hypotheses, and is firmly set in a conclusion, but there's enough of a legible history...
I feel uncomfortable with this post's framing. It feels like someone went into a garden I spend my time in and unilaterally put up a sign with a list of guidelines people should follow in the garden, with no ability to enforce these. I know that I can choose on my own whether or not to follow these guidelines, based on whether I think they are good ideas, but newcomers to the garden will see the sign and assume they have to follow them. I would have vastly preferred that the sign instead say "I personally think these norms would be neat, here's why."
(to clarify: the garden = lesswrong/the rationalist community. the sign = this post)
I mean, I have a deep and complicated view, and this is a deep and complicated view, and compressing down the combination of those into "agree" or "disagree" seems like it loses most of the detail. For someone coming to LW with very little context, this seems like a fine introduction to me. It generally seems like straightforward corollaries from "the map is not the territory".
Does it seem to me to match the generators of why I write what I write / how I understand what I read? Well... not that closely, with the understanding that introspection is weak and often things can be seen more clearly from the outside. I'll also note as an example that I did not sign on to Sabien's Sins when it was originally posted.
Some specific comments:
I have a mixed view of 3 and 4, in that I think there's a precision/cost tradeoff with being explicit or operationalizing beliefs. That said, I think the typical reader would benefit from moving in that direction, especially when writing online.
I think 5 is a fine description of 'rationalist discourse between allies'. I think the caveat (i.e. the first sentence of the longer explanation) is important enough that it probably should have made it into the guidelin...
[Mod] I think they're nice principles to aspire to and I appreciate it when people follow them. But I wouldn't want to make them into rules of what LW comments should be like, if that's what you mean.
I'd probably agree with it in some contexts, but not in general. E.g. this article has some nice examples of situations where "do the effortful thing or do nothing at all" is a bad rule:
...Different people have different levels of social skills. In particular, different levels of fluency or dexterity at getting people to satisfy their wants. (And of course, your dexterity varies based on context.) I think of these in four stages.
Stage 1: Paralysis.
You don't dare make the request. Or you've gotten to the point where you need the thing so badly that you're too overwhelmed to communicate clearly at all. You may not even be consciously aware that you need the thing, you're just suffering for the lack of it.
Stage 2: Rude request.
You make it clear that you want something, but you express it inappropriately. You come across as boorish, pushy, childish, or desperate.
Stage 3: Polite request.
You express your desire calmly, pleasantly, and in an appropriate context. You come across as reasonable and respectful.
Stage 4: Automatic abundance.
You don't even have to ask. Either through luck, planning, subtly guiding the social situation, or very high status, you automatically get what you desire with
Right, that was my impression. The reason I asked was that regardless of how much we all agree about any given set of guidelines (such as those described in the OP), it’s all for naught if we disagree about what is to be done when the guidelines aren’t followed. (Indeed that seems to me to be the most important disagreement of this whole topic.)
(This comment is just notes that I took while I was reading, and not some particular single thought I put together after reflecting on the whole post.)
I'm so honored to be on your list of "unusually good rationalist communicators". I really want to see your description of what each of us does 2-10x more than random LW users. Not just because I want you to talk about me; mostly I imagine it would be really educational for me to read some of these people's writings with your perceptions in mind, especially if I first read an excerpt from each and try to write down for myself what they are doing unusually well. I certainly think my own writing is a lot stronger on some of your discourse norms than on others.
>Some ways you might feel when you're about to break the Nth Guideline:
<3<3<3 that you included these
Question about Guideline 4: Where do you think my tendency (or Anna's tendency, or Renshin's tendency) to communicate in the form of interpretive poetry instead of unambiguous propositions falls with respect to Guideline 4? Or, more precisely: What thoughts do you have when you hold "interpretive poetry that results from attempts to express intuitions" up next to "m...
I would also love a more personalized/detailed description of how I made this list, and what I do poorly.
I think I have imposter syndrome here. My top guess is that I do actually have some skill in communication/discourse, but my identity/inside view really wants to reject this possibility. I think this is because I (correctly) think of myself as very bad at some of the subskills related to passing people's ITTs.
Duncan has just replied on Facebook to my request for descriptions of what each person on his list does. He says it's fine to copy his reply over here.
*
Julia Galef: something like, a science reporter whose hobby side project is doing science reporting for middle schoolers, and it’s in fact good and engaging and not stupid and boring. Wholesomeness, clarity, a tendency to correctly predict which parts of the explanation will break down for the audience and a corresponding slow and careful focus on those sections. Not unrelatedly: a sort of statesmanlike, reliable diplomacy; genuinely civilized debate; not the sort of person who will ever ever ever contribute to a discussion going off the rails. Grounding, stabilizing, sane-itizing.
Anna Salamon: something like how modern AI art programs can take a verbal prompt and spit out endless variations of image, and can take an image and spit out endless variations of interpretation and description. An ability to do co-Focusing, to find matches for felt senses, to find MISmatches between a felt sense and the preexisting model, and zero in on the delta, and rapidly put words to the delta, ad infinitum. An ability to make the proposition “some t...
I think the short statement would be a lot weaker (and better IMO) if "inability" were replaced with "inability or unwillingness". "Inability" is implying a hierarchy where falsifiable statements are better than the poetry, since the only reason why you would resort to poetry is if you are unable to turn it into falsifiable statements.
Curated.
I like this post a lot, and think there's a decent chance I end up using it as a reference. I saw an earlier draft of this by accident a year ago, and think this version is a significant improvement, has most of the correct caveats, etc.
I'm not endorsing them as "the norms of LessWrong". I haven't read through each expansion in full detail, and perhaps more importantly, haven't thought through the implications of everything (often someone will say "it'd be good to do X in this circumstance because Y", and I'll nod along going "yeah, Y sounds great, X makes sense", and then later someone points out "emphasizing X has cost Z" in a way that suddenly sign-flips my opinion, or adds enough nuance to significantly change my belief).
I know there are at least a couple places here I have some hesitations on fully endorsing as stated.
But, I feel fairly good at least going-on-record saying "Each of the ideas here is something anyone doing 'rationalist discourse' should be familiar with as a modality, and shift into at least sometimes" (which I think is a notch below "guideline", as stated here, which is in turn a notch below 'rule', which some people misinterpreted the post as saying).
I'll hopefully have more to say soon after stewing on everything here more.
Here are some places I disagree a bit, or want to caveat:
#10 feels new, and not well-argued-for yet.
I think point #10 is pointing in a good direction, and I think there are good reasons to tend to hold yourself to a higher standard when directly modeling or assessing others' internal states. It seems at least worth considering making this "a guideline." But, whereas the other 9 points seem to basically follow from stuff stated in the Sequences or otherwise widely agreed upon, #10 feels more like introducing a new rule. (I think "be a bit careful psychologizing people" is more like an agreed-upon norm, but it's very fuzzy, and the way everyone else implements it feels pretty different from how Duncan implements it.)
I do think that "better operationalize 'be careful psychologizing' is an important (and somewhat urgent) problem", so I have it on my TODO list to write a post about it. It may or may not jive with Duncan's operationalization.
I do think there is some amount of "Duncan just cares about this in a way that is relatively atypical, and I haven't heard anyone else really argue for". "Hold yourself to the absolute highest standard" feels like a phrasing I don't expect anyone els...
Note: Duncan has blocked Zack from commenting on his posts, so Zack can’t respond here.
I’m almost certain that I’ve commented on this before, and I really don’t mean to start that conversation again… but since you’ve mentioned, elsethread, my potential disagreements with Duncan on rule/norm enforcement, etc., I will note for the record that I think this facility of the forum (authors blocking individual members from commenting on their posts) is maybe the single worst-for-rational-discussion feature of Less Wrong. (I haven’t done an exhaustive survey of the forum software’s features and ranked them on this axis, hence “maybe”; but I can’t easily think of a worse one.)
Maybe placing a button that leads to a list of blocked users under each post (with a Karma threshold to avoid mentioning blocked users that could be spammers or something, with links to optional explanations for individual decisions) would get the best of both worlds? Something you don't need to search for. (I'd also add mandatory bounds on durations of bans. Banning indefinitely is unnecessarily Azkaban.) Right now AFAIK this information is private, so even moderators shouldn't discretionally reveal it, unless it's retelling of public info already available elsewhere. (Edit: It's not private, thanks to Said Achmiz for pointing this out.)
Being able to protect your posts seems important to some people, and the occasional feud makes it a valuable alternative to involving the moderators. But when readers can't see the filtering that's acting on a debate, it hurts the clarity of their perception of it.
Because the claim you’re making is “two completely different things are effectively the same.”
So you say. But I say that the claim you are making is "two things that are effectively the same are completely different".
Thus by the same token, it’s you who should be making a genuine attempt to figure out what I mean!
But of course this is silly. Again: you disagree with a thing I said, and that’s fine. Tell me why. This shouldn’t be hard. What, in brief, is the difference you see between these two (allegedly) obviously different things?
(speaking loosely) This is such a weird conversation, wtf is happening.
(speaking not so loosely) I think I'm confused? I have some (mutually compatible) hypotheses:
H1) the concept "burden of proof" is doing a lot of STUFF here somehow, and I don't quite understand how or why. (Apparently relevant questions: What is it doing? Why is it doing it? Does "burden of proof" mean something really different to Duncan than to Said? What does "burden of proof" mean to me and where exactly does my own model of it stumble in surprise while reading this?)
H2) Something about personal history between Duncan and Said? This is not at all gearsy but "things go all weird and bad when people have been mad at each other in the past" seems to be a thing. (Questions: Could it be that at least one of Duncan and Said has recognized they are not in a dynamic where following the rationalist discourse guidelines makes sense and so they are not doing so, but I'm expecting them to do so and this is the source of my dissonance? Are they perhaps failing to listen to each other because their past experiences have caused strong (accurate or not) caricatures to exist in the head of the other, such that each person is...
Additionally, though this is small/circumstantial, I'm pretty sure your comment came up much faster than even a five-minute timer's worth of thought would have allowed, meaning that you spent less time trying to see the thing than it would have taken me to write out a comment that would have a good chance of making it clear to a five-year-old.
Another possibility is that he did some of his thinking before he read the post he was replying to, right? On my priors that's even likely; I think that when people post disagreement on LW it's mostly after thinking about the thing they're disagreeing with, and your immediate reply didn't really add any new information for him to update on. Your inference isn't valid.
In my experience, Said is pretty good at not jumping to conclusions in the 'putting words in their mouth' sense, tho in the opposite direction from how your guideline 6 suggests. Like, my model of Said tries to have a hole where the confusions are, instead of filling it with a distribution over lots of guesses.
I remember at one point pressing him on the "but why don't you just guess and get it right tho" point, but couldn't quickly find it; I think I might have been thinking of this thread on Zetetic Explanation. I don't use his style, but it does seem coherent to me and I'm reluctant to declare it outside the bounds of rational conversation, and more than once have used Said as the target audience for a post.
"My conversational partner is willing to flex their sixth guideline muscles from time to time" is a prerequisite for my sustained/enthusiastic participation in a conversation.
This seems right and fair to me, and I think you and others feeling this way is a huge force behind the "we're going to try to make LW fun again" moderation push of the last ~5 years.
I want to be on record as someone who severely disagrees with OP's standards. I want that statement to be visible from my LessWrong profile.
Here are N of my own standards which I feel are contrary to the standards of OP's post:
Thanks for taking the time to register specific disagreement!
My reactions to this small sampling of your standards:
I think that 1 is quite important, and valuable, but subordinate to the above in the context of discourse specifically trying to be rational (so we do have disagreement but probably less than you would expect).
I think that characterizing this stuff as "going through the motions" is a key and important mistake; this is analogous to people finding language requests tedious and onerous specifically because they're thinking in one way and feel like they're being asked to uselessly and effortfully apply a cosmetic translation filter; I think that applying cosmetic translation filters is usually bad.
I just straightforwardly agree with you on 3, and I don't think 3 is actually in conflict with any of the things in the post.
4 is the place where I feel closest to "Maybe this should supplant something in the list." It feels to me like my post is about very basic kicks and blocks and punches, and 4 is about "why do we practice martial arts?" and it's plausible that those should go in the other order.
5 feels to me as if it's pretty clearly endorsed by the post, with the caveat tha...
So I read in Rational Spaces for almost a decade, and almost never commented. When I did comment, it was in places that I consider Second Foundation. Your effort to make Less Wrong is basically the only reason I even tried to comment here, because I basically accepted that Less Wrong comments are too adversarial for safe and worthwhile discussion.
In my experience - and the Internet provides a lot of places with different discussion norms - collaboration is the main predictor of useful and insightful discussion. I really like those Rational Spaces where there is real collaboration on truth-seeking. I find a lot of interesting ideas in blogs where comments are adversarial and combative rather than collaborative, and I have sometimes found interesting comments there, but I almost never found interesting discussion. I did, however, find a lot of potentially-insightful discussions where the absence of good will and trust and collaboration and charity ruined a perfectly good discussion. Sometimes it was people deliberately pretending not to understand what others said, and attacking a strawman instead. Sometimes (especially around politics) people failed to understand what others said and were unable to...
I wish this had been called "Duncan's Guidelines for Discourse" or something like that. I like most of the guidelines given, but they're not consensus. And while I support Duncan's right to block people from his posts (and agree with him on discourse norms far more than with the people he blocked), it means that people who disagree with him on the rules can't make their case in the comments. That feels like an unbalanced playing field to me.
> Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth.
It's not clear to me what the word "convergence" is doing here. I assume the word means something, because it would be weird if you had used extra words only to produce advice identical to "Aim for truth, and behave as if your interlocutors are also aiming for truth". The post talks about how truthseeking leads to convergence among truthseekers, but if that were all there was to it then one could simply seek truth and get convergence for free. Apparently we ought to seek specifically convergence on truth, but what does seeking convergence look like?
I've spent a while thinking on it and I can't come up with any behaviours that would constitute aiming for truth but not aiming for convergence on truth, could you give an example?
Thank you for making this post-- I found it both interesting and useful for making explicit a lot of the more vague ideas I have about good discussions.
I have a question/request that's related to this: Does anyone have advice for what you should do when you genuinely want to talk to someone about a contentious topic-- and you think they're a thoughtful, smart person (meaning, not an internet troll you disagree with)-- but you know they are unlikely to subscribe to these or similar discourse norms?
To be frank, I ask this because I'm transgender (female-to-male) and like to discuss ideas about sexuality, sex, and gender with other trans people who aren't part of the rationalist/adjacent community and just have different discourse norms.
To give an example, let's say I mention in a post that it feels relevant to my experiences that my sex (at birth) is female, so I still identify as being "female" in some sense even though I'm socially perceived as male now. There's a good chance that people will see this as asserting that trans women aren't female in that same sense, sometimes even if I take care to explicitly say that isn't what I mean. So in that case it's specifically p...
I'll preface my comment by acknowledging that I'm not a regular LessWrong user and only marginally a member of the larger community (I followed your link here from Facebook). So, depending on your intended audience for this, my comments could be distinctively useful or unusually irrelevant.
I'm terribly grateful for the context and nuance you offer here. The guidelines seem self-evidently sensible but what makes them work is the clarity about when it is and isn't worth tolerating extra energy and pain to follow them. A few notes that are almost entirely meta:
1) I suspect that nearly all objections people have to these can be forestalled by continued editing to bake in where and how they properly apply -- in particular, I imagine people emotionally reacting against these because it's so uncomfortable to imagine being hit with criticism for not following these guidelines in cases like:
For most practical purposes, this is the end of the post. All remaining sections are reference material, meant to be dug into only when there's a specific reason to; if you read further, please know that you are doing the equivalent of reading dictionary entries or encyclopedia entries and that the remaining words are not optimized for being Generically Entertaining To Consume.
If you're going to have this kind of disclaimer be this emphatic, then I'd really recommend putting everything below into a separate post. I haven't read this post yet largely bec...
Thanks!
If I were to rephrase this in my own words, it'd be something like:
"There's a kind of expectation/behavior on some people's behalf, where they get unhappy with any content that requires them to put in effort in order to get value out of it. These people tend to push their demand to others, so that others need to contort to meet the demand and rewrite everything to require no effort on the reader's behalf. This is harmful because optimizing one variable requires sacrifices with regard to other variables, so content that gives in to the demand is necessarily worse than content that's not optimized for zero effort. (Also there's quite a bit of content that just can't be communicated at all if you insist that the reader needs to spend zero effort on it. Some ideas intrinsically require an investment of effort to understand in the first place.)
The more that posts are written in a way that gives in to these demands, the more it signals that these demands are justified and make sense. That then further strengthens those demands and makes it ever harder to resist them in other contexts."
Ideally I'd pause here to check for your agreement with this summary, but if I were ...
So my disagreement with this model is that it sounds like you're modeling patience as a quantity that people have either more or less of, while I think of patience as a budget that you need to split between different things.
Like at one extreme, maybe I dedicate all of my patience budget to reading LW articles, and I might spend up to an hour reading an article even if its value seems unclear, with the expectation that I might get something valuable out of it if I persist enough. But then this means that I have no time/energy/patience left to read anything that's not an LW article.
It seems to me that a significant difficulty with budgeting patience is that it's not a thing where I know the worthwhile things in advance and just have to divide my patience between them. Rather, finding out what's worthwhile requires an investment of patience by itself. As an alternative to spending 60% of my effort budget on one thing, I could say... take half of that and spend 5% on six things each, sampling them to see which one of them seems the most valuable to read, and then invest 30% on diving into the one that does seem the most valuable. And that might very well give me a better return.
On my m...
[Thought experiment meant to illustrate potential dangers of discourse policing]
Imagine 2 online forums devoted to discussing creationism.
Forum #1 is about 95% creationists, 5% evolutionists. It has a lengthy document, "Basics of Scientific Discourse", which runs to about 30 printed pages. The guidelines in the document are fairly reasonable. People who post to Forum #1 are expected to have read and internalized this document. It's common for users to receive warnings or bans for violating guidelines in the "Basics of Scientific Discourse" document. T...
Well, the story from my comment basically explains why I gave up on LW in the past. So I thought it was worth putting the possibility on your radar.
I want to say that I really like the Sazen -> expansion format, and I like the explanation -> ways you might feel -> ways a request might look format even more.
1 to 4 and 6 to 9 I just straightforwardly agree with.
My issue with 5 should properly be its own blog post but the too-condensed version is something like, those cases where the other person is not also trying to converge on truth are common enough and important enough that I don't blame someone for not starting from that assumption. Put another way, all of the other rules seem ...
...list of a dozen in-my-estimation unusually good rationalist communicators...
Genuine thanks for making this and actually posting the names. My monkey brain has decided these lists are a thing that happen now and I really need to make being on one a new life goal.
the guidelines are descriptive of good discourse that already exists; here I am attempting to convert them into prescriptions
There is no value in framing good arguments as prescriptions, and there is delayed damage in cultivating prescriptive framings of arguments. A norm is either unnecessary when there happens to be agreement, or exerts pressure to act against one's better judgement. The worst possible reason for that agreement to already be there is a norm that encourages it.
...given the goals of clear thinking, clear communication, and collaborative truth-seek
A norm is either unnecessary when there happens to be agreement, or exerts pressure to act against one's better judgement.
I think you're missing the value of having norms at the entry points to new subcultures.
LessWrong is not quite as clearly bounded as a martial arts academy; people do not agree to enter it knowing that there will be things they have to do (like wearing a uniform, bowing, etc).
And yet it is a nonstandard subculture; its members genuinely want it to be different from being on the rest of the internet.
Norms smooth that transition—they help someone who's using better-judgment-calibrated-to-the-broader-internet to learn that better-judgment-calibrated-to-here looks different.
see the disconnect—the reason I think X is better than Y is because as far as I can tell X causes more suffering than Y, and I think that suffering is bad."
I think the X's and Y's got mixed up here.
Otherwise, this is one of my favorite posts. Some of the guidelines are things I had already figured out and try to follow but most of them were things I could only vaguely grasp at. I've been thinking about a post regarding robust communication and internet protocols. But this covers most of what I wanted to say, better than I could say it. So thanks!
"Can you try passing my ITT, so that I can see where I've miscommunicated?"
...is a very difficult task even by the standards of "good discourse requires energy". To present anything but a strawman in such a case may require more time than the general discussion - not necessarily because your model actually is a strawman, but because you'd need to "dot many i's and cross many t's" - I think that's the wording.
(ETA: It seems to me like it is directly related to obeying your tenth guideline.)
I think this, or something like this, should be in a place of prominence on LessWrong. The Best Of collection might not be the place, but it's the place I can vote on, so I'd like to vote for it here.
I used "or something like this" above intentionally. The format of this post — an introduction of why these guidelines exist, short one or two sentence explanations of the guideline, and then expanded explanations with "ways you might feel when you're about to break the X Guideline" — is excellent. It turns each guideline into a mini-lesson, which can be broke...
This is great. I notice that other people have given caveats and pushback that seems right to me but that I didn't generate myself, and that makes me nervous about saying I endorse it. But I get a very endorse-y feeling when I read it, at any rate.
(I have a vague feeling there was something that I did generate while reading? But I no longer remember it if so.)
Another feeling I get when I read it is, I remember arguments I've had in rat spaces in the past, and I want to use this essay to hit people round the head with.
...Track (for yourself) and distinguish
"User buttface123 is a dirty liar." → "I've caught user buttface123 lying three times now." → "I've seen user buttface123 say false things in support of their point [three] [times] [now], and that last time was after they'd responded to a comment thread containing accurate info, so it wasn't just simple ignorance. They're doing it on purpose."
Nitpick:
The second arrow seems like it's going in the wrong direction, in that the third statement seems to be making more inferences than the second one. Mostly just because "They're doing it on purpose." seems...
When you say "straightforwardly false", do you intend to refer to any particular theory of truth? While I have long known of different philosophical concepts and theories of "truth", I've only recently been introduced to the idea that some significant fraction of people don't understand the words "true" and "false" to refer at-least-primarily to correspondent truth (that is, the type of truth measured by accurate reflection of the state of the world). I am not sure if that idea is itself accurate, nor whether you believe that thing about some/many/most others, or what your individual understanding of truth is, so I find it hard to interpret your use of the word "false".
If you think this post would be stronger with more real-world examples of each guideline (either failures to follow it, or stellar examples of abiding by it), then please keep your radar open for the next few days or weeks, and send me memorable examples. I am not yet sure what I'll do with those, but having them crowdsourced and available is better than not having them at all, or trying to collect them all myself.
Also: I anticipate substantial improvements to the expansions over time, as well as modest improvements to the wording of each of the short/rela...
I propose another discussion norm: committing to being willing to have a crisis of faith in certain discussions - and, failing that, de-stigmatizing admitting when you are, in fact, unwilling to entertain certain ideas or concepts, with participants respecting that.
I give this a +9, one of the most useful posts of the year.
I think that a lot of these are pretty non-obvious guidelines that make sense when explained, and I continue to put effort into practicing them. Separating observations and inferences is pro-social, making falsifiable claims is pro-social, etc.
I like this document both for carefully condensing the core ideas into 10 short guidelines, and also having longer explanations for those who want to engage with them.
I like that it’s phrased as guidelines rather than rules/norms. I do break these from time ...
If you think this is a consensus guide, I think you should add it to a wiki page. I am happy to do so.
If people think that shouldn't be the case, I'd ask what the wiki is for other than for broad consensus opinions.
It's analogous to a customer complaining "if Costco is going to require masks, then I'm boycotting Costco." All else being equal, it would be nice for customers to not have to wear masks, and all else being equal, it would be nice to lower the barrier to communication such that more thoughts could be more easily included.
Just a small piece of feedback. This paragraph is very unclear, and it brushes on a political topic that tends to get heated and personal.
I think you intended to say that the norms you're proposing are just the basic cost of en...
It seems like 'social status' is mentioned exactly once:
"In reality, everyone's morality is based on status games." → "As far as I can tell, the overwhelming majority of people have a morality that grounds out in social status."
Which really seems like a key point that should be further reinforced by other sections, considering the topic discussed and your expressed desires, not tucked away obliquely in an isolated quote box.
I think you are intending something obvious to be implied by your comment, but I'm not sure what it is.
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
I have a minor nitpick, which I would not normally comment on, but I think this post deserves to be more polished and held to a higher standard than regular posts.
...Additionally, there is a tendency among humans to use vague and ambiguous language that is equally compatible with multiple interpretations, such as the time that a group converged on agreement that there was "a very real chance" of a certain outcome, only to discover later, in one-on-one interviews, that at least one person meant that outcome was 20% likely, and at least one other meant it was 8
For a "basics" post, this is long, and I have a lot of critique I'd like to write that I'd hope to see reflected in edits. However, this post has been posted to a blogging platform, not a wiki platform; it is difficult to propose simplifying refactors for a post. I've downvoted for now, and I think I'm not the only one downvoting; I would be curious to hear reasons for downvotes from others and what would reverse them. It would be cool if LessWrong were suddenly a wiki with editing features and fediverse publishing. You mention you want to edit; I'm looking forward to those edits, and hoping to upvote once the post is edited a bit.
unrelated, did you know lesswrong has a "hide author until hovered" feature that for some reason isn't on by default with explanation? :D
Introduction
This post is meant to be a linkable resource. Its core is a short list of guidelines (you can link directly to the list) that are intended to be fairly straightforward and uncontroversial, for the purpose of nurturing and strengthening a culture of clear thinking, clear communication, and collaborative truth-seeking.
There is also (for those who want to read more than the simple list) substantial expansion/clarification of each specific guideline, along with justification for the overall philosophy behind the set.
Prelude: On Shorthand
Once someone has a deep, rich understanding of a complex topic, they are often able to refer to that topic with short, simple sentences that correctly convey the intended meaning to other people with similar context and expertise.
However, those same short, simple sentences are often dangerously misleading, in the hands of a novice who lacks the proper background. Dangerous precisely because they seem straightforward and comprehensible, and thus the novice will confidently extrapolate outward from them in what feel like perfectly reasonable ways, unaware the whole time that the concept in their head bears little or no resemblance to the concept that lives in the expert's head.
Good shorthand in the hands of an experienced user need only be an accurate fit for the already-existing concept it refers to—it doesn't need the additional property of being an unmistakeable non-fit for other nearby attractors. It doesn't need to contain complexity or nuance—it just needs to remind the listener of the complexity already contained in their mental model. It's doing its job if it efficiently evokes the understanding that already exists, independent of itself.
This is important, because what follows this introduction is a list of short, simple sentences comprising the basics of rationalist discourse. Each of those sentences is a solid fit for the more-complicated concept it's gesturing at, provided you already understand that concept. The short sentences are mnemonics, reminders, hyperlinks.
They are not sufficient, on their own, to reliably cause a beginner to construct the proper concepts from the ground up, and they do not, by themselves, rule out all likely misunderstandings.
All things considered, it seems good to have a clear, concise list near the top of a post like this. People should not have to scroll and scroll and sift through thousands of words when trying to refer back to these guidelines.
But each of the short, simple sentences below admits of multiple interpretations, some of which are intended and others of which are not. They are compressions of complex points, and compressions are inevitably lossy. If a given guideline is new to you, check the in-depth explanation before reposing confidence in your understanding. And if a given guideline stated-in-brief seems to you to be flawed or misguided in some obvious way, check the expansion before spending a bunch of time marshaling objections that may well have already been answered.
Further musing on this concept: Sazen
Guidelines, in brief:
0. Expect good discourse to require energy.
What does it mean for something to be a "guideline"?
Think of the above, then, as a set of priors. If a guideline says "Do [X]," that is intended to convey that:
Thus, given the goals of clear thinking, clear communication, and collaborative truth-seeking, the burden of proof is on a given guideline violation to justify itself. There will be many cases in which violating a guideline will in fact be exactly the right call, just as the next marble drawn blindly from a bag of mostly red marbles may nevertheless be green. But if you're doing something that's actively contra to one of the above, it should be for a specific, known reason, that you should be willing to discuss if asked (assuming you didn't already explain up front).
Which leads us to the Zeroth Guideline: expect good discourse to (sometimes) require energy.
If it did not—if good discourse were a natural consequence of people following ordinary incentives and doing what they do by default—then it wouldn't be recognizable as the separate category of good discourse.
A culture of (unusually) clear thinking, (unusually) clear communication, and (unusually) collaborative truth-seeking is not the natural, default state of affairs. It's endothermic, requiring a regular influx of attention and effort to keep it from degrading back into a state more typical of the rest of the internet.
This doesn't mean that commentary must always be high effort. Nor does it mean that any individual user is on the hook for doing a hard thing at any given moment.
But it does mean that, in the moments where meeting the standards outlined above would take too much energy (as opposed to being locally unnecessary for some other, more fundamental reason), one should lean toward saying nothing, rather than actively eroding them.
Put another way: a frequent refrain is "well, if I have to put forth that much effort, I'll never say anything at all," to which the response is often "correct, thank you."
It's analogous to a customer complaining "if Costco is going to require masks, then I'm boycotting Costco." All else being equal, it would be nice for customers to not have to wear masks, and all else being equal, it would be nice to lower the barrier to communication such that more thoughts could be more easily included.
But all else is not equal; there are large swaths of common human behavior that are corrosive or destructive to the collaborative search for truth. No single contributor or contribution is worth sacrificing the overall structures which allow for high-quality conversation in the first place—if one genuinely does not have the energy required to e.g. put forth one's thoughts while avoiding straightforwardly false statements, or while distinguishing inference from observation (etc.), then one should simply disengage.
Note that there is always room for discussion on the meta level; it is not the case that there is universal consensus on every norm, nor on how each norm looks in practice (though the above list is trying pretty hard to limit itself to norms that are on firm footing).
Note also that there is a crucial distinction between [fake/performative verbal gymnastics], and [sincere prioritization of truth and accuracy]—more on this in Sapir-Whorf for Rationalists.
For most practical purposes, this is the end of the post. All remaining sections are reference material, meant to be dug into only when there's a specific reason to; if you read further, please know that you are doing the equivalent of reading dictionary entries or encyclopedia entries and that the remaining words are not optimized for being Generically Entertaining To Consume.
Where did these come from?
I tinkered with drafts of this essay for over a year, trying to tease out something like an a priori list of good discourse norms, wrestling with various imagined subsets of the LessWrong audience, and trying to predict what objections might arise. The whole thing was fairly sprawling, and I ultimately scrapped it in favor of just making a list of a dozen in-my-estimation unusually good rationalist communicators, and then writing down the things that made those people's discourse stand out to me in the first place, i.e. the things it seems to me that they do a) 10-1000x more frequently than genpop, and b) 2-10x more frequently than the median LessWrong user.
That list comprised:
I claim that if you contrast the words produced by the above individuals with the words produced by the rest of the English-speaking population, what you find is approximately the above ten guidelines.
In other words, the guidelines are descriptive of good discourse that already exists; here I am attempting to convert them into prescriptions, with some wiggle room and some caveats. But they weren't made up from whole cloth; they are in fact an observable part of What Actually Works In Practice.
Some of the above individuals have specific deficits in one or two places, perhaps, and there are some additional things that these individuals are doing which are not basic, and not found above. But overall, the above is a solid 80/20 on How To Talk Like Those People Do, and sliding in that direction is going to be good for most of us.
Why does this matter?
In short: because the little things add up. For more on this, take a look at Draining the Swamp as an excellent metaphor for how ambient hygiene influences overall health, or revisit Concentration of Force, in which I lay out my argument for why we should care about small deltas on second-to-second scales, or Moderating LessWrong, which is sort of a spiritual precursor to this post.
Expansions
1. Don't say straightforwardly false things.
... and be ready and willing to explicitly walk back unintentional falsehoods, if asked or if it seems like it would help your conversational partner.
In normal social contexts, where few people are attending to or attempting to express precise truth, it's relatively costless to do things like:
Most of the time, when people end up saying straightforwardly false things, they are not intending to lie or deceive, but rather following one of these incentives (or something similar).
However, if you are actively intending to create, support, and participate in a culture of clear thinking, clear communication, and collaborative truth-seeking, it becomes more important than usual to break out of those default patterns, as well as to pump against other sources of unintentional falsehood like the typical mind fallacy.
This becomes even more important when you consider that places like LessWrong are cultural crossroads—users come from a wide variety of cultures and cannot rely on other users sharing the same background assumptions or norms-of-speech. It's necessary in such a multicultural environment to be slower, more careful, and more explicit, if one wants to avoid translation errors and illusions of transparency and various other traps and pitfalls.
Some ways you might feel when you're about to break the First Guideline:
Some ways a First Guideline request might look:
2. Track and distinguish your inferences from your observations.
... or be ready and willing to do so, if asked or if it seems like it would help your conversational partner (or the audience). i.e. build the habit of tracking the distinction between what something looks like, and what it definitely is.
The first and most fundamental question of rationality is "what do you think you know, and why do you think you know it?"
Many people struggle with this question. Many people are unaware of the processing that goes on in their brains, under the hood and in the blink of an eye. They see a fish, and gloss over the part where they saw various patches of shifting light and pattern-matched those patches to their preexisting concept of "fish." Less trivially, they think that they straightforwardly observe things like:
... and they miss the fact that they were running a bunch of direct sensory data through a series of filters and interpreters that brought all sorts of other knowledge and assumptions and past experience and causal models into play. The process is so easy and so habitual that they do not notice it is occurring at all.
(Where "they" is also meant to include "me" and "you," at least some of the time.)
Practice the skill of slowing down, and zooming in. Practice asking yourself "why?" after the fashion of a curious toddler. Practice answering the question "okay, but if there were another step hiding in between these two, what would it be?" Practice noticing even extremely basic assumptions that seem like they never need to be stated, such as "Oh! Right. I see the disconnect—the reason I think X is worse than Y is because as far as I can tell X causes more suffering than Y, and I think that suffering is bad."
This is particularly useful because different humans reason differently, and that reasoning tends to be fairly opaque, and attempting to work backward from [someone else's outputs] to [the sort of inputs you would have needed, to output something similar] is a recipe for large misunderstandings.
Wherever possible, try to make explicit the causes of your beliefs, and to seek the causes underlying the beliefs of others, especially when you strongly disagree. Work on improving your ability to tease out what you observed separately from what you interpreted it to mean, so that the conversation can track, e.g., "I saw A," "I think A implies B," and "I don't like B" as three separate objects. If you're unable to do so, for instance because you do not yet know the source of your intuition, try to note out loud that that's what's happening.
Some ways you might feel when you're about to break the Second Guideline:
Some ways a Second Guideline request might look:
3. Estimate and make clear your rough level of confidence in your assertions.
... or be ready and willing to do so, if asked or if it seems like it would help another user.
Humans are notoriously overconfident in their beliefs, and furthermore, most human societies reward people for visibly signaling confidence.
Humans, in general, are meaningfully influenced by confidence/emphasis alone, separate from truth—probably not literally all humans all of the time, but at least in expectation and in the aggregate, either for a given individual across repeated exposures or for groups of individuals (more on this in Overconfidence is Deceit).
Humans are social creatures who tend to be susceptible to things like halo effects, when not actively taking steps to defend against them, and who frequently delegate and defer and adopt others' beliefs as their own tentative positions, pending investigation, especially if those others seem competent and confident and intelligent. If you expose 1000 randomly-selected humans to a debate between a quiet, reserved person outlining an objectively correct position and a confident, emphatic person insisting on an unfounded position, many in that audience will be net persuaded by the latter, and others will feel substantially more uncertainty and internal conflict than the plain facts of the matter would have left them feeling by default.
Thus, there is frequently an incentive to misrepresent your confidence, for instrumental advantage, at the cost of our collective ability to think clearly, communicate clearly, and engage in collaborative truth-seeking.
Additionally, there is a tendency among humans to use vague and ambiguous language that is equally compatible with multiple interpretations, such as the time that a group converged on agreement that there was "a very real chance" of a certain outcome, only to discover later, in one-on-one interviews, that at least one person meant that outcome was 20% likely, and at least one other meant it was 80% likely (which are exactly opposite claims, in that 20% likely means 80% unlikely).
Thus, it behooves people who want to engage in and encourage better discourse to be specific and explicit about their confidence (i.e. to use numbers and to calibrate their use of numbers over time, or to flag tentative beliefs as tentative, or to be clear about the source of a belief and their credence in that source).
Some ways you might feel when you're about to break the Third Guideline:
Some ways a Third Guideline request might look:
4. Make your claims clear, explicit, and falsifiable, or explicitly acknowledge that you aren't doing so (or can't).
... or at least be ready and willing to do so, if asked or if it seems like it would help make things more comprehensible.
It is, in fact, actually fine to be unsure, or to have a vague intuition, or to make an assertion without being able to provide cruxes or demonstrate how it could be proven/disproven. None of these things are disallowed in rational discourse.
But noting aloud that you are self-aware about the incomplete nature of your argument is a highly valuable social maneuver. It signals to your conversational partner "I am aware that there are flaws in what I am saying; I will not take it personally if you point at them and talk about them; I am taking my own position as object rather than being subject to it and tunnel-visioned on it."
(This is a move that makes no sense in an antagonistic, zero-sum context, since you're just opening yourself up to attack. But in a culture of clear thinking, clear communication, and collaborative truth-seeking, contributing your incomplete fragment of information, along with signaling that yes, the fragment is, indeed, a fragment, can be super productive.)
Much as we might wish that everyone could take for granted that disagreement is prosocial and productive and not an attack, it is not actually the case. Some people do indeed launch attacks under the guise of disagreement; some people do indeed respond to disagreement as if it were an attack even if it is meant entirely cooperatively; some people, fearing such a reaction, will be hesitant to note their disagreement in the first place, especially if their conversational partner doesn't seem open to it.
The more clear it is what, exactly, you're trying to say, the easier it is for other people to evaluate those claims, or to bring other information that's relevant to the issue at hand.
The more your assertions manage to be checkable, the easier it is for others to trust that you're not simply throwing spaghetti at the wall to see what sticks.
And the more you're willing to flag your own commentary when it fails on either of the above, the easier it is to contribute to and strengthen norms of good discourse even with what would otherwise be a counterexample. Pointing out "this isn't great, but it's the best that I've got" lets you contribute what you do have, without undermining the common standard of adequacy.
Some ways you might feel when you're about to break the Fourth Guideline:
Some ways a Fourth Guideline request might look:
5. Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth.
... and be ready to falsify your impression otherwise, if evidence starts to pile up.
The goal of rationalist discourse is to be less wrong—for each of us as individuals and all of us as a group to have more correct beliefs, and fewer incorrect beliefs.
If two people disagree, it's tempting for them to attempt to converge with each other, but in fact the right move is for both of them to try to see more of what's true.
If you are moving closer to truth—if you are seeking available information and updating on it to the best of your ability—then you will inevitably eventually move closer and closer to agreement with all the other agents who are also seeking truth.
However, when conversations get heated—when the stakes are high—when the other person not only appears to be wrong but also to be acting in poor faith—that's when it's the most valuable to keep in touch with the possibility that you might be misunderstanding each other, or that the problem might be in your models, or that there might be some simple cultural or norms mismatch, or that your conversational partner might simply be locally failing to live up to standards that they do, in fact, generally hold dear, etc.
It's very easy to observe another person's output, evaluate it according to your own norms and standards, and conclude that you understand their motives and that those motives are bad.
It is not, in fact, the case that everyone you engage with is primarily motivated by truth-seeking! Even in enclaves like LessWrong, there are lots of people who are prioritizing other goals over that one a substantial chunk of the time.
But simple misunderstandings, and small, forgivable, recoverable slips in mood or mental discipline outnumber genuine bad faith by a large amount. If you are running a tit-for-tat algorithm in which you quickly respond to poor behavior by mirroring it back, you will frequently escalate a bad situation (and often appear, to the other person, like the first one who broke cooperation).
Another way to think of this is: it pays to give people two extra chances to demonstrate that they are present in good faith and genuinely trying to cooperate, because if they aren't, they'll usually prove it soon enough anyway. You don't have to turn the other cheek repeatedly, but doing so once or twice more than you would by default goes a long way toward protecting against false positives on your bad-faith detector.
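(A purely illustrative toy calculation, not from the original argument: if we assume that any single reading of a message as bad faith is an independent misread some fixed fraction of the time, say 20%, then each extra chance you extend before writing someone off shrinks the probability of misjudging a genuine cooperator geometrically.)

```python
# Toy model of the "two extra chances" heuristic: how much does waiting for several
# consecutive bad-faith readings protect against false positives on a noisy detector?
# Assumes (arbitrarily) a 20% chance that a cooperative message still reads as bad faith,
# and that successive misreads are independent.
misread_rate = 0.2

for extra_chances in range(4):
    needed = 1 + extra_chances  # consecutive bad-faith readings required before retaliating
    p_wrongly_write_off = misread_rate ** needed
    print(f"{extra_chances} extra chance(s): "
          f"{p_wrongly_write_off:.1%} chance of misjudging a genuine cooperator")
```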
This behavior can be modeled, as well—the quickest way to get a half-derailed conversation back on track is to start sharing pairs of [what you believe] and [why you believe it]. To demonstrate to your conversational partner that those two things go together, and show them the kind of conversation you want to have.
(This is especially useful on the meta level—if you are frustrated, it's much better to say "I'm seeing X, and interpreting it as meaning Y, and feeling frustrated about that!" than to just say "you're being infuriating.")
You could think of the conversational environment as one in which defection strategies are rampant, and many would-be cooperators have been trained and traumatized into hair-trigger defection by repeated sad experience.
Taking that fact into account, it's worth asking "okay, how could I behave in such a way as to invite would-be cooperators who are hair-trigger defecting back into a cooperative mode? How could I demonstrate to them, via my own behavior, that it's actually correct to treat me as a collaborative truth-seeker, and not as someone who will stab them as soon as I have the slightest pretext for writing them off?"
Some ways you might feel when you're about to break the Fifth Guideline:
Some ways a Fifth Guideline request might look:
6. Don't jump to conclusions—maintain at least two hypotheses consistent with the available information.
... or be ready and willing to generate a real alternative to your main hypothesis, if asked or if it seems like it would help another user.
There exists a full essay on this concept titled Split and Commit. The short version is that there is a large difference between a person who has a single theory (which they are nominally willing to concede might be false), and a person who has two fully distinct possible explanations for their observations, and is looking for evidence to distinguish between them.
Another way to point at this distinction is to remember that bets are different from beliefs.
Most of the time, you are forced to make some sort of implicit bet. For instance, you have to choose how to respond to your conversational partner, and responding-to-them-as-if-they-are-sincere is a different "bet" than responding-to-them-as-if-they-are-insincere.
And because people are so frequently converting their beliefs into bets, and because bets are often effectively binary, they tend to lose track of the more complicated thing that preceded the rounding-off.
If a bag of 100 marbles contains 70 red ones and 30 green ones, the best bet for the first string of ten marbles out of the bag is RRRRRRRRRR. Any attempt to sprinkle some Gs into your prediction is more likely to be wrong than right, since any single position is 70% likely to contain an R.
(There's less than a 3% chance of the string being RRRRRRRRRR, but the odds of any other specific string are even worse.)
But it would be silly to say that you believe that the next ten marbles out of the bag will all be red. If forced, you will predict RRRRRRRRRR, because that's the least wrong prediction, but actually (hopefully) your belief is "for each marble, it's more likely to be red than green but it could pretty easily be green."
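(For the curious, here is a minimal sketch of the arithmetic behind that parenthetical, approximating the ten draws as independent 70/30 events; the exact without-replacement odds are slightly lower, but the conclusion is the same.)

```python
from itertools import product

# Approximate each of the ten draws as an independent 70% chance of red.
p_red = 0.7

# Probability of every specific length-10 string of draws (R = red, G = green).
probs = {
    "".join(s): p_red ** s.count("R") * (1 - p_red) ** s.count("G")
    for s in product("RG", repeat=10)
}

best = max(probs, key=probs.get)
runner_up = sorted(probs.values(), reverse=True)[1]
print(best, f"{probs[best]:.3%}")  # RRRRRRRRRR 2.825% -- under 3%, yet still the single best bet
print(f"{runner_up:.3%}")          # the next-best specific string (one G) is only ~1.211%
```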
In similar fashion, when you witness someone's behavior, and your best bet is "this person is biased or has an unstated agenda," your belief should ideally be something like "this behavior is most easily explained by an unstated agenda, but if I magically knew for a fact that that wasn't what was happening, the next most likely explanation would be ______________."
That extra step—of pausing to consider what else might explain your observations, besides your primary theory—is one that is extremely useful, and worth practicing until it becomes routine. People who do not have this reflex tend to fall into many more pits/blindspots, and to have a much harder time bridging inferential gaps, especially with those they do not already agree with.
Some ways you might feel when you're about to break the Sixth Guideline:
Some ways a Sixth Guideline request might look:
7. Be careful with extrapolation, interpretation, and summary/restatement.
Distinguish between what was actually said and what it sounds like/what it implies/what you think it looks like in practice/what it's tantamount to, especially if another user asks you to pay more attention to this distinction than you were doing by default. If you believe that a statement A strongly implies B, and you are disagreeing with A because you disagree with B, explicitly note that "A strongly implies B" is a part of your model. Be willing to do these things on request if another person asks you to, or if you notice that it will help move the conversation in a healthier direction.
Another way to put this guideline is "don't strawman," but it's important to note that, from the inside, strawmanning doesn't typically feel like strawmanning.
"Strawmanning" is a term for situations in which:
Person B constructs a strawman, in other words, just so they can then knock it down.
...
There's a problem with the definition above; readers are invited to pause and see if they can catch it.
...
If you'd like a hint: it's in the last line (the one beginning with "Person B constructs a strawman").
...
The problem is in the last clause.
"Just so they can knock it down" presupposes purpose. Not only is Person B engaging in misrepresentation, they're doing it in order to have some particular effect on the larger conflict, presumably in the eyes of an audience (since knocking over a strawman won't do much to influence Person A).
It's a conjunction of act and intent, implying that the vast majority of people engaging in strawmanning are doing so consciously, strategically, and in a knowingly disingenuous fashion—or, if they're not fully self-aware about it, they're nevertheless subconsciously optimizing for making Person A's position appear sillier or flimsier than it actually is.
This does not match how the term is used, out in the wild; it would be difficult to believe that even twenty percent of my own encounters with others using the term (let alone a majority, let alone all of them) are downstream of someone being purposefully deceptive. Instead, the strawmanning usually seems to be "genuine," in that the other person really thinks that the position being argued actually is that dumb/bad/extreme.
It's an artifact of blind spots and color blindness; of people being unable-in-practice to distinguish B from A, and therefore thinking that A is B, and not realizing that "A implies B" is a step that they've taken inside their heads. Different people find different implications to be more or less "obvious," given their own cultural background and unique experiences, and it's easy to typical-mind that the other relevant people in the conversation have approximately the same causal models/context/knowledge/anticipations.
If it's just patently obvious to you that A strongly implies B, and someone else says A, it's very easy to assume that everyone else made the leap to B right along with you, and that the author intended that leap as well (or intended to hide it behind the technicality of not having actually come out and said it). It may feel extraneous or trivial, in the moment, to make that inference explicit—you can just push back on B, right?
Indeed, if the leap from A to B feels obvious enough, you may literally not even notice that you're making it. From the inside, a blindspot doesn't feel like a blindspot—you may have cup-stacked your way straight from A to B so quickly and effortlessly that your internal experience was that of hearing them say B, meaning that you will feel bewildered yourself when what seems to you to be a perfectly on-topic reply is responded to as though it were an adversarial non-sequitur.
(Which makes you feel as if they broke cooperation first; see the fifth guideline.)
People do, in fact, intend to imply things with their statements. People's sentences are not contextless objects of unambiguous meanings. It's entirely fine to hazard a guess as to someone's intended implications, or to talk about what most people would interpret a given sentence to mean, or to state that [what they wrote] landed with you as meaning [something else]. The point is not to pretend that all communication is clear and explicit; it's to stay in contact with the inherent uncertainty in our reconstructions and extrapolations.
"What this looks like, in practice" or "what most people mean by statements of this form" are conversations that are often skipped over, in which unanimous consensus is (erroneously) taken for granted, to everyone's detriment. A culture that seeks to promote clear thinking, clear communication, and collaborative truth-seeking benefits from a high percentage of people who are willing to slow down and make each step explicit, thereby figuring out where exactly shared understanding broke down.
Some ways you might feel when you're about to break the Seventh Guideline:
Some ways a Seventh Guideline request might look:
8. Allow people to restate, clarify, retract, and redraft their points.
Communication is difficult. Good communication is often quite difficult.
One of the simplest interventions for improving discourse is to allow people to try again.
Sometimes our first drafts are clumsy in their own right—we spoke too soon, or didn't think things through deeply enough.
Other times, we said words which would have caused a clone of ourselves to understand, but we failed to account for some crucial cultural difference or inferential gap with our non-clone audience, and our words caused them to construct a meaning that was very different than the meaning we intended.
Also, sometimes we're just wrong!
It's quite common, on the broader internet and in difficult in-person conversations, for people's early rough-draft attempts to convey a thought to haunt them. People will relentlessly harp on some early, clumsy phrasing, or act as if the speaker intended the unpleasant ramifications of a point when the speaker simply failed to consider those ramifications.
The result is a chilling effect on speech (since you feel like you have to get everything right on your first try or face social punishment) and a disincentive to make updates and corrections (since those corrections will often simply be ignored, and you'll be punished anyway as if you never made them, so why bother).
Part of the solution is to establish a culture of being forgiving of imperfect first drafts (and generous/light-touch in your interpretation of them), and of being open to walkbacks or restatements or clarifications.
It's perfectly acceptable to say something like "This sounds crazy/abhorrent/wrong to me," or to note that what they wrote seems to you to imply some statement B that is bad in some way.
It's also perfectly reasonable to ask that people demonstrate that they see what was wrong with their first draft, rather than just being able to say "no, I meant something subtly different" ad infinitum.
But if your conversational partner replies with "oh, gosh, sorry, no, that is not what I'm trying to say," it's generally best to take that assertion at face value, and let them start over. As with the sixth guideline, this means that you will indeed sometimes be giving extra leeway to people who are actually being irrational/unreasonable/bad/wrong, but most of the time, it means that you will be avoiding the failure mode of immediately leaping to a conclusion about what the other person meant and then refusing to relinquish that assumption.
The claim is that letting a few more people "get away with it" a little longer costs less than curtailing the whole population's ability to think out loud and update on the fly.
Some ways you might feel when you're about to break the Eighth Guideline:
Some ways an Eighth Guideline request might look:
9. Don't weaponize equivocation/abuse categories/engage in motte-and-bailey shenanigans.
...and be open to eschewing/tabooing broad summary words and talking more about the details of your model, if another user asks for it or if you suspect it would lower the overall confusion in a given interaction.
Labels are great.
However, labels are a tool with some known failure modes. When someone uses a conceptual handle like "marriage," "genocide," "fallacy of the grey," or "racist," they are staking a claim about the relationship between a specific instance of [a thing in reality], and a cluster of [other things] that all share some similar traits.
That leads to some fairly predictable misunderstandings.
For instance, someone might notice that a situation has (e.g.) three out of seven salient markers of gaslighting (in their own personal understanding of gaslighting).
Three out of seven is a lot, when most things have zero out of seven! So it's reasonable for them to bring in the conceptual handle "gaslighting" as they begin to reason about and talk about the situation—to port in the intuitions and strategies that are generally useful for things in the category.
But it's very easy for people to fail to make clear that they're using the term "gaslighting" because the situation had specific markers X, Y, and Z, and that it doesn't seem to have markers T, U, V, or W at all, to say nothing of whether their own idiosyncratic seven markers sync up with the consensus understanding of gaslighting in the first place.
And thus the application of the term can easily cause other observers to implicitly conclude that all of T, U, V, W, X, Y, and Z are present to at least some degree (along with, possibly, markers Q, R, and S that various other people bring to the table without realizing they are non-universal).
When this is done intentionally, we call it weaponized equivocation or motte-and-bailey, i.e. "I can make the term gaslighting stick in a technically justified sense, and then abuse the connotation to make everybody think that you were doing all of the bad things involved in gaslighting on purpose, and that you are a gaslighter, with all that entails."
But it also happens by accident, quite a lot. A conceptual handle makes sense to Person A, so they use it, and Person B both loses track of nuance and also injects additional preconceptions, based on their understanding of the conceptual handle.
The general prescription is to use categories and conceptual handles as a starting point, and then carefully check one's understanding.
Another way to think of this prescription is to recognize that categories and conceptual handles are warping: they act like gravitational attractors, pulling people's models toward a baseline archetype or stereotype. They tend to loom large, to obscure detail, and to generate a kind of top-down smoothing consensus or simplification.
That's super useful when the alternative is having them be lost out in deep space, but it's also not as good as using the category to get them in the right general vicinity and then deliberately not leaning on the category once they're close enough that you can talk about all of the relevant specifics in detail.
Some ways you might feel when you're about to break the Ninth Guideline:
Some ways a Ninth Guideline request might look:
10. Hold yourself to the absolute highest standard when directly modeling or assessing others' internal states, values, and thought processes.
Of the ten guidelines, this is the one which is the least about epistemic hygiene, and the most about social dynamics.
(It's not zero about epistemic hygiene, but it deserves extra emphasis for pragmatic reasons rather than philosophical ones.)
In short:
If you believe that someone is being disingenuous or crazy or is in the grips of a blindspot—if you believe that you know, better than they know themselves, what's going on in their head (or perhaps that they are lying about what's going on in their head)—then it is important to be extra cautious and principled about how you go about discussing this fact.
This is important because it's very easy for people to (reasonably) feel attacked or threatened or delegitimized when others are making bold or judgment-laden assertions about the internal contents of their mind/soul/values, and it's very hard for conversation to continue to be productive when one of the central participants is partially or fully focused on defending themselves from perceived social attack.
It is actually the case that people are sometimes crazy. It is actually the case that people are sometimes lying. It is actually the case that people are sometimes mistaken about the contents of their own minds, and that other people, on the outside, can see this more clearly. A blanket ban on hypotheses-about-others'-internals would be crippling to anyone trying to see clearly and understand the world; these things should, indeed, be thinkable and discussible, the fact that they are "rude" notwithstanding.
But by making those hypotheses a part of an open conversation, you're adding a great deal of social and emotional strain to the already-difficult task of collaborative truth-seeking with a plausibly-compromised partner. In many milieus, the airing of such a hypothesis is an attack; there are not a lot of places where "you might be crazy" or "I know more than you about how your mind works" is a neutral or prosocial move. If the situation is such that it feels genuinely crucial for you to raise such a hypothesis out loud, then it should also be worth correspondingly greater effort and care.
(See the zeroth guideline.)
Some simple actions that tend to make this sort of thing go less reliably badly:
For more on this, see [link to a future essay that is hopefully coming from either Ray Arnold or myself].
Some ways you might feel when you're about to break the Tenth Guideline:
Some ways a Tenth Guideline request might look:
(These requests are deliberately written to appear somewhat triggered/hostile, because that's the usual tone by the point such a request needs to be made, and a little bit of leeway on behalf of the beleaguered seems appropriate.)
Appendix: Miscellaneous Thoughts
This post was long, and was written over the course of many, many months. Below are some scattered, contextless snippets of thought that ended up not having a home in any of the sections above.
Some general red flags for poor discourse:
Some sketchy conversational movements that don't fall neatly into the above:
A skill not mentioned elsewhere in this post: the meta-skill of being willing to recognize, own up to, apologize for, and correct failings in any of the above, rather than hiding one's shame or doubling down or otherwise acting as if the problem is the mistake being seen rather than the mistake being made.
Appendix: Sabien's Sins
The following is something of a precursor to the above list of basics; it was not intended to be as complete or foundational as the ten presented here, but rather surgically targeted some of the most frustrating deltas between this subculture's revealed preferences and my own endorsed standards. It was posted to Facebook several years ago; I include it here mostly as a historical curiosity.