I liked Duncan Sabien's Basics of Rationalist Discourse, but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's).

Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'all's perspectives.

The basics of rationalist discourse, as I understand them:

 

1. Truth-Seeking. Try to contribute to a social environment that encourages belief accuracy and good epistemic processes. Try not to “win” arguments using symmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments like soldiers at all.

 

2. Non-Violence: Argument gets counter-argument. Argument does not get bullet. Argument does not get doxxing, death threats, or coercion.[1]

 

3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models. Where possible, avoid saying stuff that you expect to lower the net belief accuracy of the average reader; or failing that, at least flag that you're worried about this happening.

As a corollary:

3.1. Meta-Honesty. Make it easy for others to tell how honest, literal, PR-y, etc. you are (in general, or in particular contexts). This can include everything from "prominently publicly discussing the sorts of situations in which you'd lie" to "tweaking your image/persona/tone/etc. to make it likelier that people will have the right priors about your honesty".

 

4. Localizability. Give people a social affordance to decouple / evaluate the local validity of claims. Decoupling is not required, and indeed context is often important and extremely worth talking about! But it should almost always be OK to locally address a specific point or subpoint, without necessarily weighing in on the larger context or suggesting you’ll engage further.

 

5. Alternative-Minding. Consider alternative hypotheses, and ask yourself what Bayesian evidence you have that you're not in those alternative worlds. This mostly involves asking what models retrodict.

Cultivate the skills of original seeing and of seeing from new vantage points.

As a special case, try to understand and evaluate the alternative hypotheses that other people are advocating. Paraphrase stuff back to people to see if you understood, and see if they think you pass their Ideological Turing Test on the relevant ideas.

Be a fair bit more willing to consider nonstandard beliefs, frames/lenses, and methodologies, compared to (e.g.) the average academic. Keep in mind that inferential gaps can be large, most life-experience is hard to transmit in a small number of words (or in words at all), and converging on the truth can require a long process of cultivating the right mental motions, doing exercises, gathering and interpreting new data, etc.

Make it a habit to explicitly distinguish "what this person literally said" from "what I think this person means". Make it a habit to explicitly distinguish "what I think this person means" from "what I infer about this person as a result".

 

6. Reality-Minding. Keep your eye on the ball, hug the query, and don’t lose sight of object-level reality.

Make it a habit to flag when you notice ways to test an assertion. Make it a habit to actually test claims, when the value-of-information is high enough.

Reward scholarship, inquiry, betting, pre-registered predictions, and sticking your neck out, especially where this is time-consuming, effortful, or socially risky.

 

7. Reducibility. Err on the side of using simple, concrete, literal, and precise language. Make it a habit to taboo your words, do reductionism, explain what you mean, define your terms, etc.

As a corollary, applying precision and naturalism to your own cognition:

7.1. Probabilism. Try to quantify your uncertainty to some degree.

 

8. Purpose-Minding. Try not to lose purpose (unless you're deliberately creating a sandbox for a more free-form and undirected stream of consciousness, based on some meta-purpose or impulse or hunch you want to follow).

Ask yourself why you're having a conversation, and whether you want to do something differently. Ask others what their goals are. Keep the Void in view.

As a corollary:

8.1. Cruxiness. Insofar as you have a sense of what the topic/goal of the conversation is, focus on cruxes, or (if your goal shifts) consider explicitly flagging that you're tangenting or switching to a new conversational topic/goal.[2]

 

9. Goodwill. Reward others' good epistemic conduct (e.g., updating) more than most people naturally do. Err on the side of carrots over sticks, forgiveness over punishment, and civility over incivility, unless someone has explicitly set aside a weirder or more rough-and-tumble space.[3]

 

10. Experience-Owning. Err on the side of explicitly owning your experiences, mental states, beliefs, and impressions. Flag your inferences as inferences, and beware the Mind Projection Fallacy and Typical Mind Fallacy.

As a corollary:

10.1. Valence-Owning. Err on the side of explicitly owning your shoulds and desires. Err on the side of stating your wants and beliefs (and why you want or believe them) instead of (or in addition to) saying what you think people ought to do.

Try to phrase things in ways that make space for disagreement, and try to avoid socially pressuring people into doing things. Instead, as a strong default, approach people with an attitude of informing and empowering them to do what they want.

Favor language with fewer and milder connotations, and make your arguments explicitly where possible, rather than relying excessively on the connotations, feel, fnords, or vibes of your words.


A longer, less jargony version of this post is available on the EA Forum.

 

 

  1. ^

    Counter-arguments aren't the only OK response to an argument. You can choose not to reply. You can even ban someone because they keep making off-topic arguments, as long as you do this in a non-deceptive way. But some responses to arguments are explicitly off the table.

  2. ^

    Note that "the topic/goal of the conversation" is an abstraction. "Goals" don't exist in a vacuum. You have goals (though these may not be perfectly stable, coherent, etc.), and other individuals have goals too. Conversations can be mutually beneficial when some of my goals are the same as some of yours, or when we have disjoint goals but some actions are useful for my goals as well as yours.

    Be wary of abstractions and unargued premises in this very list! Try to taboo these prescriptions and claims, paraphrase them back, figure out why I might be saying all this stuff, and explicitly ask yourself whether these norms serve your goals too.

    Part of why I've phrased this list as a bunch of noun phrases ("purpose-minding", etc.) rather than verb phrases ("mind your purpose", etc.) is that I suspect conversations will go better (on the dimension of goodwill and cheer) if people make a habit of saying "hm, I think you violated the principle of experience-owning there" or "hm, your comment isn't doing the experience-owning thing as much as I'd have liked", as opposed to "own your experience!!".

    But another part of why I used nouns is that commands aren't experience-owning, and can make it harder for people to mind their purposes. I do have imperatives in the post (mostly because the prose flowed better that way), but I want to encourage people to engage with the ideas and consider whether they make sense, rather than just blindly obey them. So I want people to come into this post engaging with these first as ideas to consider, rather than as commands to obey.

  3. ^

    Note that this doesn't require assuming everyone you talk to is honest or has good intentions.

    It does have some overlap with the rule of thumb "as a very strong but defeasible default, carry on object-level discourse as if you were role-playing being on the same side as the people who disagree with you".


2. Non-Violence: Argument gets counter-argument. Argument does not get bullet. Argument does not get doxxing, death threats, or coercion.[1]

I'd want to include some kinds of social responses as unacceptable as well. Derision, mockery, acts to make the argument low status, ad hominems, etc. 

You can choose not to engage with bad arguments, but you shouldn't engage by not addressing the arguments and instead trying to execute some social maneuver to discredit it. 

I would expand "acts to make the argument low status" to "acts to make the argument low status without addressing the argument". Lots of good rationalist material, including the original Sequences, includes a fair amount of "acts to make arguments low status". This is fine—good, even—because it treats the arguments it targets in good faith and has a message that rhymes with "this argument is embarrassing because it is clearly wrong, as I have shown in section 2 above" rather than "this argument is embarrassing because gross stupid creeps believe it".

Many arguments are actually very bad. It's reasonable and fair to have a lower opinion of people who hold them, and to convey that opinion to others along with the justification. As you say, "you shouldn't engage by not addressing the arguments and instead trying to execute some social maneuver to discredit it". Discrediting arguments by social maneuvers that rely on actual engagement with the argument's contents is compatible with this.

Derision, mockery, acts to make the argument low status, ad hominems, etc.

I don't want to include these in "Non-Violence", because I'm thinking of that rule as relatively absolute. By comparison, "derision" and "mockery" should probably be kept to a minimum, but I'm not going to pretend I've never made fun of the Time Cube guy, or that I feel super bad about having done so.

I also think sometimes a person tries to output "light-hearted playing around", but someone else perceives it as "cruel mockery". This can be a hint that the speaker messed up a bit, but I don't want to treat it as a serious sin (and I don't want to ban all play for the sake of preventing this).

Similarly, "acts to make the argument low status" is a bit tricky to encode as a rule, because even things as simple as "generating a good counter-argument" can lower the original argument's status in many people's eyes. (Flawed arguments should plausibly be seen as lower-status than good arguments!)

And "ad hominem" can actually be justified when the topic is someone's character (e.g., when you're discussing a presidential candidate's judgment, or discussing whether to hire someone, or discussing whether someone's safe to date). So again it's tricky to delimit exactly which cases are OK versus bad.

I do think you're getting at an important thing here, it's just a bit tricky to put into words. My hope is that people will realize that those sorts of things are discouraged by:

1. Truth-Seeking: "Try not to 'win' arguments using symmetric weapons"

6. Reality-Minding: "Keep your eye on the ball, hug the query, and don’t lose sight of object-level reality."

9. Goodwill: "Err on the side of carrots over sticks, forgiveness over punishment, and civility over incivility"

10.1. Valence-Owning: "Favor language with fewer and milder connotations, and make your arguments explicitly where possible, rather than relying excessively on the connotations, feel, fnords, or vibes of your words."

(If people think it's worth being more explicit here, I'd be interested in ideas for specific edits.)

I don't want to include these in "Non-Violence", because I'm thinking of that rule as relatively absolute. By comparison, "derision" and "mockery" should probably be kept to a minimum, but I'm not going to pretend I've never made fun of the Time Cube guy, or that I feel super bad about having done so.

I've made fun of people on Twitter, but:

  1. Don't think that reflects well on me as a rationalist
  2. Don't think such posts are acceptable content for LessWrong.

You may not feel bad about mockery (I don't generally do so either), but do you think it reflects well on you as a rationalist?

 

I don't want to include these in "Non-Violence", because I'm thinking of that rule as relatively absolute.

I agree these aren't acts of violence, but I listened to the rest of the post and didn't hear you object to them anywhere else. This felt like the closest place (in that bad argument gets counterargument and doesn't get any of the things I mentioned).

 

Similarly, "acts to make the argument low status" is a bit tricky to encode as a rule, because even things as simple as "generating a good counter-argument" can lower the original argument's status in many people's eyes. (Flawed arguments should plausibly be seen as lower-status than good arguments!)

An appropriately more nuanced version would be something like: "acts to make an argument low status for reasons other than its accuracy/veracity, and conformance to norms (some true things can be presented in very unpleasant/distasteful ways [e.g. with the deliberate goal of being maximally offensive])".

You may not feel bad about mockery (I don't generally do so either), but do you think it reflects well on you as a rationalist?

I like this example! I do indeed share the intuition "mocking Time Cube guy on Twitter doesn't reflect well on me as a rationalist". It also just seems mean to me.

I think part of what's driving my intuition here, though, is that "mocking" sounds inherently mean-spirited, and "on Twitter" makes it sound like I'm writing the sort of low-quality viral personal attack that's common on Twitter.

"Make a light-hearted reference to Time Cube (in a way that takes for granted that Time Cube is silly) in a chat with some friends" feels pretty unlike "write a tweet mocking and deriding Time Cube", and the former doesn't feel to me like it necessarily reflects poorly on me as a rationalist. (It feels more orthogonal to the spirit of rationality to me, like making puns or playing a video game; puns are neither rationalist nor anti-rationalist.)

So part of my reservation here is that I have pretty different intuitions about different versions of "tell jokes that turn on a certain claim/belief being low-probability", and I'm not sure where to draw the line exactly (beyond the general heuristics I mentioned in the OP).

Another part of my reservation is just that I'm erring on the side of keeping the list of norms too short rather than too long. I'd rather have non-exhaustive lists and encourage people to use their common sense and personal conscience as a guide in the many cases that the guidelines don't cover (or don't cover until you do some interpretive work).

I worry that modern society is too norm-heavy in general, encouraging people to fixate on heuristics, patches, and local Prohibited Actions, in ways that are cognitively taxing and unduly 'domesticating'. I think this can make it harder to notice and appropriately respond to the specifics of the situation you're in, because your brain is yelling a memorized "no! unconditional rule X!" script at you, when in fact if you consulted your unassisted conscience and your common sense you'd have an easier time seeing what the right thing to do is.

So I'm mostly interested in trying to distill core aspects of the spirit of rationalist discourse, in the hope that this can help people's common sense and conscience grow (/ help people become more self-aware of aspects of their common sense and conscience that are already inside themselves, but that they aren't lucid about).

I suspect I've left at least one important part of "the spirit of rationalist discourse" out, so I'm mainly nitpicking your suggestions in case your replies cause me to realize that I'm missing some important underlying generator that isn't alluded to in the OP. I care less about whether "mockery" specifically gets called out in the OP, and more about whether I've neglected an underlying spirit/generator.

Maybe Goodwill is missing a generator-sentence that's something like "Don't lean into cruelty, or otherwise lose sight of what your conscience or common sense says about how best to relate to other human beings."

"acts to make an argument low status for reasons other than its accuracy/veracity, and conformance to norms (some true things can be presented in very unpleasant/distasteful ways [e.g. with the deliberate goal of being maximally offensive])".

Yeah, I like that more. I still worry that "low status" is vague and different people conceive of it differently, so I have the instinct that it might be good to taboo "status" here. "Conformance to norms" is also super vague; someone would need to have the right norms in mind in order for this to work.

I also don't want to call minor things like ad hominems "violent"!

(Actually, possibly I'm already watering down "violence" more than is ideal by treating "doxxing" and "coercion" as violent. But in this context I do feel like physical violence, death threats, doxxing, and coercion are in a cluster together, whatever you want to call it, and things like mockery are in a different cluster.)

It seems to me that forms of mockery, bullying, social ostracization, etc., are actually in the same cluster. They all attack the opponent with something other than an argument, be it physical or not. If bullying doesn't count as violence, then the problem seems to be with labeling the cluster "violence". Maybe rule 2 shouldn't be called "non-violence", but "non-aggressiveness" or something like that.

They all attack the opponent with something other than an argument, be it physical or not.

And what, precisely, is an "attack"? Can you taboo that word and give a pretty precise definition, so we know what does and doesn't count?

I've seen people on the Internet use words like "bullying", "harassment", "violence", "abuse", etc. to refer to stuff like 'disagreeing with my political opinions'.

(The logic being, e.g.: "Anti-Semites have historically killed people like me. I claim that political opinion X (e.g., about the Israeli-Palestinian conflict) is anti-Semitic. Therefore you expressing your opinion is (1) a thing I should reasonably take as a veiled threat against me and an attempt to bully and harass me, and (2) a thing that will embolden anti-Semites and thereby further endanger me.")

I'm not saying that this reasoning makes sense, or that we should totally avoid words like "bullying" because they get overused in a lot of places. But I do take stuff like this as a warning sign about what can happen if you start building your social norms around vague concepts.

I'd rather have norms that either mention extremely specific concrete things that aren't up for interpretation (see how much more concrete "death threats" is than "bullying"), or that mention higher-level features shared by lots of different bad behavior (e.g., "avoid symmetric weapons").

And what, precisely, is an "attack"? Can you taboo that word and give a pretty precise definition, so we know what does and doesn't count?

How about "hurting a person or deminishing their credibility, or the credibility of their argument, without using a rational argument"? This would make it acceptable when people get hurt by rational arguments, or when their credibility is diminished by such an argument. The problem seems to be when this is achieved by something else than a rational argument.

Maybe this is not the perfect definition of the cluster which includes both physical violence and non-physical aggression, but the pure "physical violence" cluster seems in any case arbitrary. E.g. social ostracization can be far more damaging than a punch in the guts, and both are bad as a response to an argument insofar as they are not themselves forms of argument.

I've seen people on the Internet use words like "bullying", "harassment", "violence", "abuse", etc. to refer to stuff like 'disagreeing with my political opinions'.

Yes, people do that, but them confusing disagreement with bullying doesn't mean disagreement is bullying. And the fact that disagreement is okay doesn't mean that bullying, mockery, etc. is a valid discourse strategy.

Moreover, the speaker can identify actions like mockery by introspection, so avoiding it doesn't rely on the capabilities of the listener to distinguish it from disagreement. The vagueness objection seems to assume the perspective of the listener, but rule 2 applies to us in our role as speakers. It recommends what we should say or do, not how we should interpret others. (Of course, there could be an additional rule which says that we, as listeners, shouldn't be quick to dismiss mere disagreements as personal attacks.)

How about "hurting a person or deminishing their credibility, or the credibility of their argument, without using a rational argument"?

"Hurting a person" still seems too vague to me (sometimes people are "hurt" just because you disagreed with them on a claim of fact), "Diminishing... the credibility of their argument, without using a rational argument" sounds similar to "using symmetric weapons" to me (but the latter strikes me as more precise and general: don't try to persuade people via tools that aren't Bayesian evidence for the truth of the thing you're trying to persuade them of).

"A rational argument", I worry, is too vague here, and will make people think that all rationalist conversation as to look like their mental picture of Spock-style discourse.

The problem seems to be when this is achieved by something other than a rational argument.

A lot of things can hurt people's feelings other than rational arguments, and I don't think the person causing the hurt is always at fault for those things. (E.g., maybe I beat someone at a video game and this upset them.)

but the pure "physical violence" cluster seems in any case arbitrary. E.g. social ostracization can be far more damaging than a punch in the guts, and both are bad as a response to an argument insofar they are not themselves forms of argument.

The point of separating out physical violence isn't to say "this is the worst thing you can do to someone". It's to draw a clear black line around a case that's especially easy to rule completely out of bounds. We've made at least some progress thereby, and it would be a mistake to throw out this progress just because it doesn't solve every other problem; don't let the perfect be the enemy of the good.

Other sorts of actions can be worse than some forms of physical violence consequentially, but there isn't a good sharp black line in every case for clearly verbally transmitting what those out-of-bound actions are. See also my reply to DragonGod.

Even "this is at least as harmful as a punch in the gut" isn't a good pointer, since some people are extremely emotionally brittle and can be put in severe pain with very minor social slights. I think it's virtuous to try to help those people flourish, but I don't want to claim that a rationalist has done a Terrible Thing if they ever do something that makes someone that upset; it depends on the situation.

I feel specifically uncomfortable with leaning on the phrase "social ostracization" here, because it's so vague, and the way you're talking about it makes it sound like you want rationalists to be individually responsible for making every human on Earth feel happy, welcome, and accepted in the rat community. "Ostracization" seems clearly bad to me if it looks like bullying and harassment, but sometimes "ostracizing" just means banning someone from an Internet forum, and I think banning is often prosocial.

(Including banning someone because of an argument! If someone keeps posting off-topic arguments, feel free to ban.)

"Hurting a person" still seems too vague to me (sometimes people are "hurt" just because you disagreed with them on a claim of fact),

Even "this is at least as harmful as a punch in the gut" isn't a good pointer, since some people are extremely emotionally brittle and can be put in severe pain with very minor social slights. I think it's virtuous to try to help those people flourish, but I don't want to claim that a rationalist has done a Terrible Thing if they ever do something that makes someone that upset; it depends on the situation.

As I said, if someone feels upset by mere disagreement, that's not a violation of a rational discourse norm.

The focus on physical violence is nice insofar as violence is halfway clear-cut, but it is also fairly useless insofar as the badness of violence is obvious to most people (unlike things like bullying, bad-faith mockery, moral grandstanding, etc., which are very common), and mostly irrelevant in internet discussions without physical contact, where most irrational discourse is happening nowadays, very nonviolently.

I feel specifically uncomfortable with leaning on the phrase "social ostracization" here, because it's so vague, and the way you're talking about it makes it sound like you want rationalists to be individually responsible for making every human on Earth feel happy, welcome, and accepted in the rat community.

That seems to me an uncharitable interpretation. Social ostracization is prototypically something which happens e.g. when someone gets cancelled by a Twitter mob. "Mob" insofar as those people don't use rational arguments to attack you, even if "attacking you without using arguments" can't be defined perfectly precisely. (Something like the Bostrom witch-hunt on Twitter, which included outright defamation, but hardly any arguments.)

If you consistently shunned vagueness, then you couldn't even discourage violence, because the difference between violence and non-violence is gradual; it likewise admits of borderline cases. But since violence is bad despite borderline cases, the borderline cases and exceptions you cited also don't seem very serious. You never get perfectly precise definitions. And you have to embrace some more vagueness than in the case of violence, unless you want to refer only to a tiny subset of irrational discourse.

By the way, I would say banning/blocking is irrational when it is done in response to disagreement (often people on Twitter ban other people who merely disagree with them) and acceptable when the content is off-topic or pure harassment. Sometimes there are borderline cases which lie in between; those are grey areas where blocking may be neither clearly bad nor clearly acceptable, but such grey areas are in no way counterexamples to the clear-cut cases.

love this post. meta-note: it would be really great to have visited link highlighting on by default on lesswrong, to make posts with very heavy referencing like this easier to navigate.

FYI we have a PR up for it (I made it a week ago when you last requested it), but it's not merged in yet.

Feel free to delete this if it feels off-topic, but on a meta note about discussion norms, I was struck by that meme about C code. Basically, the premise that there is higher code quality when there is swearing.

I was also reading discussions on the Linux mailing lists; the discussions there are clear, concise, and frank. And occasionally, people still use scathing terminology and feedback.

I wonder if people would be interested in setting up a few discussion posts where specific norms get called out to "participate in good faith but try to break these specific norms"

And people play a mix-and-match to see which ones are most fun, engaging and interesting for participants. This would probably end in disaster if we started tossing slurs willy-nilly, but sometimes while reading posts, I think people could cut down on the verbiage by 90% and keep the meaning.

Strong upvoted; these also feel closer to the "core" virtues to me, even though there's nothing wrong with Duncan's post.

Flag your inferences as inferences

Cultivating what Korzybski dubbed Consciousness of Abstraction (i.e., not unconsciously abstracting) improves things a lot, e.g., noticing what metaphors are being deployed as part of an argument about the generalizability of your experience. To develop this, I found it useful to first do the easier task of noticing when and how others are abstracting.

I've rewritten this post for the EA Forum, to help introduce more EAs to rationalist culture and norms. The rewrite goes into more detail about a lot of the points, explaining jargon, motivating some of the less intuitive norms, etc. I expect some folks will prefer that version, and some will prefer the LW version.

(One shortcoming of the EA Forum version is that it's less concise. Another shortcoming is that there's more chance I got stuff wrong, since I erred on the side of "spell things out more in the hope of conveying more of the spirit to people who are new to this stuff", rather than "leave more implicit so that the things I say out loud can all be things I feel really confident about".)

(Edit: Already fixed, no longer relevant.)

Try not to “win” arguments using asymmetric weapons (tools that work similarly well whether you're right or wrong).

Should be "symmetric". From Scott's post:

Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys.

You've also repeated the incorrect usage in two comments to this post.

Thanks, fixed!

There is an interesting variation of rule 1 (truth-seeking). According to common understanding, this rule seems to imply that if we argue for X, we should only do so if we believe X to more than 50%. Similarly for rule 3. But recently a number of philosophers have argued that (at least in academic contexts) you can actually argue for interesting hypotheses without believing in them. This is sometimes called "championing", and described as a form of epistemic group-rationality, which says that sometimes individually irrational arguments can be group-rational.

The idea is that truth seeking is viewed as a competitive-collaborative process, which has benefits when people specialize in certain outsider theories and champion them. In some contexts it is fairly likely that some outsider theory is true, even though each individual outsider theory has a much lower probability than the competing mainstream theory. If everyone argued for the most likely (mainstream) theory, there would be too little "intellectual division of labor"; hardly anyone would bother arguing for individually unlikely theories.

(This recent essay might be interpreted as an argument for championing.)

It might be objected that the championers should be honest and report that they find the interesting theory they champion ultimately unlikely to be true. But this could have bad effects for the truth-seeking process of the group: Why should anyone feel challenged by someone advocating a provocative hypothesis when the advocates themselves don't believe it? The hypothesis would lose much of its provocativeness, and the challenged people wouldn't really feel challenged. It wouldn't encourage fruitful debate.

(This can also be viewed as a solution to the disagreement paradox: Why could it ever be rational to disagree with our epistemic peers? Shouldn't we average our opinions? Answer: Averaging might be individually rational, but not group-rational.)

this rule seems to imply that if we argue for X, we should only do so if we believe X to more than 50%

Being an "argument for" is anti-inductive, an argument stops working in either direction once it's understood. You believe what you believe, at a level of credence you happen to have. You can make arguments. Others can change either understanding or belief in response to that. These things don't need to be related. And there is nothing special about 50%.

I don't get what you mean. Assuming you argue for X, but you don't believe X, it would seem something is wrong, at least from the individual rationality perspective. For example, you argue that it is raining outside without believing that it is raining outside. This could e.g. be classified as lying (deception) or bullshitting (you don't care about the truth).

Assuming you argue for X

What does "arguing for" mean? There's expectation that a recipient changes their mind in some direction. This expectation goes away for a given argument, once it's been considered, whether it had that effect or not. Repeating the argument won't present an expectation of changing the mind of a person who already knows it, in either direction, so the argument is no longer an "argument for". This is what I mean by anti-inductive.

Assuming you argue for X, but you don't believe X

Suppose you don't believe X, but someone doesn't understand an aspect of X, such that you expect its understanding to increase their belief in X. Is this an "argument for" X? Should it be withheld, keeping the other's understanding avoidably lacking?

What does "arguing for" mean? There's expectation that a recipient changes their mind in some direction. This expectation goes away for a given argument, once it's been considered, whether it had that effect or not.

Here is a proposal: A argues with Y for X iff A 1) claims that Y, and 2) that Y is evidence for X, in the sense that P(X|Y)>P(X|-Y). The latter can be considered true even if you already believe in Y.
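As a sanity check (assuming 0 < P(Y) < 1, so that both conditional probabilities are defined), condition 2 is equivalent to the more familiar requirement that conditioning on Y raises the probability of X, since P(X) is a weighted average of the two conditionals:

$$P(X) = P(X \mid Y)\,P(Y) + P(X \mid \neg Y)\,P(\neg Y), \qquad \text{hence} \qquad P(X \mid Y) > P(X \mid \neg Y) \iff P(X \mid Y) > P(X).$$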

Suppose you don't believe X, but someone doesn't understand an aspect of X, such that you expect its understanding to increase their belief in X. Is this an "argument for" X? Should it be withheld, keeping the other's understanding avoidably lacking?

I agree, that's a good argument.

The best arguments confer no evidence; they guide you in putting together the pieces you already hold.

Yeah, aka Socratic dialogue.

Alice: I don't believe X.

Bob: Don't you believe Y? And don't you believe If Y then X?

Alice: Okay I guess I do believe X.

The point is, conditional probability doesn't capture the effect of arguments.

It seems that arguments provide evidence, and Y is evidence for X if and only if P(X|Y) > P(X). That is, when X and Y are positively probabilistically dependent. If I think that they are positively dependent, and you think that they are not, then this won't convince you of course.

Assuming you argue for X, but you don't believe X, it would seem something is wrong, at least from the individual rationality perspective.

Belief is a matter of degree. If someone else thinks it's 10% likely to be raining, and you believe it's 40% likely to be raining, then we could summarize that as "both of you think it's not raining". And if you share some of your evidence and reasoning for thinking the probability is more like 40% than 10%, then we could maybe say that this isn't really arguing for the proposition "it's raining", but rather the proposition "rain is likelier than you think" or "rain is 40% likely" or whatever.

But in both cases there's something a bit odd about phrasing things this way, something that cuts a bit skew to reality. In reality there's nothing special about the 50% point, and belief isn't a binary. So I think part of the objection here is: maybe what you're saying about belief and argument is technically true, but it's weird to think and speak that way because in fact the cognitive act of assigning 40% probability to something is very similar to the act of assigning 60% probability to something, and the act of citing evidence for rain when you have the former belief is often just completely identical to the act of citing evidence for rain when you have the latter belief.

The issue for discourse is that beliefs do come in degrees, but when expressing them they lose this feature. Declarative statements are mostly discrete. (Saying "It's raining outside" doesn't communicate how strongly you believe it, except that it's more than 50% -- but again, the fan of championing will deny even that in certain discourse contexts.)

Talking explicitly about probabilities is a workaround, a hack where we still make binary statements, just about probabilities. But talking about probabilities is kind of unnatural, and people (even rationalists) rarely do it. Notice how both of us made a lot of declarative statements without indicating our degrees of belief in them. The best we can do, without using explicit probabilities, is using qualifiers like "I believe that", "It might be that", "It seems that", "Probably", "Possibly", "Definitely", "I'm pretty sure that" etc. See https://raw.githubusercontent.com/zonination/perceptions/master/joy1.png

Examples of truth-seeking making you "give reasons for X" even though you don't "believe X":
- Everyone believes X is 2% but you think X is 15% because of reasons they aren't considering, and you tell them why
- Everyone believes X is 2%. You do science and tell everyone all your findings, some of which support X (and some of which don't support X).

We should bet on moonshots (low chance, high EV). This is what venture capitalists and startup founders do. I imagine this is what some artists, philosophers, comedians, and hipsters do as well, and I think it is truth-tending on net.
But I hate the norm that champions should lie. Instead, champions should only say untrue things if everyone knows the norms around that. Like lawyers in court or comedians on stage, or all fiction. 

Yeah, championing seems to border on deception, bullshitting, or even lying. But the group rationality argument says that it can be optimal when a few members of a group "over focus" (from an individual perspective) on an issue. These pull in different directions.

I think people can create an effectively unlimited number of "outsider theories" if they aren't concerned with how likely they are.  Do you think ALL of those should get their own champions?  If not, what criteria do you propose for which ones get champions and which don't?

Maybe it would be better to use a frame of "which arguments should we make?" rather than "which hypotheses should we argue for?"  Can we just say that you should only make arguments that you think are true, without talking about which camp those arguments favor?

(Though I don't want to ban discussions following the pattern "I can't spot a flaw in this argument, but I predict that someone else can, can anyone help me out?"  I guess I think you should be able to describe arguments you don't believe if you do it in quotation marks.)

I wish Brevity was considered another important rationalist virtue. Unfortunately, it isn't practiced as such, including by me.

In programming, "lines of code" is a cost, not an accomplishment. It's a proxy for something we care about (like the functionality and robustness of a program), but all other things being equal, we'd prefer the number to be as small as possible. Similarly, the number of words in our posts and comments is a cost for the things we actually care about (e.g. legible communication), and all other things being equal, we'd prefer this number to be as small as possible.

These are also related costs: complexity, jargon, parenthetical asides (like the ones in this comment), clarifications, footnotes, ...

That doesn't mean that the costs are never worth paying. Just that they shouldn't be paid mindlessly, and that brevity is too often subordinated to other virtues and goals.

Finally, there are some ways to add and edit text whose benefits imo usually outweigh their costs: like adding outlines and headings, or using formatting.

Yeah I think brevity straightforwardly should be considered one, at least on the margin.

After thinking more about it, declaring something like brevity as a virtue might be outright required, because the other virtues and elements of discourse don't directly trade off against one another. So a perfectionist might try to optimize by fulfilling all of them, at the cost of writing absurdly long and hard-to-parse text. Hence there's value in naming some virtue that's opposed to the others, as a counterbalance, to make the tradeoffs explicit.

I think considering brevity, for its own sake, to be an important rationalist virtue is unlikely to prove beneficial for maintaining, or raising, the quality of rationalist discourse. That's because it is a poorly defined goal that could easily be misinterpreted as encouraging undesirable tradeoffs at the expense of, for example, clarity of communication, laying out of examples to aid in understanding of a point, or making explicit potentially dry details such as the epistemic status of a belief, or the cruxes upon which a position hinges. 

There is truth to the points you've brought up though, and thinking about how brevity could be incorporated into a list of rationalist virtues has brought two ideas to mind:

1. It seems to me that this could be considered an aspect of purpose-minding. If you know your purpose, and keep clearly in mind why you're having a conversation, then an appropriate level of brevity should be the natural result. The costs of brevity, or lack thereof, can be paid as needed according to what best fits your purpose. A good example of this is this post here on lesswrong, and the longer, but less jargony, version of it that exists on the EA forum.

2. The idea of epistemic legibility feels like it includes the importance of brevity while also making the tradeoffs that brevity, or lack thereof, involves more explicit than directly stating brevity as a rationalist virtue. For example a shorter piece of writing that cites fewer sources is more likely to be read in full rather than skimmed, and more likely to have its sources checked rather than having readers simply hope that they provide the support that the author claims. This is in contrast to a longer piece of writing that cites more sources which allows an author to more thoroughly explain their position, or demonstrate greater support for claims that they make. No matter how long or short a piece of writing is, there are always benefits and costs to be considered.

While writing this out I noticed that there was a specific point you made that did not sit well with me, and which both of the ideas above address.

Similarly, the number of words in our posts and comments is a cost for the things we actually care about (e.g. legible communication), and all other things being equal, we'd prefer this number to be as small as possible.

To me this feels like focusing on the theoretical ideal of brevity at the expense of the practical reality of brevity. All other things are never equal, and I believe the preference should be for having precisely as many words as necessary, for whatever specific purpose and context a piece of writing is intended for.

I realize that "we'd prefer this number to be as small as possible" could be interpreted as equivalent to "the preference should be for having precisely as many words as necessary", but the difference in implications  between these phrases, and the difference in their potential for unfortunate interpretations, does not seem at all trivial to me.

As an example, something that I've seen discussed both on here, and on the EA forum, is the struggle to get new writers to participate in posting and commenting. This is a struggle that I feel very keenly as I started reading lesswrong many years ago, but have (to my own great misfortune) avoided posting and commenting for various reasons. If I think about a hypothetical new poster who wants to embody the ideals and virtues of rationalist discourse, asking them to have their writing use as small a number of words as possible feels like a relatively intimidating request when compared to asking that they consider the purpose and context of their writing and try to find an appropriate length with that in mind. The latter framing also feels much more conducive to experimenting, failing, and learning to do better.

To be clear, I didn't mean that all LW posts and comments should be maximally short, merely that it would be better if brevity or a related virtue (like "ease of being read") were considered as part of an equation to balance. Because I currently feel like we're erring towards writing stuff that's far longer than would be warranted if there was some virtue which could counterbalance spending extra paragraphs on buying diminishing returns in virtues like legibility (where e.g. the first footnote is often very valuable, but the fifth is less so).

If I think about a hypothetical new poster who wants to embody the ideals and virtues of rationalist discourse, asking them to have their writing use as small a number of words as possible feels like a relatively intimidating request when compared to asking that they consider the purpose and context of their writing and try to find an appropriate length with that in mind. The latter framing also feels much more conducive to experimenting, failing, and learning to do better.

I actually think that, if the community considered and practiced brevity as one of our virtues, the site would be more welcoming to new posters, not less. The notion of writing my first comment on this site in 2023, rather than 2013, feels daunting to me. Right now I imagine it feels like you have to dot all your i's and cross all your t's before you can get started, whereas I'm pretty sure the standards for new commenters were far lower in the beginning of the site.

And one thing I find particularly daunting, and would imo find even more daunting as a newcomer, is that it feels like the median post and comment are incredibly long. And that, in order to fit in, one also has to go to such great lengths in everything one writes.

I appreciate the clarity and thoroughness of this post. It's a useful distillation of communication norms that nurture understanding and truth-seeking. As someone who has sometimes struggled with getting my point across effectively, these guidelines serve as a solid reminder to stay grounded in both purpose and goodwill. It's reassuring to know that there's an ever-evolving community working towards refining the art of conversation for collective growth.

Valence-Owning

Could you please give a definition of the word valence? The definition I found doesn't make sense to me: https://en.wiktionary.org/wiki/valence

Basically: whether something is good or bad, enjoyable or unpleasant, desirable or undesirable, interesting or boring, etc. It's the aspect of experience that evaluates some things as better or worse to varying degrees and in various respects.


Reality-Minding. Keep your eye on the ball, hug the query, and don’t lose sight of object-level reality.

 

A relevant post: Consume fiction wisely

TLDR: Fiction is often harmful for your mind, and it is often made to manipulate you.

The post got surprisingly controversial. It seems that even in this community many people are disturbed by the idea that watching Hollywood movies or reading fantasy is harmful for cognition.

i maybe should (and probably will not) write my own post about Goodwill. instead i will say in a comment what Goodwill is about, by my definition.

Goodwill, the way i see it, on the emotional level, is basically respect and cooperation. when someone makes an argument, do you try to see what area in ConceptSpace they are trying to gesture at, and then ask clarifying questions to understand, or do you round it up to the nearest stupid position and not even see the actual argument being made? do you see them saying something incoherent and try to parse it, instead of just proving it wrong?

the standard definition of Goodwill does not include the ways in which failure of Goodwill is failure of rationality: failure to see what someone is trying to say, to understand their position and their framing.

civility is good for its own sake. but almost everyone who decides to be uncivil ends up strawmanning their opponents, and ends up with a more wrong map of the world. what may look like forgiveness from outside should, for a rationalist, look from inside like remembering that we expect short inferential distances, that politics wrecks your ability to do math, and that your beliefs filter your perceptions depending on your side in the argument.


i gained my understanding of those phenomena mostly from the Rational Blogosphere, and saw it as part of rationality. there is an important difference between a person executing the algorithm "be civil and forgiving" and a person executing the algorithm "remember biases and inferential distances, and try to overcome them", the latter implemented by understanding the importance of cooperating even after perceived defection in a noisy prisoner's dilemma, and by assuming that communication is hard and miscommunications are frequent, etc.

I think that's right, but in my list I'm trying to factor out non-strawmanning as "alternative-minding", and civility under "goodwill".

I think there are anti-strawmanning benefits to being friendly, but I'm wary of trying to cash out everything that's a good idea as "oh yeah, this is good because it helps individuals see the truth better", when that's not actually true for every good idea.

In this case, I think there are two things worth keeping distinct: the goal of understanding others' views in a discussion, and the goal of making discussion happen at all. Civility helps keep social environments fun and chill enough that people stick around, are interested in engaging, and don't go into the conversation feeling triggered or defensive. That's worth protecting, IMO, even if there's no risk that yelling at people (or whatever) will directly cause you to straw-man them.

so i thought about your comment and i understand why we think about this in different ways.

in my model of the world, there is an important concept: Goodwill. there are arrows that point toward it, things that create goodwill: niceness, being on the same side politically, personal relationships, all sorts of things. there are also things that destroy goodwill, or even move it into the negative numbers.

there are arrows that come out of this Goodwill node in my causal graph. things like System 1 understanding what was actually said, tending to react nicely to things, being able to pass an ITT. some things you can get other ways: people can be polite to people they hate, especially on the internet. but there are things that i have only seen as a result of Goodwill, and correct System 1 interpretation is one of them. maybe it's possible without Goodwill, but i never saw it. and the politeness you get without Goodwill is shallow. people's System 1 notices that in body language, and even in writing.

now, you can dial back on needless insulting and condescension. those are adversarial moves that can be chosen consciously or avoided, even if with effort. but from my point of view, when there is so little Goodwill left, the chance for good discussion is already lost. it can only be bad or very bad. avoiding very bad is important! but my aim in such situations is to leave the discussion when the goodwill comes close to zero, and to have a mental alarm screaming at me if i am ever in the negative numbers, or feel like the other person has negative numbers of Goodwill toward me.

so, basically, in my model of the world, there is ONE node, Goodwill. in the world, these are not different things. you write: "even if there's no risk that yelling at people (or whatever) will directly cause you to straw-man them." but in my model, such a situation is impossible! yelling at people WILL cause you to strawman them.

in my model of the world, this fact is not public knowledge, and my model regarding that is an important part of what i want to communicate when I'm talking about Goodwill.

thanks for the conversation! it's the clearest way i have ever described my concept of Goodwill, and it was useful for me to formulate it in words.
 


Curate. I want to curate this post in the same manner that Raemon curated Basics of Rationalist Discourse. This post contains a list of non-universal/non-standard ways for people to communicate that do allow for better truthseeking. These are two recent posts on the topic, but I'd be keen to see more exploration of how we can communicate better, and how we can quickly get many more new people to pick up these ideals and methods.
