"Man stands face to face with the irrational. He feels within him his longing for happiness and for reason. The absurd is born of this confrontation between the human need and the unreasonable silence of the world." - Albert Camus, The Myth of Sisyphus and Other Essays

These days, a little unreasonable silence of the world actually sounds kind of nice! With apologies to Camus, I think a 21st-century Sisyphus would have been condemned to endlessly respond to annoying internet comments.

I'm not talking about outright trolling or doxxing. That's 21st-century Prometheus.

Sisyphus isn't tormented by trollish insults, but by the compulsion to engage in ambiguous forms of effortful and frustrating dialog that he knows won't lead anywhere but feels like he has to participate in anyway. He's responding to comments full of ill-informed overconfidence, self-righteous activist monoperspective, and the crappiest of drive-by criticisms, which I call PONDS ("prickly, opaque, nitpicky, disengaged, and shallow"). He himself, at some point in his life, has written a comment that's just as bad as the stupidity he's responding to, and he angrily empathizes with the other person's state of mind while wishing they'd just listen and learn! Every time he perfects his reply and hits "submit," the conversation rolls back downhill again.

Stopping Out Loud is a way of wedging the boulder of a bad conversation in place. Sisyphus declares that he will no longer be reading or replying to responses to the comment he's about to post. He's leaving the boulder halfway up the hill. Anyone trying to drag him down into a pointless fight or fruitless back-and-forth is, well, S.O.L.

Stopping Out Loud helps Sisyphus avoid the perception of social defeat that comes with letting the other person "have the last word." It also disincentivizes further attention-getting and nastiness, and helps him feel like he's in control of his actions.

Sisyphus sometimes includes extra information when he Stops Out Loud:

  • Why he's stopping.
  • A clear, final statement of the point he wanted to make.
  • Conditions under which he'd revisit the thread.
  • An invitation or disinvitation to get in touch when he and the other person have had a chance to cool off.
  • A suggestion to continue the debate via PM, where it's sometimes easier to speak freely without feeling like the whole internet is looking on.

This is also helpful to the other person, who might just be engaging in Socratic Grilling. It allows them to avoid wasting further effort in crafting a reply, and informs an immature but good-faith debater that they should consider approaching conversation differently next time. The audience also gains information about the effect the conversation had on Sisyphus.


This is obviously bad form, of course, and would be immediately used by the worst parts of the internet commentariat the instant they found out about it. There should not be a norm of concluding the argument with your statement and things being over just because you say so. As you surely know, the internet already has a norm of just walking away from discussions, and you can always just walk away, but no one should give your walking away special status (and that includes you).

Now, it's perfectly acceptable to decide to stop. We always do at some point. Perhaps a better way of doing this sort of thing is just to write words in your exit that you truly believe can stand up for themselves, and then walk away, without making a point that you are doing some special thing.

There are ways to end your participation in a conversation that are bad form. However, I disagree that any of my suggestions here are bad form, much less obviously bad form.

I think that people do make updates to their perceptions of social status based on the manner in which people carry on, as well as end, discussions. Given that this occurs, my goal here is to encourage people to consider how anticipation of these perception issues drives their conversational behavior, to expand the menu of options, and to encourage a more considered decision.

My way of encouraging a more considered decision is to put a concept handle on "stopping out loud," and to articulate some of the motivations and implications of doing so. I hope that people who want to can install it as a thing they consciously consider prior to, or while, engaging in internet conversations that they think have a likelihood of turning negative.

The end result of executing this is that, as you say, you would

write words in your exit that you truly believe can stand up for themselves, and then walk away, without making a point that you are doing some special thing.

Since we disagree on whether it is obviously bad form, I'd like you to consider this: what is the actual way in which the average person would use this? My belief is that the average person will simply use this as a way to 'win' the argument whenever the argument stops being fun for them. If it's not fun, they should stop (unless it is important not to for some reason), but humans are already obsessed with winning social games, so we shouldn't encourage new ways of winning that are neutral as to actual content.

I think I understand the failure mode you're pointing at. It might go something like this:

"Bob posts a well-thought-out analysis of a political topic. Carl replies with a poorly-thought-through but rhetorically powerful comment that Bob's analysis is wrong, he's a bad person for thinking that way, that the whole analysis is causing bad effects in the world, and Carl's not going to stick around to listen to Bob trying to defend himself."

This does seem like a bad outcome. I would argue that this already frequently takes place. In fact, it's in anticipation of Carl's tactic here that Bob might refrain from posting his analysis in the first place.

So promoting "stopping out loud" as a norm seems to asymmetrically offer resources for making sound arguments to the Bobs of the world, since the Carls are already making use of it as a rhetorical conversation-chilling device.

To be clear, I think that the appropriateness of the technique is contextual. For example, it's probably not something that Bob would want to do if Carl is a family member with whom Bob wants to continue a relationship. Instead, I see it as most useful in internet discussions with strangers, where the only things to protect are freedom of conversation and individual wellbeing.

Low-effort drive-by commenting is already a thing, we agree on that, but people should still be mindful of giving status to things that make it work much better. Your proposal isn't just a proposal that people can and should stop before letting things take over their lives too much, but a proposal to give the act of opting out of replying enhanced social status (compared to where it is now). Anything that gives out social status for an act drastically increases how often it is used, and while you think that's a good thing (since it encourages people to stop rather than be obsessive), I can't help but think the unintended side effect is the stronger and more relevant bit.

I didn’t call for enhanced status, I just explored the status implications and motivations for why people do what they do.

I also see this as a prescription for people who have valuable things to say, but who are presently stopped by the anticipation of 10 rounds of hostile energy-sucking rhetoric-slinging in response. This isn’t a proposal that everybody switch from debate to “high effort drive-by commenting.”

Perhaps this post would have benefitted from linking to my “Oath of Reply” post, where I outline a way of building trust with the other person for a longer and thoughtful debate. I see SOL as one among several contextually-appropriate tools in the toolkit for approaching conversations on the internet with more skill. While that message could be dumbed down and make some outcomes worse rather than better, I thought that in this case, it was better to write a short post than to put in a lot of boilerplate about when this should or should not be used.

I can't see the original post as anything else. If it were just about feeling free to stop, the advice would be more like: "You don't need to worry about stopping. There isn't any problem with just letting those words you already wrote stand for themselves. Stopping isn't the same as losing." Instead, this is about making public declarations of 'being done' while trying to force having the 'last word'.

I'm not saying you should never publicly state you are done, just that it shouldn't be the whole thing.

If it were just about feeling free to stop, the advice would be more like: "You don't need to worry about stopping. There isn't any problem with just letting those words you already wrote stand for themselves. Stopping isn't the same as losing."

I don't agree that this is the better or only way to communicate this advice, and I don't think it's wholly accurate. Here is a different way to outline the structure of my argument:

Point 1: There is a real, important, and immutable causal link between perseverance in debate and perceptions of "winning." "Winning" includes perceptions of being factually correct, morally right, and politically strong. "Losing" includes perceptions of being factually incorrect, morally wrong, and politically weak.

  • Point 1a: Terminating participation in hostile debate is typically perceived as losing.
    • This is where I disagree with you: for many audiences and contexts, stopping, depending on the manner in which it is done, is the same as losing, whether or not we would like for this to be the case.
  • Point 1b: Participating in and losing a hostile debate is usually more psychologically and politically painful than not participating in the first place.

Point 2: Stopping out loud is a way of terminating debate that is not as much perceived as losing (although it might be perceived as rude), and is less psychologically and politically painful than persevering in a hostile debate.

  • Point 2a: Stopping out loud can be done in a hostile or non-hostile manner, and in a hostile or non-hostile context.

Point 3: Stopping out loud is a "symmetric weapon." Anybody can do it, and it does not intrinsically favor truth-seeking.

  • Point 3a: People who are seeking to win a debate at all costs, and who are not truth-seeking, will tend to employ all symmetric weapons at their disposal. One such weapon is the creation of a hostile context, and another weapon is stopping out loud in a hostile manner. We should start by assuming that the symmetric weapon of "stopping out loud" is already being used to near the maximum extent by non-truth-seekers.
  • Point 3b: People who are truth-seeking may find that the anticipated psychological and political pain of participating in, then exiting, and thereby appearing to "lose" debates dissuades them from participating in the first place. Hence, the potential for a hostile response by non-truth-seekers has a chilling effect on speech by truth-seekers. This is bad.
  • Point 3c: People who are truth-seeking are likely not using symmetric weapons to nearly the same extent as non-truth-seekers.
    • Conclusion 1: Truth-seekers should consider non-hostile "stopping out loud" in contexts that have been made hostile by others as a way to participate in hostile environments without subjecting themselves to unnecessary psychological abuse and political cost. They should accept the necessity of being rude in order to do this.
    • Conclusion 2: Because "stopping out loud" is already being used to nearly the maximum extent by non-truth-seekers, but not by truth-seekers, promoting non-hostile uses of it in the hostile environments that motivate its use in the first place will have an asymmetrically truth-seeking effect.

This argument suggests that all non-hostile symmetric weapons should be used more often by truth-seekers, in order to overcome the chilling effect created by non-truth-seekers using symmetric weapons in a hostile manner. I don't know if I subscribe to this strong claim. I happen to think that stopping out loud feels like a non-problematic symmetric weapon to promote among those attempting to seek or speak the truth in hostile epistemic environments. I would have to consider individual other symmetric weapons and see if I think this same argumentative structure applies to them.

Please feel free to address any aspect of this argument that you disagree with - there's a wide surface area for disagreement.

I suspect that ultimately, my credence in these points comes down to my personal experience and intuition, and that shouldn't count much from an outside perspective. One example is that I edit a couple Wikipedia articles, one on a charged, current political topic. We have experienced a lot of hectoring on the talk page over our decisions about what information to include. In reading these harsh criticisms (on a page with >100k daily pageviews getting regular news coverage), there's a combination of a sense of importance in getting the information right, an endless potential stream of abuse from anonymous commenters making different versions of the same poorly-thought-through critique over and over again, and a limited personal budget of time and energy to put into addressing them.

In this context, stopping out loud is how I create space for myself to address the criticisms without feeling psychologically pulled to go ten rounds with some anonymous IP address calling my character into question for reporting mainstream media coverage and scientific findings, while still making it feel meaningful to give a substantive rebuttal to their critique in case they, or other readers, are actually interested in the issue and might happen to consider my argument.

My fear is that if I ghost in the face of this hostility, the hostile critics will take over the talk page and create a perception among other editors that it's too scary to resist them and to keep on the page the true, important, and Wikipedia-standards-compliant information these hostile critics wish to suppress.

A third option would be to delete the criticisms as they come in, but that would be the rankest of censorship, far worse in my mind than any other alternative.

Caught between the problems of endless abusive and likely unproductive debate, the perception of weakness that ghosting creates, and the terrible nuclear option of censorship, I choose stopping out loud as the least bad option among these four choices.

If you can recommend a better alternative, I would absolutely be receptive to it!

You make a pretty good argument. I still think you are wrong. While many of your claims are very close to correct, I do think that they are all subtly wrong in ways I unfortunately can't really explain besides stating that they don't cohere entirely with the way the world seems to work to me, and I trust my intuition on these issues.

There is one claim I can confidently disagree with, though: that the 'symmetric weapons' in debate are all already used to near their maximum extent by the bad actors. I think they use such weapons to the maximum extent that makes them feel like they are winning. The difference here is subtle, but I think the disagreement is crucial. If it felt enough like winning, every post would involve 'stopping out loud', and we are nowhere near that point. What factors make a person feel like they are winning instead of losing?

There are too many to express, but I think the important variable here is expected social approval. I expect you agree on its importance, because it is the load-bearing part of your entire reasoning for why good actors need more ability to stop when a conversation turns bad, so that they don't expect social disapproval for participating at all.

Bad actors still have most of the same drives as good actors. So far as we know, they are human, and thus are made the same way. Increase expected social approval for something, and it feels even more like winning, and people will do it more, in exactly the same way if they are good or bad actors. This could perversely make the good actors behave more like the bad ones too, which further muddies the waters.

You could make the point that they care about social approval from specific social circles, and thus encouraging something in one social circle doesn't necessarily increase it in others, but I would claim imitation learning would take care of that if it ever started to gain social approval in places it hasn't already. I believe the bad guys will definitely imitate any low-effort thing that the good ones do (but not the parts that actually make things better).

Unfortunately, trolls are a fact of life. I would guess that the most effective way to remove the bad actors is to give out approval only for the things that directly make discourse better, but that can lock out a lot of good-faith actors too who do make things better overall, especially since it is a goodhartable metric, so it isn't really a solution.

Side note: I do always expect some manner of social disapproval when I post, and it does bother me very much, and it is a major reason I only comment in short bursts, but I just don't think this advice has more benefits than detriments to the overall discussion, even though it might make me more comfortable if I thought it was okay.

Hm. It seems like a key difference in our approach is that you are considering large extrapolations from present equilibria of indiscriminate use of SOL. By contrast, I am considering modest extrapolations of discriminate use of SOL.

You also focus on the idea of “raising the social status” of SOL, while I am focused on highlighting the availability of SOL as a tool to apply to social problems.

It’s sort of like I wrote a short manual of how to use a butterknife to pry open a soda pop tab, and you responded with “if you raise the social status of using butterknives as tools, people will start sticking them in electric sockets or getting WD-40 all over the rest of the silverware.”

This is kind of a terrible analogy, so I apologize - it doesn’t really capture the full nuance here.

It seems to me that the “dark arts” and “symmetric weapons” metaphors have two interpretations. One is that they should be eschewed by rationalists, both to avoid arms race dynamics and as a way of “tying yourself to the mast of truth.” Without wanting to put words in your mouth, this seems like it might be your view?

Another is that they should be used discriminately by rationalists, alongside and to defend and spread the deployment of asymmetric weapons. The "arms race" metaphor might be misleading. This is my view.

I’ve been interested in exploring these two views for some time, and maybe now is the right time to start doing so with a bit more focus.

Your approach here is inherently about raising the status of the tool. Everyone already knows of its existence as a possible tool. It mostly isn't used because it is very rude, and thus low status to many (anything that seems to violate your own self-concept seems low-status). There is, so far as I can tell, no other paradigm in which this post could even conceivably have the effect you clearly desire. The funny thing about this discussion is that I'm usually the one discounting the meaningfulness of status for causing things. (Side note, status is a proxy for the things people actually care about, and I'm not entirely sure what some of those things are, or how to weight the ones I am aware of.)

Your example is clearly meant to be absurd, and is thus not useful. My argument is clearly more like: 'People are currently fighting with fists. You propose that responsible people use brass knuckles to end fights, and then walk away. If we raise the status of brass knuckles as a way of ending fights, people will start using brass knuckles a lot more in fights, whether or not they are otherwise acting responsibly.' Sure, people already use things like brass knuckles to fight, and indeed much worse things, but I'd rather not escalate many of the fights beyond fist fights. The analogy is far from a perfect fit too, but it is much closer. In the real world, how do people tend to escalate to end a fight when they aren't willing to just walk away? (I haven't been in one since I was a child, and even then, few.)

Dark arts is definitely more what I would call them, because of the implications of how it corrupts you, but I didn't make that central to the argument I actually wrote down originally. I do think that a lot of average people would get very tempted by easy winning instead of the hard work of trying to understand and be understood. It is an important part of my resistance to the concept, though.

(It should perhaps be noted that I have very poor social skills, perhaps because I could almost be mistaken for a hermit, and really don't control how I come across most of the time. I often do not know what tone my comments will be interpreted under.) I have observed an awful lot of how people behave, though. People actually behave quite well under neutral conditions, but adding things that seem neutral can often end that neutrality of incentives. Most people are genuinely trying to be good, but will quickly respond to incentives against that. 'Winning' without effort is an extremely tempting incentive, especially when you expect the other side to use it if you don't. We need better norms of discussion, not better ways to claim victory.

My only disagreement with calling it an arms race is that I think the very first steps will have significant negative effects too.

It is, of course, important not to keep wasting your time in a discussion that isn't worth it, but I worry more about other things. There obviously are people that my concerns don't help, but I hesitate to make things predictably worse because it might help something else. This is why I might be mistaken for conservative even though I am very aware of things that genuinely need to be made better, and know ways society could easily do them. Under your norm, there might very well be good people having good conversations that wouldn't otherwise occur, but I expect it would make various factions even less willing to communicate.

I have, not often, but many times, had good conversations that clearly seemed like they were going nowhere good, but only because I usually don't just end things when I think I won or when I get frustrated. These will even occur after I have decided that my discussion partner and I couldn't even possibly have anything left to say to each other.

Relatedly, I don't surrender in multiplayer games, and have often had good results from that. I don't actually play them often unless other family members are involved, though. Someone could probably make a good post on our disagreement using a multiplayer games analogy, but that someone probably isn't me.

Your approach here is inherently about raising the status of the tool. Everyone already knows of its existence as a possible tool. It mostly isn't used because it is very rude, and thus low status to many (anything that seems to violate your own self-concept seems low-status). There is, so far as I can tell, no other paradigm in which this post could even conceivably have the effect you clearly desire.

It seems like for you, SOL is a common, low-status technique in the general population, which I am proposing to raise to neutral status. For me, SOL is an uncommon, neutral-status technique in the rationalist/truth-seeking population, which I am attempting to make more common.

We could gather data on the commonness and status perceptions of SOL-type comments, but right now, we are relying on our intuitions. I don't see a way of making progress in this regard without empirical study.

The "fight" analogy doesn't work for me. In most fights, it is intrinsically bad for either person to be hurt, though subjectively good for selfish reasons to be the winner.

By contrast, in debates in which one side is objectively right and the other objectively wrong, it is objectively better for the right side to win, for the common good.

I see SOL as being a tool for standing up to verbal intimidation or cheap criticism by speaking your truth, then walking away.

For example, imagine an English schoolteacher who hates science fiction. He frequently makes remarks during class panning the entire genre. One of his students is an aspiring sci-fi author, and spends the whole year writing stories that please the schoolteacher. At the end of the year, he finally works up his courage and turns in the sci-fi story he's most proud of. The schoolteacher calls him in after class, and starts systematically making fun of every science fiction element in the student's story. When he finishes, the student makes a well-thought-out defense of sci-fi as a genre, and of his own story as a work of literature. He concludes by saying "I've heard enough of your point of view, and I'm not going to listen to or respond to any more of your criticisms of science fiction or my stories." Then he leaves the room.

To me, there is nothing low-status or rude about the student's use of SOL in this scenario. This scenario is also the central example of where I advocate making SOL more visible and available as a tool.

I have, not often, but many times, had good conversations that clearly seemed like they were going nowhere good, but only because I usually don't just end things when I think I won or when I get frustrated. These will even occur after I have decided that my discussion partner and I couldn't even possibly have anything left to say to each other.

I have also had this experience. I don't advocate using SOL every time a conversation seemed net negative in expectation. Ultimately, I think that the best way to promote healthier relationships with the internet is to make people aware of positive technique/situation/goal combinations. This is what I'm trying to do here.

I think you're worried that by spreading this technique/situation/goal combination, it also promotes the use of the technique for other goals and situations where we might not want it to be used. For example, we might worry that a student who sees the sci-fi student standing up to his teacher might then turn around and tell his math teacher "I've heard enough of you telling me that the F I got on my exam is my fault for not studying. I'm not going to listen to or respond to any more of your criticisms of me turning in work from Chegg."

That would be a bad outcome! And it's possible that lifting a universal, widely understood ban on SOL would cause more of the bad outcomes than the good outcomes.

My overall perspective is that it's better to share net-positive technique/situation/goal combinations, and accept the risk of misuse, than to suppress "risky techniques" like SOL entirely for fear they are misused. But I am not sure how to make progress toward consensus from what ultimately seems to be a difference in our empirical intuitions, and possibly in our moral valuations.

In the scenario of the teacher who hates science fiction, you expect us, the LW readers, to know in advance that science fiction isn't so bad. It's obvious to us that the teacher is arguing in bad faith.

But that's like saying that it's okay for a policeman to search someone without a warrant, and using as an example a case where it's obvious to the reader that the person is guilty. One of the big problems with such a rule is that you can't limit it only to cases where someone is obviously guilty, or guilty at all.

The key difference between the student vs. policeman examples is that the policeman (in a country of laws) operates within a well-defined framework for using their powers that they are explicitly told they should put first and foremost in their job duties.

By contrast, there is much more ambiguity as to what constitutes appropriate feedback from teacher to student, and from student to teacher. There is also (in the example I gave) a fairly unambiguous breach of teacher communication norms with the student, in that the teacher is depicted as "systematically making fun of" the student's entire story. Navigating these breaches and ambiguities is the heart of the argument here.

A more apt comparison, in my mind, would be an American policeman deciding to use force to arrest someone when that policeman can see that the person they're arresting was in the middle of attacking them. In America, it would not be appropriate to use force to arrest someone who was not resisting arrest. Sometimes, police officers do use force in ways that are widely viewed as inappropriate or criminal. However, we still rely on those officers to make judgment calls about whether force is appropriate, and we can meaningfully discuss what officers should do in various types of concrete scenarios (e.g., "what if the arrestee is already on the ground and in cuffs, but still struggling?"), despite realizing that in an actual scenario, we have to take the cop's word for it that the person was "still struggling."

I mostly agree with your characterizations of my perspective in this post, so I'll focus elsewhere.

I think that your rejection of the fight analogy is shortsighted. Debates as fights are a well-worn analogy for a reason. You even extensively use the term 'weapon' yourself in analogy. We can't very well discuss weapons if we can't discuss fighting. Also, you are clearly interested in what makes people win or lose in a 'debate', and those are largely the same things that make people win in a fight, just more status-oriented. (Without winning or losing, instead of a debate it is simply a conversation. People then add winning and losing back into conversations to make them debates, because we're perverse that way.) People definitely get emotionally injured in debates at times too.

We have debates instead of collaborative truth seeking because it is easier that way. Fights (whether physical or analogical) are often easier (due to simplicity) than coming to a resolution of our conflicts through non-violent talking. In a debate, we then use our understanding of how others will score us to decide how well we are doing in the fight (which may or may not be followed up by actual judging from others as feedback.)

It does seem pretty unlikely that we will resolve the question of what would happen if your advice were to become popular just by discussing it, though I think the result is very obvious. Trying to get information even on how popular the technique currently is and what status it has seems like it would require extensive formal study, which means this will likely remain unresolved. (There is probably some research somewhere on this, but who knows where.)

I would like to note that I disagree strongly with you on when and how, in your analogy of science fiction and a teacher, the conflict is likely to be occurring. I see it more as: the teacher says something about science fiction being inherently unrealistic (likely without having even mentioned they would give lower grades on science fiction) when giving a student a B- on an otherwise well-written assignment involving writing a coherent and realistic story, and the student fires back that the kinds of writing the teacher favors are unrealistic (which may even be true), so the B- is clearly invalid, and refuses to take any feedback on the issue. Other students may then copy the technique of just ignoring what the teacher says, feeling fully justified.

Also, I am not suggesting 'suppressing' the technique, just not enhancing its use. It is a thing that is bad but currently under control, and I would not like to see its downsides more thoroughly influencing the state of discourse.

You seem like a Kantian, searching for a universalizable law that makes society operate well. For Kant, it was never permissible to lie. In his view, behaviors were either permissible in all contexts or permissible in none. The moral code society should follow was the set of rules that could be imposed universally, context-free, and still leave society functioning well. He bit the bullet: if an axe murderer was at the door, demanding you tell him where his intended victim was hiding, you were morally forbidden from lying to him (although I don't know whether you were required to tell him what he wanted to know).

Hence, when I offer a situation in which (to my mind) the student was clearly the victim of a year-long campaign by a teacher with an axe to grind to degrade his literary talents and preferences, your response is to consider a different situation, in which the student is rejecting sound advice, or at least advice worthy of consideration, from a teacher with a respectful tone.

If SOL helps protect one's well-being against the axe-grinding teacher, but also impairs our ability to receive useful feedback from constructive criticism, and if the common good depends more on receptivity to constructive criticism than on defense against destructive criticism, then we have to reject SOL entirely. It is not universalizable.

I'm a utilitarian, not a Kantian.

As a utilitarian, I allow that we shouldn't always evaluate individual actions on their own merits. Instead, we sometimes need to create rules that are net utility enhancing. However, I think that it's best when rules are as contextually-informed as possible, while Kant thought they should be minimally contextually informed, or really evaluated in a context-free manner, as in the axe murderer scenario.

Sometimes, we deal with the problem where contexts are misinterpreted, deliberately or accidentally. In situations like this, I think the right thing to do is lean toward more general, "outside-view," less contextual rules. Sometimes, the problem of context interpretation is so huge that we have to impose a crude, context-free rule to deal with the problem.

For example, the vast majority of people who might like to bring sharp objects onto an airplane are probably people who want them for nonviolent purposes. We could consider a rule that it's permissible to bring sharp objects onto planes, as long as you don't intend to hijack the aircraft. However, the problem of terrorism is so important that we impose a blanket ban on sharp objects, entirely ignoring contexts like "this is an old lady with her knitting needles, she obviously wants to pass the time on the plane knitting a sweater." This is the right thing to do to make rules about what you can bring on planes, in my view. It happens to align with the Kantian tendency to make blanket rules.

By contrast, in the school setting, I think that there exist both students who reject constructive criticism, and teachers who treat their students badly. I think that ultimately, both student and teacher, in collaboration with their social networks, must interpret context and decide what behaviors seem right and wrong in fielding conflicts. Hence, I think that the decision for students in these situations to SOL is neither obviously right nor obviously wrong. It would be contextual, and I think that the student and teacher have to be the interpreters of that context.

Likewise in internet comments. There is typically no outside authority to appeal to, and the aspects of the conversation that inform its overall context are ambiguous. Participants therefore must ultimately make contextual decisions for themselves, to decide whether or not SOL is appropriate.

Since I can see contexts where SOL would be both appropriate and inappropriate, I think it is basically morally neutral, and it feels to me like a status-neutral move that is not inherently rude. The key, then, is to parse these contexts and try to generalize to identify when SOL is appropriate. If it is not a common/visible technique in situations where it would be appropriate and helpful, then the thing to do is to raise awareness of how to apply it in that context. This is why I wrote the post. I didn't describe the context too elaborately, because I thought that for most readers the problem was coming up with the SOL technique, not figuring out what sort of context I meant.

As a note, I engage with fight-related metaphors partly because I am trying to engage with previous LW-linked writings on the subject, and partly because it has taken a while to figure out a more appropriate metaphor that captures how I think about these issues. Now that I have, I would probably prefer to drop the fight metaphor, because I think that an "intimidation" and "context interpretation" metaphor captures better how I parse this whole issue.

Accusations of Kantianism, huh? Definitely not Kantian, but not utilitarian either. Kantian ethics are, frankly, stupid (though honorable in their own way), since they don't take actual circumstances into account. We can't escape the muddle by condensing every category of moral question into a single answer apiece. Utilitarianism is clearly incalculable in the vast majority of cases, and thus usually a way to justify its user's own prejudices, whether those are good or ill. (Utilitarianism doesn't actually care whether the person using it has good or ill intentions, since utility isn't even defined.) In what way is it superior to just saying, 'use math when appropriate'? In circumstances where either one really does work, they're pretty simple, so sure, use them. Pragmatic use of simple solutions is a virtue. (Have I given enough evidence that I believe in virtue ethics yet?)

I believe your arguments for SOL use idealized settings because SOL only works at all in those circumstances, and because it is easier to think about where your paradigm works, even when arguing in good faith. I see easy arguments for why SOL is clearly counterproductive even in your analogy, but you believe the math somehow works out in its favor there (perhaps because the semester is ending?) without clearly explaining why the people involved would only use it then. That really doesn't seem like human nature, and human nature doesn't go away to make the math easier. (If you want to be truly utilitarian about it, you need math, which needs a full study of the effects, which is completely impractical.)

SOL is not morally neutral, for the obvious reason that it will have definite results if introduced into the repertoire, and further results if encouraged, and those results will clearly lean away from neutrality. If a given intervention will foreseeably have certain moral effects, it is not neutral. The color blue is not a moral issue, but outlawing it, or mandating it, would both be clearly immoral. SOL itself likely does not have a huge effect, but it will definitely have one; we're just arguing about which way it will go.

What actual rule do you have that would keep the effects you want from being clearly outweighed by the undesirable ones, even from a utilitarian standpoint? SOL is a generalized win-any-argument tool that contributes nothing to the argument itself. And no, you can't just say that people can apply utilitarianism, since they can't calculate the result (especially in real time), and any calculations will clearly be biased by the obvious self-justifying features of human nature.

Under what circumstances do you know it is good to use? In which of those is it superior to other tools you could use? Under what circumstances should you not use it? None of these are supplied, and I believe that is because it is too difficult to know when using it would be appropriate; people naturally supply answers that we know are wrong ("whenever someone won't just admit your argument is better"). That, coupled with obvious and significant negative effects, means we should lean very strongly toward other tools.

The problem with viewing things through "context interpretation" is that context is only shared within (relatively) very small sub-groups of humanity. It is vastly harder than you think. For example, I would guess the hardest parts of learning a new language aren't grammar, meaning, and syntax; they are knowing what will be understood, what you will be expected to understand, and which things will be elided at a given point. Thus, in order to make things work as you expect outside of that sub-group (which may even be just one person), you have to supply the full context.

If this wasn't the case, I could simply make my posts:

Point A

Point B

Point C

without further elaboration, and you could reply

X

Y

Z

and so on.

The easiest way would be to provide a rule for classifying contexts under your rule, but even that has been rather problematic; see deontology vs. utilitarianism vs. virtue ethics in the first place, where we can't even share very sparse meta-contexts efficiently (though it is worth understanding the moral frameworks used by the vast majority of people who don't use pure intuition, of course). Deontology and utilitarianism both require a lot of explanation when taught to college students, and virtue ethics only doesn't because religious people speak of it so frequently (though many people who believe in virtue ethics, such as myself, are not religious).

You could probably come up with a first approximation, but then you'd have to figure out whether that rule could even make sense across the four different moral frameworks (even if everyone was trying to be good). (Ironically, it almost certainly wouldn't work for deontology, since you would formulate it as an approximation of utilitarianism.) If it didn't, you'd then have a duty to determine whether that mattered, and, if you kept the rule anyway, why it didn't.

You could come up with separate rules for each approach, but then all of them would have to still be good when interacting with each other.

I think your main issue with my critiques is that I think things can always get much worse, that this is much more likely than getting much better in the short term if we just do whatever sounds right at the moment, and when changing things, we need to be very careful about that. (Despite that, things do tend to get much better over time because people genuinely do try to improve our lives.)

Whereas you (seem to) think it is easy to make things better, that things are unlikely to get much worse, and marginal care will prevent things from getting worse, and so we should rush ahead and improve faster.

I'd like to try and orient this dialogue around the areas where I think we might make progress. There are two aspects of your response that I can't agree with you on, which I'll list without necessarily feeling excited about focusing further conversation on them (but feel free to pursue if you wish). IMO:

  • All moral schemes face profound calculability/evaluation challenges, and all of them deal with this with some practical heuristic.
  • I've done a fair bit of work to define specific contexts in which I think SOL is appropriate; I think that both of us have made quite a bit of reference to abstract and idealized scenarios. I don't think we should pick on each other about that.

I agree with you that context interpretation is very challenging. This is why I advocate it as a key part of deciding when it's appropriate to SOL, and have tried to spell out a couple such contexts here in some detail. We can suggest some rules of thumb that might generalize across contexts:

  • Anticipation of abusive or deliberately time-wasting or baiting responses from the other person.
  • Anonymity, questionable motives, and minimal investment to participate for debaters, which makes it difficult to build trust and employ other solutions.
  • Common good in sharing information, with little private benefit.
  • A conversation that has an important objective/scientific/mathematical aspect, so that the right answer isn't purely subjective.
  • Openness of the conversational forum to all participants.

I think your main issue with my critiques is that I think things can always get much worse, that this is much more likely than getting much better in the short term if we just do whatever sounds right at the moment, and when changing things, we need to be very careful about that. (Despite that, things do tend to get much better over time because people genuinely do try to improve our lives.)

Whereas you (seem to) think it is easy to make things better, that things are unlikely to get much worse, and marginal care will prevent things from getting worse, and so we should rush ahead and improve faster.

I actually agree with you that it's easy to make things worse by taking ill-considered actions, and yet people typically put in enough consideration, and are responsive enough to the feedback they get, that things move in a positive direction over time.

However, I think that in this case, the "ill-considered action" that seems "easy" is to stay silent in the face of hostility. This is widespread, common advice: block people on Facebook who are unpleasant, avoid zones on the internet that are populated by trolls, divert the conversation away from charged topics, avoid nasty people as much as possible.

The hard truth, to me, is that we actually have a social duty in some situations to enter hostile environments and confront people who are trying to suppress the truth with honesty and forthrightness. Yet it is too big an ask to require people to do this, while also shouldering large amounts of abuse and time-wasting. Hence, tools are required that balance these competing demands, and SOL is one of them. I think that it is intrinsically morally neutral, but when used in this way, SOL is virtuous relative to the alternative of avoiding the conversation or staying silent. There may be a better alternative in some contexts, and for some people, but not always.

I would find it most compelling if you engaged with one or the other of the specific scenarios I outlined (the sci-fi teacher, under the circumstances I was suggesting of a cruel teacher discouraging a young student over the teacher's genre preferences; or the case of editing Wikipedia), and suggested an alternative approach that both allows engagement with the hostile environment while also protecting the person from abuse and time-wasting.

I don't really think it will help, but even in your version of the student teacher interaction, it is still obviously negative. 

1) Of minor but obvious importance, it is very confrontational and not beneficial. It in no way helps the student, the teacher, or any bystanders who hear about it. All it does is add a moment of satisfaction at telling off the teacher, and it continues burning bridges (which the student obviously doesn't care about, but that isn't good utilitarian thinking for a student, nor does it take into account the feelings of the teacher for more universalist types). (Telling off a person who is acting badly can be virtuous, though.)

2) Crucially, the student knew exactly what they were doing by writing science fiction for a class where the teacher was obviously telling them not to write science fiction. They picked this fight intentionally. (This is not necessarily a bad thing, but it needs to be justified too.)

The student could instead have given a defense of sci-fi in class when the teacher kept saying negative things about it; that would have been more on point without being the one to actually escalate. (It would not necessarily have gone any better, of course.)

3) There is no internal difference, for the student, between the teacher mocking the work for being science fiction and the teacher making good points about typical failure modes of science fiction and of the student's writing that the student didn't want to hear, so there is no principled way to determine which circumstance this really is. The student can only see these things from the inside. It would be much more obvious if the student had previously asked the teacher their exact reasons for being so anti-sci-fi. (This is not an expected solution, though, since it runs counter to most people's natures.)

I am definitely not defending the teacher in this setup, of course. Taking the hypothetical as objective, it is an extremely unvirtuous way for a teacher to act. I don't have a strong objection to telling off the teacher here, which this definitely is, but setting things up to go this way is clearly bad, and the student did that at least semi-intentionally.

A defense of sci-fi put this way is much less likely to be heard. Telling the other students why sci-fi is good has a much better chance of working. If everyone is persuaded of the incorrectness of the teacher's bigotry against the genre, that pretty effectively neuters the effect of the teacher's view, without the negative side effects.

The Wikipedia example is harder to engage with because I have little experience as an editor of any kind, and only occasionally use Wikipedia for more than a moment, but it is well known that Wikipedia has some despotic editors who rule by fiat and keep pages wrong. It makes perfect sense that when people want to fix something clearly wrong with a page, they will be very frustrated when not allowed to do so. Many people will be wrong in thinking so, but so will many of the people assuming they don't have a point just because it looks similar to a previously rejected one.

Better than SOL for Wikipedia would probably be a reasonable policy on the level of evidence required to change a particular controversial part of a page, along with a list of reasons why it is the way it is (this should also include self-reference: the level of evidence required to change the policy on when to change that part). This should be based on the current level of evidence that established the content as it stands (including both the evidence for and against). An explanation of what kinds of argument are given what kinds of weight, and why (though this can be more general), would also be good.

This is much more difficult in the short term than just stopping, but I don't really see this as much different than what the policy should be even on parts where there is no disagreement on completely non-controversial pages. This should lead to higher quality pages. A reasonable policy on when things deserve a response would also be a good thing. (These all can obviously be abused too.)

Off-topic trolling can of course just be removed, but on-topic trolling should just be treated according to the strength of its (non)arguments. People are notoriously bad (because of normal human motivations) at telling the difference between good and bad versions of on-topic engagement in places intended for mass use. Your job as an editor is to neutrally use good evidence, not just that which you agree with. (I expect based on our exchanges that you find this frustrating, but probably do a reasonable job.)

Does engaging with your exact examples really help bridge the gap between our positions or increase comprehension? I think it is clear how things could be done better than SOL: by never letting it get to that point, by being thorough, and/or by simply sidestepping it and making the arguments to the other people.

(Side note: By this point in our many exchanges, I do think I've fairly effectively determined why I had an initial reaction that it was obviously wrong to endorse the technique, and I believe my reasons are clearly correct. There are simply better approaches, and even just being okay with bare stopping is still better in most circumstances.)

I agree with you that the student/teacher example can be interpreted as an unwise and unsympathetic decision on the student's part. My view is that the student could have felt stifled for a year by his creative writing teacher. After he took a risk to show what sort of writing he's really passionate about, the teacher dressed him down in an unnecessarily cruel way. The student could cultivate an ability to react dispassionately to such destructive criticism and simply walk away.

On the other hand, it seems like a valuable skill to me to reject stifling and to vocally stand up for oneself in such situations. After all, it's not just the literary criticism that's at play here, but also the power dynamic in the student/teacher relationship, which the teacher seems to be abusing. While it's hard to predict the results for this particular student, his teacher, and the other students this teacher will teach in the future, the student's reaction, if I observed it in real life, would probably read to me as a benefit to himself and others, as well as pleasantly admirable irrespective of utility (I am not a perfect utility maximizer).

I can therefore appreciate your contrasting viewpoint, but can't agree that it is "obviously" negative. I think it is probably positive, but does have a chance of a negative result. While I've outlined some rules of thumb that would generalize across situations, I do think that the particularities of each case matter a lot, and we'll never have enough detail from a hypothetical example to resolve it conclusively (and what would be the point of trying?).

Better than SOL for Wikipedia would probably be a reasonable policy on the level of evidence required to change a particular controversial part of a page, along with a list of reasons why it is the way it is (this should also include self-reference: the level of evidence required to change the policy on when to change that part). This should be based on the current level of evidence that established the content as it stands (including both the evidence for and against). An explanation of what kinds of argument are given what kinds of weight, and why (though this can be more general), would also be good.

... This should lead to higher quality pages. A reasonable policy on when things deserve a response would also be a good thing. (These all can obviously be abused too.)

I agree that it might be possible to design complex systems to structure the way argument and evidence get synthesized into Wikipedia changes. SOL, to me, is a tool to use when such structured systems do not exist, and will not for the foreseeable future.

This is much more difficult in the short term than just stopping, but I don't really see this as much different than what the policy should be even on parts where there is no disagreement on completely non-controversial pages.

I didn't really follow this sentence, could you say it a different way?

Your job as an editor is to neutrally use good evidence, not just that which you agree with. (I expect based on our exchanges that you find this frustrating, but probably do a reasonable job.)

This is an uncalled-for dig in the context of a pretty deep and friendly discussion. I'd appreciate it if you didn't repeat this sort of move in the future, or I will terminate my side of the conversation.

I think it is clear how things could be done better than SOL: by never letting it get to that point, by being thorough, and/or by simply sidestepping it and making the arguments to the other people.

Remember that half the purpose of SOL is to avoid being on the receiving end of verbal abuse in the context of a hostile environment. The hostile environment is assumed, so "never letting it get to that point" is kind of a dodge. Being thorough and side stepping may decrease the attack surface, and that is beneficial. However, none of these fully address the problem of verbal abuse online, and it's not hard to find examples of very thorough arguments addressed to the audience resulting in verbal abuse in the comments anyway. Hence, I don't see these as full solutions for the problem SOL is meant to address.

By contrast, "bare stopping" does not accomplish the other half of the purpose of SOL, which is to convey information to the audience.

So to me, none of your proposed solutions fully solve the problem that SOL does. They are not "better approaches" to the problem SOL is meant to solve, any more than a screwdriver is a "better approach" to nailing than a hammer. They are useful in related circumstances, they do similar things, and yet you cannot use them interchangeably.

I reject your claim that what I said about needing to neutrally use good evidence as an editor was a dig. It was simply an explanation of what the duties of an editor are (and I very clearly stated that I believed you were likely to do it well, unless you're somehow objecting to me calling it frustrating, which is just obvious?), and it was necessary when talking about what the policy should be. This failure of communication is exactly what I was talking about with people being unable to tell good from bad engagement on a topic. You assumed I was making a dig when I was doing nothing of the sort. (It is, of course, possible someone else would say exactly that as a dig.) My arguments should be evaluated on their own merits, not on whether someone guesses they are made for good or bad reasons.

Your duties as an editor do, in fact, determine what you should be doing, for the most obvious of reasons. The neutral use of good evidence is vital to the official reasons for Wikipedia's very existence. (It is very explicit that there should be neutrality, though that is often not true in practice since everyone involved is human.)

I'll expand the sentence you wanted rephrased into an equivalent paragraph: I am advocating a difficult policy on how to deal with changing Wikipedia pages, explicitly based on levels of evidence for controversial parts. It would be much easier to use the SOL approach than to do this. The levels-of-evidence policy, however, is not significantly different from what I believe should be the policy on every page, not just controversial ones. It does not simply seem like a burden (though it clearly is one), but a way to improve Wikipedia. (It is also very much in keeping with the intended spirit of Wikipedia.)

I do agree that telling off a person acting badly in the sci-fi example can be a good thing (and would note that the satisfaction it engenders is a positive under utilitarianism), and I'm perfectly willing to accept that you want it to be that way as a reason for supporting it; we'll simply disagree about whether that makes up for all of the obvious negative effects. You don't seem to have engaged at all with the idea that intentionally letting it get to that point was a clearly bad thing under most moral systems (which was the main point I made). Your assumption that things have to go all the way to the end is what seems like a 'dodge' to me.

To make something clear: how we get to a specific situation matters. When one side has intentionally provoked a conflict, they don't get to act the same way they reasonably could have if they hadn't; this is true even if the side that provoked it was morally correct to do so. It matters that the student chose this conflict. The teacher acts very badly, but is merely continuing things. When you choose conflict rather than cooperation in a shared endeavor, you have a stronger moral duty to use means that are clearly good, to the extent possible. The student had better means available.

I made multiple points on alternatives to bare stopping, all of which I endorse, and all of which clearly relate to the further purposes you claim SOL has. In the sci-fi example, I advocated making an actual defense of sci-fi earlier in the chain of events, preferably a calm and well-reasoned one, rather than intentionally violating the expectations of an assignment to pick a fight. Then, having said their piece, the student can stop. The ball is then in the teacher's court, and the teacher can choose how to react. The student can then stop if there is no point in dialogue with the teacher, without having lost and without having implemented a bad policy, or continue if it is worthwhile to do so.

In the Wikipedia case, making a real policy on how, when, and why engagement will happen is the correct approach. This is much more in keeping with the ideals of a community-made encyclopedia than simply ignoring people whose viewpoints pattern-match ones that have been annoying before, and it provides a good reason why failure to respond is not a good proxy for losing. (Care must be taken that it not become a despotic policy.)

Online harassment is a guaranteed thing. SOL gives the simple troll more attention than it deserves, compared to simply ignoring it. More dedicated trolls can simply stalk you across the site, and SOL would encourage them, so unless you enjoy being trolled, it doesn't seem worth using. You can continue talking to other participants in the discussion while ignoring them, if you are still interested in the rest of the discussion.

The only place explicitly stopping that way has a use case is when you have a duty to be involved with all genuine participants in the conversation (this duty can be self-imposed), and need to explain why you think someone isn't, but that is definitely a niche case.

Alternately, you can simply restrict the size and emotional tenor of your responses, and let the conversation die naturally in many cases. This doesn't work if others are feeding the troll, but most trolls rely on the reactions of the people they are trolling directly.

If there is a whole pack of trolls, simply avoiding them seems clearly best (unless it rises to the level where you can and should remove them).

Low-effort drive-by comments are a case where ghosting is way better than responding with respect. As AAB says, it's always contextual: there are places and people where effort is justified and you can raise the level with some effort. But in a whole lot of online conversations, multiple groups of people are discussing things at different levels, and you should just ignore the ones that don't interest you.

I feel like this has been recommended before on LW (perhaps https://www.lesswrong.com/posts/Bgt236xrd5j2uAnQJ/leaving-orbit#JEHHGMtvy9fGjwDGw).  For most cases where you're part of the forum/community/discussion intentionally and still intend to do so in the future, I recommend being very clear that you will likely read further responses, but not respond.  Getting the last word and then cutting the connection is pretty rude.

I also recommend, in some cases, just ghosting.  There are plenty of low-value discussions where there's either no goodwill to maintain, or no actual value in further commentary, and it's best to just leave and not come back.  If Sisyphus doesn't actually care if the rock is up or down, he should just wander over to some more pleasant part of Hades.

Part of this is identifying and de-fanging the compulsion portion of the experience, and moving back toward goal-directed behaviors.

I agree with you that just ghosting often seems preferable to stopping out loud, but not always. There are occasions where a bully is truly at risk of dominating a meaningful online conversational space, and maintaining your presence and voice there without wasting energy on the bully is actually important. I also agree that reading but not responding is an option. The challenge, to me, is the case where you expect the response to be abusive but still feel it's important to say your say. I think it's important to reinforce that writing a comment needn't come with a commitment to put yourself through the experience of reading the abusive reply.

Part of this is identifying and de-fanging the compulsion portion of the experience, and moving back toward goal-directed behaviors.

Yes, that is one of the central motivations for using this technique. I'm a verbal thinker, and for obscure psychological reasons, it really helps me to write or say what I'm doing, rather than just thinking it internally or just doing it. Maybe that's uncommon, but I personally think it's common enough to address.

The linked Socratic Grilling post has as an explicit thesis that, when you're tempted to treat the grilling as a hostile act, you should "never, ever say that."

There is no duty to have infinite stamina to continue a discussion, but if one side runs out of stamina, it doesn't mean that the other side automatically did something wrong.

One of the chief reasons to keep responding is the fear that the audience's inference will be adverse to the quitter. The fear of what that inference might be can often be greater than what it actually is. The other point to balance is that some of the audience will draw a conclusion of "taking the bait," i.e. a negative inference from engaging in the discussion at all. But even if the inference from stopping would be negative, sometimes it is better to stop digging the hole you are in and accept your losses.

I agree with each of your points.

when you're tempted to treat the grilling as a hostile act, you should "never, ever say that"

The internet typically contains more role ambiguity and outright hostility than a classroom setting. So I think we need different norms.

Ideally, I'd like to promote thought-patterns like:

"This conversation feels 40% hostile and 80% unproductive. That's too much for me to continue, but I want to say my say to get my point across to any good-faith readers, or to my conversation partner in case I'm misinterpreting them. So I'm going to Stop Out Loud at the end of this comment." 

Really fantastic conflict mitigation advice. Many people do not stop to think about why they are communicating, and without that organized direction, it is easy to fall into more emotional patterns of communication. People's desires to appeal to others come out more, and constructive discourse is no longer possible. Stopping Out Loud does a great job of recognizing these dynamics and putting them into words in a way that really halts what otherwise often becomes a runaway train.

One point I would add is that when it comes to online trolling or criticism, more often than not the other person is not interested in constructive dialog in the first place and is typically reacting to their own ideas being challenged. People with that kind of intention usually respond negatively to SOL and will attempt to reframe your attempt to de-escalate as you "running away," or as refusing to voice your opinions in front of others because you realize how "weak" your arguments are. While I am not sure there is a way to avoid this, it is helpful to be prepared for this type of response when you do invoke SOL.

I'm glad you like it! I agree with you that employing SOL means that you're giving up the ability to have the last word. My thought is that you already lack that control in most cases. SOL is about recognizing that fact, then responding to it as best you can prior to extricating yourself from the conversation.

I've tried this before, but I usually end up breaking my word and getting sucked back into the argument lol. Nowadays the way I avoid these problems is by refusing to go anywhere near any place on the internet where people are talking about anything controversial! But yes, this is very good advice.

I understand, but I have to point out that you've kind of failed at that here. A number of aspects of this community are, in fact, controversial. That's bound to happen if you spend any significant time anywhere that allows comments at all, especially somewhere with a norm of people coming to provide their own in-depth commentary.
