This is an excellent post, thank you for making it. I don't have anything to add to the discussion right now, other than sharing my strategy for potential boundary violations where I can't sufficiently judge whether the worst-case outcome would be benign or traumatizing:

"Unless you tell me not to, I'm going to hug you now."

Works as long as the other party is in a condition to understand speech, because even a <desperate wail> signals me to stop.

To me it sounds as if the "don't steelman" + ITT combination could be achieved by starmanning: https://centerforinquiry.org/blog/how-to-star-man-arguing-from-compassion/

I'll give a quick outline of my own approach to this issue. Disclaimer: This is where I mention that I'm on the autism spectrum, so this is me neckbearding my way out of sucking at all of this.

I'm going with privacy erosion as an example: Someone On The Internet is arguing that there should be no privacy On The Internet (because of the children).

First, I assume that my opponent is arguing in good faith and try to reverse-engineer their mindset.

  • If some of my axioms were <what I guess theirs might be>, would I agree with them? 
    Is there a benefit I'm not seeing? 
    Examples:
    • Assuming that every human being is rotten at the core and that a paedophile is lurking in each of us, could enough surveillance actually make children safer?
    • If I was rotten at the core and a paedophile was waiting to come out, maybe enough surveillance would keep me on the straight and narrow? (In that case, I probably wouldn't admit that to myself and would be very unreasonable about it!)
  • If I had gone through <hypothetical experience>, would this standpoint be viable, cost/benefit-wise? 
    • Maybe they were doxxed by a QAnon mob. The perceived cost of giving up privacy might be zero or even negative to them, but they'd have a lot to gain (the police could finally track down the bad guys!).
    • Maybe they're traumatized in some way. Traumatized people aren't very rational, and they might not care about the cost.

Second, I run the same process assuming they aren't arguing in good faith.
That doesn't mean they're aware of it: humans are extremely good at lying to themselves, myself not excluded (see The Elephant in the Brain).

  • They could be trolling (is this a cry for attention? for help? for sympathy?), or they could be trying to increase their status by lowering mine. (Fortunately, NVC will obliterate both of those patterns and leave me on the moral high ground.)
  • They could be signalling to their ingroup by making the right noises, and this was never an argument.

In any case, they were in a position where saying that stupid/hateful/hurtful/uncharitable/etc thing was the outcome of their optimization strategies: out of all the dialogue options available, they chose this one. People in general can be incredibly stupid (again, myself not excluded), but we are very good at optimizing for something. If I can find out what they've been optimizing for, I can understand their position and probably pass their ITT.

And finally, if the stupid/hateful/hurtful/uncharitable/etc statement was directed at me and I'm emotionally compromised, I go through the final loop:

  • Their statement is about them, not about me.

I find that I never need to be charitable, because I can always provide reasons (not excuses) why people would be acting the way they do.

 

Maybe I'm completely wrong and there are better ways to go about this. If so, I'd love to hear them!