If your website looks like this, people don't need to read your content to tell that you're a crazy person who is out of touch with how he comes off and lacks basic competencies like "realize that this is terrible; hire a professional". Just scroll through without reading any of it, and with your defense against the dark arts primed and ready, tell me how likely it feels that the content is some brilliant insight into the nature of time itself. It's a real signal that credibly conveys how unlikely this person is to have something to say which is worth listening to. Signalling that you can't make a pretty website when you can is dishonest, and the fact that you would be hindering yourself by doing so makes it no better.
When you know what you're doing, there's nothing "dark" about looking like it.
A "steel man" is an improvement of someone's position or argument that is harder to defeat than their originally stated position or argument.
This seems compatible with both, to me. "You're likely to underestimate the risks, and you can die even on a short trip" is a stronger argument than "You should always wear your seat belt because it is NEVER safe to be in a car without a seat belt", and cannot be so easily defeated as saying "Parked in the garage. Checkmate".
Reading through the hyperbole to the reasonable point underneath is still an example of addressing "the best form of the other person's argument", and it's not the one they presented.
I think the conflicting narratives tend to come from different sides of the conflict, and that people generally want the institutions they're part of (and which give them status) to remain high status. It just doesn't always work. What I'm talking about is more like... okay, Chael Sonnen makes a great example here, both because he's great at it and because it's a non-political example. Chael Sonnen is a professional fighter who intentionally plays the role of the "heel". He'll say ridiculous things with a straight face, like telling the greatest fighter in the world that he "absolutely sucks", or telling a story that a couple of Brazilian fighters (the Nogueira brothers) mistook a bus for a horse and tried to feed it a carrot, and sticking to it.

When people try to "fact check" Chael Sonnen, it doesn't matter, because not only does he not care whether what he's saying is true, he's not even bound by any expectation that you'll believe him. The bus/carrot story was his way of explaining that he didn't mean to offend any Brazilians, and that the only reason he said that offensive stuff online is that he was unaware that they had computers in Brazil. The whole point of being a heel is to provoke a response, and to do that all he has to do is have the tiniest sliver of potential truth there and not break character. The bus/carrot story wouldn't have worked if the fighters were from a country clearly more technologically advanced than his own, even though the reality is pretty darn far from "they actually think buses are horses, and it's plausible that Chael didn't know they have computers". If your attempt to call Chael out on his BS is to "fact check" whether he was even there to see a potential bus/horse confusion, or to point out that if anything they're more likely to mistake a bus for a llama, you're missing the entire point of the BS in the first place.
The only way to respond is the way Big Nog actually did, which is to laugh it off as the ridiculous story it is.

The problem is that while you might be able to laugh off a silly story about how you mistook a bus for a horse, people like Chael (if they're any good at what they do) will be able to find things you're sensitive about. You can't so easily "just laugh off" him saying that you absolutely suck even if you're the best in the world, because he was a good enough fighter that he nearly won that first match. Bullshitters like Chael will find the things that are difficult for you to entertain as potentially true and make you go there. If there's any truth there, you'll have to admit to it or end up making yourself look like a fool.

This brings up the other type of non-truthtelling that commonly occurs, which is the counterpart to this. Actually expecting to be believed means opening yourself to the possibility of being wrong and demonstrating that you're not threatened by this. If I say it's raining outside and expect you to actually believe me, I have to be able to say "hey, I'll open the door and show you!", and I have to look like I'll be surprised if you don't believe me once you get outside. If I instead say "How DARE you insinuate that I might be lying about the rain!" and generally take the bait that BSers like Chael leave, I show that it's not that I want you to genuinely believe me so much as I want you to shut your mouth and not challenge my ideas. It's a 2+2=5 situation now, and that's a whole other thing to expect. In these cases there still isn't the same pressure to conform to the truth that exists when you expect to be believed, and your real constraint is how much power you have to pressure the other person into silence/conformity.

The biggest threat to truth, as I see it, is that when people get threatened by ideas they don't want to be true, they try to 2+2=5 at it.
Sometimes they'll do the same thing even when the belief they're trying to enforce is actually the correct one, and it causes just as many problems, because you can't trust someone saying "Don't you DARE question this" even when they follow it up with "2+2=4", and unless you can do the math yourself you can't know what to believe.

To give a recent example, I found a document written by a virology PhD about why the COVID pandemic is very unlikely to have come from a lab, and it was more thorough and covered more possibilities than anything I had yet seen, which was really cool. The problem is that when I actually checked his sources, they didn't all say what he said they said. I sent him a message asking whether I was missing something in a particular reference, and his response was basically "Ah, yeah. It's not in that one, it's in another one from China that has been deleted and doesn't exist anymore", and he went on to cite the next part of his document as if there were nothing wrong with falsely implying that the sources one gives support the point one made, and as if the only reason I could even be asking is that I hadn't read the following paragraph about something else. When I pointed out that conspiracy-minded people are likely to latch on to any little reason not to trust him, and that to be persuasive to his target audience he should probably correct it and note the change, he did not respond and did not correct his document. And he wonders why we have conspiracy theories.

Bullshitters like Chael can sometimes lose (or fail to form) their grip on reality and let their untruths actually start to impact things in a negative way, and that's a problem.
However, it's important to realize that the fuel that sustains these people is the over-reaching attempts to enforce "2+2=whatever I want you to say it does". If you just do the math and laugh it off when he says with a straight face that 2+2=22, there's no more oppressive bullshit left for him to feed on and fuel his trolling.
You don't want your interlocutor to feel like you are either misrepresenting or humiliating him. Improving an argument is still desirable, but don't sour the debate.
There are a couple different things I sometimes see conflated together under the label "steel man".
As an example, imagine you're talking to the mother of a young man who was killed by a drunk driver on the way to the corner store, and whose life could likely have been saved if he had been wearing a seat belt. This mom might be a bit emotional when she says "NEVER get in a car without your seat belt on! It's NEVER safe!", and interpreted completely literally it is clearly bad advice based on a false premise.
One way to respond would be to say "Well, that's pretty clearly wrong, since sitting in a car in your garage isn't dangerous without a seat belt on. If you were to make a non-terrible argument for wearing seat belts all the time, you might say that it's good to get in the habit so that you're more likely to do it when there is real danger", and then respond to the new argument. The mother in this case is likely to feel both misrepresented and condescended to. I wouldn't call this steel manning.
Another thing you could do is to say "Hm. Before I respond, let me make sure I'm understanding you right. You're saying that driving without a seat belt is almost always dangerous (save for obvious cases like "moving the car from the driveway into the garage"), and that the temptation to say "Oh, that rarely happens!"/"it won't happen to me!"/"it's only a short trip!" is so dangerously dismissive of real risk that it's almost never worth trusting that impulse when the cost of failure is death and the cost of putting a seat belt on is negligible. Is that right?". In that case, you're more likely to get a resounding "YES!" in response, even though that not only isn't literally what she said, it also contradicts the "NEVER" in her statement. It's not "trying to come up with a better argument, because yours is shit", it's "trying to understand the actual thing you're trying to express, rather than getting hung up on irrelevant distractions when you don't express it perfectly and/or literally". Even if you interpret wrong, you're not going to get bad reactions, because you're checking for understanding rather than putting words in their mouth, and you're responding to the thing they are actually trying to communicate. This is the thing I think was being pointed at in the original description of "steel man", and is something worth striving for.
I think another distinction worth making here is whether the person "bullshitting"/"lying" even expects or intends to be believed. It's possible to "not care whether the things you say describe reality correctly" and still say them because you expect people to take you seriously and believe you, and I'd still call that lying.
It's quite a different thing when that expectation is no longer there.
I used "flat earthers" as an exaggerated example to highlight the dynamics the way a caricature might highlight the shape of a chin, but the dynamics remain and can be important even and especially in relationships which you'd like to be close, simply because there's more reason to get things closer to "right".

The reason I brought up "arrogance"/"humility" is because the failure modes you brought up of "not listening" and "having obvious bias without reflecting on it and getting rid of it" are failures of arrogance. A bit more humility makes you more likely to listen and to question whether your reasoning is sound. As you mention though, there is another dimension to worry about, which is the axis you might label "emotional safety" or "security" (i.e. that thing that drives guarded/defensive behavior when it's not there in sufficient amounts).

When you get defensive behavior (perhaps in the form of "not listening" or whatever), cooperative and productive conversation requires that you back up and get the "emotional safety" requirements fulfilled before continuing on. Your proposed response assumes that the "safety" alarm is caused by an overreach on what I'd call the "respect" dimension. If you simply back down and consider that you might be the one in the wrong, this will often satisfy the "safety" requirement, because expecting more relative respect can be threatening. It can also be epistemically beneficial for you if and only if it was a genuine overreach.

My point isn't "who cares about emotional safety, let them filter themselves out if they can't handle the truth [as I see it]", but rather that these are two separate dimensions, and while they are coupled they really do need to be regulated independently for best results.
Any time you try to control two dimensions with one lever, you end up with a 1d curve that you can't regulate at all, and which is therefore free to wander without correction.

While people do tend to mirror your cognitive algorithm so long as it is visible to them, it's not always immediately visible, and so you can get into situations where you *have been* very careful to make sure that you're not the one making a mistake, and since that hasn't been perceived you can still get "not listening" and the like anyway. In these kinds of situations it's important to back up and make it visible, but that doesn't necessarily mean questioning yourself again. Often this means listening to them explain their view, and it ends up looking almost the same, but I think the distinctions are important because of the other possibilities they help to highlight.

The shared cognitive algorithm I'd rather end up in is one where I put my objections aside and listen when people have something they feel confident in, and one where when I have something I'm confident in they'll do the same. It makes things run a lot more smoothly and efficiently when mutual confidence is allowed, rather than treated as something that has to be avoided at all costs, and so it's nice to have a shared algorithm that can gracefully handle these kinds of things.
It seems to me that I'm explaining something reasonable, and they're not understanding it because of some obvious bias, which should be apparent to them.
But, in order for them to notice that, from inside the situation, they'd have to run the check of:
TRIGGER: Notice that the other person isn't convinced by my argument
ACTION: Hmm, check if I might be mistaken in some way. If I were deeply confused about this, how would I know?
The fact that the other person isn't convinced by your argument is only evidence that you're mistaken to the extent you'd expect this other person to be convinced by good arguments. For your friends and people who have earned your respect this action is a good response, but in the more general case it might be hard to get yourself to apply it faithfully, because really, when the flat earther isn't convinced, are you honestly going to consider whether you're actually the one that's wrong?
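That "to the extent" qualifier is just a Bayesian update, and a toy calculation makes it concrete. All the numbers below are illustrative assumptions of mine, not anything from the discussion above:

```python
# Toy Bayesian update: how much does "this listener is unconvinced" raise the
# probability that I'm the one who's wrong? The answer depends entirely on how
# likely THIS listener is to be convinced by a good argument.
# All probabilities here are made-up illustrative values.

def p_wrong_given_unconvinced(prior_wrong, p_convinced_if_right, p_convinced_if_wrong):
    """Bayes' rule: P(I'm wrong | listener is unconvinced)."""
    p_unconv_if_right = 1 - p_convinced_if_right
    p_unconv_if_wrong = 1 - p_convinced_if_wrong
    numerator = prior_wrong * p_unconv_if_wrong
    denominator = numerator + (1 - prior_wrong) * p_unconv_if_right
    return numerator / denominator

# A thoughtful friend who usually IS convinced by good arguments:
# their skepticism should move you a lot.
friend = p_wrong_given_unconvinced(0.10, p_convinced_if_right=0.90, p_convinced_if_wrong=0.20)

# A committed flat earther who is almost never convinced either way:
# their skepticism is barely any evidence at all.
flat_earther = p_wrong_given_unconvinced(0.10, p_convinced_if_right=0.05, p_convinced_if_wrong=0.02)

print(f"friend unconvinced:       P(wrong) = {friend:.3f}")   # well above the 0.10 prior
print(f"flat earther unconvinced: P(wrong) = {flat_earther:.3f}")  # barely moves
```

The asymmetry in the two outputs is the whole point: the same trigger ("they're not convinced") warrants very different amounts of self-doubt depending on the listener.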
The more general approach is to refuse to engage in false humility/false respect and make yourself choose between being genuinely provocative and inviting (potentially accurate) accusations of arrogance or else finding some real humility. For the trigger you give, I’d suggest the tentative alternate action of “stick my neck out and offer for it to be chopped off”, and only if that action makes you feel a bit uneasy do you start hedging and wondering “maybe I’m overstepping”.
For example, maybe you’re arguing politics and they scoffed at your assertion that policy X is better than policy Y or whatever, and it strikes you as arrogant for them to just dismiss out of hand ideas which you’ve thought very hard about. You could wonder whether you’re the arrogant one, and that you really should have thought harder before presenting such scoffable ideas and asked for their expertise before forming an opinion — and in some cases that’ll be the right play. In other cases though, you can be pretty sure that you’re not the arrogant one, and so you can say “you think I’m being arrogant by thinking I can trust my thinking here to be at least worth addressing?” and give them the chance to say “Yes”.
You can ask this question because “I’m not sure if I am being arrogant here, and I want to make sure not to overstep”, but you can also ask because it’s so obvious what the answer is that when you give them an opening and invite their real belief they’ll have little option but to realize “You’re right, that’s arrogant of me. Sorry”. It can’t be a statement disguised as a question, and you really do have to listen to their answer and take it in whatever it is, but you don’t have to pretend to be uncertain of what the answer is or what they will believe it to be under reflection. “Hey, so I’m assuming you’re just acting out of habit and if so that’s fine, but you don’t really think it’s arrogant of me to have an opinion here, do you?” or “Can you honestly tell me that I’m being arrogant here?”. It doesn’t really matter whether you say it because “you want to point out to people when they aren’t behaving consistently with their beliefs”, or because “I want to find out whether they really believe that this behavior is appropriate”, or because “I want to find out whether I’m actually the one in the wrong here”. The important point is conspicuously removing any option you have for weaseling out of noticing when you’re wrong, so that even when you are confident that it’s the other guy in the wrong, should your beliefs make false predictions it will come up and be absolutely unmissable.
With close friends or rationalist groups, you might agree in advance that there's a "or I don't want to tell you about what I did" attached to every statement about your life, or have a short abbreviation equivalent to that.
This already exists, and the degree of “or I’m not telling the truth” is communicated nonverbally.
For example, when my wife was early in her pregnancy, we attended the wedding of one of her friends, and a friend noticed that she wasn’t drinking “her” drink and asked “Oh my gosh, are you pregnant!?”. My wife’s response was to smile, say “yep”, and then take a sip of beer. The reason this worked for both 1) causing her friend to conclude that she [probably] wasn’t pregnant and 2) not feeling like her trust was betrayed later is that the response was given “jokingly”, which means “don’t put too much weight into the seriousness of this statement”. A similar response could be “No, don’t you think I’d have told you immediately if I were pregnant?”, again said jokingly, so as to highlight the potential for “no, I suppose you might not want to share if it’s that early”. It still communicates “No, or else I have a good reason for not wanting to tell you”.
If you want to be able to feel betrayed when their answer is misleading, you have to get a sincere sounding answer first, and “refuses to stop joking and be serious” is one way that people communicate their reluctance to give a real answer. Pushing for a serious answer after this is clear is typically seen as bad manners, and so it’s easy to go from joking around to a flat “don’t pry” when needed without seeming like you have anything to hide. Because after all, if they weren’t prying they’d have just accepted the joking/not-entirely-serious answer as good enough.
Understand that the urge to breathe is driven by the body’s desire to rid itself of carbon dioxide (CO2), not (as some assume) your body's desire to take in oxygen (O2).
Interestingly enough, this isn't entirely true. If you get a pulse oximeter and a bottle of oxygen you can have some fun with it.
Because of the nonlinearity in the oxygen dissociation curve, oxygen saturation tends to hold pretty steady for a while and then tank quickly, whereas CO2 discomfort builds more uniformly. In my experience, when I get that really "panicked" feeling and start breathing again, the pulse oximeter on my finger shows my saturation tanking shortly after (there's a bit of a delay, which is useful here for knowing that it's not the numbers on the display causing the distress).
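That nonlinearity is conventionally modeled with the Hill equation, which gives the characteristic sigmoid shape. A minimal sketch, using commonly cited textbook values of n ≈ 2.7 and P50 ≈ 26.9 mmHg (these are generic parameter values, not measurements from the experiments described here):

```python
# Sketch of the sigmoidal oxygen-hemoglobin dissociation curve via the Hill
# equation. This just illustrates the "flat plateau, then rapid drop" shape;
# it is not a model of breath-hold physiology.
# n = 2.7 (Hill coefficient) and p50 = 26.9 mmHg (half-saturation pressure)
# are commonly cited textbook values, assumed here for illustration.

def hill_saturation(pO2_mmHg, p50=26.9, n=2.7):
    """Fractional hemoglobin O2 saturation as a function of O2 partial pressure."""
    return pO2_mmHg ** n / (pO2_mmHg ** n + p50 ** n)

# Saturation stays on a high plateau from 100 down to ~60 mmHg, then falls fast:
for pO2 in (100, 80, 60, 40, 27):
    print(f"pO2 = {pO2:3d} mmHg -> SaO2 = {hill_saturation(pO2):.0%}")
```

The plateau is why the display holds "pretty steady for a while": large drops in partial pressure near the top of the curve barely move saturation, and then small further drops move it a lot.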
If it were just CO2 causing the urge to breathe, CO2 contractions and the urge to breathe should come on in exactly the same way when breathing pure oxygen, and this is not the case. Instead of coming on at ~2-2.5 minutes and being quite uncomfortable, they didn't start until four minutes and were very, very mild. I've broken five minutes when I was training more, and it was psychologically quite difficult. Comparatively speaking, 5 minutes on pure O2 was downright trivial, and at 7 minutes it wasn't any harder. The only reason I stopped the experiment then is that I started feeling narcosis from the CO2 and figured I should do some more research about hypercapnia (too much CO2) before pushing further.
Along those same lines, rebreather divers sometimes drown when they pass out due to hypercapnia, and while you'd think it'd be way too uncomfortable to miss, this doesn't seem (always) to be the case. In my own experiments, rebreathing a scrubberless bag of oxygen did get uncomfortable quickly, but when a blind study was done on it, five out of twenty people failed to notice within 5 minutes that no CO2 was being removed.
At the same time, a scrubbed bag with no oxygen replacement is completely comfortable even as the lights go out, so low O2 alone isn't enough to trigger that panic.
Certainly not in any obvious way like people who suffer repeated blows to the head. There's some debate over whether loss of motor control (they call it "samba" because it's kinda like you start dancing involuntarily) can cause damage that makes it more likely to happen again in the future, but I haven't been able to find any evidence of any damage at all in normal training, and even the samba claim seems to be controversial.