jimmy

Comments

On Changing Minds That Aren't Mine, and The Instinct to Surrender.

Originally, I had earned a reputation on the server for my patience, my ability to defuse heated disagreements and give everyone every single chance to offer a good reason for why they held their positions. That slowed and stopped. I got angrier, ruder, more sarcastic, less willing to listen to people. Why should I, in the face of a dozen arguments that ended without any change?  [...] What’s the point of getting people mad? [...] what’s the point in listening to someone who might be scarcely better than noise


This seems like the core of it right here.

You started out decently enough when you had patience and were willing to listen, but your willingness to listen was built on expectations of how easily people would change their minds or offer up compelling reason to change your own, and those expectations aren't panning out. You don't want to have to give up on people -- especially those who set out to become rational -- being able to change their minds, or on your own ability to be effective in this way. Yet you can't help but notice that your expectations aren't being fulfilled, and this brings up some important questions on what you're doing and whether it's even worth it. You don't want to "just give up", yet you're struggling to find room for optimism, and so you're finding yourself just frustrated and doing neither "optimism" nor "giving up" well.

Sound like a fair enough summary?

There is, of course, the classic solution: Get stronger. If I could convince them I was right or get convinced that they’re right, that would nicely remove the dissonance.

The answer is in part this, yes.

It is definitely possible to intentionally steer things such that people either change your mind or change their own. It is not easy.

It is not easy in two different ways. One is that people's beliefs generally aren't built on the small set of "facts" that they give to support them. They're built on a whole lot more than that, a lot of it isn't very legible, and a lot of the time people aren't very aware of or honest about what their beliefs are actually built on. This means that even when you're doing things perfectly* and making steady progress towards converging on beliefs, it will probably take longer than you'd think, and this can be discouraging if you don't know to expect it.

The other way it's hard is that you have to regulate yourself in tricky ways. If you're getting frustrated, you're doing something wrong. If you're getting frustrated and not instantly pivoting to the direction that alleviates the frustration, you're doing that wrong too. It's hard to even know what direction to pivot sometimes. Getting this right takes a lot of self-observation and correction so as to train yourself to balance the considerations better and better. Think of it as a skill to be trained.

* "Perfectly" as in "Not wasting motion". Not slipping the clutch and putting energy into heat rather than motion. You might still be in the wrong gear. Even big illegible messes can be fast when you can use "high gear" effectively. In that case it's Aumann agreement about whose understanding to trust how far, rather than conveying the object level understanding itself.

And, of course, there is the psychological option- just get over it.

The answer is partly this too, though perhaps not in the way you'd think.

It's (usually) not about just dropping things altogether, but rather integrating the unfortunate information into your worldview so that it stops feeling like an alarm and starts feeling like a known issue to be solved.

Hardly seemed appropriate to be happy about anything when it came to politics. Everyone is dreadfully underinformed, and those with the greatest instincts towards kindness and systemic changes may nevertheless cause great harm

This, for example, isn't an "Oh, whatever, NBD". You know how well things could go if people could be not-stupid about things. If people could listen to each other, and could say things worth listening to. If people who were about "kindness" knew they had to do their homework and ensure good outcomes before they could feel good about themselves for being "kind". And you see a lot of not that. It sucks.

It's definitely a problem to be solved rather than one to be accepted as "just how things are". However, it is also currently how things are, and it's not the kind of problem that can be solved by flinching at it until it no longer exists to bother us -- the way we might be able to flinch away from a hot stove and prevent "I'm burning" from being a true thing we need to deal with.

We have to mourn the loss of what we thought we had, just as we have to when someone we cared about doesn't get the rest of the life we were hoping for. There's lots of little "Aw, and that means this won't get to happen either", and a lot of "But WHY?" until we've updated our maps and we're happy that we're no longer neglecting to learn lessons that might come back to bite us again.

Some people aren't worth convincing, and aren't worth trying to learn from. It's easier to let those slide when you know exactly what you're aiming for, and what exact cues you'd need to see before it'd be worth your time to pivot.


With Trump in office, I struggled to imagine how anyone could possibly change their view. If you like him, any argument against him seems motivated by hatred and partisanship to the point of being easily dismissed. If you don’t, then how could you possibly credit any idea or statement of himself or his party as worthwhile in the face of his monumental evils.

Let's use this for an example.

Say I disagreed with your take on Trump because I thought you liked him too much. I don't know you and you don't know me, so I can't rest on having built a reputation on not being a hateful partisan and instead thinking things through. With that in mind, I'd probably do my best to pace where you're coming from. I'll show you exactly how cool all of the cool things Trump has done are (or on the other side, exactly how uncool all the uncool things are), and when I'm done, I'll ask you if I'm missing anything. And I'll listen. Maybe I'm actually missing something about how (un)cool Trump is, even if I think it's quite unlikely. Maybe you'll teach me something about how you (and people like you) think, and maybe I care about that -- I am choosing to engage with you, after all.

After I have proven to your satisfaction that I not only get where you're coming from but don't downplay the importance of what you see at all, do you really believe that you'd still see me as "a hateful partisan" -- or on the other side, as "too easily looking past Trump's monumental evils"? If you do slip into that mode of operation, and I notice and stop to address it with genuine openness to seeing why you might see me that way, do you think you could keep holding the "he's just a hater" frame without kinda noticing to yourself that you're wrong about this -- and without your grip on that pretense weakening each time it happens?

Or do you see it as likely that you might be curious about how I can get everything you do, not dismiss any of it, and still think you're missing something important? Might you even consider it meaningful that I don't come to the same conclusion before you understand what my reasoning is well enough that I'd sign off on it?

You still probably aren't going to flip your vote in a twenty minute conversation, but what if it were more? Do you think you could hang out with someone like that for a week without weakening some holds on things you were holding onto for less than fully-informed-and-rational reasons? Do you think that maybe, if the things you were missing turned out to be important and surprising enough, you might even change your vote despite still hating all the things about the other guy that you hated going in?


The question is just whether the person is worth the effort. Or perhaps, worth practicing your skills with.

Making Vaccine

Being as charitable as the facts allow is great. Starting to shy away from some of the facts so that one can be more charitable than they allow isn't.

The whole point is that this moderator's actions aren't justifiable. If they have a "/r/neoliberal isn't the place for medicine, period" stance, that would be justifiable. If the mod deleted the post and said "I don't know how to judge these well so I'm deleting it to be safe, but it's important if true so please let me know why I should approve it", then that would be justifiable as well, even if he ultimately made the wrong call there too.

What that mod actually did, if I'm reading correctly, is to make an active claim that the link is "misinformation" and then ban the person who posted it without giving any avenue to be proven wrong. Playing doctor by asserting truths about medical statements, when one is not competent or qualified to do so, getting it wrong when getting it wrong is harmful, and then shutting down avenues where your mistakes can be shown, is not justifiable behavior. It's shameful behavior, and that mod ought to feel very bad about themselves until they correct their mistakes and stop harming people out of their own hubris. The charity that there is room for is along the lines of "Maybe the line about misinformation was an uncharitable paraphrase rather than a direct quote" and "Hey, everyone makes mistakes, and even mistakes of hubris can be atoned for" -- not justifying the clearly bad behavior itself [if the story is what it seems to be].

Making Vaccine

I think this is inaccurately charitable. It's never the case that a moderator has "no way" to know whether it checks out or not. If "Hey, this sounds like it could be dangerous misinfo, how can I know it's not so that I can approve your post?" is too much work and they can't tell the good from bad within the amount of work they're willing to put in, then they are a bad moderator -- at least, with respect to this kind of post. Even if you can't solve all or even most cases, leaving an "I could be wrong, and I'm open to being surprised" line on all decisions is trivial and can catch the most egregious moderation failures.

Maybe that's acceptable from a neoliberal moderator since it's not the core topic, but the test is "When confronted with evidence that they can correctly evaluate as showing them to have been wrong, do they say 'oops!' and update accordingly, or do they double down and make excuses for doing the wrong thing and not update". I don't know the mod in question, but the former answer is the exception and the latter is the rule. If the rejection note was "Medical stuff isn't allowed because I'm not qualified to sort the good from the bad", then I'd say "fair enough". But actively claiming "Spreading dangerous misinfo!" is rarely done with epistemic humility out of necessity and almost always done out of the kind of epistemic hubris that has gotten us into this mess by denying that there's an upcoming pandemic, denying that masks work and are important, and now denying that we can and should dare to vaccinate in ways that deviate from the phase 3 clinical trials. This kind of behavior is hugely destructive and is largely the result of enabled laziness, so it's really not something we ought to be making excuses for.

Covid 1/28: Muddling Through

Running some quick numbers on that Israel "Stop living in fear after being vaccinated" thing, it looks like Israel's current 7-day average is about 8000 cases/day, so with a population of 9 million we should expect about 110 cases/day out of 125k vaccinated if vaccines did nothing and people didn't change their behavior. What they actually got was 20... over what time period? Vaccines clearly work to a wonderful extent, but is it really to the "Don't think twice about going out partying then visiting immune compromised and unvaccinated grandma" level?
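Spelling out that back-of-the-envelope calculation, with all figures taken from the paragraph above:

```python
# Naive expectation if vaccines did nothing and behavior didn't change:
# the vaccinated group should catch covid at the population base rate.
cases_per_day = 8000        # Israel's ~7-day average
population = 9_000_000
vaccinated = 125_000        # size of the tracked vaccinated group

expected = cases_per_day * vaccinated / population
print(f"~{expected:.0f} expected cases/day in the vaccinated group")  # ~111
```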

On an unrelated note, aren't these mRNA vaccines supposed to produce a lot more antibodies than COVID itself? Shouldn't that show up on a COVID antibody test? Because in my experience it did not.

Everything Okay

"G" fits my own understanding best: "Not Okay" is a generalized alarm state, and the ambiguity is a feature, not a bug.

(Generally) we have an expectation that things are supposed to be "Okay" so when they're not, this conflict is uncomfortable and draws attention to the fact that "something is wrong!". What exactly it takes to provoke this alarm into going off depends on the person/context/mindset because it depends on (what they realize) they haven't already taken into account, and that's kinda the point. For example, if you're on a boat and notice that you're on a collision course with a rock you might panic a bit and think "We have to change course!!!", which is an example of "things not being okay". However, the driver might already see the rock and is Okay because the "trajectory" he's on includes turning away from the rock so there's no danger. And of course, other passengers may be in Okay Mode because they fail to see the rock or because they kinda see the rock but they are averse to being Not Okay and therefore try to ignore it as long as possible.

In that light, "Everything is Okay" is reassurance that the alarm can be dismissed. Maybe it's because the driver already sees the rock. Maybe it's because our "boat" is actually a hovercraft which will float right over the rock without issue. Maybe we actually will hit the rock, but there's nothing we can do to not hit the rock, and the damages will be acceptable. Getting people back into Okay Mode is an exercise in getting people to believe that one of these is true, and you don't necessarily have to specify which one if they trust you, and if the details are important that's what the rest of the conversation is for.

The best way to get the benefits of ‘okay’ in avoiding giant stress balls, while still retaining the motivation to act and address problems or opportunities is to "just" engage with the situation without holding back.

Okay, so we're headed for a rock, now what? If that's alarming then it's alarming. Are we actually going to hit it if we simply dismiss the alarm and go back to autopilot? If so, would that be more costly than the cost of the stress needed to avert it? What can we actually do to stop it? Can we just talk to the driver? Is that likely to work?

If that's likely to work and you're on track to doing that, then "can we sanely go back to autopilot?" can evaluate as "yes" again and we can go back to Okay Mode -- at least, until the driver doesn't listen and we no longer expect our autopilot to handle the situation satisfactorily. You get to go back to Okay Mode as soon as you've taken the new information into account and gotten back on a track you're willing to accept over the costs of stressing more.


"The Kensho thing", as I see it, is the recognition that these alarms aren't "fundamental truths" where the meaning resides. They're momentary alarms that call for the redirection of one's attention, and the ultimate place that everything resolves to after doing your homework and integrating all the information is back to a state which calls for no alarms. That's why it's not "nothing matters, everything is equally good" or "you'll feel good no matter what once you're enlightened" -- it's just "Things are okay,  on a fundamental level alarms are not called for, behaviors are, and it's my job to figure out which. If I'm not okay with them that signals a problem with me in that I have not yet integrated all the information available and gotten back on my best-possible-track". So when your friend dies or you realize that humanity is going to be obliterated, it's not "Lol, that's fine", it's room to keep a drive to not only do something about it, a drive to stare reality in the face as much as you can manage, to regulate how much you stare at painful truths so that you keep your responses productive, and a desire to up one's ability to handle unpleasant conflict.

 How should one react to those who are primarily optimizing for being in Okay Mode at the expense of other concerns

Fundamentally, it's a problem of aversion to unpleasant conflict. Sometimes they won't actually see the problem here so it can be complicated by their endorsement of avoidance, but even in those cases it's probably most productive to ignore their own narratives and instead directly address the thing that's causing them to want to avoid.

Shoving more reasons to be Not Okay in their face is likely to trigger more avoidance, so instead of arguing "Here's how closing your eyes means you're more likely to fail to avoid the rock, and therefore kill everyone. Can you imagine how unfun drowning will be?" (which I would expect to lead to more rationalizations/avoidance), I'd focus on helping them be comfortable. More "Yeah, it's super unfun for things to be Not Okay, and I can't blame you for not wanting to do it more than necessary"/"Yes, it's super important to be able to regulate one's own level of Okayness, since being an emotional wreck often makes things worse, and it's good that you don't fail in that way".

Of course, you don't want to just make them comfortable staying in Okay Mode, because then there's no motivation to switch. So when there's a little more room to introduce unpleasant ideas without causing them to fold, you can start placing a little more emphasis on the ways in which their avoidance is itself a failure, and on how completely avoiding stress isn't ideal or consequence-free either.

It's a bit of a balancing act, and more easily said than done. You have to be able to pull off sincerity when you reassure them that you get where they're coming from and that what they're doing is actually better than the alternative they fear, and do it without "Not Okaying" at them by pushing "It's Not Okay that you feel Okay!". It's a lot easier when you can be Okay that they're in Okay Mode because they're Not Okay with being Not Okay, partially just because externalizing one's alarms as a flinch is rarely the most helpful way of doing things, but also because if you're Okay you can "go first" and give them a proof of concept and reference example for what it looks like to stare at the uncomfortable thing (or uncomfortable things in general) and stay in Okay Mode. It helps them know "Hey, this is actually possible", and feel like you might even be able to help them get closer to it.


or those who are using Okay as a weapon?

Again, I'd just completely disregard their narratives on this one. They're implying that if you're Not Okay, then it's a "you problem". So what? Make sure they're wrong and demonstrate it.

"God, it's just a little fib. Are you okay??"

"Not really. I think honesty about these kinds of things is actually extremely important, and I'm still trying to figure out where I went wrong expecting not to have that happen"

Or

"Yeah, no, I'm fine. I just want to make sure that these people know your history when deciding how much to trust you".

In Defense of Twitter's Decision to Ban Trump

"Content moderation" is not always a bad thing, but you can't jump directly from "Content moderation can be important" to "Banning Trump, on balance, will not be harmful". 

The important value behind freedom of association is not in conflict with the important value behind freedom of speech, and it's possible to decline to associate with someone without it being a violation of the latter principle. If LW bans someone because they're [perceived to be] a spammer that provides no value to the forum, then there's no freedom of speech issue. If LW starts banning people for proposing ideas that are counter to the beliefs of the moderators because it's easier to pretend you're right if you don't have to address challenging arguments, then that's bad content moderation and LW would certainly suffer for it.

The question isn't over whether "it's possible for moderation to be good", it's whether the ban was motivated in part or full by an attempt to avoid having to deal with something that is more persuasive than Twitter would like it to be. If this is the case, then it does change the ultimate point.

What would you expect the world to look like if that weren't at all part of the motivation? 

What would you expect the world to look like if it were a bigger part of the motivation than Twitter et al would like to admit?

Motive Ambiguity

The world would be better if people treated more situations like the first set of problems, and less situations like the second set of problems. How to do that?

 

It sounds like the question is essentially "How to do hard mode?".

On a small scale, it's not super intimidating. Just do the right thing and take your spouse to the place you both like. Be someone who cares about finding good outcomes for both of you, and marry someone who sees it. There are real gains here, and with the annoyance you save yourself by not sacrificing for the sake of showing sacrifice, you can maintain motivation to sacrifice when the payoff is actually worth it -- and to find opportunities to do so. When you can see that you don't actually need to display that costly signal, it's usually a pretty easy choice to make.

Forging a deeper and more efficient connection does require allowing potential for conflict so that you can distinguish yourself from the person who is only doing things for shallow/selfish reasons. Distinguish yourself by showing willingness to entertain such accusations, knowing that the truth will show through. Invite those conflicts when you have enough slack to turn them into play, and keep enough slack that you can. "Does this dress make my ass look fat?" -- can you pull off "The *dress* doesn't, no" and get a laugh, or are you stuck where there's only one acceptable answer? If you can, demonstrate that it's okay to suggest the "unthinkable" and keep poking until you can find the edge of the envelope. If not, or when you've reached the point where you can't, then stop and ask why. Address the problem. Rinse and repeat with the next harder thing, as you become ready to.

On a larger scale, it gets a lot harder. You can no longer afford to just walk away from anyone who doesn't already mostly get it, and you don't have as much time and attention to work with. There are things you can do, and I don't want to suggest that it's "not doable". You can start to presuppose the framings that you've worked hard to create and justify in the past, using stories from past experience and social proof to support them in the cases where you're challenged -- which might be less than you think, since the ability to presuppose such things without preemptively flinching defensively can be powerful subcommunication. You can start to build social groups/communities/institutions to scale these principles, and spread to the extent that your extra ability to direct motivation towards good outcomes allows you to out-compete the alternatives.

I just don't get the impression that there's any "easy" answer. If you want people to donate to your political campaign even though you won't play favorites like the other guy will, I think you genuinely have to be able to expect that your donors will be more personally rewarded by the larger total pie and the recognition of doing the right thing than they would be in the alternative where they donate to have someone fight to give them more of a smaller pie -- and are perceived however you let that be perceived.
 

Number-guessing protocol?

This answer is great because it takes the problem with the initial game (one person gets to update and the other doesn't) and returns the symmetry by allowing both players to update. The end result shows who is better at Aumann updating and should get you closer to the real answer.

If you'd rather know who has the best private beliefs to start with, you can resolve the asymmetry in the other direction and make everyone commit to their numbers before hearing anyone else's. This adds a slight bit of complexity if you can't trust the competitors to be honest, but it's easily solved by either paper/pencil or everyone texting their answer to the person who is going to keep their phone in their pocket and say their answer first.
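If you don't have paper handy and don't want to rely on a trusted phone-holder, a hash commitment gets you the same commit-before-reveal guarantee. This is a minimal sketch of my own, not something from the original answer:

```python
import hashlib
import secrets

# Each player publishes a hash of their guess first, then reveals.
# Anyone can verify nobody changed their number after hearing the others.

def commit(guess: float) -> tuple[str, str]:
    """Return (commitment, nonce). Publish the commitment, keep the nonce."""
    nonce = secrets.token_hex(16)  # random salt so small guesses can't be brute-forced
    digest = hashlib.sha256(f"{nonce}:{guess}".encode()).hexdigest()
    return digest, nonce

def verify(commitment: str, guess: float, nonce: str) -> bool:
    return hashlib.sha256(f"{nonce}:{guess}".encode()).hexdigest() == commitment

# Usage: commit, share all commitments, then everyone reveals guess + nonce.
c, n = commit(142.0)
assert verify(c, 142.0, n)      # an honest reveal checks out
assert not verify(c, 150.0, n)  # a changed answer is caught
```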

Covid 11/19: Don’t Do Stupid Things

The official recommendations are crazy low. Zvi's recommendation here of 5000IU/day is the number I normally hear from smart people who have actually done their research. 

The RCT showing vitamin D to help with covid used quite a bit. This converter from mg to IU suggests that the dose is at least somewhere around 20k on the first day and a total of 40k over the course of the week. The form they used (calcifediol) is also more potent, and if I'm understanding the following comment from the paper correctly, that means the actual number is closer to 200k/400k. (I'm a bit rushed on this, so it's worth double checking here)

In addition, calcifediol is more potent when compared to oral vitamin D3 [43]. In subjects with a deficient state of vitamin D, and administering physiological doses (up to 25 μg or 1000 IU daily), approximately 1 in 3 molecules of vitamin D appears as 25OHD; the efficacy of conversion is lower (about 1 in 10 molecules) when pharmacological doses of vitamin D/25OHD are used. [42]
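To make the conversion explicit, here's the arithmetic as I read it. The dosing schedule (0.532 mg calcifediol on day 1, then 0.266 mg on days 3 and 7) is from my memory of the paper, so treat it as an assumption worth double-checking:

```python
# Rough unit-conversion sketch; trial doses below are my recollection.
IU_PER_UG = 40                       # standard equivalence: 1 ug vitamin D3 = 40 IU

day1_mg = 0.532
week_mg = 0.532 + 0.266 + 0.266      # total over the first week

day1_iu = day1_mg * 1000 * IU_PER_UG   # ~21,000 IU by mass
week_iu = week_mg * 1000 * IU_PER_UG   # ~43,000 IU by mass

# Per the quoted passage, at pharmacological doses only ~1 in 10 molecules of
# oral D3 ends up as 25OHD, while calcifediol already *is* 25OHD -- implying
# roughly a 10x potency factor, which is where the ~200k/400k figures come from.
potency = 10
print(f"day 1: ~{day1_iu:,.0f} IU by mass, ~{day1_iu * potency:,.0f} IU-equivalent")
print(f"week:  ~{week_iu:,.0f} IU by mass, ~{week_iu * potency:,.0f} IU-equivalent")
```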

I've always been confused about why the official recommendations for vitamin D are so darn low, but it seems there might be a fairly straightforward answer (and not a very flattering one to those coming up with the recommended values). It looks like it might be a simple conflation between the "standard error of the mean" and the "standard deviation" of the population itself.
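As a toy illustration of why that conflation matters (all numbers invented): a recommendation is supposed to cover ~97.5% of individuals, which depends on the population SD, while a bound built from the standard error of the mean only tells you about the average person.

```python
import numpy as np

# Invented numbers: individual serum 25(OH)D response to some fixed dose, nmol/L.
rng = np.random.default_rng(0)
responses = rng.normal(loc=60, scale=20, size=100_000)  # population SD = 20
target = 50  # desired serum level

# The right question for a recommendation: what fraction of *individuals* hit target?
frac_ok = (responses >= target).mean()  # ~0.69 -- nowhere near 97.5%

# The conflated version: a bound on the *mean* using the standard error instead.
n_study = 100
sem = 20 / np.sqrt(n_study)             # = 2
lower_bound = 60 - 1.96 * sem           # = 56.1, comfortably above the target

print(f"individuals reaching target: {frac_ok:.1%}")
print(f"95% lower bound on the mean: {lower_bound:.1f} nmol/L")
# The mean-based bound says the dose is "fine"; the population view says
# roughly 31% of individuals fall short.
```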

Simpson's paradox and the tyranny of strata

(If you're worried about the difference being due to random chance, feel free to multiply the number of animals by a million.)

[...]

They vary from these patterns, but never enough that they are flying the same route on the same day at the same time at the same time of year. If you want to compare, you can group flights by cities or day or time or season, but not all of them.

 

The problem you're using Simpson's paradox to point at does not have this same property of "multiplying the size of the data set by arbitrarily large numbers doesn't help". If you can keep taking data until random chance is no issue, then they will end up having sufficient data in all the same subgroups, and you can just read the correct answer off the last million times they both flew in the same city/day/time/season simultaneously.

The problem you're pointing at fundamentally boils down to not having enough data to force your conclusions, and therefore needing to make judgment calls about how important season is compared to time of day, so that you can determine when conditioning on more factors helps relevance more than it hurts by adding noise.
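A quick sketch of the "enough data makes stratification trivial" point, using my own toy data rather than anything from the post: with a million flights per pilot, every stratum is well populated, so the raw averages can still mislead while the within-stratum comparison just reads off the right answer.

```python
import numpy as np
import pandas as pd

# Two pilots whose raw averages mislead because they fly different route mixes.
rng = np.random.default_rng(0)
n = 1_000_000  # flights per pilot; "multiply the data by a million"

frames = []
for pilot, p_hard, skill in [("A", 0.8, -2.0), ("B", 0.2, 0.0)]:
    hard = rng.random(n) < p_hard                  # A mostly flies the hard route
    base = np.where(hard, 30.0, 10.0)              # route difficulty, minutes of delay
    delay = base + skill + rng.normal(0, 5, n)     # A is genuinely 2 minutes better
    frames.append(pd.DataFrame({"pilot": pilot,
                                "route": np.where(hard, "hard", "easy"),
                                "delay": delay}))
df = pd.concat(frames, ignore_index=True)

print(df.groupby("pilot")["delay"].mean())             # raw means: A looks worse (~24 vs ~14)
print(df.groupby(["route", "pilot"])["delay"].mean())  # within each stratum: A wins both
```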
