Bayesian Charity

I'm not understanding your disagreement. Of course popularity is just a prior. The less popular a given position, the stronger your prior should be against the other person holding it. Doing that will lead you to be less wrong about what the other person means by what they're saying.

What is the difference between what you said and "your prior for whether someone believes an unpopular position should be lower the less popular the position is, and you should update your prior based on how clear their statement was"?
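The rule being paraphrased here is just Bayes' theorem: start from a base-rate prior (popularity) and update on the evidence (how clearly the statement expresses the position). A minimal sketch, with purely illustrative numbers not taken from the discussion:

```python
# Two-hypothesis Bayesian update: does this person hold the unpopular position?
# All probabilities below are made-up illustrations, not claims from the thread.

def posterior_holds_position(prior, p_statement_if_holds, p_statement_if_not):
    """P(holds position | their statement) via Bayes' rule."""
    numerator = prior * p_statement_if_holds
    denominator = numerator + (1 - prior) * p_statement_if_not
    return numerator / denominator

# An unpopular position (low prior) stays improbable after a vague statement,
# but a sufficiently clear statement can overcome the prior.
vague = posterior_holds_position(0.05, 0.5, 0.3)    # weak likelihood ratio
clear = posterior_holds_position(0.05, 0.9, 0.05)   # strong likelihood ratio
```

With these numbers, the vague statement leaves the posterior near the prior, while the clear one raises it roughly tenfold, which is the "charity" in question: the less popular the position, the clearer the statement has to be before you should conclude they meant it.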

Holiday Pitch: Reflecting on Covid and Connection

I think "Survival Day" evokes the themes you're going for.

Holiday Pitch: Reflecting on Covid and Connection

I love this idea. Some of my thoughts:

I would like it if the script for the event had some fill-in-the-blanks where people can express themselves. Reading from a script can build connection, but more so if people are encouraged to show some of their own uniqueness as part of it.

Perhaps there could be a tradition where 20-30 or so people are invited and attend, but anyone is welcome to observe (i.e. join the video stream, but as read-only, or just with their mic muted). The worries about event size are real, but it would also be nice if people who didn't have an event to attend could still participate somehow, and technology allows that to happen in a non-intrusive way.

Egoism In Disguise

I do not fully understand the point you are making in (1). I don't see anything specifically to disagree with, but also don't see how it's in conflict with anything in the OP. I hold that my feelings are my basic unit of value because that's what I care about. If a different person cares about different things, that's their decision. My feelings are in constant flux, and will often change. Is that somehow in conflict with something I've said? My thoughts on egoism are more fully fleshed out in the linked post.

I'm mostly ignoring (2) because it will get me off on a tangent about evopsych, and that's not the discussion I want to get into at the moment. Suffice it to say that I think when I admit that the idea of human flourishing makes me smile, I am admitting to not being completely selfish.

On (3), I again don't have much disagreement. I'm not advocating for selfishness in the sense of not caring about anyone else. I'm just asking us to recognize that our preferences are subjective and not binding on anyone else. Those preferences are obviously complicated and sometimes self-contradictory. Egoism is not Objectivism.

Ms. Blue, meet Mr. Green

I don't know how correct this post is, but I highly approve of it as an effort to understand and humanize a pretty strong outgroup.

The Behavioral Economics of Welfare

Theory 4: People only care about poor people in the in-group. People like welfare when it helps people similar to themselves, and hate it when it "enables" stupid, lazy people in the out-group. This is often heavily racial: the countries with the most generous welfare systems are also the most racially homogeneous. So whether one votes in favor of welfare is determined by who they imagine being helped. This can change rapidly, leaving public opinion on welfare a confusing mess.

This is a good post about how to become a more reliable person. I often find it incredibly frustrating when people flake on me, and I really know very few people who don't consistently flake on plans.

I think this is good advice for people who want to become more reliable. However, I often suspect that the people who consistently flake don't want to be more reliable. My suspicion is that people wish to signal commitment by *saying* they're going to do something without actually doing it. Much like a politician making campaign promises, there are social rewards for indicating interest in activities, and you don't necessarily lose all of those rewards by failing to show up. I often feel like I'm just being told what I want to hear. So really, if we are to become more reliable, we may need to become more honest.