I am not one of the tagged people, but I certainly would not so agree. One reason is that I have talked to leftist people (prominence debatable) who celebrated the 10/7 attacks, and when I asked them whether they support Hamas, they were coherently able to answer "no, but I support armed resistance against Israel and don't generally condemn actions that fall in that category, even when I don't approve of or condone the group organizing those actions generally." One way to know what people believe and support is to ask them. (Of course, I don't think this is a morally acceptable position either, and conversation ensued! But it's clearly not "supporting Hamas" in any sense that can support your original claims.)
My social circles also include many leftists, including student organizers and somewhat well-known online figures, so I separately suspect that you're vastly overestimating the proportion of self-identified leftists who celebrated the attacks in any meaningful sense, but that's probably not the crux here.
I have a map of the world. I live on it.
I assume this is in the style of Steven Wright? It is in fact just a Steven Wright joke.
I think some of these are funny but most are quite bad. The rest of this comment is just my appraisal of the jokes I thought were interesting. These two:
A man spends fifteen years writing an 800,000 word rationalist novel. It's about how to make good decisions. He posts it for free. Seven people finish it. Three of them become his enemies.
Every great thinker has one weird fan who understands them better than anyone and also cannot be allowed near the main account.
are the funny ones that, as far as I can tell, are original; they have some structural/pacing issues, but they work. This one:
There's a type of online guy whose whole thing is being slightly ahead of the curve. Not far enough to be a visionary. Just enough to be annoyed at everyone else for six months until they catch up. Then he moves on to being annoyed about the next thing. He's never happy. He's always right.
has a good setup but does not deliver. This one:
The thing about having a nemesis is that you have to keep it proportional. Too much energy and you look obsessed. Too little and it's not a nemesis, it's just a guy you don't like. The sweet spot is thinking about them exactly as often as they think about you, which means you're both trapped forever.
is structurally the best (actually has a button!) but damn is it just too wry to work. The rest are quite bad. I continue to think that frontier models are basically unfunny, but Claude is the least unfunny. (This was true when I checked GPT-5 vs. Sonnet 4.5 vs. Gemini 2.5 Pro vs. Grok 4 a month or so back; I'm not convinced Opus 4.5 is funnier than Sonnet, but it understands comedic rhythm a bit better.)
Sure, this seems more plausible. I'm sure I'd still object to your understanding of some moral and practical dimensions of monogamy, but I'm also sure you're aware of that, so talking about it is unlikely to be productive for either of us. I'd ask that you reconsider the use of the word "category" if you have this discussion with others in the future; this is just not what it means.
I agree, this is my point! If being poly means "my partner going on a date with someone else and my partner playing board games with someone else aren't separated by a category distinction", then I would expect there to be poly-spectrum people (that is, people who understand these categories the same way you do and identify themselves as poly) who treat these things as if they're in the same category; that is, who treat them both as a valid place to have a relationship boundary if there's mutual agreement that this is the best way forward. But I'm not aware of any poly people who do this. A person who is fine with their partner dating others but maybe not going home with them is clearly some amount of poly, and a person who isn't fine with their partner dating others but is fine with their partner having a board game night is clearly not poly. I think ~all poly people would agree with this, and this is obviously a category distinction! So it seems like while poly people might not care about the category distinction much, and might treat the categories more similarly than I would, they all recognize it and use it, and in fact it's impossible to meaningfully be poly without recognizing and using it, so I'm a bit confused as to why you claim not to recognize it.
EDIT: Arguably this is a minor point. I make it anyway because I think poly people are generally somewhat to largely mistaken about what polyamory is, and this causes (a) many poly people to try to argue that monogamous relationships are fundamentally flawed and (b) many people to try to be poly when it doesn't actually work for them. The posts that Elizabeth is responding to exhibit (a), and your original comment reiterates them (you accept as valid reasons to not be polyamorous: physical/social/emotional deficiency, and this is all). And when the justification for being poly ends up being (b) (in this case, a claim I see as being obviously wrong about whether a certain category distinction exists), this makes me worry that some people are poly as a matter of ideology rather than as a matter of preference, and so may try to convince themselves or others to be poly against preference, and in fact this is exactly what we see.
My interpretation of polyamory is basically that "my partner went to play boardgames with friends" and "my partner is on a date with someone" are in the same category.
I think if I had this perspective I would be poly, but also I am not convinced that this is a meaningful way to understand ~any poly people? For the following reason: all of the primary poly relationships I'm aware of are pretty explicit about what they do and don't allow - certain dates are okay, certain types of sex are okay, other things require prior notification, some things require discussion, etc. It seems like every configuration of "some types of dating and sex are okay, but other types of dating and sex aren't, or not by default" exists (which, to be clear, is cool and reasonable). But I'm not aware of any poly relationships where the rules are "we can't date other people or have sex with other people at all, but we can play board games with other people", which makes me think that in practice, poly people recognize and use a distinction between these things.
Perhaps I'm misunderstanding what you mean by "category" here? Or perhaps the polyamory I've encountered just doesn't resemble yours?
I think I have a better understanding of your position now! I'm still a bit confused by your use of the word "bad"; it seems like you're using it to mean something other than "could meaningfully be made better". Semantically, I don't really know what you're referring to when you say "the exposure itself" - the point here is that there is no such thing as the exposure itself! It is not always meaningful to split things up. There is a thing that I would call true openness and you might call something like necessary vulnerability (which you don't necessarily need to believe exists), and that thing entails the potential for deeper social connection and the potential for emotional harm, but this just does not mean we can separate it into a connection part and a harm part. I think I'm back to my original objection, basically: we should not always do goal factoring, because our goals do not always factor. The point of factoring something is to break it into parts which can basically be optimized separately, with some consideration of cross-interactions, but when the cross-interactions dominate, the factoring obscures the goal.
I'm also not convinced that people get confused this way? Maybe there is a way to define "bad" that makes this confusion even coherent, but I can't think of such a way. The only way I can imagine a person endorsing the claim that the exposure itself is good is as a strong rejection of the premise that the thing that is actually good is separable from the exposure. Because, after all, if exposure under certain conditions (something like: exposure to a person I have good reason to trust, having thought about and addressed the ways it could be solvably bad, in pursuit of something I value more than I am afraid of the risk of potential pain) always corresponds with a good that is worth taking on that exposure, then every conceptually-possible version of that exposure is worth taking on net. What does it even mean to say that that category of exposure is bad if its every conceivable incarnation is net good? Maybe you can say that there's no category difference between the sort of exposure that can be productively eliminated and the sort of exposure that can't, but the fact that I can describe the difference between these categories seems to suggest otherwise. The only way I can see for this to fail is for the description I gave to be incoherent, which only seems possible if one of the categories is empty.
On the other hand, I think many people are miscalibrated on this sort of calculation, such that they either take more or less emotional risk than they ideally would, and I explained earlier why I'm very worried about ways of thinking that tend toward underexposure and not so worried about ways that tend toward overexposure. I expect any sort of truly separate accounting to involve optimization on the risk side without consideration of the trust side, and because the effects on the trust side are subtle and harder to remember (in the sense that the sort of trust I care about is really, really good in my experience; it's the sort of thing that takes basically all of my cognition to fully experience, so when any part of my cognition is focused elsewhere I cannot accurately remember how good it is), this will tend to lose out to an unseparated approach.
(This part I have no credence to say, so feel free to dismiss it with as much prejudice as is justified, but this:
Ok, I'm going to do this thing, and it has some exposure to harm, and that part is bad, but the exposure has some subtle positive effects too, and also it is truly eternally inseparable from the goods, and it's worth it overall, so I'm going to do it.
really does not seem like the sort of thought process that could properly calibrate a person's exposure to emotional risk! My extremely strong suspicion is that a person whose thought process goes like this with any frequency, even if they end up accepting the risk often enough when they think it through, is extremely underexposed to emotional risk and does not know it because unlike overexposure, underexposure is self-reinforcing.)
EDIT: I think we've nailed down our disagreement about the object-level thing and we're unlikely to come to agree on that; it seems like the remaining discussion is just about which distinctions are useful. Maybe this is the same disagreement and we're unlikely to come to agree about this either? My preference is to talk about vulnerability by default, with the understanding that vulnerability is a contingent part of certain social goods, but in some cases the vulnerability can be trimmed without infringing on the social goods, so I would talk about unnecessary or excessive vulnerability in those cases. My understanding of your preference is to talk about vulnerability by default, with the understanding that vulnerability is the (strictly bad) exposure to emotional pain that often accompanies some social interactions. But it's at least plausible that vulnerability could be a contingent part of certain social goods, so in discussing those sorts of social goods, at least as hypothetical objects, you'd refer to something like necessary vulnerability? And in cases where vulnerability could in theory be trimmed away by a sufficiently-refined self-model, but where that level of refinement is not easy to achieve and in practice the right thing to do is to proceed under the theoretically-resolvable uncertainty, something like worthwhile vulnerability? And then our disagreements, in your language, would be: I think that necessary vulnerability actually exists in theory, and that the set of necessary or worthwhile vulnerability is big enough that we shouldn't separate it from primitive vulnerability, and you would take the opposite position on both of those claims. Am I understanding correctly?
I agree that this is the crux but I don't see how this is different from what we've been talking about? In particular, I'm trying to argue that these notions have a big intersection, and maybe even that the second kind is a subset of the first kind (there are types of openness and trust for which we can eliminate all the excess exposure to harm, but I think they're qualitatively different from the best kinds of openness and trust; if you think the difference is not qualitative, or that it's obviated when we consider exposure to harm correctly, then it wouldn't be a subset.) As a concrete example, I'm trying to argue that the sort of interaction that involves honestly exposing a core belief to another person and asking for an outside perspective, with the goal of correcting that belief if it's mistaken, is not just practically but necessarily in the intersection (it clearly requires openness and I'm trying to argue that it also requires exposure to harm for minds worth being.) Following that, I'm trying to argue that separating these concepts is a bad idea because, while this makes it easier to talk about the sorts of excess exposure we can and should eliminate, it makes it harder to recognize the exposure that we can't or shouldn't eliminate, and we lose more than we gain in this trade.
I agree that you haven't made that claim but I'm struggling to find an interpretation of what you've written that doesn't imply it. In particular, in my model of your position, this is exactly the claim "vulnerability itself is bad (although it may accompany good things)" applied to the sort of vulnerability that is the risk of changing one's identity-bearing beliefs. Maybe the following will help me pin down your position better:
That's the opposite of what I'm saying. I'm saying try to figure out why it's painful--what is being damaged / hurt--and then try to protect that thing even more. Then I'm saying that sometimes, when you've done that, it doesn't hurt to do the thing that previously did hurt, but there's nothing unwholesome here; rather, you've healed an unnecessary wound / exposure.
I agree that this is a plausible procedure and sometimes works, but how often do you expect this to work? Is it plausible to you that sometimes you figure out why it's painful, but that knowledge doesn't make it less painful, and yet the thing you're afraid of doing is still the thing you're supposed to do? Or does this not happen on your model of identity risk and vulnerability?
EDIT: I guess I should mention that I'm aware this is the opposite of what you're saying, and my understanding is that this is very nearly the opposite of the statement you disclaim at the end here. We agree that people should be able to change their minds, and that sometimes the process of changing one's mind seems painful. So either people should be able to change their minds despite the risk of pain, or people should be able to rearrange their mind until the process is not painful, and if it's the latter, then an especially well-arranged mind would be able to do this quickly and would not anticipate pain in the first place. I'm not sure where you disagree with this chain of reasoning and I'm not sure I see where you can.
I think you have the gist, yes, and I think we disagree about the frequency and strength of this harm. If someone I know well told me that they had something vulnerable to share, I'd understand them as saying (modulo different auto-interpretations of mental state) that they're much more exposed to this specific type of harm than normal in the conversation they expect to follow. Of course other, more solvable forms of vulnerability exist, but the people I'm close to basically know this and know me well enough to know that I also know this, so when they disclose vulnerability, marginal improvements are usually not available. I also think (though I can't be sure) that this effect is actually quite strong for most people and for many of their beliefs.
I should note: there are contexts where I expect marginal improvements to be available! For example, as a teacher I often need to coordinate make-up exams or lectures with students, and this is often because the students are experiencing things that are difficult to share. When vulnerability is just an obstacle to disclosure, I think I agree with you fully. I don't think this case is typical of vulnerability.
I guess the last point of disagreement is the claim that this is something most people should try to fortify against over time. More concretely, that most people I interact with should try to fortify against this over time, on the assumptions that you accurately believe that people in your social sphere don't experience this type of harm strongly, that I accurately believe that people in my social sphere do experience it strongly, and that if you believe most people in your sphere should tone it down, you'd believe so even more strongly for people in my sphere.
For me, this type of fear is a load-bearing component in the preservation of my personal identity, and I suspect that things are similar for most people. I don't think it's a coincidence that the rationalist community has very high rates of psychosis and is the only community I'm aware of that treats unusual numbness to this sort of pain as an unalloyed and universal virtue! I think most people would agree that it's good to be able to change your mind even when it's painful, especially when it's painful. But for most communities, the claim that it shouldn't be painful to change your mind on a certain subject coincides with the claim that that subject shouldn't be a core pillar of one's identity. The claim that it shouldn't be painful to change your mind on any subject, that the pain is basically a cognitive flaw, albeit understandable and forgivable and common, seems unique to this community.
(Also sorry for sentence structure here, I couldn't figure out how to word this in a maximally-readable way for some reason. Thank you for reading me closely, I appreciate the effort.)
(I should note that I think this effect is real and underdiscussed.)
Solving alignment usually means one of the following: developing an intelligence recipe which can instill the resulting intelligence with arbitrary values (together with specifying human values well), or developing an intelligence recipe for which the only attractor is within the space of human values. It might be the case that, under current recipes and their nontrivial modifications, there aren't that many attractors, but because gradient descent is not how human intelligence works, the attractors are not the same as they are for humans. That is, the first system capable of self-improvement might be able to reasonably infer that its successor will share its values, even if it can't give its successor arbitrary values.