The serious answer would be: Incel = low status, so implying that someone is an incel and deserves to be stuck in his toxic safe space is a mockery, or at least a status jab. The fact that you ignored that I wrote "status jab/mockery" and insisted on "mockery" alone, and only in the context of this specific post, hints at motivated reasoning (choosing to ignore the bigger picture and artificially narrowing the scope of the discussion to minimize the attack surface, without any good reason).

The mocking answer would be: These autistic rationalists can't even sense obvious mockery and deserve to be ignored by normal people.
OP is usually used to refer to the original poster, not the original post. The first quote is taken from one of the links in this post and is absolutely a status jab: he assumes his critic is celibate (even though the quoted comment implies nothing of the sort). If you don't parse "they deserve their safe spaces" as a status jab/mockery, I think you're not reading the social subtext correctly here, but I'm not sure how to communicate this in a manner you will find acceptable.
"I never had the patience to argue with these commenters and I’m going to start blocking them for sheer tediousness. Those celibate men who declare themselves beyond redemption deserve their safe spaces."

https://putanumonit.com/2021/05/30/easily-top-20/
"I don't have a chart on this one, but I get dozens of replies from men complaining about the impossibility of dating and here's the brutal truth I learned: the most important variable for dating success is not height or income or extraversion. It's not being a whiny little bitch."

https://twitter.com/yashkaf/status/1461416614939742216
I just wanted to say that your posts about sexuality represent, in my opinion, the worst tendencies of the rationalist scene. The only way for me to dispute them on the object level is to go into socially unaccepted truths and CW topics, so I'm sticking to the meta level here. On the meta level, the pattern is something like the following:
I don't mind also posting criticism of your object-level claims if I get approval from the mods to go to very uncomfortable places. But in general, the way you victim-blame incels is downright sociopathic, and I wish you would at least stop doing that.
There is another approach that says something along the lines of: not all factory-farmed animals receive the same treatment. For example, the median cow is treated far better than the median chicken. I, for one, would guess that cows' lives are net positive, while chickens' are probably net negative (and probably even worse than the lives of wild animals).
CEV was written in 2004, fun theory 13 years ago. I couldn't find any recent MIRI paper about metaethics (granted, I haven't gone through all of them). For any utilitarian, the metaethics question is just as important as the control question: what good is controlling an AI if it ends up aligned with some really bad values? An AI controlled by a sadistic sociopath is infinitely worse than a paperclip maximizer. Yet all the research is focused on control, and it's very hard not to be cynical about it. If some people believe they are creating a god, it's selfishly prudent to make sure you're the one holding the reins of this god. Having blind trust in the benevolence of Peter Thiel (who finances this), or of other people who will suddenly have godly powers, to care for all humanity seems naive given all we know about how power corrupts and how competitive and selfish people are. Most people are not utilitarians, so as a quasi-utilitarian I'm pretty terrified of what kind of world would be created by an AI controlled by the typical non-utilitarian person.
If you try to quantify it, humans on average probably spend over 95% (a conservative estimate) of their time and resources on non-utilitarian causes. True utilitarian behavior is extremely rare, and all other moral behaviors seem to be either elaborate status games or extended self-interest; even investing in your family or friends is in a way selfish, from a genes or alliances perspective (respectively). By any relevant quantified KPI, the typical human is far closer to being completely selfish than to being a utilitarian.
The fact that AI alignment research is 99% about control and 1% (maybe less?) about metaethics (in the sense of how we would even aggregate the utility function of all humanity) hints at what is really going on, and that's enough said.
I also made a similar comment a few weeks ago. In fact, this point seems to me so trivial yet corrosive that I find it outright bizarre that it isn't being tackled or taken seriously by the AI alignment community.
Relevant joke:

I told my son, “You will marry the girl I choose.”
He said, “NO!”
I told him, “She is Bill Gates’ daughter.”
He said, “OK.”
I called Bill Gates and said, “I want your daughter to marry my son.”
Bill Gates said, “NO.”
I told Bill Gates, “My son is the CEO of the World Bank.”
Bill Gates said, “OK.”
I called the President of the World Bank and asked him to make my son the CEO.
He said, “NO.”
I told him, “My son is Bill Gates’ son-in-law.”
He said, “OK.”

This is how politics works.