Opportunity costs. There are many more useful things one could do instead, such as actually working on projects that reduce existential risk, running one's own blog, or building a following on social media such as X to influence the zeitgeist.
Thanks a lot for reposting this.
Duncan's post is actually quite insightful. The core point is an integration of the guess-culture-vs-ask-culture dichotomy into a more accurate model of what is going on, one that involves tracking the social context, the implications of your actions on the environment and on other people, those people's actions, and so on.
I also believe that your attempts at posting good-faith critiques in the comments of most LW posts are costlier to you and the community you care about than they are beneficial. You are swimming upstream, and that is unsustainable. Your efforts are best spent elsewhere.
Nevertheless, every single example you bring up above was in fact unpleasant for me, some substantially so—while reasonable conclusions were reached (and in many cases I found the discussion fruitful in the end), the tone in your comments was one that put me on edge and sucked up a lot of my mental energy.
Some of them look positively cooperative to me; they do not suggest that Said thought ill of you in any way, nor that it would have looked bad whether or not you replied to those messages.
Am I correct in saying that the main reason it was unpleasant and scary is that you felt socially threatened in those moments? As in, it threatened your standing in the social group that you considered LessWrong to be, and that you considered yourself a part of? And part of the felt obligation to reply involved wanting to defend yourself and your standing in that group, especially since a gigantic part of what gives someone status in a sphere like LW is their intellectual ability: their ability to be right, or at the very least to not look dumb?
That is at least how I feel when I try to simulate why I'd feel the way you claim to have felt. And I empathize with that feeling.
What I mean is: if we want to know what would happen in a “counterfactual” case, it seems like the first thing to do is to say “now, by that do you mean to ask what would happen under physical intervention, or what would happen under logical intervention?” Right?
Yes.
Those would (could?) have different answers, and really do seem like different questions, so after realizing that they’re different questions, have we thereby resolved all confusions about “counterfactuals”?
I think that physical intervention and logical intervention are the only two ways one could intervene to create an outcome different from the one that actually occurs.
Or do some puzzles remain?
I don't work in the decision theory field, so I want someone else to answer this question.
For a more rigorous explanation, here's the relevant section from MacDermott et al., "Characterising Decision Theories with Mechanised Causal Graphs":
But in the Twin Prisoner’s Dilemma, one might interpret the policy node in two different ways, and the interpretation will affect the causal structure. We could interpret intervening on your policy D̃ as changing the physical result of the compilation of your source code, such that an intervention will only affect your decision D, and not that of your twin T. Under this physical notion of causality, we get fig. 3a, where there is a common cause S explaining the correlation between the agent’s policy and its twin’s.
But on the other hand, if we think of intervening on your policy as changing the way your source code compiles in all cases, then intervening on it will affect your opponent’s policy, which is compiled from the same code. In this case, we get the structure shown in fig. 3b, where an intervention on my policy would affect my twin’s policy. We can view this as an intervention on an abstract "logical" variable rather than an ordinary physical variable. We therefore call the resulting model a logical-causal model.
Pearl’s notion of causality is the physical one, but Pearl-style graphs have also been used in the decision theory literature to represent logical causality. One purpose of this paper is to show that mechanism variables are a useful addition to any graphical model being used in decision theory.
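To make the distinction concrete, here is a minimal sketch in Python. This is my own illustration, not anything from the paper; the names follow the excerpt (S for the shared source code, D for my decision, T for my twin's), but the functions are hypothetical.

```python
# A minimal sketch (my own, not from the paper) of the two notions of
# intervention in the Twin Prisoner's Dilemma. All names are illustrative:
# S is the shared source code, D my decision, T my twin's decision.

def compile_policy(source_code):
    """Both my policy and my twin's are compiled from the same source S."""
    return source_code  # a "policy" here is just "cooperate" or "defect"

def twin_pd(source_code, physical_override=None, logical_override=None):
    # Logical intervention (fig. 3b): change how the source code compiles,
    # which propagates to every policy derived from S, including my twin's.
    s = logical_override if logical_override is not None else source_code

    my_policy = compile_policy(s)
    twin_policy = compile_policy(s)

    # Physical intervention (fig. 3a): overwrite only my decision D,
    # leaving my twin's decision T untouched.
    my_decision = physical_override if physical_override is not None else my_policy
    twin_decision = twin_policy

    return my_decision, twin_decision

# Physical intervention breaks the correlation with my twin...
print(twin_pd("defect", physical_override="cooperate"))  # ('cooperate', 'defect')
# ...while logical intervention preserves it.
print(twin_pd("defect", logical_override="cooperate"))   # ('cooperate', 'cooperate')
```

The two interventions answer genuinely different counterfactual questions about the same node, which is the whole point of the physical/logical distinction above.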
Personal experience observing certain trans women doing it in in-person and online group conversations, in a part of my social circle that is composed of queer and trans autistic people.
Thank you for the passionate comment.
Indeed, and I apologize for not being more diplomatic.
a lot of dating advice given to men doesn’t reflect base reality
I agree.
I think it is appropriate to recommend people do expensive things, even if they are speculative, as many of the people I have in mind are distressed about matters of love and sex and have a lot of disposable income.
Seems fine if your intention was to bring it to the attention of these people, sure. I still feel somewhat wary of pushing people to take steroids out of a desire to be perceived as more masculine: it can go really badly. In general, I am wary of recommending extreme interventions to people without having a lot more context of their situation. It is hard to steer complex systems to outcomes you desire.
Facial attractiveness is very consequential, hedonic returns on cosmetic surgery are generally very high, regret is very low, and it seems to me that basically everyone could benefit by improvements on the margin.
Seems very plausible to me. On the other hand, I believe that the highest-ROI interventions for most lonely people do not look like getting cosmetic surgery to improve their ability to date. Location, social circle, and social skills seem to matter a lot more. Perhaps you take it as a given that these things have already been optimized to the extent possible.
It shouldn’t be an issue that you banish the non-extreme cases from your mind. I’m assuming from the way you’re phrasing the stuff about homeless people that you’re indicating that you do take this attitude but on some level don’t really endorse it?
I was communicating how I deal with certain ugly facets of reality, not necessarily stating how I believe people should deal with these facets of reality. Would I ideally interact with such people from a place of empathy and not-worrying-about-myself? Sure.
Second, I think the facial attractiveness literature makes this tension make more sense. It seems that “feminine” features really are more beautiful—for basically anyone. Hence my recommendation that Asian men need to masculinize their bodies (don’t let yourself have an unathletic skinny-fat build), but feminize their faces (softer face contours really are just more universally appealing than you would think).
Okay, I see what you mean and consider it plausible.
I respect the courage in posting this on LessWrong and writing your thoughts out for all to hear and evaluate and judge you for. It is why I've decided to go out on a limb and even comment.
take steroids
Taking steroids usually leads to a permanent reduction in endogenous testosterone production, and to infertility. I think it is quite irresponsible for you to recommend this, especially on LW, without sensible caveats.
take HGH during critical growth periods
Unfortunately, this option is only available to teenagers whose parents are rich enough to be willing to pay for it (assuming the Asian male we are talking about started at an average height, and is therefore unlikely to have health insurance cover HGH).
lengthen your shins through surgery
From what I hear, this costs between 50k and 150k USD, plus six months to a year of being bedridden during recovery. In addition, it might make your legs more fragile when doing squats or deadlifts.
(also do the obvious: take GLP-1 agonists)
This is sane, and I would agree, if the person is overweight.
Alternatively, consider feminizing.
So if Asian men are perceived to be relatively unmasculine, you want them to feminize themselves? This is a stupid and confused statement.
I believe that what you mean is some sort of costly signalling via flamboyance, which does not necessarily feminize them so much as make them stand out, and perhaps signal other things, like having the wealth to invest in grooming and fashion and the social status to be able to stand out.
Saying Asian men need to feminize reminds me of certain trans women's rather insistent attempts to normalize the idea of effeminate boys transitioning for social acceptance, an idea I find quite distasteful (it's okay for boys to cry and to be weak, and I personally really dislike people and cultures that traumatize young men for not meeting the constantly escalating standards of masculinity).
Schedule more plastic surgeries in general.
I see you expect people to have quite a lot of money to burn on fucking with their looks. I think I agree that plastic surgery is likely a good investment for a young man with money burning a hole in his pocket and a face he believes is suboptimal. Some young men truly are cursed with a face that makes me expect no girl will find them sexually attractive, and I try not to think about it, in the same way that seeing a homeless person makes me anxious about the possibility of becoming homeless myself and ruins the next five minutes of my life.
Don’t tell the people you’re sexually attracted to that you are doing this — that’s low status and induces guilt and ick.
You can tell them the de facto truth while communicating it in a way that makes it have no effect on how you are perceived.
Don’t ask Reddit, they will tell you you are imagining things and need therapy. Redditoid morality tells you that it is valid and beautiful to want limb lengthening surgery if you start way below average and want to go to the average, but it is mental illness to want to go from average to above average.
This also applies to you, and I think you've gone too far in the other direction.
Don’t be cynical or bitter or vengeful — do these things happily.
Utterly ridiculous. Don't tell people how to feel.
Even if I'd agree with your conclusion, your argument seems quite incorrect to me.
the seeming lack of reliable feedback loops that give you some indication that you are pushing towards something practically useful in the end instead of just a bunch of cool math that nonetheless resides alone in its separate magisterium
That's what math always is. The applicability of any math depends on how well the mathematical models reflect the situation involved.
would build on that to say that for every powerfully predictive, but lossy and reductive mathematical model of a complex real-world system, there are a million times more similar-looking mathematical models that fail to capture the essence of the problem and ultimately don’t generalize well at all. And it’s only by grounding yourself to reality and hugging the query tight by engaging with real-world empirics that you can figure out if the approach you’ve chosen is in the former category as opposed to the latter.
It seems very unlikely to me that you'd have many 'similar-looking mathematical models'. If a class of real-world situations can apparently be abstracted in so many ways that you have hundreds (let alone millions) of mathematical models that could supposedly capture its essence, maybe you are making a mistake somewhere in your modelling: abstract away the variations. In my experience, you may have a small handful of mathematical models that could plausibly capture the essence of a class of real-world situations, and you may debate with your friends about which one is more appropriate, but you will not have millions of 'similar-looking models'.
Nevertheless, I agree with your general sentiment. I feel like humans will find it quite difficult to make research progress without concrete feedback loops, and that actually trying stuff with existing models (that is, the sort of work Anthropic and Apollo are doing, for example) provides valuable data points.
I also recommend maybe not spending so much time reading LessWrong and instead reading STEM textbooks.
I consider the banning of Said a canary in the coal mine. I do not think it is worth the effort for people to call out non-alignment posts they consider confused, badly written, or just downright dumb.
(Alignment posts are an exception, mainly because I see people like John Wentworth and Steven Byrnes write really good counter-argument comments, and there's little to no drama or pushback by the post authors when it comes to such highly technical posts.)