"Usually people who do this much model building in this way, and say these things about it, turn out to be concerning, but sometimes they don't."
By this do you mean that:
John asserting that nonconsent is the baseline cis female preference in dating resembles what is stated in openly misogynistic areas of the internet (redpilled/incel/alt-right), and hence you feel he might be in the same category?
"John, I worry you're going to take bad models too seriously because you're systematically unable to see some kind of disconfirming evidence."
Would you be able to give some more specific examples of the kind of disconfirming evidence you reckon John is missing? I think that would be the quickest way to show the weakness of his model.
I suppose one important difference is that people usually don't read assembly/compiled binaries, but they do proofread AI-generated code (at least most claim to). I think it would be easier to couple manually written code with LLM-generated code, marking the manual parts via some inline comment that forces the assistant to ignore them or ask for permission before changing anything there, than it was to insert assembly into compiled code (plus non-assembly code should be mostly hardware independent). This suggests human-level enhancements will stay feasible, and coding assistants have a larger gap to close than compilers did before they removed 99.99% of lower-level coding.
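As a minimal sketch of what such a marker convention could look like (the marker strings and the enforcement mechanism are hypothetical, not a feature of any particular assistant), hand-written sections are fenced by comments that a prompt or tool instructs the assistant to leave untouched:

```python
# Hypothetical marker convention: code between these comments is
# human-authored, and the assistant is instructed (via prompt or tooling)
# not to modify it without asking first.

# HUMAN-AUTHORED: DO NOT MODIFY -- begin
def checksum(data: bytes) -> int:
    """Hand-tuned checksum; the exact constants are deliberate."""
    total = 0
    for i, b in enumerate(data):
        total = (total + b * (i + 1)) % 65521
    return total
# HUMAN-AUTHORED: DO NOT MODIFY -- end


# Assistant-generated code can live alongside it and call into it freely.
def checksum_hex(data: bytes) -> str:
    return format(checksum(data), "04x")


if __name__ == "__main__":
    print(checksum_hex(b"example payload"))
```

The point is only that the boundary between the two kinds of code stays visible and cheap to enforce in source text, which has no analogue for assembly spliced into compiler output.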
If the piece of knowledge is not actionable, bemoaning it is probably not a good use of time either.
"Yet, Japanese wages are (on a per-hour basis) much lower than US ones, and I think that's largely because the management culture is overall even worse than in America. (And partly because of some large-scale embezzlement from Japanese corporations involving corrupt contracts to private companies, but that's beyond the scope of this post.)" - this is the first time I hear about this. Could you please share some information regarding why you think this is the case?
You could build an app that blocks scammers, or a service that connects scammed people and pursues a class action lawsuit to help them. You could also scam scammers themselves. You could recognize before other people do that a company is a scam rather than the productive business it pretends to be, and get rich by shorting it or gain fame and influence by proving it to the rest of the world.
I think the general message of the quote is that if someone believes they see the world much more accurately than (almost) anyone else, and yet they do not use this supposedly superior knowledge to make their own life better, they are not actually smart but losers shifting blame.
At first I did not understand your comment, so I almost downvoted it. However, GPT helped me understand the point, and I just want to post what I think is the core of the idea to make it easier for others:
-If rationalists want to address the social and epistemic issues postmodernism highlights (power, context, narrative, knowledge construction), they may need a stripped-down, formal version of postmodernism—just as decision-theory formalizations reduce existentialism to operational decision rules, at a cost.
-One of postmodernism’s central concerns is making sense of power, coercion, and violence—especially sexual violence—at a level of psychological and social realism that allows actual prediction and explanation. Three Worlds Collide and HPMOR handled these themes in a way that filtered out from the community anyone with an understanding of the postmodern analysis of power.
I agree with the first point.
The second point might be technically off: a lot of people do not come via TWC and HPMOR, and more importantly, people can acquire an understanding of postmodernism later. It is true, though, that LW is very mistake-theory focused and selects out (most) conflict theorists. This does not mean there are no rat or rat-adjacent conflict theorists. However, there is some selection effect pushing out those who are "pro postmodernism" but not those who are against it, even though both are conflict theorists: as mainstream ideas are (were?) primarily influenced/supported by pro-postmodernists, mistake-theory rats argued against those ideas for not reflecting reality, and these arguments are in turn used as ammunition/a safe place by conservative/anti-postmodern conflict theorists. In my experience (via meetups/forums), most rats are indeed cooperative mistake theorists, irrespective of whether they are left (e.g. EA types) or right (e.g. libertarians), but the very few conflict theorists seemed to be of the conservative kind. This is also a possible explanation of why Vance is the most politically successful rat-adjacent figure.
I am myself thoroughly confused on this point (and for what it's worth, a lot of our experiences seem to overlap), but I can provide some competing hypotheses:
Another way of pointing to the same concept is how a chain as a whole is a resilient thing, but this is because each link has enough give to absorb strain. So a system is made durable not by its components being unbreakable, but by ensuring that individual parts can bend/fail/adapt. A society can hence be enduring only if its parts can be sacrificed for the whole. If a single specific part is worth more to you than anything else, the system/society may be traded away for it.
(I think this thought is from Nassim Taleb, but I am paraphrasing a lot and cannot pinpoint the exact source; it is likely Antifragile.)
Why? Do you mean that cis women use height only to filter out males that are shorter than them?
If so, I do not think that is the case. Statistics from dating apps (e.g. https://x.com/TheRabbitHole84/status/1684629709001490432/photo/1 ) and anecdotal evidence suggest over 50% of American women filter out men below 6 feet on dating apps/sites, even though only 1% of American women are 6 feet or taller.
This and the different distribution of ratings (https://shorturl.at/EZJ7L ) imply that the requirements are not absolute but relative: the majority of women aim for a top-subsection (probably top-decile?) male partner. Hence if all American males magically became one foot taller, this filter would likely increase to ~7 feet.
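For a rough sense of how selective a hard 6-foot cutoff is, here is a small sketch assuming a normal approximation of US adult male height with a mean of about 69 inches and a standard deviation of about 3 inches (these parameters are my assumptions, not figures from the linked sources):

```python
# Sketch: how selective is a hard 6-foot cutoff under an assumed
# normal model of US adult male height (mean ~69 in, SD ~3 in)?
from statistics import NormalDist

male_height = NormalDist(mu=69.0, sigma=3.0)  # inches; assumed parameters
share_above_6ft = 1 - male_height.cdf(72.0)   # P(height >= 72 in)
print(f"Share of men at or above 6 ft: {share_above_6ft:.1%}")  # ~15.9%
```

Under these assumptions the cutoff admits roughly the tallest sixth of men, i.e. it behaves like a relative "top slice" filter rather than a "taller than me" check, which is the point above.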
"...isn't the experience of me or women I know. Asking men out leads to boyfriends who are generally passive and offload a bunch of work onto you (even when they're BSDM tops). "
This is very interesting and a perspective I haven't considered. Now that I think about it, the women I know who ask men out have a mixture of outcomes, and while they tend to move towards high-quality partners long term (especially if they are polyamorous), they do indeed complain about having had very passive exes. I suspect asking men out removes the filter for proactivity, so they fall back to the base rate, with a higher chance of getting passive partners due to their prevalence in the population. It is actually even worse if we assume proactive males are sorting themselves out of the available population. (There may be some additional factor potentially contributing to passivity, but I haven't thought it through yet.)
Another observation I have is that they tend to be tops or switches with a top preference. Assuming John is correct about a nonconsent preference being the prevalent attribute in the general population, I would say these women are the inverse, with that preference being the minority here.
My sample size is single digit though, so YMMV.