If a piece of knowledge is not actionable, bemoaning it is probably not a good use of time either.
"Yet, Japanese wages are (on a per-hour basis) much lower than US ones, and I think that's largely because the management culture is overall even worse than in America. (And partly because of some large-scale embezzlement from Japanese corporations involving corrupt contracts to private companies, but that's beyond the scope of this post.)" - this is the first time I hear about this. Could you please share some information regarding why you think this is the case?
You could build an app that blocks scammers, or a service that connects scammed people and pursues class action lawsuits on their behalf. You could also scam the scammers themselves. You could recognize, before other people do, that a company is a scam rather than the productive business it pretends to be, and get rich by shorting it, or gain fame and influence by proving it to the rest of the world.
I think the general message of the quote is that if someone believes they see the world much more accurately than (almost) anyone else, and yet they do not use this supposedly superior knowledge to make their own life better, they are not actually smart, but losers shifting blame.
At first I did not understand your comment, so I almost downvoted it. However, GPT helped me understand the point, and I just want to post what I think is the core of the idea to make it easier for others:
- If rationalists want to address the social and epistemic issues postmodernism highlights (power, context, narrative, knowledge construction), they may need a stripped-down, formal version of postmodernism—just as decision-theory formalizations reduce existentialism to operational decision rules, at a cost.
- One of postmodernism's central concerns is making sense of power, coercion, and violence—especially sexual violence—at a level of psychological and social realism that allows actual prediction and explanation. Three Worlds Collide and HPMOR handled these themes in a way that filtered out of the community anyone with an understanding of the postmodern analysis of power.
I agree with the first point.
The second point might be technically off: a lot of people do not come via TWC and HPMOR, and, more importantly, people can acquire an understanding of postmodernism later. It is true, though, that LW is very mistake-theory focused and selects out (most) conflict theorists. This does not mean there are no rat or rat-adjacent conflict theorists. However, there is a selection effect pushing out those who are "pro postmodernism" but not those who are against it, even though both are conflict theorists: since mainstream ideas are (were?) primarily influenced/supported by pro-postmodernists, mistake-theory rats argued against those ideas on the grounds that they did not reflect reality. These arguments are in turn used as ammunition / a safe space by conservative/anti-postmodern conflict theorists. In my experience (via meetups/forums), most rats are indeed cooperative mistake theorists, irrespective of whether they are left (e.g. EA types) or right (e.g. libertarians), but the very few conflict theorists seemed to be of the conservative kind. This is also a possible explanation of why Vance is the most politically successful rat-adjacent figure.
I am myself thoroughly confused on this point (and, for what it's worth, a lot of our experiences seem to overlap), but I can provide some competing hypotheses:
Another way of pointing at the same concept: a chain as a whole is a resilient thing, but only because each link has enough give to absorb strain. A system is made durable not by its components being unbreakable, but by ensuring that individual parts can bend/fail/adapt. A society can hence be enduring only if its parts can be sacrificed for the whole. If a single specific part is worth more to you than anything else, the system/society may be traded away for it.
(I think this thought is from Nassim Taleb, but I am paraphrasing a lot and cannot pinpoint the exact source; it is likely Antifragile.)
Why? Do you mean that cis women use height only to filter out males that are shorter than them?
If so, I do not think that is the case. Statistics from dating apps (e.g. https://x.com/TheRabbitHole84/status/1684629709001490432/photo/1 ) and anecdotal evidence suggest that over 50% of American women filter out men below 6 feet on dating apps/sites, even though only 1% of American women are 6 feet or taller.
This, and the different distribution of ratings (https://shorturl.at/EZJ7L ), implies that the requirements are not absolute but relative: the majority of women aim for a top subsection (probably top decile?) of male partners. Hence, if all American males magically became one foot taller, this filter would likely increase to ~7 feet.
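As a rough sanity check on the "relative, not absolute" reading, here is a minimal sketch of how far up the male height distribution a 6-foot cutoff sits. The normal approximation and the mean/SD figures (~175.3 cm and ~7.6 cm, roughly the CDC values for US adult men) are my assumptions, not taken from the linked statistics:

```python
from statistics import NormalDist

# Assumed distribution of US adult male height (approximate CDC figures).
mean_cm = 175.3
sd_cm = 7.6
cutoff_cm = 6 * 30.48  # 6 feet = 182.88 cm

men = NormalDist(mu=mean_cm, sigma=sd_cm)
share_above = 1 - men.cdf(cutoff_cm)
print(f"Share of US men at or above 6 ft: {share_above:.1%}")  # roughly 16%
```

Under these assumptions, a 6-foot filter keeps only about the top sixth of men, which is in the same ballpark as the "top decile" guess above.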
Because "tall" is context dependent. In Laos the average male height is 163 cm (5"4). In the Netherlands it is 184 cm (6 ft). If your height is 180 cm, you are very tall in Laos, but below average in the Netherlands.
"What does democracy even mean when your vote can't even in principle influence the laws of where you live? Why should any populace grant its authority to enact certain laws to a larger entity that doesn't share its values? Etc."
The concept of the nation state is already guilty of all of this. The smallest legislature is your city/town/village council, followed by the county, and in some cases even a regional legislature-like body. A nation state already takes most legislative rights from these and dilutes your vote among those of millions of other citizens.
Before nation states were invented in the 19th century*, afaik most European laws were actually pretty much locally made and enforced by the feudal lord or town council of the territory. It feels unfathomable today, but back then a lot of towns had basically the same level of sovereignty as countries do now.
*Technically it started eroding earlier, with kings trying to centralize power, but in a lot of places it remained mostly intact until incorporation into nation states.
I suppose one important difference is that people usually don't read assembly/compiled binaries, but they do proofread AI-generated code (at least most claim to). Compared to inserting assembly into compiled code, I think it would be easier to couple manual code with LLM-generated code, marking it via some inline comment that forces the assistant to ignore it or ask for permission before changing anything there (plus non-assembly code should be mostly hardware-independent). This suggests that human-level enhancements are going to stay feasible, and that coding assistants have a larger gap to close than compilers did before the latter removed 99.99% of lower-level coding.
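For what it's worth, a minimal sketch of what such an inline marker convention could look like; the `HUMAN-LOCKED` tag and the accompanying assistant rule are hypothetical illustrations, not a feature of any existing tool:

```python
# HUMAN-LOCKED: begin  (hypothetical marker; pair it with an assistant rule like
# "never modify code between HUMAN-LOCKED: begin/end without asking first")
def settle_invoice(amount_cents: int, fee_bps: int) -> int:
    # Hand-written billing logic the assistant must not touch.
    return amount_cents + (amount_cents * fee_bps) // 10_000
# HUMAN-LOCKED: end

# Unmarked code below remains fair game for the assistant to refactor.
def settle_many(amounts: list[int], fee_bps: int) -> list[int]:
    return [settle_invoice(a, fee_bps) for a in amounts]
```

The marker costs nothing at runtime (it is just a comment), which is part of why mixing hand-written and generated code seems cheaper than splicing assembly into a compiler's output.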