Many people have been reconfigured to identify with the interests of the institutions that validate them, as part of the price of admission to privilege; this may describe a supermajority in “developed” countries. Otherwise I think you’re characterizing my perspective accurately, and it’s unclear to what extent the resulting preferences are the preferences of a person as we might naïvely imagine.
See also this thread. I don't expect these sorts of problems to be solved by 2026, and I'll be pleasantly surprised if I'm not still constantly dealing with this sort of nonsense, or with totally craven flip-flopping, when asking LLMs for help with basic info by 2030.
I think a large part of the mysterious-seeming banter-to-sex transition is explained by antinormative attitudes towards sex. For some large portion of people, the mate-seeking drive is tangled up with a desire for covertness, for which there is culturally specific[1] support.
"Romance" and "romanticism" seem to be fundamentally the (ideally mutual) intent to mate transgressively, "you and me against the world." As I understand it, "romance" is specifically a modern Western[2] phenomenon explicitly opposed to formal statelike systems of accountability.
Trinley Goldenberg alludes to the function of banter:
>flirting leads to sex is by continually ramping up sexual tension by a sort of plausible deniability of sexual interest
But the important thing to understand is why people are seeking plausible deniability. Naturally the opposition to accountability is disinclined to give an honest account of itself, so people will tend to deflect from the central question onto tangential issues like the quality of banter, or vague pointers like "sexual tension." But if your sexuality isn't about being naughty and getting away with something, there's little point in mimicking the otherwise extremely inefficient plausible-deniability rituals (such as the ones described in the OP) needed to build inexplicit, covert mutual knowledge of attraction. Dancing works better for you because it is a virtue signal.
See also:
>why/how you liked it … detailed feedback
Were you maybe thinking of a different comment than the one I replied to? These don’t seem to be present.
>Strongly upvoted. Great post. […] would love to read more like it.
I think this is what the upvote button is for.
>I disagree
If you’re not going to offer details, this seems like it would have been better as an agree/disagree reaction.
By EoY 2026 I don't expect this to be a solved problem, though I expect people to find workarounds that involve lowered standards: https://benjaminrosshoffman.com/llms-for-language-learning/
By EoY 2030 I still expect LLMs to often mess up tasks like this one (scroll down a bit for the geometry fail), though any particular example that gets famous enough can get Goodharted, even with minor perturbations, by jerry-rigging enough non-LLM modules together. My subjective expectation is that they'll still frequently fail the "strictly a word problem" version of such problems: ones that require simple geometric reasoning about a multi-part object that isn't a typical word-problem object.
I don't expect them to be able to generate Dead Sea Scroll forgeries with predominantly novel content specified by the user, that hold up to good textual criticism, unless the good textual critics are all retired, dead, or marginalized. I don't expect them to be able to write consistently in non-anachronistic idiomatic Elizabethan English, though possibly they'll be able to write in Middle English.
Not sure these are strictly the "easiest" but they're examples where I expect LLMs to underperform their vibe by a LOT, while still getting better at the things that they're actually good at.
When the problematic adjudicator isn't the dominant one, one can either safely ignore them or escalate to someone less problematic who does hold power, so sabotage brings no benefit and carries reputational harm.
Relatedly I think the only real solution to the "lying with statistics" problem is the formation of epistemic communities where you're allowed to accuse someone of lying with statistics, it's adjudicated with a preponderance-of-evidence standard, and both false accusations and evidence that you're lying with statistics are actually discrediting, proportionate to the severity of the offense and the confidence of the judgment.
That last bit seems wrong to me because the "good location" premium is so large; see e.g. https://www.crackshackormansion.com/. Davis and Palumbo (2006) estimated land value as 50% of residential real estate value, up from 32% in 1984, and home prices in aggregate have continued to rise for the same reasons.
Your "cannon fodder" argument got me thinking; I don't exactly think the argument depends on a new sort of fully distinct intelligence emerging, but rather on a change in how our existing superorganisms are constituted. Modern states emerged in part as a mass-mobilization technology, and were therefore biased towards democracy. But as we learn to automate more things, smaller groups of humans better at implementing automation can outcompete larger groups of people mobilized by ideologies or other modern methods. If this keeps going, maybe we'll end up like the Solarians in Asimov's The Naked Sun for a while: a low-fertility skeleton crew of highly territorial, lonesome tech-yeomen. If the skeleton crew is sufficiently infertile, it may leave behind a rigid set of automations that eventually collapse for want of maintenance by a living mind, much like the house in Ray Bradbury's story "There Will Come Soft Rains."
I think the governance problems you're describing are hard for two distinct but related reasons worth addressing separately:
Disagreement on Values
Not everyone relates to governance and penalties the same way.
Some people have the naïvely appropriate attitude that governance is to protect public goods; if someone is likely to behave in ways that are dangerous or harmful to others we should identify them, and exclude them when needed to protect others. Punishments are a cost, and the point of graduated punishment and judicial process is to make sure they're imposed only in cases where the benefit outweighs the cost.
Other people relate to punishments as a scapegoating process where from time to time the vibes demand we other someone, and the narrative about why we're doing that is just part of the way we negotiate who gets othered. The fact that this is obviously illegitimate and parasitic on the naive attitude doesn't prevent it from being how a lot of people feel.
Still others instrumentalize the whole thing as a tool to be used against their personal enemies, or as a way to demonstrate factional loyalty.
Yet others see some positive opportunity in being naughty and getting away with things, will try to help out others who seem cool, and will try to derail any investigation that threatens this sort of covert mutual protection league.
Obviously we're not going to get the naïvely good outcomes from a process that a large minority, if not an outright majority, of participants are trying to derail one way or another, so there's little point in trying unless we think about the adversarial problem explicitly.
Overly broad scope of penalties
One of the major strategic wins scapegoating has scored against pronormativity is that people seem to assume that the natural punishment for sufficiently objectionable behavior is banning and shunning, regardless of the scope of the behavior. In many cases this is a bad penalty to impose, and the prospect of excessive penalties will motivate people otherwise sympathetic to the investigative process to opt out, derail it, or even exit the community entirely, further worsening the problem of an antinormative majority.
For example, if someone tends to behave objectionably in certain ways to romantic partners, the obvious remedy is to make this information available to people considering dating them, so they can decide whether they find that behavior objectionable enough to avoid; excluding them from common spaces is just ridiculous.
I stopped engaging with or supporting the Bay Area EA / Rationalist community's attempts to create public spaces, in response to a couple of cases where the "investigative" process seemed totally compromised by a compulsion toward "splitting": formal community institutions were not trying to figure out and publicize what had happened and why, and then come up with rational risk-mitigation measures, but instead trying to figure out who the "bad guy" was and exclude them. For instance, the Brent Dill "investigations" resulted in penalties but no clear findings. And then the only friend who had bothered to help me notice that I'd been participating in a culture of communal dishonesty around EA and AI risk, and to figure out how to deconfuse myself about this, was banned for unspecified reasons. Functionally I think I'm justified in construing this as a coverup, not an attempt at enforcing generalizable norms.
Related:
https://benjaminrosshoffman.com/guilt-shame-and-depravity/
https://unstableontology.com/2021/04/12/on-commitments-to-anti-normativity/