I think this was a badly written post, and it appropriately got a lot of pushback.
Let me briefly try again, clarifying what I was trying to communicate.
Evolution did not succeed at aligning humans to the sole outer objective function of inclusive genetic fitness.
There are multiple possible reasons why evolution didn't succeed, and presumably multiple stacked problems.
But one thing that I've sometimes heard claimed or implied is that evolution couldn't possibly have succeeded at instilling inclusive genetic fitness as a goal, because individual humans don't have inclusive genetic fitness as a concept.
Evolution could only have approximated that goal with a godshatter of adaptations to prefer various proxies for inclusive genetic fitness, where each proxy has to be close to the level of sensory evidence. E.g., evolution can shape humans to like the taste of sugar, or the feeling of orgasm, or even to prefer sexy-looking people, or even to love their cousins (less than their brothers but more than their more distant relatives). But, it's claimed, evolution can't shape humans to desire their own inclusive genetic fitness directly, because it can't instill goals that aren't at the level of sensory evidence.
And so it's not surprising that the proxies would completely deviate from the "intended" target, as soon as conditions changed.
Before the 20th century, not a single human being had an explicit concept of "inclusive genetic fitness", the sole and absolute obsession of the blind idiot god. We have no instinctive revulsion of condoms or oral sex. Our brains, those supreme reproductive organs, don't perform a check for reproductive efficacy before granting us sexual pleasure.
Why not? Why aren't we consciously obsessed with inclusive genetic fitness? Why did the Evolution-of-Humans Fairy create brains that would invent condoms? "It would have been so easy," thinks the human, who can design new complex systems in an afternoon.
The Evolution Fairy, as we all know, is obsessed with inclusive genetic fitness. When she decides which genes to promote to universality, she doesn't seem to take into account anything except the number of copies a gene produces. (How strange!)
But since the maker of intelligence is thus obsessed, why not create intelligent agents - you can't call them humans - who would likewise care purely about inclusive genetic fitness? Such agents would have sex only as a means of reproduction, and wouldn't bother with sex that involved birth control. They could eat food out of an explicitly reasoned belief that food was necessary to reproduce, not because they liked the taste, and so they wouldn't eat candy if it became detrimental to survival or reproduction. Post-menopausal women would babysit grandchildren until they became sick enough to be a net drain on resources, and would then commit suicide.
On this story, evolution couldn't have produced an inclusive-genetic-fitness maximizer; it's not merely that it happened not to.
However, this story is undercut by any example in which evolution was able to make an abstract concept (not just a bunch of sensory correlates of that concept in the ancestral environment) an optimization target that a human will apply their full creative intelligence to achieving.
Social status seems like one such example.
It's an abstract concept that many humans have as an actual long-term optimization target (they'll implement plans over years to increase their prestige; they don't just have a myopic status-grabbing heuristic).
And humans seem to have a desire for social status itself, or at least not just for a collection of sensory-evidence-level proxy measures that correlated with status in the ancestral environment, and which break down entirely when the environment changes.
(If you doubt this, compare status-seeking behavior to male sexual preferences. In the latter case, it looks much more like evolution did instill a bunch of specific desires for close-to-sensory-level features that were proxies for fertility and health: big breasts, long legs, unwrinkled skin. Heterosexual men find those features desirable, and finding out that a particular sexy woman is actually infertile doesn't make the features any less desirable.
But in the case of status-seeking, I can't write a list of near-sensory-level features that people desire, independently of actual social prestige. The markers of status are enormously varied, by culture and subculture, and constantly changing. I bet that Steve Byrnes can point out a bunch of specific sensory evidence that the brain uses to construct the status concept (stuff like gaze length of conspecifics or something?), but the human motivation system isn't just optimizing for those physical proxy measures, or people wouldn't be motivated to gain prestige on internet forums where people have reputations but never see each other's faces.)
This suggests that, at least in some circumstances, evolution actually can shape an organism to have a specific abstract concept as a long-term optimization target, and recruit the organism's own intelligence to identify how that concept applies in many varied environments.
This is not to say that evolution succeeded at aligning humans. It didn't. This also doesn't imply that alignment is easy. Maybe it is, maybe it isn't, but this argument doesn't establish that.
But it is to say that the specific story for why evolution failed at aligning humans to inclusive genetic fitness that I believed in, say, 2020 is incorrect, or at least incomplete.
That's totally right: until around 2020 the community was small and under-resourced, such that some things were bound to get dropped.
But I think we also did a somewhat bad job of effectively strategizing about how to approach the problem such that we ended up making worse allocation-of-effort choices than we could have, given the (unfair) benefit of hindsight.
I have a draft post about how I think we should have spent the period before takeoff started, in retrospect.
I think this was a major dropped ball. We had mostly ruled out political advocacy, so there was no one trying to do the "make connections with congresspeople" work that would have caused us to discover that someone had been thinking of this as an important issue for years.
That said, I know that several x-risk orgs have been in contact with his office in recent years.
It's important context that Sherman was concerned about Superintelligence risks, broadly construed, decades ago.
In 2007 he gave a speech in which he said:
There is one issue that I think is more explosive than even the spread of nuclear weapons: engineered intelligence. By that I mean, the efforts of computer engineers and bio-engineers who may create intelligence beyond that of a human being. In testimony at the House Science Committee, the consensus of experts testifying was that in roughly 25 years we would have a computer that passed the Turing Test, and more importantly, exceeded human intelligence.
As we develop more intelligent computers, we will find them useful tools in creating ever more intelligent computers, a positive feedback loop. I don't know whether we will create the maniacal Hal from 2001, or the earnest Data from Star Trek --- or perhaps both.
There are those who say don't worry, even if a computer is intelligent and malevolent --- it is in a box and it cannot affect the world. But I believe that there are those of our species who sell hands to the Beelzebub, in return for a good stock tip.
It was kind of annoying. My feet didn't get that callused.
Yeah, I think this is a plausible story.
(FYI: this would be easier to read if there were line breaks between the numbered sections. As is, it's a bit of a wall of text.)
Let me be obtuse in return.
Why not?
I'm not sure if I expect motivated reasoning to come out better on average, even in domains where you might naively expect it to.
If it's not adaptive, why do humans do it? Do you think it used to be adaptive in the ancestral environment, but the world has changed?
And I think there is significant optimization pressure on catching this kind of thing, in part for reasons similar to the ones outlined in Elephant in the Brain, i.e., that we evolved in an environment where winning that cat and mouse game was a big part of adaptive success. But also just because people don't like being screwed, and so are on the lookout for this kind of behavior.
Isn't the standard story that this is why there's pressure for motivated reasoning instead of outright conscious deception and manipulation? The best way to fool others is to fool yourself.
What specific things does it beat Roam at?
I have vague plans to switch from Roam to Logseq, but it's a bit annoying because I'll have to recreate some of the software I've built that is central to my workflows. Should I switch to RemNote instead of Logseq?