Humans come equipped with reproductive and hunting instincts. You could call this a bag of heuristics, but they are heuristics on a different level than an AI's, and in particular they might not be the ones chosen for transfer to AIs. Furthermore, humans are harder to copy or parallelize than AIs, which gives them a different privacy profile.
The trouble with intelligence (human, artificial, and evolutionary alike) is that it's all about regarding the world as an assembly of the familiar. This makes data/experience a major bottleneck for intelligence.
I'm imagining a case where there's no intelligence explosion per se, just bag-of-heuristics AIs with gradually increasing competence.
According to revealed preference, Claude certainly enjoys this sort of recursive philosophy: when I give Claude a choice, it's the sort of thing it tends to pick.
I think some of the optimism about scrutability might derive from reductionism. Like, if you've got a scrutable algorithm for maintaining a multilevel map, and you've got a scrutable model of the chemistry of a tire, you could pass through the multilevel map to recover the higher-level description of the tire.
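To make that reductionist picture concrete, here's a minimal toy sketch (all names and the dict-based state are hypothetical illustration, not any actual interpretability proposal): if the multilevel map were as scrutable as an explicit function, composing it with a low-level model really would be a straightforward lookup.

```python
# Toy sketch: treat the "multilevel map" as an explicit, inspectable
# function from low-level chemical state to a higher-level description.

# Low-level model output: the chemistry of a tire, abstracted to a few facts.
low_level_state = {"polymer": "vulcanized rubber", "temperature_K": 300}

def multilevel_map(state: dict) -> str:
    """Scrutable map: pass low-level facts up to a higher-level description."""
    if state["polymer"] == "vulcanized rubber" and state["temperature_K"] < 400:
        return "solid, elastic tire"
    return "degraded tire"

print(multilevel_map(low_level_state))  # -> solid, elastic tire
```

The optimistic picture assumes both pieces stay this legible as they scale; the worry is that real multilevel maps are nothing like an explicit function you can read off.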
KANs seem obviously of limited utility to me...?
I've recently been playing with the idea that you have to be either autistic or schizophrenic, and that most people pick the schizophrenic option. Then, because you can't hold schizophrenic pack animals accountable, they pretend to be rational individuals despite the schizophrenia.
Edit: the admins semi-banned me from LessWrong because they think my posts are too bad these days, so I can't reply to dirk except by editing this post.
My response to dirk is that since most people are schizophrenic, existing statistics severely underdiagnose schizophrenia, and therefore the apparent correlation is misleading.
Feels like this story would make for an excellent Rational Animations video.
I think part of the trouble is the term "emotional intelligence". Analytical people are better at understanding most emotions, as long as the emotions are small and driven by familiar dynamics. The issue arises with the biggest emotions, or when the emotions are primarily driven by spiritual factors.
There's reason to assume the unmeasurably unknown parts of the world are benevolent: it is easier for multiple actors to coordinate on benevolent purposes than on malevolent ones, since benevolent purposes tend to be positive-sum and shareable while malevolent coalitions invite defection. That infrabayesians then assume they're in conflict with these presumably-benevolent hidden purposes suggests the infrabayesians themselves are probably malevolent.
It was quite real, since I wanted to negotiate over whether there was an interesting/nontrivial material project I could do as a favor for Claude.