The idea that sleep evolved to establish or avoid certain hunting patterns doesn't seem complete to me. There are very harsh penalties for not sleeping: you become less intelligent, less amiable, and you might randomly lose consciousness. If sleep were merely meant to provide a schedule for our lives, I'd expect us to have evolved incentives that don't penalize us in other areas. E.g. a pack of early humans who enjoy sleep, but can have individuals stay up all night to watch for predators with no penalty, would out-compete packs who can't. My guess is that sleep fulfills some unknown function, which is why the short-sleeper gene(s) haven't already spread throughout the population.
This isn't to say I don't support the research; I'm pretty sure I suffer from sleep apnea myself, and I work a job where falling asleep is both common and a hazard (yeah, maybe I didn't think that one all the way through, given the daytime sleepiness the apnea causes). But I'm worried about what sort of long-term effects will crop up, as any such drug or treatment seems a little too good to be true.
It might be that I'm just unintentionally misconstruing your argument, but I think you're limiting "wizard power" to STEM fields, which is a mistake.
Napoleon was most definitely a "king" (or an emperor, if you want to be literal), but he was also very much directing the parade he marched in front of. In a sense, he was a sociological engineer, having turned his country into a kind of machine he could direct toward securing his own vision of the world.
In contrast, consider "Dave". Dave has mastery over the various methods of creation you listed: he knows CAD, can program, etc. But he works for Apple as the team lead for the newest iPhone, and what he creates is not up to him. Despite having wizard skills, Dave is more like a bureaucrat, high in "king power".
The STEM-type wizard is really good at solving very specific problems, like curing a disease or making crops grow better, while the Napoleon-type wizard probably operates more in the abstract, wrestling with bigger ideas, albeit with less direct control over them.
Hypothetical hostile AGI is generally assumed to run on software/hardware more advanced than what we have now. This makes sense, as ChatGPT is very stupid in a lot of ways.
Has anyone considered purposefully creating a hostile AGI on this "stupid" software so we can wargame how a highly advanced hostile AGI would act? Obviously the gap between what we have now and what we may have later will be quite large, but we could create a project where we "fight" stupid AIs, then slowly move up the intelligence ladder as new models come out, using our newfound knowledge of fighting hostile intelligences to mitigate the risk that comes with creating hostile AIs.
Has anyone thought of this before? And what are your thoughts on it? Alignment and AI are not my specialties, but the idea sounded interesting enough to share.
Very interesting. Question: How does putting humans into cryonic suspension relate or contribute to the metaphor, if at all?
You mention trying to establish rationalist norms in the group by yourself. Do you think that if there were two, three, four, or more people trying to do that, you would've seen more success? I'm reminded of this video:
See how everyone at the start is staring at the guy like he's crazy?
One person engaging in a set of norms in a group is just a weirdo.
Two people engaging in a set of norms in a group is just two weirdos.
But somewhere between three and ten weirdos creates a cascading effect, and then they aren't weirdos anymore.
I know firsthand that turning a non-rationalist space into a rationalist one is nearly impossible on your own; I attempted to teach a political group of my own leanings rationalist principles, with zero success.
The other treatment I would attempt is having the organizers tell everyone to be as deliberately disagreeable as possible while still remaining intellectually rigorous. Suddenly all this talk about fallacies and epistemology isn't just techno-babble from an annoying social misfit, but someone following the rules laid out at the start. From there you can push your values from a socially safe position, and possibly bring on board other people who see your example.
I'm putting these forward as hypotheses; there's a good chance you're right, and most people are incapable of internalizing these principles.