Occasionally think about topics discussed here. Will post if I have any thoughts worth sharing.
I think somatic gene therapy, while technically possible in principle, is extremely unpromising for intelligence augmentation. Creating a super-genius is almost trivial with germline engineering. Provided we know enough causal variants, one would only need to make a low-hundreds number of edits to a single cell to make someone smarter than any human who has ever lived. With somatic gene therapy you would almost certainly have to alter billions of cells to get anywhere.
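A quick back-of-envelope, purely illustrative (the edit count and cell count here are my own rough assumptions, not established figures), of why the two approaches differ so wildly in scale:

```python
# Hypothetical numbers: ~300 causal-variant edits, ~86 billion neurons as
# the somatic target. The point is the ratio, not the exact values.
edits_per_cell = 300
germline_cells = 1       # edit one zygote; every descendant cell inherits the edits
somatic_cells = 86e9     # rough neuron count, ignoring glia and delivery efficiency

germline_events = edits_per_cell * germline_cells
somatic_events = edits_per_cell * somatic_cells
print(f"germline: {germline_events:.0f} edit events in a single cell")
print(f"somatic:  {somatic_events:.1e} edit events, each needing in-vivo delivery")
# roughly 11 orders of magnitude more editing, before counting delivery losses
```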
Networking humans is interesting, but we have nowhere near the bandwidth needed right now. As a rough guess, let's suppose we need bandwidth comparable to the corpus callosum; Neuralink is ~5 OOMs off.
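A sketch of where that ~5 OOMs figure comes from, assuming roughly 200 million corpus callosum fibers and a ~1024-channel Neuralink-class implant (both numbers are order-of-magnitude assumptions on my part):

```python
import math

corpus_callosum_fibers = 200e6   # assumed fiber count, order-of-magnitude only
bci_channels = 1024              # assumed channel count for a current implant

gap = corpus_callosum_fibers / bci_channels
print(f"gap: {gap:.1e}x (~{math.log10(gap):.1f} orders of magnitude)")
# -> gap: 2.0e+05x (~5.3 orders of magnitude)
```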
I suspect human intelligence enhancement will not progress much in the next 5 years, not counting human/ML hybrid systems.
Is this your position: there is no acceptable reason to deliberately optimize for s-risky things like sadism, and doing so to red-team s-risk detection is obviously madness. But red-teaming conventional misalignment, where the absolute worst outcome is simply that everyone dies (which is the default anyway), maybe makes some sense?
I do sometimes worry that humanity is dumb enough to create “gain of function for s-risks” and call it alignment research.
I mean the kind of perception one needs to empty a random dishwasher, make a cup of coffee with an unfamiliar coffee machine, clean a room, or hunt and skin a rabbit.
I don't think reading/writing is very easy for humans, compared to perception and embodied tasks. My Moravec's paradox intuition here is that maths is of a similar order of difficulty to what we have been very successfully automating in the last couple of years, so I expect it will happen soon.
A lot of my confidence that this will happen comes from this and a generalized Moravec's-paradox-style "hard things are easy, easy things are hard" intuition.
One argument I've had for self-driving being hard is: humans drive many millions of miles before they get into fatal accidents. Would it be that surprising if there were AGI-complete problems somewhere in that long tail? My understanding is that Waymo and Cruise both use teleoperation in these cases. One could imagine automating this, a god advising the ant in your analogy. But still, at that point you're just doing AGI research.
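For a rough sense of how long that tail is (the per-mile fatality figure and trip length are my own order-of-magnitude assumptions):

```python
# Assumed: on the order of one fatal crash per ~100 million vehicle miles,
# and a typical trip of ~10 miles. Both are rough, illustrative numbers.
miles_per_fatal_crash = 100e6
avg_trip_miles = 10

trips_per_fatal_crash = miles_per_fatal_crash / avg_trip_miles
print(f"~{trips_per_fatal_crash:.0e} ordinary trips per fatal crash")
# matching human-level safety means handling situations that only show up
# once in millions of trips, some of which may be AGI-complete
```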