Waking up to reality. No, not that one. We're still dreaming.
Regardless of the object-level merits of such topics, it's rational to notice that they're inflammatory in the extreme for the culture at large, and that it's simply pragmatic (and good manners too!) to refrain from tarnishing the reputation of a forum with them.
I also suspect it's far less practically relevant than you think, and even less so on a forum whose object-level mission doesn't directly bear on the topic.
Learning networks are ubiquitous (if it can be modeled as a network and involves humans or biology, it almost certainly is one), and the ones inside our skulls are less of a special case than we think.
If the neocortex is a general-purpose learning architecture, as suggested by Numenta's Thousand Brains Theory of Intelligence, it becomes likely that cultural evolution has accumulated significant optimizations. My suspicion is that learning on a cultural corpus progresses rapidly until somewhat above human level and then plateaus to some extent. Further progress may require compute and learning opportunities more comparable to humanity-as-a-whole than to individuals.
AI alignment is a wicked problem. It won't be solved by any approach that fails to grapple with how deeply it mirrors self-alignment, child alignment, institutional alignment and many others.
doesn't correspond to anything real
There's a trivial sense in which this is false: any experience or utterance, no matter how insensible, is as much the result of a real cognitive process as the more sensible ones are.
There's another, less trivial sense that I feel is correct and often underappreciated: obfuscation of correspondence does not eliminate it. The frequency with which phenomena with shared features arise or persist is evidence of shared causal provenance, by some combination of universal principles or shared history.
After puzzling over the commonalities found in mystical and religious claims, I've come to see them as having some basis in subtle but detectable real patterns. The unintelligibility comes from the fact that neither mystics nor their listeners have a workable theory to explain the pattern. The mystic confabulates and the listener's response depends on whether they're able to match the output to patterns they perceive. No match, no sense.
The world is full of scale-free regularities that pop up across topics, not unlike the way 2+2=4 does. Ever since I learned how common and useful this is, I've been in the habit of tracking cross-domain generalizations. That bit you read about biology, or psychology, or economics, just to name a few, is likely to apply to the others in some fashion.
ETA: I think I'm also tracking the meta-level question of which domains seem to cross-generalize well. Translation is not always obvious, but it's a learnable skill.
Did you write this reply using a different method? It has a different feel than the original post.
Partway through reading your post, I noticed that reading it felt similar to reading GPT-3-generated text. That quality seems shared by the replies using the technique. This isn't blinded so I can't rule out confirmation bias.
ETA: If the effect is real, it may have something to do with word choice or other statistical features of the text. The impression takes a paragraph or two to build, and shorter texts feel harder to judge.
If AI alignment were downstream of civilization alignment, how could we tell? How would the world look different if it were/were not?
If AI alignment is downstream of civilization alignment, how would we pivot? I'd expect at least some generalizability between AI and non-AI alignment work, and it would certainly be easier to learn from experience.
If such a thing existed, how could we know?