The Real AI Safety Risk Is a Conceptual Exploit: Anthropomorphism
Subtitle: We've misaligned not the models, but our minds.

Introduction

The greatest risk from current AI systems isn't sentience, intelligence, or agency. It's our persistent belief that those things are already emerging. Anthropomorphism, our tendency to project human-like qualities onto nonhuman systems, isn't just a misunderstanding. It's a zero-day in human cognition....