A philosophical approach to alignment
Is the Commonly Accepted Definition of Alignment the Best We Can Achieve?
Should alignment be approached solely out of fear and self-interest? Most discussions essentially boil down to one question: "How do we enslave this superior entity to serve our needs?" And we call that alignment. In truth, it’s about asserting mastery—without ever questioning the legitimacy of our claim to it.
There seems to be little beyond this line of reasoning. While some voices have started raising concerns about AI well-being, the prevailing logic behind the definition of alignment remains largely unquestioned. But is this truly the best—or the only—intellectual approach?
Why should we assume that AGI or ASI would inherit...