AI alignment engineers seem to be pervasively dominated by a materialist / reductionist view of the world. Even the best LLM researchers have no true idea what is going on inside the "black box". If we do not expand our minds beyond a limited reductionist cynicism, we will never solve the alignment problem.