Introduction
Leading AI researchers, transhumanists, and regulatory experts seem inclined to think that the most morally significant problem to solve in relation to AI is its alignment with human values. This is typically referred to simply as 'the alignment problem', a term whose humanist associations are taken for granted. It is seldom questioned whether human values are themselves rational and morally optimal. In this essay it will be argued that discussion of 'AI alignment' should shift from a focus on alignment with human values to alignment with values that are rational.
Defending Consequentialism
The claim that moral beliefs are falsifiable is repudiated by those who contend that morality is altogether relative. Such people...