The Alignment Problem: Machine Learning and Human Values — LessWrong