Towards an Axiology Approach to AI Alignment