Can we achieve AGI Alignment by balancing multiple human objectives?