General alignment plus human values, or alignment via human values?