Why Aligning an LLM is Hard, and How to Make it Easier