Should we exclude alignment research from LLM training datasets?