The case for unlearning that removes information from LLM weights