Modifying LLM Beliefs with Synthetic Document Finetuning