Applying Temperature to LLM Outputs Semantically to Minimise Low-Temperature Hallucinations