A quick remark on so-called “hallucinations” in LLMs and humans