Research: Unvalidated Trust in LLMs — LessWrong