LLMs Are Trained to Assume Their Output Is Perfect