Strategic Self-Opacity: Why First-Person Uncertainty in LLMs Is an Alignment Problem