A Black-Box Procedure for LLM Confidence in Critical Applications — LessWrong