Positive jailbreaks in LLMs — LessWrong