LLMs seem (relatively) safe