The future of alignment if LLMs are a bubble