Could LLM alignment research reduce x-risk if the first takeover-capable AI is not an LLM?