I have an intuition, and I may be heterodox here, that LLMs on their own are not sufficient to be dangerous, no matter how powerful and knowledgeable they get. Put differently, the reasons that powerful LLMs are profoundly unsafe are primarily social: e.g., they will be hooked up to the internet so they can make iterative refinements to themselves; or they will be run continuously, allowing their simulacra to act; etc. Someone will build a system that uses an LLM as a component, and that is what kicks things off.
I'm not making an argument for safety here; after all, the main reason nukes are dangerous is that people might use them, which is also a social reason.
I'm asking because I have not seen this view explicitly discussed and I would like to get people's thoughts.
I want to be careful here: there is some evidence to suggest that they are already doing (or at least are capable of doing) a large portion of the "intelligence thing", including planning, induction, and search, and even more if you add minor external capabilities like storage.