Software developers often say that the bug is usually located at the chair/keyboard interface.
I fear that even very good RSPs will be ineffective in the face of sheer stupidity or deliberate malevolence.
There are already major threats:

- Nuclear weapons. The only thing which protects us is the fact that only governments can use them, which means our security depends on those governments being responsible enough. So far it has worked, but there is no actual protection against truly insane leaders.
- Global warming. We have known about it since the beginning of the century, but we act as if we could postpone solutions indefinitely, knowing full well that this is not true.
This doesn't bode well for the future.
Also, one goal of RSPs would be to train AIs in such a way that they cannot present major risks. But what if LLMs are developed as free software and anyone can train them in their own way? I don't see how we could control them or impose limits.
(English is not my mother tongue: please forgive my mistakes.)