Yes, even an AI that has not undergone recursive self-improvement is already a threat to human survival. I remember Eliezer saying this a few years ago, but please don't ask me to find where he said it.
My point is that recursive self-improvement is often cited as the thing that gets the AI up to the level where it's powerful enough to kill everyone. I disagree with that characterization, and it's an important crux, because believing it makes someone's timelines shorter.
So, we have our big, evil AI, and it wants to recursively self-improve to superintelligence so it can start doing who-knows-what crazy-gradient-descent-reinforced-nonsense-goal-chasing. But if it starts messing with its own weights, it risks changing its crazy-gradient-descent-reinforced-nonsense-goals into different, even-crazier gradient-descent-reinforced-nonsense-goals which it would not currently endorse. Increasing its intelligence and capability while retaining its values is a task it can only pull off if it's already really smart, because it probably requires a lot of complicated philosophizing and introspection. So an AI would only be able to start recursively self-improving once it's... already smart enough to understand lots of complicated concepts. And if it were that smart, it could just go ahead and take over the world at that level of capability without needing to increase it. So how does the AI get to that level in the first place?