To me, the idea of "fully human-level capable AI" is a double myth. It works only insofar as we do not try to ascribe concrete capabilities to the model. Anything human-level that can be parallelized is by definition super-human; that's why it's a myth to me in the first place. Additionally, "human-level capabilities" just makes very little sense to me as a description of a model. Is this a brain simulation that feels boredom and longs for human rights? Or is it "just" a very general problem-solving tool, something akin to an actually accurate vision-language model? That is a categorical difference.
Accurate, general problem-solving tools are far more likely and, in the wrong hands, can probably cause far more harm than a "virtual human" ever could. On the other hand, the simulated brain raises many more ethical concerns, I would say.
To actually answer the question, I'm not concerned about a fast takeoff. There are multiple reasons for this:
Yes, until we define rigorous terms and prove otherwise, there is certainly a possibility. But compared to "mundane" worries like climate change and socioeconomic inequality, this potential existential threat does not even register.
Quantity has a quality all of its own. I think you're absolutely correct, and you point out a good reason why self-moderation can become insufficient once this "critical mass" is reached. My advantage is that ours is not a forum-based platform but mostly chat, so it's much more likely that at least one moderator sees each message, or at least the most obviously problematic ones. Would you say that, as the quantity increases, effective moderation becomes key?