This analogy is misleading because it pumps the intuition that we know how to generate the algorithmic innovations that would improve future performance, much as we know how to tie our shoelaces once we notice they are untied. This is not the case. Research programmes can and do stagnate for long periods because crucial insights are hard to come by and hard to implement correctly at scale. Predicting the timescale on which algorithmic innovations occur is a very different proposition from predicting the timescale on which it will be feasible to increase parameter count.
As some other commenters have said, the analogy with other species (flowers, ants, beavers, bears) seems flawed. Human beings are already (limited) generally intelligent agents. Part of what that means is that we have the ability to direct our cognitive powers to arbitrary problems in a way that other species do not (as far as we know!). To my mind, the way we carelessly destroy other species' environments and doom them to extinction is a function of both the disparity in power and the disparity in generality, not just the former. That is not to say that a power disparity alone does not constitute an existential threat, but I don't see the analogy being of much use in reasoning about the nature of that threat.
If the above is correct, perhaps you are tempted to respond that a sufficiently advanced AI would replicate the generality gap as well as the power gap. However, I think the notion of generality that is relevant here (which, to be sure, is not the only meaningful notion) is a 0-to-1 phase transition. Our generality allows us to think about, predict, and notice things that could thwart our long-term collective goals. Once we start noticing such things, there is no level of intelligence an unaligned third-party intelligence can reach that somehow puts us back in the position of not noticing, relative to that third party.