I have some related discussion in Section 2.1 here. I think if I were writing the list, I would remove the assumption of bad faith from 4, i.e. my 4 choices would be:
And then I think your particular examples are a mix of 1, 2, and (my now-broader) 4.
I feel like it's 4 ~ 1 > 2 > 3. CNNs seem like an example of this: the artificial neural networks and actual brains face similar constraints and wind up with superficially similar solutions, but when you look at the tricks CNNs actually use (especially weight-sharing, but also architecture choices, choice of optimizer, etc.), they're not very biology-like and were developed based on abstract considerations more than biological ones.
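To make the weight-sharing point concrete, here's a minimal NumPy sketch (not tied to any particular framework, and `conv2d_valid` is just an illustrative helper name) of what a convolutional layer does: one small kernel is reused at every spatial position, a kind of parameter tying with no obvious biological counterpart.

```python
import numpy as np

# Sketch of the weight-sharing trick in a CNN layer: a single small kernel
# is reused at every spatial position, so the parameter count is independent
# of image size. Biological vision has no obvious mechanism for keeping
# thousands of synapses exactly identical, which is the point above.

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))   # toy grayscale input
kernel = rng.standard_normal((3, 3))    # the ONLY 9 weights this layer owns

def conv2d_valid(x, k):
    """Slide the same kernel over every position ('valid' padding)."""
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)  # identical weights everywhere
    return out

feature_map = conv2d_valid(image, kernel)

# A fully-connected layer producing the same 26x26 output would need an
# independent weight for every (input pixel, output unit) pair:
fc_params = image.size * feature_map.size
print(f"conv parameters: {kernel.size}")    # 9
print(f"dense parameters: {fc_params}")     # 529,984
```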
Many influential AI techniques either explicitly draw inspiration from or are similar to mechanisms found in biology.
Some basic reasons why this might be:
All of these play some role, to differing degrees depending on the particular technique. The extent to which any of them is generally the case may have strategically important implications for AI safety.
Insofar as 2 is more the case, progress in neuroscience may be something of a limiting factor on further AI progress. It may also suggest that techniques which more closely imitate brains, such as spiking neural networks (SNNs), are worth watching.
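For concreteness, here's what "more closely imitating brains" looks like at the level of a single unit: a textbook leaky integrate-and-fire neuron in plain NumPy, a sketch only, not any particular SNN library's API. Unlike the continuous activations of standard deep nets, it communicates with discrete spikes over time.

```python
import numpy as np

# Illustrative leaky integrate-and-fire (LIF) neuron, the unit most SNN work
# builds on. The membrane potential leaks toward rest, integrates its input,
# and emits a discrete spike (then resets) once it crosses a threshold.

dt, tau = 1.0, 20.0                      # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

def lif_run(input_current, v=v_rest):
    """Simulate a single LIF neuron over a sequence of input currents."""
    spikes = []
    for i_t in input_current:
        v += (dt / tau) * (v_rest - v + i_t)   # leaky integration
        if v >= v_thresh:                      # threshold crossing -> spike
            spikes.append(1)
            v = v_reset                        # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

spike_train = lif_run(np.full(200, 1.2))       # constant suprathreshold drive
print(f"spikes emitted: {int(spike_train.sum())} over 200 steps")
```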
To the extent that 1 or 2 is strongly the case, searching for a paradigm very different from deep learning that might be more interpretable may be hopeless (even more so than it already is, let us say).
I'd be interested in hearing to what extent people think these four stories (or others I haven't thought of) apply, either in general or to specific techniques.