It seems hard to imagine that there's anything humans can do that AIs (+robots) won't eventually also be able to do. And AIs are cheaply copyable, allowing you to save on training costs and parallelize the work much more easily. That's the fundamental argument for why you'd expect AI to displace a lot of human labor.
But while both AIs and humans are vulnerable to being tricked into sharing secrets, so far AIs are more vulnerable, and there aren't really any algorithms on the horizon that seem likely to change this. Furthermore, if one exploits the copyability of AIs to run them at bigger scale, that lets attackers scale their exploits correspondingly.
This becomes a problem when one wants the AI to be able to learn from experience. You can't condition an AI on experience from one customer and then use that AI on tasks from another customer, because that carries a high risk of leaking the first customer's information. By contrast, humans automatically learn from experience, and do so with an acceptable security profile.
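To make the constraint concrete, here is a minimal sketch (in Python, with entirely hypothetical names like `PerTenantMemory`; this isn't any particular vendor's API) of the kind of per-customer partitioning a learning AI service would need: experience is only stored and retrieved under the customer it came from, in contrast to updating one shared model on everyone's data, where no such partition exists.

```python
# Minimal sketch of the isolation constraint described above.
# All names are hypothetical illustrations, not a real product's API.

from collections import defaultdict


class PerTenantMemory:
    """Keeps learned context strictly partitioned by customer ID.

    Experience gathered while serving one customer is only ever
    retrievable when serving that same customer, so nothing learned
    from customer A can surface in customer B's outputs.
    """

    def __init__(self) -> None:
        self._memories: dict[str, list[str]] = defaultdict(list)

    def record(self, customer_id: str, observation: str) -> None:
        # Store the observation under this customer's partition only.
        self._memories[customer_id].append(observation)

    def context_for(self, customer_id: str) -> list[str]:
        # Retrieval never crosses partitions, unlike a model whose
        # weights were updated on all customers' data at once.
        return list(self._memories[customer_id])


# Usage: experience from "acme" never appears in "globex" context.
memory = PerTenantMemory()
memory.record("acme", "internal pricing formula shared during support chat")
assert memory.context_for("globex") == []
```

The point of the sketch is what it rules out: any scheme where the learning happens in shared weights or shared context rather than in a per-customer store gives you no such guarantee, which is exactly the cross-customer leakage worry above.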