Here's the most succinct and high information thing I can contribute.
Right now, each of the AI systems you describe, if it uses deep learning at all, relies on a hand-rolled solution.
You may notice that the general problems these AI systems are trying to solve all take very similar forms: [measurements] -> [desired eventual outcome or classification]. You then need to subdivide the problem into separate submodules, and in many problems those submodules are the same ones everyone else needs to solve the same problem.
For example, you are going to want to classify and segment the images from a video feed into a state space of [identity, locations]. So does everyone else.
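To make that concrete, here is a minimal sketch of the common interface such a perception submodule converges on. All the names here (`Detection`, `segment_frame`) are invented for illustration; the point is that every vendor's detector, whatever model sits behind it, ends up producing something of this shape:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical common output type: regardless of which network a vendor
# trains, the frame -> [identity, locations] state space looks like this.
@dataclass
class Detection:
    identity: str                                 # class label, e.g. "person"
    location: Tuple[float, float, float, float]   # bounding box (x, y, w, h)
    confidence: float

def segment_frame(frame) -> List[Detection]:
    """Stand-in for any vendor's detector: frame -> [identity, locations]."""
    # A real implementation would wrap a trained model; this stub only
    # demonstrates the shared interface.
    return [Detection("person", (0.1, 0.2, 0.3, 0.5), 0.97)]
```

Once everyone's detector speaks this interface, swapping in a better shared model is a drop-in change.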
Similarly, at a broader level, even if some of your algorithms have a different state space, the form of your algorithm is the same as everyone else's.
And when you talk about your higher-level graph - especially for realtime control - your system architecture is going to be identical to everyone else's realtime system: you have a clock, deadlines, a directed graph, and safety requirements. This code in particular is really expensive and difficult to get right - exactly the kind of thing you want to share with everyone else.
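A toy sketch of that shared skeleton, under the assumption that nodes are plain callables run in topological order (real systems add watchdogs, priorities, and formal safety cases, which is precisely why this layer is expensive to reinvent):

```python
import time
from typing import Callable, Dict, List

class RealtimeGraph:
    """Minimal illustrative executor: a directed graph of processing nodes
    driven by a clock, where each tick must finish before its deadline."""

    def __init__(self, period_s: float):
        self.period_s = period_s
        self.nodes: Dict[str, Callable[[dict], dict]] = {}
        self.order: List[str] = []   # topological order of the graph

    def add_node(self, name: str, fn: Callable[[dict], dict]) -> None:
        self.nodes[name] = fn
        self.order.append(name)

    def tick(self, state: dict) -> dict:
        # One clock period: run every node, enforcing the deadline.
        deadline = time.monotonic() + self.period_s
        for name in self.order:
            state = self.nodes[name](state)
            if time.monotonic() > deadline:
                raise RuntimeError(f"deadline missed in node {name}")
        return state
```

Usage: add a "sense" node and an "act" node, then call `tick` once per clock period. Nothing in this skeleton is specific to any one robot, which is the argument for platforming it.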
So the next major step forward is platforming. There will be some convergence to a few common platforms (and probably a round of platform wars that ultimately end up with 1-3 winners, like every other format and tech war in the past). The platforms will handle:
a. Training and development of common components
b. Payment and cross-licensing agreements
c. Model selection and design
d. Compiling models to target-specific bytecode
e. Systems code for realtime system graphs
f. RTOS, driver components for realtime systems
g. Items (c) and (d) will have to be shared in common across a variety of neural-network compute platforms. There are about 100 of them now; Google's TPUs were among the earliest.
h. Probably housekeeping like DRM, updates, etc. will end up getting platformed as well.
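Items (c) and (d) can be illustrated with a toy compile step. Everything here is invented for the sketch (`CompiledModel`, `BACKENDS`, `compile_model`); the point is the shape of the platform: one hardware-neutral model description, many shared per-target backends that every licensee reuses:

```python
from dataclasses import dataclass

@dataclass
class CompiledModel:
    target: str
    bytecode: bytes

# One shared backend per compute platform, written once for everyone.
# Real backends would do graph lowering and optimization; these stubs
# just tag the graph to show the fan-out.
BACKENDS = {
    "tpu": lambda graph: b"tpu:" + graph.encode(),
    "gpu": lambda graph: b"gpu:" + graph.encode(),
    "npu": lambda graph: b"npu:" + graph.encode(),
}

def compile_model(graph: str, target: str) -> CompiledModel:
    """Lower a hardware-neutral model graph to target-specific bytecode."""
    if target not in BACKENDS:
        raise ValueError(f"no backend for target {target!r}")
    return CompiledModel(target, BACKENDS[target](graph))
```

With ~100 compute platforms in play, no single AI vendor can afford to write all the backends themselves - which is exactly why that layer gets shared.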
All this reuse means that larger and larger parts of AI systems will be shared with every other AI system. Moreover, common elements - ones solving the same problem - will automatically get better over time as the shared parts are updated. This is how you get a really smart factory robot that doesn't get fooled by a piece of gum someone dropped: it classifies the gum as [trash] because it shares that part of the system with other robotic systems.
There is no economic justification for individually making that robot able to identify unexpected pieces of debris, but if it licenses a set of shared components with this feature baked in, it gets the capability for free.
As a side note, this is why talk of a possible coming "AI winter" is bullshit. We may not reach AI sentience for many more decades, but there is still enormous room for forward progress.