1) Is it because of regulations?
2) Is it because robustness in the real world (or robustness in general) turns out to be very hard for current AI systems, and robustness matters much more for self-driving cars than for the areas where we have seen more rapid AI progress?
3) Is it because no one is trying very hard? I.e., AI companies are spending much less compute on self-driving-car AIs than on language models and image generators. If so, why? Do they not expect self-driving cars to be very profitable?
4) Some other reason, or some combination of the above?
I'm mostly interested in learning to what extent 2 is the cause, since this has implications for AI forecasting, both for timelines and for what trajectories to expect.