This is probably a more contentious point, but I believe the concept of "intelligence" is unhelpful and causes confusion. Notably, Legg-Hutter intelligence does not seem to require any "embodied intelligence" at all.
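For reference, Legg and Hutter's universal intelligence measure scores an agent purely by its expected reward across computable environments, with no reference to a body:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected cumulative reward of policy $\pi$ in $\mu$.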
I would rather stress two key properties of an algorithm: the quality of its world model and its (long-term) planning capabilities. It seems to me (though maybe I'm wrong) that "embodied intelligence" is not very relevant to either world-model inference or planning.
By the way, I've just realized that the Wikipedia page on AI ethics begins with robots. 😤