Humans are really good at moving in space. We do motor planning and especially error correction much better than existing robot control systems.
Could a transformer model trained on a large library of annotated human movement data serve as a controller for a humanoid robot or robotic limb? My impression is that a movement model might not be useful for directly controlling servos, because human and robot bodies are so different. But perhaps it could improve the motor planning and error correction layer?
The data would presumably take the form of a 3D wireframe model of the limb/body and its trajectory through space, the goal of the movement ("pour water from this cup to that cup"), and some rating of success or failure.
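To make that concrete, here's a minimal sketch of what one training record might look like, and how it could be turned into next-pose prediction targets for a sequence model. All the names here (`MovementSample`, `as_training_pairs`) and the pose-pair framing are my own assumptions for illustration, not an established format:

```python
from dataclasses import dataclass
from typing import List, Tuple

# One 3D joint position (x, y, z) -- a point on the "wireframe".
Joint = Tuple[float, float, float]

@dataclass
class MovementSample:
    """One annotated movement clip, roughly as described above."""
    frames: List[List[Joint]]  # per-frame joint positions over time
    goal: str                  # natural-language goal of the movement
    success: float             # rating of success in [0.0, 1.0]

def as_training_pairs(sample: MovementSample) -> List[Tuple[List[Joint], List[Joint]]]:
    """Turn a clip into (current pose -> next pose) pairs, the usual
    next-token-style prediction target for a transformer."""
    return list(zip(sample.frames[:-1], sample.frames[1:]))

clip = MovementSample(
    frames=[
        [(0.0, 0.0, 0.0), (0.1, 0.0, 0.5)],  # frame 0: two tracked joints
        [(0.0, 0.1, 0.0), (0.1, 0.1, 0.5)],  # frame 1
        [(0.0, 0.2, 0.0), (0.1, 0.2, 0.5)],  # frame 2
    ],
    goal="pour water from this cup to that cup",
    success=1.0,
)
pairs = as_training_pairs(clip)  # two (pose, next pose) pairs from three frames
```

The point of the pairing step is that a model trained this way predicts trajectories in body-pose space, not servo commands, which is exactly the layer where it might transfer across different bodies.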
I don't have experience in either LLMs/transformer models or robotics so this question might miss some obvious points, but I couldn't get the idea out of my head!
Hey, I remember your medical miracle post. I enjoyed it!
"Objectively" for me would translate to "biomarker" i.e., a bio-physical signal that predicts a clinical outcome. Note that for depression and many psychological issues this means that we find the biomarkers by asking people how they feel...but maybe this is ok because we do huge studies with good controls, and the biomarkers may take on a life of their own after they are identified.
I'm assuming you mean biomarkers for psychological / mental health outcomes specifically. This is spiritually pretty close to what my lab studies - ways to predict how TMS will affect individuals, and adjust it to make it work better in...