"Massachusetts Institute of Technology (MIT) researchers have developed an algorithm that enables a robot to quickly learn an individual's preference for a certain task and adapt accordingly to help complete the task."
It would be interesting to see what kind of issues they will run into. (Granted, this is a very restricted environment.)
I would prefer far narrower titles for developments like this. There's a three-step industrial task performed on multiple objects placed near each other. Some people prefer to do the first step for all the objects, then the second, then the third; other people prefer to do all three steps on one object before moving to the next. This robot is designed to learn, from subtle cues (like the person not hammering a bolt after placing it, because they want the robot to place a different bolt first), which of the two strategies the person wants to follow. (There may actually be more strategies, but those two seem to be the dominant ones.)
It's more about classifying the workers, and responding appropriately, than it is about 'wants,' even though the classification is want-based.
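That kind of classification could be sketched as a simple Bayesian update over the two strategies. To be clear, this is just an illustrative toy, not the paper's actual method: the cue names and likelihood numbers below are invented for the sake of the example.

```python
# Toy sketch: infer which task strategy a worker prefers from observed cues.
# The strategies and cues follow the bolt-hammering example above; all
# probabilities are made up for illustration.

# Two candidate strategies:
#   "by_step"   - do step 1 for every object, then step 2, then step 3
#   "by_object" - do all three steps on one object before moving on
prior = {"by_step": 0.5, "by_object": 0.5}

# P(cue | strategy): placing a bolt without hammering it right away is
# assumed to be more likely under the by-step strategy.
likelihood = {
    "skipped_hammering":    {"by_step": 0.8, "by_object": 0.2},
    "hammered_immediately": {"by_step": 0.2, "by_object": 0.8},
}

def update(belief, cue):
    """Return the posterior over strategies after observing one cue."""
    unnorm = {s: belief[s] * likelihood[cue][s] for s in belief}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

belief = prior
for cue in ["skipped_hammering", "skipped_hammering"]:
    belief = update(belief, cue)

# After two such cues, the robot leans strongly toward "by_step"
# and can switch to handing over bolts for the remaining objects.
```

The point of the sketch is just that a couple of subtle cues, each only weakly diagnostic on its own, can quickly push the posterior toward one strategy, which matches the "classifying the workers and responding appropriately" framing.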
Yeah, as usual, journalism generalizes claims both for sensationalism and for comprehensibility. I tried to downplay it a bit with my choice of words, but still ended up channeling the original writing.
The thing that I find interesting is that these semi-autonomous systems might run into issues of defining utility (this is particularly true for systems with some level of danger, such as autonomous cars and drones). It might be an area where people start feeling a need for formalization, which could lead some academics into FAI territory (which is good, I think).
Indeed; it's definitely on-topic and interesting work, and I expect that simple people-reading models like this will do a tremendous amount of good and make significant progress in parallel with the first-principles work that SI appears to be doing.