[ Question ]

Which approach is most promising for aligned AGI?

by Chris_Leong · 8th Jan 2019 · 4 comments


I know that this is something of a speculative question and that people will have wildly different views here, but it seems important to try to have an idea of which approaches are more likely than others to lead to aligned AGI. It's okay to argue that multiple approaches are promising, but you might want to consider separate answers if you have a reasonable amount to write about each approach.

1 Answer

G Gordon Worley III

Jan 08, 2019


Well, I don't know that we know enough to say what is most promising, but what I'm most excited to explore is my own approach, which suggests we need to investigate ways to align the content of AI and human thought along preference orderings. I don't think this is by any means easy, but I don't really see another practical framework in which to approach the problem. This framework of course admits many possible techniques, but I think it's useful to keep in mind so as not to get confused (as often happens in existing imitation learning papers) about how much we can know about the values of humans and AIs.