Hi everybody, first post. I've been delving into AI safety and theoretical AI work with more commitment over the past couple of weeks. Something that has repeatedly set my gears in motion is how definitions of intelligence, or assumptions about superintelligence, often feel very anthropocentric.

For instance, I get the sense that when people define intelligence as something like the "ability to pursue objectives in a variety of situations," they're baking in a set of objectives and situations that line up with human objectives and situations. There are many more possible objectives and situations than that. Another example is the assumption that as you move up in intelligence, you only gain new problem-solving ability. Plenty of beings we might label as less intelligent than us can solve problems that we can't.

Are there researchers/writers who you think bring a less anthropocentric view to these big questions in AI? Have you found this line of interrogation to be fruitful, or is it just quibbling over definitions?
