Time horizon is clearly an important dimension of difficulty for a given agent (in this case a human), and there is some overlap with the difficulty AI faces today: it struggles with long-horizon tasks that humans can do, while excelling at tasks that take humans less time (from problem statement to solution). I think you acknowledge as much at the end of your post. Here I'll try to make a stronger case for studying time horizons and extrapolating to AI performance.
Following the Alberta Plan, we can model intelligence as an online learning agent that continually senses, maintains a learned world model, plans/searches within it, acts, and learns from observation–action–reward at every time step. See Sutton, Bowling, & Pilarski, "The Alberta Plan for AI Research" (2022).
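To make that loop concrete, here is a minimal sketch in Python of the sense → plan → act → learn cycle. Everything in it (the `Agent` class, the toy `BanditEnv`, the tabular reward model) is my own illustrative construction under simplifying assumptions, not the Alberta Plan's actual architecture, which calls for much richer components (value functions, options, continual function approximation).

```python
import random

class Agent:
    """Tabular agent: keeps a learned model of expected reward per (obs, action)."""

    def __init__(self, actions):
        self.actions = actions
        self.model = {}  # (obs, action) -> running estimate of reward

    def plan(self, obs, epsilon=0.1):
        # Epsilon-greedy search within the learned model:
        # usually exploit the best predicted action, sometimes explore.
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.model.get((obs, a), 0.0))

    def learn(self, obs, action, reward, lr=0.1):
        # Update the model from each observation-action-reward experience.
        key = (obs, action)
        old = self.model.get(key, 0.0)
        self.model[key] = old + lr * (reward - old)


class BanditEnv:
    """Toy two-armed bandit: action 1 pays off more often than action 0."""

    def reset(self):
        return "s"  # single-state environment

    def step(self, action):
        p = 0.8 if action == 1 else 0.2
        reward = 1.0 if random.random() < p else 0.0
        return "s", reward


if __name__ == "__main__":
    env, agent = BanditEnv(), Agent(actions=[0, 1])
    obs = env.reset()
    for _ in range(1000):  # sense -> plan -> act -> learn at every time step
        action = agent.plan(obs)
        obs, reward = env.step(action)
        agent.learn(obs, action, reward)
    print(agent.model)  # estimates should approach {('s', 0): ~0.2, ('s', 1): ~0.8}
```

The point of the sketch is the structure: a single loop in which the agent updates its model on every time step, which is exactly the setting where a task's time horizon becomes a natural axis of difficulty.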