People usually train animals (well, at least pets) "manually" (i.e. using a human-in-the-loop to generate something like a reward signal).  
People usually train AI systems "automatically" (i.e. using a programmatic reward signal).

What if we tried to train animals "automatically"?  I imagine this would be a great line of research for someone just starting their academic career.  I would expect:
1) It would provide nice, convincing examples of specification gaming.
2) It would outperform manual training in some cases, which could help show that animals are smarter than they have so far been shown to be.
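As a toy illustration of what "automatic" training might look like, here is a minimal shaping loop: a programmatic reward criterion that ratchets upward as the subject improves, with no human in the loop. Everything here is hypothetical — the simulated "animal" is just a noisy scalar standing in for a real behavior (say, lever-press force), and the update rules are invented for the sketch, not drawn from any real training protocol.

```python
import random

def automatic_shaping(trials=500, target=10.0, seed=0):
    """Toy simulation of automatic training by successive approximation.

    A programmatic reward signal fires whenever the (simulated) behavior
    exceeds a threshold; each reward nudges the animal's behavioral
    tendency toward the rewarded level and raises the threshold slightly.
    All dynamics are illustrative assumptions, not a real protocol.
    """
    rng = random.Random(seed)
    threshold = 1.0   # current reward criterion, ratcheted up over time
    tendency = 1.0    # the simulated animal's baseline behavior level
    for _ in range(trials):
        behavior = tendency + rng.gauss(0, 0.5)  # noisy attempt
        if behavior >= threshold:                # automated reward check
            # "Reinforcement": drift toward the rewarded behavior level
            tendency += 0.1 * (behavior - tendency)
            # Raise the bar toward the target behavior
            threshold = min(target, threshold + 0.05)
    return tendency, threshold
```

The same loop structure also makes the specification-gaming point in (1) concrete: the reward check only sees `behavior >= threshold`, so anything that satisfies the measured proxy gets rewarded — including strategies (like the circuit-breaking rats in the comment below) that the experimenter never intended.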

2 comments:

In certain instances animals attempted to avoid shock by grounding themselves. As in Watson's early finger avoidance study (1916), electrical continuity is only maintained if the rat is in contact between two electrodes or, in our case, two electrified bars comprising the grid floor of the operant chamber. Occasionally, animals balanced their hind paws on a single bar while propping their front paws on a side of the operant chamber, thereby breaking the continuity of the electrical circuit and avoiding footshock.

There are probably a lot of things that people do with animals that can be viewed as "automatic training", but I don't think people are viewing them this way, or trying to create richer reward signals that would encourage the animals to demonstrate increasingly impressive feats of intelligence.