I find that a fundamental premise behind the development of AI is the ability to dissect all the internal calculations the human mind uses to make rational decisions, and then replicate those calculations in an electronic system. On a surface level, this works very well for many kinds of problem solving. In our everyday lives, we use logical reasoning to make rational decisions all the time, with the understanding that some decisions come from emotion and intuition instead. There are also some cognitive behaviors that are completely subconscious or automatic (such as walking or talking), and these end up being the most challenging skills to replicate in a machine.

For any occupation a human performs (doctor, writer, soldier, engineer, etc.), the skills used in that job come in one of two forms: transferable skills and non-transferable skills. A transferable skill is one where the rational calculations and algorithms needed to perform it are fully understood and documented, and therefore can easily be taught to anyone with a sufficient amount of intelligence. A non-transferable skill is one whose rational calculations and algorithms are not fully understood. Humans are still capable of performing it (expertly so), but this ability comes from sheer experience rather than from any instructions.

Because we live in a rationally consistent universe, we must conclude that every skill has some algorithm behind it, whether it is transferable or not. Therefore, human experience must somehow construct the algorithms necessary to perform a non-transferable skill; these calculations are simply done subconsciously, and thus appear from an outside perspective to be sheer intuition.

In designing artificial intelligence, a transferable skill is the easiest to program: as long as we understand the algorithm used by our own minds, it is trivial to implement that same algorithm in software. A computer is essentially a blank slate that will absorb whatever instructions are given to it.
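To make this concrete, consider a fully documented algorithm such as Euclid's method for the greatest common divisor (my example, not the post's). In the post's terms it is a transferable skill: every step is understood and written down, so it can be handed verbatim to a machine.

```python
# A transferable skill in the post's sense: the complete algorithm is
# known and documented, so a machine can absorb it as-is.
# Euclid's GCD is used here as a hypothetical illustration.
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via Euclid's algorithm."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

Nothing about this skill depends on experience: a person (or computer) who has never computed a GCD before can execute the steps correctly on the first try.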

The problem is that any occupation worth paying money for must involve some non-transferable skills. If a job required nothing but transferable skills, that role could easily be filled by anyone who can be taught to do the same thing (including a machine). So a fundamental reason career jobs exist at all is that the vast majority of skills humans use are non-transferable.

Throughout history, some skills started out as non-transferable but gradually became more transferable over time, as humans developed algorithms and formulas to describe the actions they were already performing. This is the process of meta-cognition: the ability to consciously examine what the mind normally does subconsciously.

Part of the beauty of deep reinforcement learning is that it is the best current attempt to simulate what is essentially human intuition: the software explores various approaches to the same task, and through sheer experience develops its own algorithm for performing it. One difference is that computers have no consciousness, so there is no division between conscious and subconscious awareness.
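As a rough sketch of learning from experience rather than instruction, here is tabular Q-learning on a hypothetical five-state corridor (the post discusses *deep* RL; a lookup table stands in for the neural network here for brevity). The agent is never told how to reach the goal; it constructs its own policy purely from trial, error, and reward.

```python
import random

# Toy environment: states 0..4 in a corridor; reward only at state 4.
# The agent learns action values (a Q-table) from experience alone.
random.seed(0)

N_STATES = 5
ACTIONS = [-1, +1]          # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.5   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: usually exploit current knowledge, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(q[(s2, act)] for act in ACTIONS)
        # Q-learning update: nudge the estimate toward observed outcome.
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned policy: the greedy action in each non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

After training, the greedy policy moves right from every state: an "algorithm" for the task that was never written down by the programmer, only discovered through repeated experience.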

2 comments

> computers have no consciousness

Um... citation please?

I’m not sure about your distinction between transferable and non-transferable skills, or for that matter how much this comment affects the thrust of your post. But plenty of things can be taught, even though “the rational calculations and algorithms needed to perform it” are not “fully understood and documented”. Physical skills, for example: yoga, driving a car, sports of all sorts, the skills of drawing and painting. These things are learned not only by doing them, but by instruction and feedback from a teacher or coach.

ETA: Now, rationality skills, those seem to be a lot less transferable than any of those I mentioned.